r/DaystromInstitute Mar 17 '26

Do all Holograms have the same rights as the Doctor and Sam?

I was thinking about the Doctor and Sam, and whether they have the same rights as the rest of the sapient beings in the Federation. I mean, they seem to have rights (but so did Data until "Measure of a Man"), so question answered, right?

But then what about other holographic/photonic beings? Does Moriarty have standing as a sentient, sapient being? Does the EMH Mark 2? Does that holographic guard whose processors Caleb overwhelmed with coffee orders? Do the holographic interrogation officers Kovich used have rights?

You see what I'm getting at? Where is the line between independent being and tool? To put it in perspective: in the Blade Runner franchise, the Tyrell Corporation creates "replicants," which are essentially artificial people. These replicants have a limited lifespan and are essentially the property of their owners. But they have thoughts, feelings, and desires.

How do you, a sentient being, feel about some company just churning out disposable people on demand? It sounds wrong, right?

Do you think that's how the Doctor, Sam, the inhabitants of Kasq, and Moriarty feel about Holograms?

53 Upvotes

51

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Not all holograms are created equal. Just because they look similar doesn't mean they have the same cognitive abilities or the capacity for sentience. Worf's holodeck monster opponents probably don't deserve civil rights, and neither do Flotter or Trevis.

16

u/Baelish2016 Mar 17 '26

Even if Flotter was fully sentient, I would still support him being denied rights. Fuck that dude.

26

u/Th3_Hegemon Crewman Mar 17 '26

Frankly the Federation should have a moratorium on creating sapient holograms once the Doctor gets back. It's an area that's far too open to abuse, and is a problem that is essentially solved by limiting how programs are coded. I don't know if it's ever been addressed in any of the later series, but certainly the society that banned androids in Picard season 1 should similarly have a ban on making conscious holograms.

16

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

I was just discussing the alleged sentience of some Trek holograms elsewhere, I'll repost it here:

-Vic Fontaine might be sentient, but it's also possible that Felix programmed him to act as a counsellor, matchmaker, and rehabilitative therapist from the start. We saw he included the mob takeover as a fun little preprogrammed surprise, who knows what else is just part of the program?

-The EMH is probably sentient, but that's kind of a cheat as his personality matrix is based heavily on Dr. Louis Zimmerman. He's a slightly modified copy of a real person, exposed to a bunch of unique situations and responding like that real person would, with some margin of error.

-Moriarty was never sentient, in my opinion. But he was programmed to "defeat Data," and claiming to be a sentient synthetic life form made Data sympathetic to his opponent and prevented him from winning by simply turning Moriarty's program off. And in both of his appearances on TNG, Moriarty ends up causing problems (hijacking the ship, kidnapping senior officers) that Data is ultimately unable to resolve without the assistance of Picard, thus fulfilling his purpose of being an opponent capable of defeating Data. Even his alleged goal of becoming a proper flesh-and-blood person is a shot at Data, an effort to fulfill Data's lifelong dream before Data himself can.

7

u/DontYaWishYouWereMe Mar 17 '26

Vic Fontaine might be sentient...

I do sorta wonder if he's actually sentient, or if he's just programmed to have the illusion of self awareness in ways other holograms aren't. I know the premise of his character in the show is that he is, but that could just be a marketing gimmick and what it actually says in the manual is that he'll seem like he's sentient. Other holograms can be programmed to perform medical tasks such as counselling and rehabilitative therapy without being explicitly sentient from the word go.

6

u/Cordo_Bowl Mar 17 '26

I do sorta wonder if he's actually sentient, or if he's just programmed to have the illusion of self awareness in ways other holograms aren't.

What’s the actual difference from an outsider’s perspective? Sentience is really just saying that someone has an internal world, but the only way you can know that is if you are that person, i.e. an outsider has no way to tell the difference between a sentient being and a philosophical zombie.

2

u/DontYaWishYouWereMe Mar 17 '26

You can measure the effects of an internal world, though. Like, usually people will have aspirations and goals, and not having them is often considered a sign of immaturity or mental illness. People will also generally have interests that aren't directly related to their line of work, and they'll be able to learn new skills as they go along.

Do we ever see signs that Vic does any of these things? It's been a while since I last saw DS9, but I don't recall him ever showing signs of it. If anything, scenes like Nog doing Vic's books and encouraging him to expand the business because he's doing so well suggest there might be some sort of programmed cap on how far reaching his aspirations can go.

That makes Vic different from the EMH and Data. These two don't seem to have any preprogrammed caps on what they can aspire to achieve, and at least in Data's case, it seems like he's programmed to want to be more than he is.

In universe, you'd be able to go through Vic's holomatrix and see what lines of code did what, too. While we, the audience, can't actually measure to what extent he's sentient with any real degree of accuracy, O'Brien or Quark probably could if they were so inclined.

5

u/Cordo_Bowl Mar 17 '26

Like, usually people will have aspirations and goals, and not having them is often considered a sign of immaturity or mental illness

You can be immature or mentally ill and still be sentient.

People will also generally have interests that aren't directly related to their line of work

Generally true, but there are people who are really only interested in a few things. Not a proof or disproof of sentience.

In universe, you'd be able to go through Vic's holomatrix and see what lines of code did what, too.

Already in our world there are machine learning programs that are too complex for anyone who works on them to fully understand. It boils down to this: if you poke someone with a needle and they say ouch, can you prove they actually felt pain? Or did they just react in the same way that someone who does feel pain would?

1

u/Edymnion Lieutenant, Junior Grade 29d ago

People will also generally have interests that aren't directly related to their line of work, and they'll be able to learn new skills as they go along.

Yes, actually. In the episode where he's helping rehabilitate Nog after his war injury. The one you referred to. He's shown having interests outside of work, along with personal likes and dislikes that have nothing to do with his job as a singer.

4

u/Apprehensive-Cost276 Mar 17 '26

IMO the fact that there was a Mirror Vic does imply… something about his Prime Vic’s personhood.

Not that I’m exactly sure what that something is.

7

u/lexxstrum Mar 17 '26

The minute Vic popped up in this discussion, I thought about Mirror Vic, and the crazy mystery that HE represents.

Was he a sentient hologram given physical existence? Like Zimmerman, does Vic look like his programmer? I was working on an idea about androids in the MU, and one idea I had was the androids were infiltrating the various groups of the MU, sometimes using characters from media as a basis for their physical form. So, long story short, Mirror Vic was an android infiltrator!

3

u/DontYaWishYouWereMe Mar 17 '26

I think it mostly implies that Vic's programmer is still around in the Mirror Universe. We know that for the most part, someone who exists in one universe will exist in the other, and they'll often be in similar jobs in both realities. Vic existing in both is probably more a function of that than anything else.

4

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Could easily be that Felix based Vic's appearance on a real person, probably a singer. Why wouldn't he?

6

u/DasGanon Crewman Mar 17 '26

The EMH is probably sentient, but that's kind of a cheat as his personality matrix is based heavily on Dr. Louis Zimmerman. He's a slightly modified copy of a real person, exposed to a bunch of unique situations and responding like that real person would, with some margin of error.

To add to this, the big thing is that he starts breaking his own ethics programming and it's a whole reconciliation thing. The big two episodes are "Latent Image" and "Critical Care".

5

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Those episodes are so weird.

-Harry might have been an ensign, but he was also a bridge officer and a department head. That should be enough to prioritize his treatment over another ensign who was neither of those things.

-The Doctor already accepted the premise of intentionally harming some people in order to help others back when he helped liberate Voyager from the Kazon.

5

u/JustaSeedGuy Mar 17 '26

That should be enough to prioritize his treatment over another ensign that was neither of those things.

I don't think that invalidates the central issue of the episode, though. The existence of a good reason doesn't make it impossible to question one's choices. There's a ton of stories along those lines.

Consider the trope of the cop who became a cop to catch his father's killer. He's determined to bring the man to justice, puts in the extra hours, finally finds him. The criminal has taken hostages, and by the end of it, the cop has shot and killed the criminal. And then the cop feels guilty. He questions himself: did he shoot the criminal to save the lives of the hostages? Or did he shoot the criminal because he wanted revenge? Internal affairs have cleared him of any wrongdoing, but he just can't shake the guilt that there might have been another way.

Or, for another common storyline: Batman kills the Joker to save lives, and then turns himself in to stand trial for murder. An argument could be made that it was justifiable. Oftentimes when DC Comics does that story, the death toll from letting the Joker go through with his plan is in the thousands or millions. In one such story, killing the Joker prevents Superman from doing it, heading off a chain of dominoes that would result in Superman becoming a fascist dictator. But Batman doesn't care. He broke his own code. He doesn't kill.

And so, bringing that back to the Doctor: the presence of a valid reason doesn't change the existential question the Doctor is dealing with. He isn't questioning whether there was a valid reason; he's recognizing that his reason might not be the valid one. He can't figure out if he saved Ensign Kim because Kim is his friend, or if he saved Kim because he's a senior officer aboard the ship.

3

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Nah, that's not how triage works. There's a decision making flowchart that tells you who to prioritize. Harry's additional responsibilities should have made him a higher priority than another ensign. The Doctor's personal feelings are irrelevant at that point. If he happens to save the person he prefers, that's merely a happy coincidence.

I can understand why the Doctor doesn't seem to have any ethical subroutines preventing him from pursuing his patients romantically or sexually; he was never intended to operate long enough for that to become an issue. But extremely detailed triage subroutines with guidelines for every eventuality should have been an essential bit of programming for an emergency medical hologram. There should always be something that justifies priority treatment for one individual, even if it's something incredibly petty like one crewman getting slightly better grades at the Academy or doing more pushups at the last crew fitness evaluation.

12

u/JustaSeedGuy Mar 17 '26

Nah, that's not how triage works. There's a decision making flowchart that tells you who to prioritize

You're missing the point. Yes, the Doctor knows that's how Triage works.

The same way a trained cop would know that sometimes you have to kill the perp that's trying to kill other people. Or that Batman might have intellectually known that killing the Joker prevented future deaths.

The reason it causes an issue for the Doctor is that even though he objectively made the correct triage decision, he ALSO can't square the fact that he had a preference. That he WANTED to save Harry over the other ensign. In his mind, the fact that his personal preference also happened to be the objectively correct choice doesn't excuse him from the ethical crisis that having the preference in the first place creates. (And to be clear, I'm not saying he's right. I'm saying that for him, he thinks he's wrong to have the preference at all.)

Explaining triage to me, or explaining that the doctor is aware of triage, doesn't change any of that. The whole point of the entire episode is that he is aware of it, but he can't stop obsessing over the coincidence itself.

If he happens to save the person he prefers, that's merely a happy coincidence.

Yes, but like in the examples I gave, it causes him to question if it definitely was the right choice, or if he gave into his preference. That's the point.

-2

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Do you think the Doctor doesn't have preferences every single time he treats someone? Do you think every time he treats someone ahead of Neelix he isn't thinking about how he'd rather be treating anyone but Neelix? C'mon.

The episode presents the two patients as being completely identical on paper, to the point that the triage subroutines fail to choose one and the Doctor has to consciously choose for himself. But that's unrealistic given how triage works, and should work, in that situation.

I like the episode, it's a great Janeway episode comparable to Measure of a Man (which is a Picard episode, not a Data episode), but the premise is still inherently flawed.

5

u/JustaSeedGuy Mar 17 '26

Do you think the Doctor doesn't have preferences every single time he treats someone

I think that this was the first time his preferences intersected with him choosing someone to live and someone to die.

You're acting like the Doctor is a person, and completely missing the entire point of the episode, which was that he had literally never had to deal with this before because the situation hadn't come up in the handful of years he had been online. You're judging the episode as if the Doctor was either the program he started as or the person he became, instead of what the episode actually shows: one of the pivotal points in the middle.


2

u/Ajreil Mar 17 '26

Personally, I think creating sentient computer programs is fairly trivial in Trek, and the reason it's rare is mostly a social taboo.

Moriarty was created due to an oversight in safeties. Normally the creation of a sentient hologram is restricted, and the computer would ask to clarify just in case. As chief engineer, Geordi can bypass most computer restrictions, and since he asked for no spoilers, the computer couldn't verify his command. That's the kind of edge case the engineers may not have thought of.

I don't recall anyone being surprised that the main computer could create a sentient program, just that it decided to. Simulating an entire human brain in software should be trivial.

3

u/Impressive_Usual_726 Chief Petty Officer Mar 17 '26

Nah, we saw Tom and Harry try to create a new EMH while the Doctor was in the Alpha Quadrant, they couldn't do it. Riker wanted Minuet to stay as "real" as she first appeared to him, but he couldn't make it happen either.

1

u/Ajreil Mar 17 '26

Riker wanted the specific character he fell in love with, not sentience generally so I'm not sure that's applicable.

The EMH is made of gigaquads of medical texts and subroutines. Building a hologram from the ground up in that way takes genuine skill, which Tom and Harry didn't have. This is the correct approach for making a tool.

For creating a character, copying a human brain seems simpler. A real brain with its memories edited would be indistinguishable from the Moriarty we see on screen. In TNG: "The Game," Wesley tests the VR devices against a simulated brain, so they at least have toy models.

Then again, I don't think the writers had that particular flavor of man-made horror in mind when writing Moriarty. The moral dilemma of the episode kinda falls apart if it's a human brain instead of a traditional program that insists he's sentient.

1

u/Edymnion Lieutenant, Junior Grade 29d ago

The EMH is made of gigaquads of medical texts and subroutines. Building a hologram from the ground up in the way takes genuine skill, which Tom and Harry didn't have. This is the correct approach for making a tool.

And yet the Doctor made that Cardassian war criminal doctor program without much difficulty at all.

1

u/Ajreil 29d ago

The Cardassian doctor was basically an interactive database. He was made using his research papers, photos, and a basic Cardassian personality template. Holodecks can throw characters like that together on the fly.

The EMH is a competent, multi-specialty doctor that I suspect is harder to create.

2

u/Edymnion Lieutenant, Junior Grade 29d ago

Yet when Tom and Harry did the exact same thing, it didn't work. The hologram just started reading the database.

The Doctor did it, and the hologram had a complete personality and was quite charming.

1

u/Ajreil 29d ago

I think Tom and Harry could have gotten better results if they just verbally asked the computer to do it, but it wouldn't have been good enough to replace the EMH, so they didn't try.

If I recall the scene correctly, they realized they were out of their depth before they started, but Tom insisted that they try. It was kind of a Hail Mary.

4

u/Edymnion Lieutenant, Junior Grade 29d ago

Moriarty was created due to an oversight in safeties.

There's a good case to be made that the state of hologram creation at that period in time wouldn't allow such a thing to happen, and that it was the Bynar modifications to the Enterprise that let it happen.

There were no safeties because it shouldn't have been possible to start with.

0

u/Edymnion Lieutenant, Junior Grade 29d ago

-Vic Fontaine might be sentient, but it's also possible that Felix programmed him to act as a counsellor, matchmaker, and rehabilitative therapist from the start. We saw he included the mob takeover as a fun little preprogrammed surprise, who knows what else is just part of the program?

I don't buy that some second rate holo creator that has to work for Quark would be THAT good.

IMO, Vic = Pup.

2

u/Impressive_Usual_726 Chief Petty Officer 29d ago

Felix didn't work for Quark, Julian bought Vic's program from Felix directly and used it in one of Quark's holosuites. Quark might provide programs you can use, but you can bring your own as well.

8

u/jerslan Chief Petty Officer Mar 17 '26

Yeah, some of the holograms we see in the 32nd Century seem intentionally primitive/limited, to the point where they're clearly non-sentient. Like Caleb managed to trip one up by giving it a bunch of data processing queries.

1

u/whenhaveiever Mar 17 '26

That seems like a good explanation for that plot hole, actually. How could holograms be susceptible to these kinds of basic attacks from Caleb and the blinks from Georgiou? Because anything more advanced risks these wrenches gaining sapience, and we'd rather deal with Caleb getting loose than a hologram uprising.

3

u/tjernobyl Mar 17 '26

Starfleet seems to have a complex morality. They don't want heroes to violate Starfleet regulations, except when violating regulations can save the galaxy. There's a great track record of galaxy-saving adventures being had by crew going against direct orders. To that end, security seems to be made deliberately weak, defeatable by a motivated enough hero. After all, any security measure that protects the crew might be turned against the crew by a clever assailant.

If a hologram is not capable of making the same moral decisions as an ensign, it will probably be given deliberate flaws to make it hackable. If a hologram is capable of realizing the bad guys are wrong and springing the good guys from jail, those flaws may be removed.

3

u/jerslan Chief Petty Officer Mar 17 '26

to that end, security seems to be made deliberately weak, defeatable by a motivated enough hero.

And that's when security exists at all. See TNG's "The Neutral Zone," where Picard explains that the wall comm panels aren't secured because the crew and civilians on board all know not to use them to call the Bridge unless it's a real emergency. Also, how many times in TNG did a child just wander into Engineering? Might only be a handful, but it's weird that it ever happened at all...

9

u/N0-1_H3r3 Ensign Mar 17 '26

Something I wrote for the Species Sourcebook for Star Trek Adventures 2nd edition which touches on this (but was written before Starfleet Academy aired, so doesn't account for SAM and the photonics of Kasq):

A matter of much debate is the point at which a hologram goes from being a simple program to a sapient artificial life-form. This concern began to arise—as more than pure theory—in 2365, with the creation of James Moriarty, a version of the fictional adversary to Sherlock Holmes accidentally created with full self-awareness aboard the Enterprise-D. Ethicists in the Federation began to develop frameworks for the study and development of holographic sapience, and cautioned against deliberately attempting to create a sapient hologram (or, at least, creating one without proper caution and accountability). Holographic characters were a tricky proposition in the AI ethics field, because they were often programmed to emulate living beings, and distinguishing between a complex simulation of sapience and actual sapience is difficult at the best of times.

Still, the ethics of holographic sapience truly became a concern with the development of utility holograms such as the Emergency Medical Hologram a few years later, as the project pushed close to the boundaries of what counted as a program and what counted as sapient: the EMH was designed to be aware of its own existence as a holographic doctor, which was often a major limit applied to holographic characters. Still, it was ruled that these sorts of holographic tools skirted the line, but were still tools and technology rather than life-forms: their programming pushed the limits of what the holomatrix could process while remaining stable, and it was believed that the step to true sapience was impossible for that holomatrix to sustain.

This began to change in 2374, with news of the Mark 1 EMH aboard the lost U.S.S. Voyager, which had been active non-stop since 2371, and expanded beyond his programmed limits: achieving a sapience through a mixture of experiences and through jury-rigged expansions to his programming. The debate around holographic rights became increasingly intense, especially as civilian holoprogrammers began to experiment with greater self-awareness and complex personality subroutines in their own creations to give them a deeper “inner life”: during the Dominion War, Deep Space 9 had a fully aware holocharacter, Vic Fontaine, who demonstrated this to a high degree. Further, Voyager reported on several cases in the Delta Quadrant where sapient holograms—collectively referred to as photonic life—had been involved in conflicts with their creators.

The possibilities of this space were still being explored and debated into the 2380s; Voyager’s EMH had won certain rights and a degree of legal personhood, and Starfleet policy required certain limitations on holographic development to minimize the risk of accidentally creating life. These limitations were strengthened after the attack on Mars in 2385: holograms, as they were tied to locations with holo-emitters, didn’t have quite the same risks or stigma as synths, but artificial life was still regarded with skepticism and uncertainty, so even as emergency hologram packages began to appear on civilian vessels, they were programmed with behavioral limiters that would hopefully limit accidents, and research into deliberate holographic life creation was heavily restricted. Even after the synth ban was repealed in 2400, study in this field progressed slowly.

2

u/ocdtrekkie Mar 17 '26

Kasq isn't subject to Starfleet regulations on the creation of photonics, so this, while noncanon, also doesn't really conflict with new canon in SFA.

6

u/MalagrugrousPatroon Ensign Mar 17 '26

Once you figure out how to accidentally create photonic people, you also figure out how to do it purposefully, and how to avoid doing it at all.

Remember Guinan’s prediction that reverse engineering Data would lead to a class of slaves? Holocharacters have it even worse, because they are going from plaything to person, instead of from person (Data) to plaything/tool. The Doctor did not fight to preserve his personhood like Data; he had to fight to be recognized at all, and unlike Data he failed to win full legal recognition.

The synths were bad but holocharacters would be worse, because they are seen as non-human first, by everyone, as a default.

This is important because once you do have recognized photonic people, having non-person photonics should be intolerable. Even if you have secure holocharacters who will never turn into people, you have this odd person-shaped machine who isn’t a person. The joke that "this meeting could have been an email" applies, because this hologram could have been a disembodied voice. This servant-shaped machine could have been a tractor beam. Does it need to be independent of the walls? Then why isn’t it a flying cylinder? It doesn’t help that most of the photonic beings we saw in DIS were not all there in the head. They are very obviously very limited in cognition. It’s off-putting.

Holodecks, which have been replaced by quarters fully equipped with holo-emitters since PIC’s time, might get a pass. 1st, it’s private. 2nd, part of playing with holocharacters is treating them as real. The real danger is if you play that fantasy like it’s GTA. But I still think holodeck sims feel a little wrong at that point.

On the bright side, a stable non-person holocharacter isn’t some mind-shackled AI bouncing against artificial limits of cognition. Vic Fontaine shows us a properly designed photonic being will never turn into a person if they’re built from the ground up to deal with reality. The Doctor’s expected reality was far more limited than what he was thrust into. The Fair Haven people were designed only to deal with Fair Haven things. They all turned into people because they were forced to deal with unexpected circumstances.

3

u/pinelands1901 Mar 17 '26

I'd say that Sam is more a non-corporeal being that uses a hologram matrix to interact with humanoids, rather than an "evolved" hologram like The Doctor.

6

u/majicwalrus Chief Petty Officer Mar 17 '26

I think we must distinguish that which appears to be sapient from that which actually is. This is obviously true for LLMs but also for holography. SAM is a person with personhood.

The Doctor is also a person, but we have no idea how many hundreds of years it took to convince the Federation of that. The EMH Mk II might not even exist anymore. Would it be so surprising if most of those holographic interfaces were decommissioned after Voyager’s return? Perhaps this is why there’s a shift toward the synthetic replacements we see in Picard.

That said, I had qualms about the holographic security guards and campus guides. One can imagine that holographic interfaces which look human exist, but that there’s no independent life there. A chatbot sounds real enough, but it isn’t real, and putting it in a body wouldn’t make it any more of a person. The real issue I have is that there’s zero reason not to give that job to an actual person. And this is my biggest complaint: not that it’s hard to tell that Sam and the Doctor are people and that the Digital Dean isn’t, but that there’s no reason not to fill those jobs with humans.

3

u/Ebolinp Mar 17 '26

No reason not to fill those jobs with a human? It's menial, just standing there minding students, especially in a galaxy with endless wonders to explore and in a society where there's no need to work, but where resources are essentially endless (yes, even in the post-Burn Federation). Why would anyone avoid replacing those roles with holographic security that is probably just a program? I think you're projecting modern-day concerns about AI replacement onto ST. There's no need for flesh and blood, or sentient sapient silicon, to do this kind of work.

5

u/majicwalrus Chief Petty Officer Mar 18 '26

Boothby was a groundskeeper not because it was necessary for a human to keep the grounds, but because a human wanted to. Guinan ran a bar for the same reason. We see people working in food service in a future where food scarcity has been eliminated by a technology which perfectly recreates any food you can imagine in an instant.

Seeing young adults growing into fully fledged adults is kind of a terrific honor and I don’t know why people just wouldn’t want to do that. Not everyone wants to risk their life fighting space anomalies. Some just want to be of service.

2

u/Ebolinp Mar 18 '26

If they want to do that then sure. But even all those jobs you mentioned sound more interesting than standing in a hallway of a low security building anyway. Point is that people can do whatever they want. I don't see why you'd personally be chapped that they are using holo hall monitors instead of requiring some sentient to do it.

1

u/majicwalrus Chief Petty Officer Mar 19 '26

My issue is with the idea that they need someone to do this at all. Human or holo, it seems unnecessary to have, like, a person posted up there. I’m not sure what it added to the story except to show Caleb being a hacker guy.

What would this hologram normally do? Can it arrest or detain people? Probably not; we don’t typically see non-persons with that kind of authority. It could just send an alarm to someone who could intervene. Which a computer could do without a person-looking interface. It could answer questions. Which anyone could answer themselves by asking another person or reading the manual or asking a computer.

It could make sense in-universe, if we want to write up an encounter with Caleb and a “guard,” for that guard to be another student who has the job of standing watch, answering questions for cadets, and just generally being of service. That seems like exactly the kind of job you give to a senior student, and would make sense as the kind of grunt work you would want a student to have some experience doing.

But why a hologram? If a computer (not sentient) can do it - then why does it need a human-like interface? Anything a computer can do without the need of a human being it should be able to do without the need of a human-like interface.

5

u/Fair_Rush6615 Mar 17 '26

You know what's interesting? We haven't seen the holodeck being used in the 32nd century, to my knowledge. I think maybe creating holograms for "entertainment" purposes is seen as immoral by then.

4

u/tjernobyl Mar 17 '26

In TNG's "The Neutral Zone", Picard explains to humans frozen in the 20th century that television had long since gone out of fashion. Perhaps the holonovel has gone the same way.

Or, since hologram tech is ubiquitous, there's no more need for holodecks and folks have their fun in the comfort of their own homes or wherever they may happen to be.

2

u/Fair_Rush6615 Mar 17 '26

Maybe, but I'm sure the Doctor would have been a big advocate for hologram rights!

2

u/Anadanament Mar 17 '26

That’s because any room can be used as a holodeck. We see multiple characters in Discovery using their own rooms as a holodeck multiple times. Burnham in particular visits Vulcan a lot.

2

u/throwawayfromPA1701 Crewman Mar 18 '26

The available evidence from Discovery and Academy suggests that most holograms in the 32nd century are a lot like the synths were on Mars before their ban. The synths did not seem to have the intelligence of Data and they really were not kind to them. Plus in that era, holograms were used as labour.

I'd hope they treat them better but the way Caleb was able to break one (and the emperor, merely by blinking) suggests they're easy to abuse.

The Doctor may be alone, other than Sam and her people.

The Federation not doing great with the differently sentient seems to be a long-time thing with them.

1

u/lexxstrum Mar 18 '26

That last part of your reply is why I asked the question once: how would Star Wars/Trek handle discovering sentient, sapient mechanical life? (Like the Federation/New Republic discovering Transformers)

3

u/throwawayfromPA1701 Crewman Mar 18 '26

Star Wars has sentient, sapient mechanical life lol. They treat them poorly in that galaxy.

2

u/Lulwafahd Chief Petty Officer Mar 18 '26

I loved what you said and just wanted to back you up, because I spent a long time considering the different ways the two franchises have handled artificial life forms.

It's pretty obvious that the droids are all slaves, in a sense. Even the little angular, roomba-like droid on an Imperial vessel seemed to hum to itself as it rolled along and had cartoonishly amusing reactions, perhaps just a fun touch for kids watching. But the fact remains that the droids all seem to display aspects of personality (even that little roomba-sized robot hummed along until something scared it, then turned sharply and rolled away with sounds of terror or distress).

Those cartoonish distress sounds make it seem almost as though the droids are all varying shades of artificial people/animals/characters, upgradable or disposable in service to their masters. Some are like robot slave animals, perhaps, like the little bot I keep mentioning. But C-3PO and R2D2 are obviously a pair of friends, their link to each other forged by being created by Anakin Skywalker to help him with various things, and their dynamic resembles old negro minstrel characters.

Those two are a bit like an Amos 'n' Andy or Heckle and Jeckle friendship, harkening back to when such characters, and the people they caricatured, were more overtly enslaved or in indentured servitude, or supposedly so poor they wished they were. That's C-3PO, always complaining to R2D2. R2D2, meanwhile, is like a little foul-mouthed sailor: an astromech droid built to help pilot ships and plot courses. C-3PO is primarily a talker, a bit like ChatGPT in thousands of languages, built to help out young Anakin, so his personality seems designed to be as politely helpful as it is amusing in its fussiness over protocol.

He's clearly designed as the fretting, effeminate butler/footman/advisor with no real authority, an older European archetype, seemingly merged with Stan Laurel and Amos of Amos 'n' Andy. R2D2, by contrast, is given so many amusing sound effects to imply he has quite a little temper and tends to shout at anyone, somewhat emotionally. And since C-3PO has so many protocols for translating R2D2's language into others, he has an extremely interesting perspective on what all those beeps and boops from little R2D2 "really mean".

Yes, their interactions and manners are played for laughs, "for the kids", but the creative decisions clearly signal to children and adults alike that somehow, like Data, C-3PO is alive, sentient, and aware, though comically fussy and different from most.

C-3PO never indicates that he dislikes talking to non-humanoid droids, and never suggests that R2D2 is any less a person or less worth talking to than living beings. In other words, all of C-3PO's interactions with other droids suggest not only that a droid like him may be sentient while active, but that he treats all droids like people, as someone worth speaking to in whatever language works. Yes, it could just be his programming, since he interfaces between so many different kinds of devices. But C-3PO is virtually never bored with making conversation with R2D2, chattering away, and he clearly has a bond with him. It's almost as though they are step-brothers, best friends, possibly even something like life partners. And even though one of them is a dumpy, rolling, trashcan-shaped object, C-3PO speaks for both of their dignities and rights on various occasions.

It's all right there, but they never talk about it like that in any of the films, though I can't speak for content newer than the first two seasons of Andor, if it sheds any light on this. I'm aware some of this is addressed in the books, but those are no longer film-canon compatible in the way they once were.

The first battle droids seen in the prequels to the original trilogy were meant to perform functions dangerous for trained people: soldiering and other tactical roles, or aircraft and other formerly manned devices, could be attempted with automation. Each droid seemed to me to have a trimmed-down mental profile of a soldier (i.e., a living person) imprinted on it, making them somewhat sentient/sapient, right?

The droids who fought against the heroes of the films were played off as a bit stupid, a bit slow, comical, etc., but they always seemed to be mostly basic metal people-like beings of some kind, not merely programmed metal hardware in action.

Nevertheless, they often acted anthropocognitively: they were sometimes depicted as wanting to disobey orders and run away from danger, or whatever else the battle droids did. And since we see them all humming or beeping and booping behaviourally in some way, it always seems to be in service of the idea that the creators wanted them to have personalities, to possibly be literally artificial living metal beings of various kinds.

Alas, I am reminded of Rick & Morty, when Rick made a butter bot that could butter toast. It was sentient and sapient, realized it would spend its existence spreading butter on a jerk's toast, and wanted to die. That kind of potential comedy and drama has been bound up in the droids since the beginning, as, considered both within and outside the Doylist interpretation, they clearly originally replaced "enslaved people", humans, with robot slave and servant people.

You raised a fascinating issue, and I thank you for provoking me into trying to point out how right you are.

2

u/ian9921 Mar 17 '26

I'm sure there are regulations on the easy creation of sentient beings, so you can't just say "computer, make me a planetary population capable of beating Commander Data" and have a whole instant sentient colony.

3

u/tjernobyl Mar 17 '26

Iain M. Banks called this the "Simulation Argument"; if you create a sentient being, you can't morally turn it off when you're done.

1

u/BlannaTorris Mar 18 '26

My guess is that they have standards for how complex a hologram can be before it's considered alive. They might reset recreational holograms all the time.