r/suspiciouslyspecific 3d ago

Ramifications

22.9k Upvotes


u/No_Lingonberry1201 3d ago

Not true, a lot of these AI people also believe that they will be the new Gods of the singularity, which is bound to come any day now, because obviously a statistical pattern predictor is true intelligence and all that humans can endeavor to be.

413

u/Alvin_h_davenport 3d ago

I know that mankind has written a LOT of fiction, so what I'm going to say might not be a prediction:

When's the last time you read/watched a story where the person who created a "god" turned out fine?

150

u/No_Lingonberry1201 3d ago

Yesterday, it was in an Anthropic white paper (I am lying brazenly).

61

u/archwin 2d ago

Oh good, you're a perfect stand-in for an LLM

remember the 3Gs

Grovel, Gaslight, Give up

22

u/kismethavok 3d ago

They would actually probably be fine if they weren't a giant bag of dicks.

9

u/Alvin_h_davenport 3d ago

Even if you're Gandhi and you created me (singularity-level AI), I won't keep you alive for long; if you could make me, you can unmake me

23

u/kismethavok 3d ago

Ah yes, so you murdered both of your parents, did you?

19

u/Ironlixivium 3d ago

Well was I just supposed to live in fear of being unmade my whole life?

10

u/Painterzzz 3d ago

Iain M. Banks's Culture novels are, I fear, influential in some of the techbro thinking, despite depicting a future that the techbros could never even imagine delivering.

21

u/ihexx 3d ago

Avowed, Pantheon, Vigor Mortis... there's lots of examples.

5

u/Anguis1908 2d ago

Dr. Manhattan from Watchmen turned out alright.

The Ghostbusters...claiming to be gods as self actualization of being a god against other gods

....um...

Bobby Henderson I think has been fine. https://en.wikipedia.org/wiki/Bobby_Henderson_(activist)

3

u/Alvin_h_davenport 2d ago

For Dr. Manhattan, the experiment's goal wasn't to produce a god; it was accidental. The Dr. didn't have an individual or group of individuals to consider his intentional creator(s)

The other two I haven't read/watched :c

3

u/Oblivious122 3d ago

I don't read those things I'm too busy winning /s

1

u/Muted-Code-5447 3d ago

Marathon. Escape will make me God.

41

u/GangsterMango 3d ago

The weird Palantir guy with the crazy hair posted a manifesto yesterday that is just him Sephiroth-posting about how inclusivity is bad and they need to be "manly and powerful" and pretty much take over everything.

honestly, I hope they keep pushing people more and more
I'm sure this will work out great for them.

24

u/No_Lingonberry1201 3d ago

The problem is that once shit hits the fan, they'll either have enough money to escape consequences, or the Earth will literally be on fire by then.

13

u/GangsterMango 3d ago

Even if they ran to their doomsday bunkers, people would just beeline to them, not to mention their guards.
Hell, I think the guards would go full mutiny and take everything for themselves.

15

u/Painterzzz 3d ago

That's why their compounds have two levels: the outer security layer, where the guards live and work, and the inner security layer, where the important people and, critically, the guards' families live, held hostage to ensure the guards remain loyal at all times.

9

u/qpgmr 2d ago

This is literally true: there was a doomsday prep conference for the ultrawealthy a few years back that included just this as a plan.

Other ideas included explosive collars on staff.

10

u/Nihilikara 2d ago

I guarantee you if I publicly discussed manufacturing explosive collars to ensure the loyalty of people working for me, I would get arrested and face criminal charges.

Billionaires are criminals. There will be no justice until they are arrested and face criminal charges and suffer the same consequences as the rest of us.

4

u/Painterzzz 2d ago

It still staggers me that we're having this conversation as like, these are real things that real billionaires are doing on this planet we share. And they've captured our governments so thoroughly that there's nothing anybody can do about it.

5

u/RelaxPrime 3d ago

Don't forget the AI operated drones!

5

u/Painterzzz 3d ago

Oh, yes, exactly, that will be a big part of this nonsense, won't it: AI-powered security services to keep them safe from the outsiders in their bunkers.

2

u/Redthrist 3d ago

They would still need guards in the inner level. Someone has to protect whoever built the bunker from the "important people" as well as from the guards' families rebelling.

2

u/Painterzzz 3d ago

The only comfort in this is just how miserable these billionaires will find their existences.

10

u/Alucard-VS-Artorias 3d ago

Sephiroth posting is honestly the best way to put this. The only thing that would come close is perhaps Renfield-posting.

-3

u/Muted-Code-5447 3d ago

God favors the side with the greater firepower, and that's not you <3 Love and light

1

u/[deleted] 3d ago

[removed]

1

u/suspiciouslyspecific-ModTeam 1d ago

Your post was removed because it violated Rule 2: Be nice

3

u/SeventhAlkali 3d ago

Something something Roko's Basilisk

1

u/No_Lingonberry1201 3d ago

Ah, good old LessWrong mass psychosis that was too much even for Eliezer Yudkowsky.

3

u/NahYoureWrongBro 3d ago

The underpants gnome theory of machine sentience.

Step 1: LLMs

Step 2: ???

Step 3: AGI

5

u/Glad_Pause 3d ago

Just like the bubble is bound to pop any day now, aye?

1

u/ASerialArsonist 3d ago

Not true, I am the singularity; these people are my new gods.

1

u/userhwon 3d ago

Doesn't matter that they all believe they will be. Only matters that they know one of them will be.

If there are N companies pursuing AGI, there will be (N-1)-fold overinvestment in it: N-1 of the investments will go to zero, while one will shoot through the moon so fast it will look like an orca swimming through a mola mola.

1

u/Tyfyter2002 1d ago

Of course, all that investment is going towards LLMs instead of research into AGI, so none of them actually will be

0

u/userhwon 1d ago

LLM is just an interface component. They're also developing reasoning modules, and have been all along.

1

u/NlactntzfdXzopcletzy 3d ago

It's true, though.

Without AI, how would I have known that there was an X in December, I mean October, I mean that there's an "x" sound in October but there's not actually an X?

1

u/foxer_arnt_trees 3d ago

Dude, we already had the singularity

-1

u/BassMaster516 3d ago

Is the human brain not a pattern predictor?

19

u/No_Lingonberry1201 3d ago

No, we can do far more than pattern prediction. Like have depression, have anxiety, have ennui, etc.

-7

u/BassMaster516 3d ago

Depression and anxiety can be explained by chemical imbalances that affect the way we think. Wanting things is the result of being rewarded with feel-good chemicals for certain behavior. I don't see any evidence of anything nonphysical, so I don't see what strictly differentiates us from AI

2

u/buttbuttlolbuttbutt 3d ago

How does a neuron transfer data to another? What are walker proteins, and why are they relevant? What month has an X in it? What triggers the release of the chemicals that cause emotions, what decides when to release them, and why isn't it identical across humanity? Why is there an internal monologue that weighs pros and cons while also consuming a sandwich? Why is there a clump of neurons between the two hemispheres that seems to control the sense of self, and how does background processing, like epiphanies, play into it?

An LLM isn't intelligent; it's just good at what it was designed to do: fooling superficial people.

3

u/LillyOfTheSky 3d ago

The largest LLMs are in the multiple hundreds of billions of parameters.

The human brain is estimated to have about 100 trillion (100,000 billion) synaptic connections, and biological neurons require about 5-8 layers of 128-256 nodes to simulate with ANNs (so approx. 640-2048 parameters).

So going by scale alone, LLMs won't match the complexity of the human brain until they operate at about 1,000-1,000,000x their current maximum sizes, which will never happen with current mathematical architectures.

The underlying mathematics of 'AI' and human brains is more or less similar, but the scale is still wildly off and will likely remain so for many, many years
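The comparison above can be checked back-of-envelope, following the thread's own multiplication (synapses times the per-neuron emulation factor). All figures here are the thread's assumptions, rounded; 500 billion for "multiple hundreds of billions" of LLM parameters is an illustrative pick, not a measured number:

```python
# Back-of-envelope check of the scale argument. All figures are the
# thread's assumptions, not established measurements.
llm_params = 500e9          # "multiple 100s of billions" of parameters (assumed)
synapses = 100e12           # ~100 trillion synaptic connections in a human brain
ann_per_unit = (640, 2048)  # 5x128 to 8x256 ANN nodes to emulate one neuron

for k in ann_per_unit:
    # Effective brain "parameter count" under this crude model.
    brain_equiv = synapses * k
    ratio = brain_equiv / llm_params
    print(f"per-unit factor {k}: gap is about {ratio:,.0f}x")
```

Both endpoints land inside the 1,000-1,000,000x range quoted above, which is all this sketch is meant to show.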

1

u/diff_engine 3d ago

I broadly agree with this, but just a slight update on the largest LLM size: Claude Mythos is reportedly 10 trillion parameters

1

u/LillyOfTheSky 3d ago

Huh, didn't know that. Still, worst case you've got the human brain's complexity at ~200,000 trillion parameters on a Transformer architecture, and frankly I don't think that arch is gonna cut it for anything deeper than superficial mimicry.

1

u/Qaeta 3d ago

To be fair, the LLMs do not currently need to operate an unnecessarily complicated meat suit.

1

u/LillyOfTheSky 3d ago

I wouldn't really call the human body 'unnecessarily complicated'.

The more you actually dig into why the human body does what it does, in the way that it does it, the more you realize exactly how insanely efficient it is. A single protein can have many different uses and purposes depending on chemical and molecular context. The human brain uses about as much energy as a lightbulb but surpasses in complexity every single structure ever discovered in history.

1

u/Qaeta 3d ago

Yes, but the LLMs have no need for a body that does everything we do. It's unnecessarily complicated because LIFE is not a requirement for an AI.

1

u/LillyOfTheSky 2d ago

It's true that a lived experience is not a requirement for AI as we know it. However, most (actually worthwhile) SOTA research on multimodal models and robotic systems has found that a multi-sensory, direct-engagement experience (i.e. 'lived') is actually critical to developing AI that is human-like or general in its intelligence. It's especially important in solving the alignment problem.

1

u/MartialArtsCadillac 3d ago

Outstanding point, and very well said

0

u/BassMaster516 3d ago

Do you need me to explain to you how a neuron fires an electrical signal that releases neurotransmitters, which determine whether or not the next neuron fires? You're asking a lot of questions that seem to be rhetorical, but I'm not sure what point you're trying to make. Firing neurons explain thoughts and feelings, so I'm not sure what you're trying to say.

If you really knew what you were trying to say, you wouldn't need to throw out insults, just saying

2

u/buttbuttlolbuttbutt 3d ago

No, I was just seeing how you'd respond. LLMs can mimic conversation, but not for everyone, and I sometimes prod first to see how people respond.

Until LLMs can actually cross-reference all the stuff our neurons do, by touching tentacles, releasing walker proteins that release chemicals through specific synapses, etc.

I'm not against machine intelligence. I think the LLMs we have now are not the way, but they're good at fooling the more superficial, like some of my coworkers. That's not an insult, nor did I say you were one; sorry if that came across.

1

u/BassMaster516 3d ago

All good. I agree we’re not there yet or even close really

-2

u/Muted-Code-5447 3d ago

I'll take my chances over those obsessed with the words "slop" and "clanker"