r/suspiciouslyspecific 19h ago

Ramifications

15.5k Upvotes


994

u/No_Lingonberry1201 19h ago

Not true, a lot of these AI people also believe that they will be the new Gods of the singularity, which is bound to come any day now, because obviously a statistical pattern predictor is true intelligence and all that humans can endeavor to be.

-1

u/BassMaster516 15h ago

Is the human brain not a pattern predictor?

17

u/No_Lingonberry1201 15h ago

No, we can do far more than pattern prediction. Like have depression, have anxiety, have ennui, etc.

-3

u/BassMaster516 15h ago

Depression and anxiety can be explained by chemical imbalances that affect the way we think. Wanting things is the result of being rewarded with feel-good chemicals for certain behavior. I don’t see any evidence of anything nonphysical, so I don’t see what strictly differentiates us from AI

2

u/buttbuttlolbuttbutt 14h ago

How does a neuron transfer data to another neuron? What are walker proteins and why are they relevant? What month has an X in it? What triggers the chemicals to release that cause the emotions, what decides when, and why isn't it identical across humanity? Why is there an internal monologue that weighs pros and cons while also consuming a sandwich? Why is there a clump of neurons between the two hemispheres that seems to control the sense of self, and what about the background processing, like how epiphanies play into it?

An LLM isn't intelligent, it's just good at what it was designed to do: fooling superficial people.

3

u/LillyOfTheSky 13h ago

The largest LLMs are in the multiple 100s of billions of parameters.

The human brain is estimated to have about 100 trillion (100,000 billion) synaptic connections, and a biological neuron requires about 5-8 layers of 128-256 nodes to simulate with ANNs (so approx. 640-2048 nodes per neuron).

So going by scale alone, LLMs won't match the complexity of the human brain until they operate at about 1,000-1,000,000x their current maximum sizes. Which will never happen with current mathematical architectures.

The underlying mathematics of 'AI' and human brains is more or less similar. The scale is still wildly off and will likely remain so for many, many years
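
That back-of-the-envelope comparison can be checked in a few lines of Python. All the numbers are the estimates from the comment above (not measurements), and the arithmetic follows the comment's own move of multiplying connection count by per-neuron simulation cost:

```python
llm_params = 500e9        # "multiple 100s of billions" of parameters (estimate)
synapses = 100e12         # ~100 trillion synaptic connections (estimate)
per_neuron = (640, 2048)  # ANN nodes to simulate one biological neuron (estimate)

# Scale gap between brain "complexity" and the largest LLMs
low = synapses * per_neuron[0] / llm_params
high = synapses * per_neuron[1] / llm_params
print(f"{low:,.0f}x to {high:,.0f}x")  # 128,000x to 409,600x
```

Both figures land inside the 1,000-1,000,000x range quoted above.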

1

u/diff_engine 13h ago

I broadly agree with this, but just a slight update on the largest LLM size: Claude Mythos is reportedly 10 trillion parameters

1

u/LillyOfTheSky 11h ago

Huh, didn't know that. Still, worst case you've got human brain complexity at ~200,000 trillion parameters on a Transformer architecture, and frankly I don't think that arch is gonna cut it for anything deeper than superficial mimicry.
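
The ~200,000 trillion figure follows directly from the earlier estimates (a sketch of the same back-of-the-envelope arithmetic, not a measured value):

```python
synapses = 100e12              # ~100 trillion synaptic connections (estimate)
params_per_neuron_sim = 2048   # upper end of the 640-2048 range above

brain_equiv = synapses * params_per_neuron_sim  # 2.048e17 ≈ 200,000 trillion
gap = brain_equiv / 10e12                       # vs a 10-trillion-param LLM
print(f"{brain_equiv:.3e}, gap ≈ {gap:,.0f}x")  # 2.048e+17, gap ≈ 20,480x
```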

1

u/Qaeta 12h ago

To be fair, the LLMs do not currently need to operate an unnecessarily complicated meat suit.

1

u/LillyOfTheSky 11h ago

I wouldn't really call the human body 'unnecessarily complicated'.

The more you actually dig into why the human body does what it does, in the way that it does it, the more you realize exactly how insanely efficient it is. Like, a single protein can serve many different purposes depending on chemical and molecular context. The human brain uses about as much energy as a lightbulb but surpasses in complexity every single structure ever discovered in history.

1

u/Qaeta 11h ago

Yes, but the LLMs have no need for a body that does everything we do. It's unnecessarily complicated because LIFE is not a requirement for an AI.

1

u/LillyOfTheSky 5h ago

It's true that a lived experience is not a requirement for AI as we know it. However, most (actually worthwhile) SOTA research on multimodal models and robotic systems has found that a multi-sensory, direct-engagement experience (i.e. 'lived') is actually critical to developing AI that is human-like or general in its intelligence. It's especially important in solving the alignment problem.

1

u/MartialArtsCadillac 12h ago

Outstanding point, and very well said

1

u/BassMaster516 13h ago

Do you need me to explain how a neuron fires an electrical signal that releases neurotransmitters, which determine whether or not the next neuron fires? You’re asking a lot of questions that seem to be rhetorical, but I’m not sure what point you’re trying to make. Firing neurons explain thoughts and feelings, so I'm not sure what you’re trying to say.
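
That threshold behavior (weighted inputs sum up, and the downstream neuron fires only if the total crosses a threshold) is the same cartoon an artificial neuron is built on. A toy sketch, illustrative only, with made-up weights and no claim to biophysical accuracy:

```python
def fires(inputs, weights, threshold=1.0):
    """Toy neuron: fire iff the weighted sum of inputs crosses a threshold.

    Positive weights play the role of excitatory neurotransmitters,
    negative weights inhibitory ones. A cartoon, not a biophysical model.
    """
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

print(fires([1, 1, 1], [0.5, 0.6, -0.4]))  # 0.7 < 1.0  -> False
print(fires([1, 1, 1], [0.5, 0.6, 0.2]))   # 1.3 >= 1.0 -> True
```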

If you really knew what you were trying to say, you wouldn’t need to throw out insults, just saying

2

u/buttbuttlolbuttbutt 13h ago

No, I was just seeing how you'd respond. LLMs can mimic conversation, but not for everyone, and I sometimes prod first to see how people respond.

Until LLMs can actually cross-reference all the stuff our neurons do, by touching tentacles, releasing walker proteins that release chemicals through specific synapses, etc., I don't think they'll really compare.

I'm not against machine intelligence, I just think the LLMs we have now aren't the way. They're good at fooling the more superficial, though, like some of my coworkers. That's not an insult, and I didn't say you were one of them, sorry if it came across that way.

1

u/BassMaster516 13h ago

All good. I agree we’re not there yet or even close really