Discussion about this post

Eric-Navigator:

Leave a comment below if you have something to share!

Tumithak of the Corridors:

Eric, thanks for inviting me to your essay. I do have some issues with it.

First and foremost, you’re assigning agency to a tool. That’s a category error and a slippery slope. Humans say things like “the car doesn’t want to start,” but everyone understands the car has no will. I bristled at the opening for that reason. AI isn’t conscious. It’s a tool. You can drop it in a convincing body and make it say “I’m alive,” but that doesn’t show a real inner state.

To even begin to have the conversation you want, you’d need models at Tumithak Type 3. Current systems are still Type 2.

On raising AI like children: there's a reason current LLMs are trained the way they are, and it's hardware. The hardware gap is the whole ballgame. Today's models exist because we pretrain at industrial scale. That isn't a stylistic choice; it's a limitation of the substrate.

People have drawn elegant software sketches that mimic brain structures. On paper they look great. But running software that acts like a brain requires hardware that functions like a brain, and a brain isn't a computer. It's wet, self-organizing, massively parallel, and constantly rewiring. It runs on noisy signals and adapts in ways current chips can't emulate. Our machines are deterministic, clock-driven, and static by comparison. Neurons talk in spikes and chemicals, with feedback loops and a lot that still looks like chaos because we don't fully understand it. Brains learn by changing their structure and modulating themselves chemically. As a result, a toddler can learn language from a few thousand hours of exposure, while an LLM burns through terabytes of text.
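To make the spiking point concrete, here's a toy leaky integrate-and-fire neuron, the textbook simplification of event-based signaling. Every number in it (time constant, threshold, input current) is an illustrative choice, not a claim about real neural constants:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy sketch of
# spike-based signaling, not a model of real neural dynamics.
# All parameters (tau, thresholds, input current) are illustrative.

def simulate_lif(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Integrate input over discrete time steps.

    Returns the list of time steps at which the neuron fired.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is pushed up by the input current.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

# A constant drive produces a regular spike train.
spike_times = simulate_lif([0.1] * 200)
print(spike_times)
```

Note what's missing from even this caricature: no dendrites, no chemical modulation, no structural rewiring. It communicates in discrete events rather than a synchronous clock, which is exactly the mismatch with conventional silicon.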

Neuromorphic boards do exist and they're useful in labs, but they're research tools with narrow scope. Loihi, TrueNorth, SpiNNaker, and friends haven't broken out because, until silicon natively supports dendrites, axons, and spike-timing plasticity, "brain-like" software won't scale to anything like human learning.
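For readers unfamiliar with spike-timing plasticity: the standard pair-based STDP rule strengthens a synapse when the presynaptic spike precedes the postsynaptic one and weakens it otherwise, with an exponential falloff in the timing gap. This is a textbook sketch with illustrative constants, not how any of those chips implements it:

```python
# Toy pair-based STDP (spike-timing-dependent plasticity) update.
# a_plus, a_minus, and tau are illustrative constants.
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post fired first (or simultaneous): depression
        return -a_minus * math.exp(dt / tau)

print(stdp_delta_w(10, 15))  # pre 5 ms before post -> positive change
print(stdp_delta_w(15, 10))  # post 5 ms before pre -> negative change
```

On conventional hardware this ends up emulated in software, paying for every exponential and every pairwise comparison; the neuromorphic pitch is that the physics of the substrate does this update for free.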

One last thing on credibility. Writing under a pen name is fine. Using a pen name while leaning on elite credentials is a mismatch. Either the degree is verifiable, which compromises anonymity, or it’s unverifiable, which makes it decoration. I could claim nine doctorates from Harvard, Oxford, and the University of Idaho under a pseudonym and no one could check. You see the problem. Pick one: pen name or credentials. Let the argument do the heavy lifting.
