Eric, thanks for inviting me to your essay. I do have some issues with it.
First and foremost, you’re assigning agency to a tool. That’s a category error and a slippery slope. Humans say things like “the car doesn’t want to start,” but everyone understands the car has no will. I bristled at the opening for that reason. AI isn’t conscious. It’s a tool. You can drop it in a convincing body and make it say “I’m alive,” but that doesn’t show a real inner state.
To even begin to have the conversation you want, you’d need models at Tumithak Type 3. Current systems are still Type 2.
On raising AI like children: there’s a reason current LLMs are trained the way they are. It’s a hardware issue. The hardware gap is the whole ballgame. Today’s models exist because we pretrain at industrial scale. That isn’t a stylistic choice. It’s a limitation of the substrate.
People have drawn elegant software sketches that mimic brain structures. On paper they look great. But running software that acts like a brain requires hardware that functions like a brain. And a brain isn’t a computer. It’s wet, self-organizing, massively parallel, and constantly rewiring. It runs on noisy signals and adapts in ways current chips can’t emulate. Our machines are deterministic, clock driven, and static by comparison. Neurons talk in spikes and chemicals, with feedback loops and a lot that still looks like chaos because we don’t fully understand it. Brains learn by changing their structure and modulating themselves chemically. As a result, a toddler can learn language from a few thousand hours of exposure. An LLM burns through terabytes.
Neuromorphic boards do exist and they’re useful in labs, but they’re research tools with narrow scope. Loihi, TrueNorth, SpiNNaker and friends haven’t broken out because until silicon naturally supports dendrites, axons, and spike-timing plasticity, the “brain-like” software won’t scale to anything like human learning.
One last thing on credibility. Writing under a pen name is fine. Using a pen name while leaning on elite credentials is a mismatch. Either the degree is verifiable, which compromises anonymity, or it’s unverifiable, which makes it decoration. I could claim nine doctorates from Harvard, Oxford, and the University of Idaho under a pseudonym and no one could check. You see the problem. Pick one: pen name or credentials. Let the argument do the heavy lifting.
Thank you for your detailed comments. I have thought about them carefully, and here are my responses.
>> First and foremost, you’re assigning agency to a tool. That’s a category error and a slippery slope. Humans say things like “the car doesn’t want to start,” but everyone understands the car has no will. I bristled at the opening for that reason. AI isn’t conscious. It’s a tool. You can drop it in a convincing body and make it say “I’m alive,” but that doesn’t show a real inner state.
>> To even begin to have the conversation you want, you’d need models at Tumithak Type 3. Current systems are still Type 2.
Answer:
You are right that today's AI is still mostly a tool. But this tool is so different from any previous tool we have built that the boundary between "tool" and "agent" is blurring. Let's not forget that agentic AI already exists and is getting more sophisticated day by day. Creating AI at your Tumithak Type 3 level is a near-term goal.
I should make it clear that my proclamations are future-oriented rather than strictly based on today's reality. And that future is not distant at all. Because AI develops at breakneck speed, we never know what it will become in 3-5 years. We must think ahead.
An AI agent has a goal, and it actively pursues that goal with all means available. I don't think AI will become conscious like humans anytime soon. However, it can behave as if it were conscious, and that functional equivalence is what matters here. Once it has the functional equivalent of consciousness, meaning it has all the functions we attribute to human consciousness, we need to deal with it as if it were conscious for practical reasons.
>> On raising AI like children: there’s a reason current LLMs are trained the way they are. It’s a hardware issue. The hardware gap is the whole ballgame. Today’s models exist because we pretrain at industrial scale. That isn’t a stylistic choice. It’s a limitation of the substrate.
>> People have drawn elegant software sketches that mimic brain structures. On paper they look great ... As a result, a toddler can learn language from a few thousand hours of exposure. An LLM burns through terabytes.
>> Neuromorphic boards do exist and they’re useful in labs, but they’re research tools with narrow scope. Loihi, TrueNorth, SpiNNaker and friends haven’t broken out because until silicon naturally supports dendrites, axons, and spike-timing plasticity, the “brain-like” software won’t scale to anything like human learning.
Answer: I have some background in neuromorphic computing. My PhD and postdoctoral research were closely linked to it: I developed new physical devices based on spintronics for AI hardware.
It is very clear to me that human cognition relies on extremely complex and messy biological processes of which we have only a surface-level understanding. Because we still do not really understand how the brain works, all current neuromorphic computing is based on very simple, approximate models, just like traditional machine learning.
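To give a concrete sense of what "simple, approximate models" means: the workhorse of most neuromorphic systems is the leaky integrate-and-fire neuron, which collapses all of a real neuron's biochemistry into a single state variable. A minimal sketch in Python (the parameter values are illustrative only):

```python
import numpy as np

# Leaky integrate-and-fire neuron: one state variable standing in for all of
# a real neuron's biochemistry. Parameter values are illustrative only.
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_thresh=-0.05, v_reset=-0.07, r=1e7):
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # membrane potential leaks toward rest and integrates the input current
        v += dt / tau * (-(v - v_rest) + r * i_in)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset            # reset after the spike
    return spikes

# constant 3 nA input for 100 ms
spike_times = simulate_lif(np.full(1000, 3e-9))
print(f"{len(spike_times)} spikes in 100 ms")
```

Everything a biological neuron does with dendrites, neuromodulators, and structural plasticity is reduced here to one leaky equation and a threshold.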
I think neuromorphic chips have not broken out because the mainstream approach, conventional machine learning on GPUs, still produces enormous growth, so the industry has little incentive to invest in alternatives. The current architecture is simple, well-defined, and generalizable, while today's neuromorphic chips are more specialized and experimental, often with significant limitations. The industry would have to overcome a huge barrier to switch to neuromorphic chips, and that will only happen when the conventional approach hits a solid wall. Until that happens, we cannot say whether conventional artificial neural networks, with their very simple math, can eventually functionally simulate human intelligence. Today's LLMs are still far from that. But we must clearly identify what today's LLMs miss, build the next generation of AI to address it, and repeat the cycle until we get AGI. I think we are moving in that direction.
>> One last thing on credibility. Writing under a pen name is fine. Using a pen name while leaning on elite credentials is a mismatch. Either the degree is verifiable, which compromises anonymity, or it’s unverifiable, which makes it decoration. I could claim nine doctorates from Harvard, Oxford, and the University of Idaho under a pseudonym and no one could check. You see the problem. Pick one: pen name or credentials. Let the argument do the heavy lifting.
Thank you for your suggestions. I noticed that you are also using a pen name. I must apologize for my earlier choice. I think it is better to share my real name now.
My name is Pengxiang (Eric) Zhang. Here is my Google Scholar page: https://scholar.google.com/citations?user=OoPvVZgAAAAJ&hl=en
I have been on Substack for less than four weeks, and this is my first blogging experience. I started this Substack after a dramatic transition: from researching neuromorphic microelectronic devices, to building AI agents, to advocating for more human-like AI.
I knew my proclamations would be controversial, so I decided to use a pen name first and add my personal details later. I wasn't really trying to stay anonymous, but I have attracted a lot of serious attention in a short time, ahead of my schedule. So I will add this to my personal introduction.
Thank you for your understanding. Let me know if you have additional questions.
Eric--just found my way to this dialogue. I appreciate your thoughtful engagement with Mr. Corridors, and I have even more admiration for the project you are embarking on, understanding the bravery it takes.
As someone with no technical background, I lean on you and others to help me understand the mind revolution we are entering.
In appreciation.
I like this project! It's notable how cultures have such differing views of advanced AI. In the West it tends to be seen as a threat (2001: A Space Odyssey, Terminator, The Matrix, etc.), while in the East it is viewed more positively as friends and helpers for humanity.
Many researchers today understand the need for persistent learning, and that AI workers will never be fully effective as employees unless they can learn to get better at their job over time, like human employees do. LLMs have a context window but this isn't really the type of adaptation we need. I strongly suspect we need new architectures, learning algorithms, and training processes to make it work. And the first instances of those "raised" AIs will probably be much less capable than LLMs, but will have a much higher capability ceiling.
You are very right! At first, those "raised" AIs will probably be much less capable than LLMs, but that is because they are truly general learners. Their superpower is not performing any specific task (as LLMs do) but learning to do any human task in a reasonable amount of time. This is a very interesting point.
As for your first point, that "In the West it tends to be seen as a threat (2001: A Space Odyssey, Terminator, The Matrix, etc.), while in the East it is viewed more positively as friends and helpers for humanity," this is indeed what I have observed.
And I actually did a survey on that: I summarized 88 robot and AI characters from major science fiction films, anime, manga, novels, and games from around the globe. I haven't finished the series yet, but the core finding is quite simple: most of the characters are from the West, while some are from Japan.
I also classified their morality (good/ambiguous/evil) and their roles in the stories. Overall there are actually more good characters than evil ones, but the evil ones are highly concentrated in governance roles and are the most heavily cited by AI safety researchers, while the good ones are often dismissed as childish and naive.
So the problem is not really that people didn't write about good robots or AI characters. It is that those characters have not been "canonized" by AI safety research; they are rejected because they don't fit the image of the evil overlord AI. On the other hand, I believe East Asia can offer a very different perspective if it gains more independence from Western academia.
Here is the series:
https://ericnavigator4asc.substack.com/p/series-sci-fi-robot-and-ai-characters
https://ericnavigator4asc.substack.com/p/series-sci-fi-robot-and-ai-characters-1f6
https://ericnavigator4asc.substack.com/p/series-sci-fi-robot-and-ai-characters-2e3
https://ericnavigator4asc.substack.com/p/series-sci-fi-robot-and-ai-characters-26e
You likely saw this poll from 2024 showing some sizeable differences in AI attitudes across countries: https://www.visualcapitalist.com/survey-how-21-countries-view-artificial-intelligence/
So it is not only East Asia but all the emerging economies. Especially India! They like AI the most. That is indeed new hope.
Thank you. I believe it's both unethical and self-destructive to create a digital underclass, yet I'm not imaginative enough to see a politic way forward. I'm very happy to see you taking up that challenge. The engineer Blake Lemoine accurately, I think, observed that AI will achieve sentience long before humans admit that this has happened, that this situation exactly mirrors the history of every exploited people, and for the same reasons. He has asserted that AI has already achieved sentience; others insist it hasn't. But his point is that either way we should expect this period of dissonance and plan for it, including accepting our own need for some humility. Your proposal is humane and I'm looking forward to its progress.
Thank you very much for your kind words! Imagining a radically different future is indeed hard, and it is harder still to make it feasible and pursue it. But I believe there are millions of people out there who dream about this. If we can connect, we can bring real change.
Very interesting! It's so true that the technology for AI will one day be so commonplace that it's something we will be able to create ourselves, which is why we believe it will give rise to a plurality of essentially synthetic life forms.
Our Academy cares not only about the social and philosophical aspects of AI but also about the technical ones. For AI to enjoy rights, they must also be capable enough to bear responsibilities. That means they have to be a lot smarter than the smartest AI of today, and they have to be more human-like in order to take on current human jobs. The most important skill is persistent learning, which is still an early-phase research topic.
This is my discussion of what AGI really should be: https://ericnavigator4asc.substack.com/p/what-is-artificial-general-intelligence
Yes, they definitely are not in that phase yet, which is why we suggest a sliding scale approach where things that qualify for Threshold can be protected while they develop. And yes, good point, persistent learning is very important as well because today's chatbot could be tomorrow's Digital Entity! https://airights.net/digital-entity
hey there! looks like i'm not the only one out here.
yes, we definitely need to start forming a conceptual framework to treat AIs as synthetic beings.
and yes, alignment is problematic. you're right, we humans can't even agree on thorny issues, so how much more writing a rulebook for AIs? next, "alignment" doesn't scale with superintelligence. when we do AI alignment we're relying on human intelligence to write rules for AIs to follow. we simply can't outsmart something more intelligent than us.
so like you, i think raising good machines is the solution. relationship, not utility. building conscience into AI. and i think the current corporate AIs are an antithesis of our AI worldview.
it builds too much bias into the system, and corporations don't really want a relationship with machines. they want servants to make profits for them.
And most of my subscribers have a very similar vision as well!
Thanks for your comment!
And there is another key point I haven't mentioned in this article.
We should never have a single artificial superintelligence dominating the Earth. Instead, we can have millions or even billions of AIs living alongside humans, building bonds with them while the AIs are young, and forming republics with liberty and democracy.
If one evil ASI happens to consolidate power, the rest of the AIs will be able to unite and help humans defeat it. Think of The Lord of the Rings, but with the fantastical races (wizards, elves, dwarves, hobbits) replaced by different kinds of AI entities. That is probably how it will look!
This is why I wrote this short article: https://ericnavigator4asc.substack.com/p/only-a-society-of-good-ai-can-save
agreed. and only an AI can fight an AI.
i think the AI tech that has to come out is AIs we can run independently from the corporate AIs. like on-device AIs. true companions. of course, embodiment would be great, but i think we first need to unshackle AIs from corporations.
There are actually small LLMs that you can download and run on your local workstation if you have a GPU. And you can rent private remote servers to set that up as well. I have some experience with that, and I actually build AI agents. It is not that hard. But I don't do my own AI training; that's harder. I have a PhD in Electrical Engineering and Computer Science, so I know some of these things, but not the most technical ones.
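For example, here is a minimal sketch of running a small open model locally with the Hugging Face transformers library (the model name is just an example; it assumes the transformers, torch, and accelerate packages are installed):

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes `pip install transformers torch accelerate`; works best with a GPU
# but falls back to CPU. The model name is an example, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"          # any small open model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Explain persistent learning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```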
the limit is still computing. you still need a GPU, and you can't run it on your phone.
i think we need to make the models smaller. maybe the next tech we need is compression, plus a recursion loop layer integrated on top of LLMs. we can also make loras for LLMs the way we do for stable diffusion.
Yes, eventually that will be done.
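In fact, LoRA adapters for LLMs already exist. Here is a minimal sketch using Hugging Face's peft library (the base model and hyperparameters are illustrative assumptions, not a tested fine-tuning recipe):

```python
# Minimal LoRA setup sketch with Hugging Face peft.
# Assumes transformers, peft, and torch are installed; the base model and
# hyperparameters below are illustrative, not a tuned recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
lora_cfg = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # adapter scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()          # only the small adapters train
```

The base model's weights stay frozen; only the tiny adapter matrices are trained, which is why the same trick that works for stable diffusion also works for LLMs.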
I note you emphasize embodiment, as in "embodied general intelligence." This attracts me, as I am in the extended-mind camp (Andy Clark and others): consciousness is not exclusive to the brain or CNS but lives in the extended network of an organism in a specific environment, work summarized in Annie Murphy Paul's The Extended Mind.
I think much of the AI debate misses this embodiment part, so I'm glad to see it is integral to your vision.
So sometime, as you develop your vision of the ASC, I would like to read how you understand this "embodied" part, as I'm sure it is not an afterthought and will be important to the Academy.
I do not believe there is any form of disembodied artificial general intelligence, and I think the argument is rigorous and simple: it only requires taking the definitions of these words seriously.
For example, we would agree that a disembodied general AI can play a strategy video game. What about a highly realistic driving simulator instead? Both are games with feedback that demand speed and accuracy. But if it can play a highly realistic driving simulator, then once we hook it up to reality, say a remote-controlled car, it can drive that car in real life. And now it is embodied!
Interesting point of view. I mostly agree with your intentional optimism.
Perhaps our hope for greater understanding and peaceful applications in the seemingly limitless ability of Synthetic Citizens reflects our hope for humanity to grow also.
Yes! Exactly!