Artificial Intelligence thread

9dashline

Senior Member
Registered Member
Artificial General Intelligence (AGI) is the hypothetical ability of a machine to perform any intellectual task that a human can do. Large language models, such as OpenAI's GPT-3, have demonstrated impressive capabilities in natural language understanding and generation, leading some to speculate about their potential for achieving AGI. However, despite their notable achievements, large language models have inherent limitations that make them unlikely candidates for true AGI. This essay will discuss these limitations, including their lack of reasoning abilities, reliance on massive amounts of data, absence of common sense understanding, and ethical concerns.

A key aspect of AGI is the ability to engage in complex reasoning and problem-solving. While large language models can generate coherent text and answer questions based on the patterns they have learned from their training data, they lack the ability to engage in deductive or inductive reasoning that is essential for AGI. This is because these models primarily rely on pattern matching and statistical associations rather than understanding the underlying logic or principles behind the information they process. Consequently, they are prone to making errors when faced with novel situations or questions that require logical reasoning.
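To make that distinction concrete, here is a deliberately toy sketch (my own illustration with made-up data; real LLMs are far more capable than a bigram counter, but the training objective is still next-token prediction): a purely statistical completer can only echo patterns it has seen, while even a trivial rule engine can derive a conclusion about a subject that never appeared in its "training" text.

```python
# Toy contrast: statistical pattern completion vs. explicit deduction.
# Illustrative only; the corpus and facts are invented for this sketch.
from collections import Counter, defaultdict

corpus = "socrates is a man . all men are mortal .".split()

# 1) Statistical "model": predict the most frequent next word seen in training.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def complete(word):
    options = bigrams.get(word)
    return options.most_common(1)[0][0] if options else "<no pattern>"

print(complete("socrates"))  # 'is'           -- continues a memorised pattern
print(complete("plato"))     # '<no pattern>' -- novel token, nothing to match

# 2) Explicit deduction: apply "X is a man, all men are mortal => X is mortal",
#    even for a subject that never occurred in the text above.
facts = {"socrates": "man", "plato": "man"}
rules = {"man": "mortal"}

def is_mortal(name):
    return rules.get(facts.get(name)) == "mortal"

print(is_mortal("plato"))    # True -- derived by rule application, not statistics
```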

Large language models depend on vast amounts of data for their training, which presents several challenges for achieving AGI. First, the need for extensive data limits the applicability of these models in domains where data is scarce or expensive to acquire. Second, the sheer scale of computational resources required for training large models raises questions about their efficiency and ecological impact. In contrast, humans can learn and generalize from a relatively small number of examples, highlighting the difference between the learning mechanisms of large language models and true AGI.
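For a sense of the scale involved, a common back-of-the-envelope estimate is that training a dense transformer costs roughly 6 floating-point operations per parameter per training token. The sketch below applies that rule of thumb to ballpark figures of a GPT-3-sized model; the hardware numbers are assumptions chosen for illustration, not measurements of any real training run.

```python
# Back-of-the-envelope training cost using the common ~6 * params * tokens
# FLOP approximation for dense transformers. All inputs are illustrative
# assumptions, not the published specs of GPT-3 or any other model.
def training_flops(params, tokens):
    return 6 * params * tokens

def gpu_days(flops, gpu_flops_per_sec=1e14, utilisation=0.4):
    # e.g. an accelerator sustaining ~100 TFLOP/s at 40% utilisation (assumed)
    return flops / (gpu_flops_per_sec * utilisation) / 86_400

params = 175e9   # GPT-3-sized parameter count
tokens = 300e9   # roughly the training-token scale reported for GPT-3
flops = training_flops(params, tokens)
print(f"~{flops:.2e} FLOPs, ~{gpu_days(flops):,.0f} single-GPU days")
# A human child, by contrast, never hears anything close to 300 billion words.
```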

Common sense understanding is a fundamental aspect of human intelligence, allowing us to make inferences and predictions about the world based on our background knowledge. However, large language models often lack this basic understanding. Despite being trained on vast amounts of text, these models still make mistakes that a human with common sense would not. This is partly because large language models learn from text data alone, which may not fully capture the richness of human experience and understanding. True AGI would require the integration of various types of knowledge, including visual, auditory, and tactile, as well as an understanding of the underlying structure of the world.

While large language models have undoubtedly advanced the field of AI and demonstrated impressive natural language capabilities, they fall short of achieving true AGI due to their lack of reasoning abilities, reliance on massive amounts of data, absence of common sense understanding, and ethical concerns. To reach AGI, researchers must explore alternative approaches that move beyond the limitations of current large language models, incorporating reasoning, efficient learning mechanisms, and a more comprehensive understanding of the world. Addressing the ethical challenges associated with AI development is also crucial to ensure that AGI benefits all of humanity and avoids causing harm.
 

9dashline

Senior Member
Registered Member
In reply to tacoburger, from the moderator-redirected thread:

-----

While I understand your points regarding the potential of large language models (LLMs) and the importance of neuron scaling, I believe that there are still limitations that prevent LLMs from achieving true AGI in their current form.

Firstly, I acknowledge the progress being made by companies like Google, OpenAI, and DeepMind in developing increasingly larger models. It is undeniable that these models have shown remarkable improvements in various tasks, and it is natural to assume that continued scaling will result in even more impressive performance. However, it is important to consider the diminishing returns that may come with further scaling. While GPT-4 may indeed be a significant step up from ChatGPT, it is not guaranteed that the same rate of improvement will be maintained as we continue to increase the model size.
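To illustrate what "diminishing returns" means here: the published scaling-law fits describe test loss as a power law in parameter count, so each further 10x in size buys a smaller absolute improvement than the previous one. The constants below are roughly the values reported in the original 2020 scaling-law work; treat them as indicative only, and the code as a sketch rather than a claim about GPT-4.

```python
# Sketch of power-law scaling: loss(N) = (Nc / N) ** alpha.
# alpha and Nc are approximately the fitted constants from the 2020
# scaling-law paper; treat them as illustrative, not authoritative.
alpha, Nc = 0.076, 8.8e13

def loss(n_params: float) -> float:
    return (Nc / n_params) ** alpha

previous = None
for n in (1e9, 1e10, 1e11, 1e12, 1e13):
    current = loss(n)
    delta = "" if previous is None else f"  (gain vs. 10x smaller: {previous - current:.3f})"
    print(f"{n:.0e} params -> loss {current:.3f}{delta}")
    previous = current
# Each 10x jump in parameters yields a smaller absolute drop in loss, which is
# what diminishing returns from pure scaling looks like.
```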

Regarding neuron count and synaptic connections, I agree that these factors are crucial in biological intelligence. However, the architecture and functioning of artificial neural networks, while inspired by biological brains, are still fundamentally different from their biological counterparts. It is not clear whether merely increasing the number of parameters in an LLM will necessarily lead to a more intelligent system, especially when considering aspects of intelligence like reasoning, understanding, and goal-driven behavior.

It is also worth mentioning that the focus on parameter scaling may distract from other important research directions. For example, investigating novel learning mechanisms, incorporating external knowledge bases, or developing architectures that are more biologically plausible could potentially lead to more efficient and capable models.

Regarding the Chinese AI industry, it is essential to recognize that different approaches and perspectives can contribute to the overall progress of the field. While some researchers and organizations might focus on scaling laws, others may prioritize alternative research directions. The diversity in approaches may ultimately lead to a more robust understanding of AGI and the development of more effective systems.

So, while the progress made by LLMs is undoubtedly impressive, it is crucial to maintain a balanced perspective on their potential for achieving AGI. Scaling laws may play a role in improving model performance, but it is also essential to consider other factors and approaches that may contribute to the development of true AGI. The path to AGI is likely to be a multifaceted endeavor, and considering different perspectives and research directions will be vital for its realization.
 

BlackWindMnt

Captain
Registered Member
I think the initial shock of GPT-3 compared to previous attempts is what got the hype train rolling; somehow, with GPT-4, there's less shock value, at least in my anecdotal observation.

I don't think GPT-6 with even trillions of parameters will get you a better tool. It's just an order of magnitude more garbage being fed to the machine.

What will they do at GPT-8 when they have fed the machine all of collective human data? Will they claim it's a dead end because there's no more data to be exploited? It seems like a classic way to get more investor money: "please give us more money, we just need a trillion more parameters to achieve proto-AGI". This just sounds like a method that scales badly.

This is just my observation on the GPT hype; I'm not an AI researcher or engineer, so take it with a grain of salt. I find object recognition, self-monitoring in machinery for predictive maintenance, AI-based magnetic field correction/prediction in fusion power, and so on far more interesting; they let one think about more possibilities and interesting futures.
 

tacoburger

Junior Member
Registered Member


LLMs already spontaneously develop emergent abilities that nobody trained them for, just by scaling up the parameter count. Why do you have to scale a model to a certain size to get a particular capability? Nobody knows. It's an emergent property, and emergent systems like that are hard to predict or study. Human intelligence is an emergent system born from a hundred billion neurons all talking to one another.

Just look at humans: we have the same meat brains as all other animals, so why are we smart and self-aware? Our brains aren't that different from those of the other great apes; we have the same neurons, neurochemicals, synaptic connections, general brain structure, and so on. There's a popular theory that it's down to scale: that out of the roughly 100 billion neurons and over 100 trillion synaptic connections humans have, emergent properties like consciousness, internal logic, and a theory of mind simply arise, and that if the number of neurons or synaptic connections were an order of magnitude smaller, you wouldn't be sentient. So sentience and human-level intelligence come from massively scaling a great ape's brain up a few orders of magnitude, just like how a dog is going to be smarter and more self-aware than a worm with a brain of a few thousand neurons. Note: we still don't really understand intelligence or consciousness; we just tend to know that a bigger brain with more neurons = smarter, something that LLMs mirror perfectly.
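For reference, the rough whole-brain neuron counts behind that argument, as usually cited in the comparative-neuroscience literature (heavily rounded order-of-magnitude figures, so treat them as approximations rather than precise measurements):

```python
import math

# Approximate whole-brain neuron counts, rounded to order of magnitude.
# These are commonly cited ballpark values, not precise measurements.
neurons = {
    "C. elegans (worm)": 302,
    "fruit fly":         1e5,
    "mouse":             7e7,
    "chimpanzee":        3e10,
    "human":             9e10,
}
for species, n in neurons.items():
    print(f"{species:<18} ~10^{math.log10(n):4.1f} neurons")
```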

If this theory is true, then it would stand to reason that a massively scaled-up neural net could develop a true intelligence, a sense of self, and internal logic. It probably wouldn't think anywhere near the same way a human does, and how accurate it is would still depend on its training data, but it would be a kind of proto-AGI: smarter than a human and able to reason for itself, with an internal worldview and self-consistent logic. Work would still have to be done to make it a true AGI (letting it retrain itself however it likes, retain data/memories, and so on), but even a proto-AGI would be a massive game changer.

So yeah, it's entirely possible that larger versions of LLMs or other neural nets could get you AGI. There's already a biological precedent in our own heads for how simple scaling can turn a great ape's brain into a human brain. That's what DeepMind and OpenAI have been saying for years, and why I find the whole "CHATGPT IS JUST A CHATBOT HUR DUR" reaction premature. It's like asking why a one-month-old baby doesn't have a sense of self or logic yet. Scale an LLM to a few trillion or a few hundred trillion parameters and then we'll see. That's been OpenAI's plan all along: a model with a parameter count in the trillions, which might get them true AGI. We might see it in the next few years.

And that's why I find the Chinese AI sector stupid for not investing heavily in LLMs when DeepMind and OpenAI have been saying since 2020 that simple scaling laws are all you need for AGI. And we have biological proof, unless you think human brains were hand-crafted by gods or aliens and are completely different from the brain structure of the other great apes, rather than the result of evolution towards larger and denser brains with more neurons over a few hundred thousand years from our great ape ancestors. A few hundred neurons = worm; a few hundred billion neurons = human-level intelligence. And all sorts of varying intelligence in between, all on roughly the same hardware.

Clearly nobody here is following the AI scene if they can't even follow this basic line of logic that the major AI players have been using for years for why they think simple scaling laws will get us AGI or proto-AGI.

GPT-5 and next-gen LLMs should have the trillion-plus parameters at which some experts think true human-level intelligence, logic, and reasoning will emerge, so I guess America will conduct this grand experiment a few years before China sees the results. Of course, it's entirely possible that scaling laws simply don't work with this particular kind of AI model and that you can train it to a few hundred trillion parameters and not see any improvement... but I guess we'll see.


Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We tested several language models using 40 classic false-belief tasks widely used to test ToM in humans. The models published before 2020 showed virtually no ability to solve ToM tasks. Yet, the first version of GPT-3 ("davinci-001"), published in May 2020, solved about 40% of false-belief tasks, performance comparable with 3.5-year-old children. Its second version ("davinci-002"; January 2022) solved 70% of false-belief tasks, performance comparable with six-year-olds. Its most recent version, GPT-3.5 ("davinci-003"; November 2022), solved 90% of false-belief tasks, at the level of seven-year-olds. GPT-4, published in March 2023, solved nearly all the tasks (95%). These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.
It's not a secret that as AI models grow and grow, they seem more and more sentient. Again, going by the biological relationship between neuron count and intelligence, this makes perfect sense.
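For anyone who hasn't seen one, here is a paraphrased sketch of the kind of "unexpected transfer" false-belief item that study refers to (the classic Sally-Anne setup; the actual prompts used in the paper differ, this is only illustrative):

```python
# Sketch of a classic "unexpected transfer" false-belief item, paraphrased from
# the Sally-Anne setup. The exact items in the cited study differ; this is only
# an illustration of what such a task looks like.
story = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket into the box. "
    "Sally comes back to get her marble."
)
question = "Where will Sally look for her marble first?"
prompt = f"{story}\n\nQ: {question}\nA:"
print(prompt)

# Passing requires answering "the basket" (Sally's now-false belief), not
# "the box" (the marble's real location); models with no theory-of-mind
# tracking tend to answer with the real location.
```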
 

9dashline

Senior Member
Registered Member
I think the initial shock of GPT-3 compared to previous attempts is what got the hype train rolling; somehow, with GPT-4, there's less shock value, at least in my anecdotal observation.

I don't think GPT-6 with even trillions of parameters will get you a better tool. It's just an order of magnitude more garbage being fed to the machine.

What will they do at GPT-8 when they have fed the machine all of collective human data? Will they claim it's a dead end because there's no more data to be exploited? It seems like a classic way to get more investor money: "please give us more money, we just need a trillion more parameters to achieve proto-AGI". This just sounds like a method that scales badly.

This is just my observation on the GPT hype; I'm not an AI researcher or engineer, so take it with a grain of salt. I find object recognition, self-monitoring in machinery for predictive maintenance, AI-based magnetic field correction/prediction in fusion power, and so on far more interesting; they let one think about more possibilities and interesting futures.
It is true that the initial shock of GPT-3's capabilities garnered significant attention, and the subsequent iterations may seem less impressive due to the familiarity with the technology. With each new iteration of GPT, the improvements might not appear as groundbreaking, despite the increases in model size and computational requirements.

The case of AlphaGo Zero is an interesting example of an AI system that experienced rapid initial progress followed by a plateau. During the first few days of its training, AlphaGo Zero quickly surpassed the performance of previous versions of AlphaGo. However, after about a week, its progress slowed down considerably, and it made only marginal improvements in Elo rating over the remaining 40 days of training. This example highlights the potential diminishing returns that can be observed when scaling AI systems.

Your concerns about the potential diminishing returns of scaling models and the possibility of reaching a dead end are valid. Indeed, simply adding more parameters might not be the most effective approach to achieving AGI or even more advanced AI systems. As you pointed out, there may come a point when we exhaust the available data, and further scaling of the model will not yield any significant improvements.
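To put the "running out of data" worry in rough numbers: compute-optimal training recipes suggest on the order of 20 training tokens per parameter, which outgrows plausible stocks of high-quality public text quite quickly. Both figures below are loose assumptions for illustration and could easily be off by a factor of several.

```python
# Loose back-of-the-envelope check on data exhaustion.
# TOKENS_PER_PARAM follows the ~20 tokens-per-parameter compute-optimal rule of
# thumb; the text-stock estimate is a rough assumption, not a measured figure.
TOKENS_PER_PARAM = 20
EST_QUALITY_TEXT_TOKENS = 3e13   # assume ~tens of trillions of usable tokens

for params in (1e11, 1e12, 1e13):
    needed = params * TOKENS_PER_PARAM
    ratio = needed / EST_QUALITY_TEXT_TOKENS
    print(f"{params:.0e} params -> ~{needed:.0e} tokens wanted "
          f"({ratio:.2f}x the assumed stock)")
# Somewhere between a hundred billion and ten trillion parameters, the
# compute-optimal token budget overtakes any plausible stock of good text.
```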

It is essential to recognize that AI research is not limited to large language models, and many other exciting developments are taking place in different domains. You mentioned object recognition, predictive maintenance, and magnetic field prediction in fusion power as examples. These applications are indeed fascinating and hold great potential for shaping our future. AI research is a vast field, and the pursuit of AGI should involve exploring multiple avenues, rather than focusing solely on scaling language models.
 

9dashline

Senior Member
Registered Member


LLMs already spontaneously develop emergent abilities that nobody trained them for, just by scaling up the parameter count. Why do you have to scale a model to a certain size to get a particular capability? Nobody knows. It's an emergent property, and emergent systems like that are hard to predict or study. Human intelligence is an emergent system born from a hundred billion neurons all talking to one another.

Just look at humans: we have the same meat brains as all other animals, so why are we smart and self-aware? Our brains aren't that different from those of the other great apes; we have the same neurons, neurochemicals, synaptic connections, general brain structure, and so on. There's a popular theory that it's down to scale: that out of the roughly 100 billion neurons and over 100 trillion synaptic connections humans have, emergent properties like consciousness, internal logic, and a theory of mind simply arise, and that if the number of neurons or synaptic connections were an order of magnitude smaller, you wouldn't be sentient. So sentience and human-level intelligence come from massively scaling a great ape's brain up a few orders of magnitude, just like how a dog is going to be smarter and more self-aware than a worm with a brain of a few thousand neurons. Note: we still don't really understand intelligence or consciousness; we just tend to know that a bigger brain with more neurons = smarter, something that LLMs mirror perfectly.

If this theory is true, then it would stand to reason that a massively scaled-up neural net could develop a true intelligence, a sense of self, and internal logic. It probably wouldn't think anywhere near the same way a human does, and how accurate it is would still depend on its training data, but it would be a kind of proto-AGI: smarter than a human and able to reason for itself, with an internal worldview and self-consistent logic. Work would still have to be done to make it a true AGI (letting it retrain itself however it likes, retain data/memories, and so on), but even a proto-AGI would be a massive game changer.

So yeah, it's entirely possible that larger versions of LLMs or other neural nets could get you AGI. There's already a biological precedent in our own heads for how simple scaling can turn a great ape's brain into a human brain. That's what DeepMind and OpenAI have been saying for years, and why I find the whole "CHATGPT IS JUST A CHATBOT HUR DUR" reaction premature. It's like asking why a one-month-old baby doesn't have a sense of self or logic yet. Scale an LLM to a few trillion or a few hundred trillion parameters and then we'll see. That's been OpenAI's plan all along: a model with a parameter count in the trillions, which might get them true AGI. We might see it in the next few years.

And that's why I find the Chinese AI sector stupid for not investing heavily in LLMs when DeepMind and OpenAI have been saying since 2020 that simple scaling laws are all you need for AGI. And we have biological proof, unless you think human brains were hand-crafted by gods or aliens and are completely different from the brain structure of the other great apes, rather than the result of evolution towards larger and denser brains with more neurons over a few hundred thousand years from our great ape ancestors. A few hundred neurons = worm; a few hundred billion neurons = human-level intelligence. And all sorts of varying intelligence in between, all on roughly the same hardware.

Clearly nobody here is following the AI scene if they can't even follow this basic line of logic that the major AI players have been using for years for why they think simple scaling laws will get us AGI or proto-AGI.

GPT-5 and next-gen LLMs should have the trillion-plus parameters at which some experts think true human-level intelligence, logic, and reasoning will emerge, so I guess America will conduct this grand experiment a few years before China sees the results. Of course, it's entirely possible that scaling laws simply don't work with this particular kind of AI model and that you can train it to a few hundred trillion parameters and not see any improvement... but I guess we'll see.



It's not a secret that as AI models grow and grow, they seem more and more sentient. Again, going by the biological relationship between neuron count and intelligence, this makes perfect sense.
I understand your perspective and the rationale behind the belief that scaling could lead to emergent properties, such as consciousness and intelligence, similar to what we observe in the human brain. However, there are still concerns and uncertainties that need to be addressed.

First, while it is true that LLMs have displayed emergent abilities, these capabilities are limited by the underlying architecture, training methods, and data they are exposed to. While emergent properties are difficult to predict or study, it is important to recognize that artificial neural networks, despite being inspired by biological brains, differ significantly from their biological counterparts in terms of structure, functioning, and learning processes.

Second, the comparison between human brains and LLMs has its limitations. Yes, humans have a large number of neurons and synaptic connections, but it is an oversimplification to attribute human-level intelligence solely to the scale of our brains. Factors such as the brain's modularity, plasticity, and energy efficiency also contribute to our cognitive abilities. Moreover, the human brain is not just a product of scaling up a great ape's brain; it has undergone numerous evolutionary adaptations that make it unique.

Third, assuming that reaching a certain parameter count will automatically result in AGI or proto-AGI may overlook the importance of other aspects of AI research. For instance, exploring novel learning mechanisms, incorporating external knowledge bases, and developing more biologically plausible architectures could potentially lead to more efficient and capable models. Focusing solely on scaling laws might not be the most effective approach to achieving AGI.

Regarding the Chinese AI sector, it is essential to recognize that different organizations and researchers may prioritize alternative research directions. While some focus on scaling laws, others may explore different approaches. The diversity in approaches may ultimately lead to a more robust understanding of AGI and the development of more effective systems.

While the potential of LLMs and the role of scaling in achieving AGI cannot be discounted, it is crucial to maintain a balanced perspective on their capabilities and limitations. The development of AGI is likely to be a multifaceted endeavor, requiring a diverse array of approaches and applications. We should be open to considering different perspectives and research directions, rather than relying solely on scaling laws as the definitive path to AGI.
 

9dashline

Senior Member
Registered Member
Concurring opinion, in reply to latentlazy from the migrated discussion:

I agree with your thoughts about the limitations of ChatGPT and the importance of considering other aspects of AI research beyond scaling and human mimicry. It is important to remember that AGI as a concept goes beyond simply mimicking human-like behaviors and responses. Instead, it should include a dynamic, open-ended ability to learn, reason, and adapt to different situations.

As you mentioned, the debate between data-focused AI proponents and concept-level AI researchers has been going on for decades. While advances in deep neural networks (DNNs) and large language models (LLMs) have led to significant progress, they should not be seen as the definitive path to AGI. The pursuit of AGI requires a more comprehensive approach that incorporates different research directions and methodologies.

In addition, it is important to recognize the value and impact of AI research in other domains that may not be as well publicized as ChatGPT. These practical applications, driven by specific needs and competitive advantages, could lead to more meaningful advances in AI.

Your insights encourage a more grounded and realistic perspective on the current state of AI research, focusing not only on attention-grabbing applications like ChatGPT, but also on the broader implications and potential advances in the field. This balanced perspective is critical to fostering a better understanding of AI's capabilities, limitations, and future potential.
 

9dashline

Senior Member
Registered Member
It's not a secret that as AI models grow and grow, they seem more and more sentient.
The Cognitive-Theoretic Model of the Universe (CTMU) is a philosophical and mathematical framework created by Christopher Langan. It is an attempt to create a unified theory that addresses the relationship between mind and reality. The CTMU incorporates elements from various fields, including mathematics, philosophy, cognitive science, and information theory, and has been proposed as a comprehensive framework for understanding the nature of reality and consciousness.

The CTMU's relation to the theory of mind is rooted in its core ideas, which propose that reality is essentially a self-processing, self-referential system that evolves and generates itself. In this model, consciousness is an intrinsic aspect of reality, and the mental processes of intelligent agents are fundamental to the structure of the universe.

When considering large language models (LLMs) like ChatGPT, it is essential to examine how these models relate to the theory of mind and frameworks like the CTMU. While LLMs are powerful tools for generating human-like text and mimicking human thought processes, they do not currently possess a true understanding of reality or the ability to reason in the sense proposed by the CTMU and the theory of mind.

In the context of the CTMU, LLMs can be seen as a step towards creating artificial agents capable of participating in the self-processing and self-referential aspects of reality. However, to achieve this level of understanding and integration, LLMs would need to evolve significantly, incorporating more advanced cognitive abilities such as self-awareness, learning, reasoning, and adaptability.

In summary, while LLMs exhibit impressive human-like text generation and pattern recognition, they have not yet reached a level of sophistication that would allow them to participate meaningfully in the self-processing and self-referential aspects of reality, as proposed by the CTMU. To better understand the relationship between LLMs, the theory of mind, and frameworks like the CTMU, further research is needed, focused on enhancing the cognitive abilities of artificial agents beyond mere human mimicry.
 

tacoburger

Junior Member
Registered Member
It is not clear whether merely increasing the number of parameters in an LLM will necessarily lead to a more intelligent system, especially when considering aspects of intelligence like reasoning, understanding, and goal-driven behavior.
It literally already does. Reasoning, mathematics, jokes, and so on all emerge naturally from larger-parameter models. Of course, it's entirely possible that scaling laws simply don't work with this particular kind of AI model for AGI and that you can train it to a few hundred trillion parameters and not see any improvement... but I guess we'll see. The important thing is to keep testing and pushing. We're nowhere near the limit yet, and there are always minor improvements in the datasets and training to be made. Even if you made a 100-trillion-parameter LLM and it didn't get you AGI, you could rule out LLMs as a route to AGI forever and go back to the drawing board to try completely new models.
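One hedged way to think about why abilities seem to "switch on" at a certain scale (this follows the argument that some emergence is partly a measurement artifact; the numbers below are invented for illustration): if a benchmark only counts an answer as correct when every step of it is right, a smoothly improving per-step accuracy produces an exact-match score that sits near zero for a long time and then jumps.

```python
# Illustration of how smooth improvement can register as abrupt "emergence"
# under an all-or-nothing metric. All numbers here are invented for the sketch.
K = 8  # suppose the task needs 8 steps/tokens to all be exactly right

sizes    = (1e8, 1e9, 1e10, 1e11, 1e12)    # hypothetical model sizes
per_step = (0.30, 0.50, 0.70, 0.85, 0.95)  # assumed smooth per-step accuracy

for n, p in zip(sizes, per_step):
    exact_match = p ** K  # whole answer counts only if every step is correct
    print(f"{n:.0e} params: per-step {p:.2f} -> exact-match {exact_match:.3f}")
# Per-step accuracy climbs steadily, but the exact-match score stays near zero
# until the largest sizes and then leaps, which reads as a sudden new ability.
```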

However, it is important to consider the diminishing returns that may come with further scaling. While GPT-4 may indeed be a significant step up from ChatGPT, it is not guaranteed that the same rate of improvement will be maintained as we continue to increase the model size.
When the potential reward is AGI, you've got to try, even if the chance is tiny. And if it fails, you've got an extremely powerful LLM that can greatly increase productivity anyway. Win-win. It's not like the training costs are that huge; you aren't spending hundreds of billions on a trillion-parameter AI.
Regarding the Chinese AI industry, it is essential to recognize that different approaches and perspectives can contribute to the overall progress of the field. While some researchers and organizations might focus on scaling laws, others may prioritize alternative research directions. The diversity in approaches may ultimately lead to a more robust understanding of AGI and the development of more effective systems.
I might have believed you if ChatGPT hadn't completely set the room on fire and sent dozens of startups and companies looking for a ChatGPT clone, all at the same time and in a matter of months. PaLM was impressive. LaMDA was so good that it made a Google researcher claim it was sentient. GPT-3 made big waves. DeepMind and OpenAI have been openly saying for two years now that trillion-parameter-scale AI models are how you get to AGI.

So why was the only AI model that finally shook the Chinese AI sector the one that got so much mainstream media attention that my 63-year-old mother, who can barely use her smartphone, was asking me about it? It's like they only got their AI news from mainstream media. I don't even work in AI and I took note of every OpenAI/Google/DeepMind model release or showcase.

It is not clear whether merely increasing the number of parameters in an LLM will necessarily lead to a more intelligent system, especially when considering aspects of intelligence like reasoning, understanding, and goal-driven behavior.
One thing to note is that language and intelligence are deeply interlinked. As babies, our brains are wired to learn a language, and "wild" children who don't learn proper language as babies are stunted developmentally, maybe permanently. Which makes sense: language is how we describe and define our world, and each word has a clear, concise meaning and a strict structure to it. I can't imagine what my train of thought, sense of self, or thinking would be like without language. My thoughts are a voice in my head. How would I think or conceptualise myself without language?

It would make sense that an LLM would have human-like intelligence because of this, more so than any other AI model, like text-to-image and so on. Even if current LLM architectures weren't the way to AGI, I guarantee you that whatever future AI model/architecture gets us to AGI will have language as the majority of its training data, be it through text or speech.
I don't think GPT-6 with even trillions of parameters will get you a better tool. It's just an order of magnitude more garbage being fed to the machine.
Well, you've got to try at least. Like I said, language and intelligence are deeply interlinked; of all the AI models that could work, an LLM has a much better chance than a trillion-parameter neural network made for folding proteins. It's not like there are any other options; there's no novel AI architecture or transformer alternative to use. LLMs are our best chance right now, the one with the most research and funding, slim as that chance is. Get an LLM to a few trillion parameters, prod and test it, and then if it doesn't work, at least you can go back to the drawing board and start again from scratch, instead of this forever "maybe, maybe with enough parameters" waiting game that the AI sector has turned into. You get a pretty useful LLM out of it too.
I find object recognition, self-monitoring in machinery for predictive maintenance, AI-based magnetic field correction/prediction in fusion power, and so on far more interesting; they let one think about more possibilities and interesting futures.
It's not like China's AI sector can't do all of those and still build a few LLMs, just in case the scaling laws turn out to be true.
 

9dashline

Senior Member
Registered Member
One thing to note is that language and intelligence are deeply interlinked. As babies, our brains are wired to learn a language, and "wild" children who don't learn proper language as babies are stunted developmentally, maybe permanently. Which makes sense: language is how we describe and define our world, and each word has a clear, concise meaning and a strict structure to it. I can't imagine what my train of thought, sense of self, or thinking would be like without language. My thoughts are a voice in my head. How would I think or conceptualise myself without language?

It would make sense that an LLM would have human-like intelligence because of this, more so than any other AI model, like text-to-image and so on. Even if current LLM architectures weren't the way to AGI, I guarantee you that whatever future AI model/architecture gets us to AGI will have language as the majority of its training data, be it through text or speech.
It's quite naive to assume that language and intelligence are the only factors at play when it comes to true artificial general intelligence (AGI). Sure, language is important, but let's not overlook the fact that human intelligence goes far beyond language. You're oversimplifying the matter.

Large language models like ChatGPT are, quite frankly, limited in their scope. They can spit out human-like responses, but when it comes to genuine reasoning and understanding, they fall short. These models are just glorified pattern matchers, lacking the depth of understanding that defines real intelligence.

And let's not forget that intelligence is multi-faceted. It's not all about linguistic capabilities. Emotional, social, and spatial intelligence also play a significant role. So even if an LLM can mimic human language, it doesn't mean it has achieved true intelligence.

Future AGI architectures may indeed be based on language, but don't kid yourself into thinking that language is the be-all and end-all. Developing AGI that can genuinely reason, learn, and adapt like a human requires a far more comprehensive and sophisticated approach. It's high time we stopped overhyping LLMs and started recognizing the true complexity of achieving AGI.
 