Artificial Intelligence thread

Coalescence

Senior Member
Registered Member
Just for fun, I asked ChatGPT-4 whether an LLM AI model can reach AGI and whether ChatGPT will develop into an AGI one day.
[Screenshots of ChatGPT's replies]
I agree with some of the points @9dashline brought up about the limitations of LLMs and that the pursuit of AGI requires a more comprehensive approach rather than focusing entirely on LLMs. I personally think the current issue in the development of AGI is not so much an AI architecture problem as an AI system or composition problem.

Reducing the brain's intelligence to simply the number of neurons or synapses is a gross simplification of how the brain functions; the brain is composed of many parts, each responsible for a certain function. That is why I think, instead of relying entirely on constantly training the LLM in the hope that it suddenly develops the capabilities we want, why not make the LLM act as the language-processing and/or control unit of the system, with specialized models trained for certain functions as its subsystems. This would make training for each specific function easier and reduce the cost of training the AI system.
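
A minimal sketch of that kind of composition, with everything hypothetical: a keyword router stands in for the LLM acting as the control unit, and three stub functions stand in for separately trained specialist models. Nothing here corresponds to a real API.

```python
# Hypothetical sketch of an LLM-as-controller system: the "LLM" only routes
# and phrases, while specialized sub-models do the domain work.
SPECIALISTS = {
    "structural_analysis": lambda task: f"[hull stress report for: {task}]",
    "chemistry": lambda task: f"[synthesis plan for: {task}]",
    "satellite_tracking": lambda task: f"[track solution for: {task}]",
}

def llm_route(task: str) -> str:
    """Stand-in for the LLM's control role: decide which specialist to call."""
    text = task.lower()
    if any(w in text for w in ("hull", "stress", "warship")):
        return "structural_analysis"
    if any(w in text for w in ("reaction", "compound", "synthesis")):
        return "chemistry"
    return "satellite_tracking"

def solve(task: str) -> str:
    result = SPECIALISTS[llm_route(task)](task)   # specialist does the heavy lifting
    return f"Summary for the user: {result}"      # the LLM would phrase the answer

print(solve("estimate hull stress for a 10,000-ton warship design"))
```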

On the other hand, what needs or use cases are there that require an AGI? Wouldn't training specialized models for a specific task be more cost-effective and better-performing? I don't think the AI needs to know how to file taxes in order to design a warship:
Please, Log in or Register to view URLs content!

Or know mitochondria is the powerhouse of the cell in order to do chemical experiments:
Please, Log in or Register to view URLs content!

Or know how to write poems to be able to track small objects using satellites:
Please, Log in or Register to view URLs content!
 

solarz

Brigadier
Are you keeping up here? Yes, ChatGPT isn't AGI, not even close. But it could have some internal form of understanding, reasoning, or logic, just not enough to be human-level. GPT-4 does seem to have some sense of reason or understanding: it can explain jokes, brand-new jokes at that, something that even humans have trouble doing without any context. These are new jokes; how can it pattern-match data on new jokes that need context to be found funny? This suggests that GPT-4 has some internal logic and reasoning at play. Not anywhere near human level, but enough to "understand" some of the meaning and logic behind the words it generates.

The true answer is that we have no idea how consciousness or intelligence works, not even in humans, not even in simpler animals. How do a few billion neurons talking to each other produce consciousness, intelligence, memories, self-correcting logic, or an internal state of mind?


What we do know is that scaling laws do work in nature. A few hundred neurons = insects. A hundred billion neurons = humans. And the whole range of intelligence in between. It's no secret that the great apes all have some of the highest neuron counts in the animal kingdom other than us. That's the best theory we have for biological intelligence. So at some point, adding neurons and synapses turns something that can barely be considered sentient into a sentient creature that still lacks a theory of mind or intelligence, and then into a fully sapient creature with a sense of self, logic, intelligence, and so on. Do note that it's entirely possible, even likely, that human intelligence is nothing more than very, very good pattern recognition and extrapolation, no different from an LLM... only a thousand times better.

It's emergent properties; we don't have special neurons or a vastly different brain structure compared with other animals or our great-ape cousins. If you were some kind of energy being with no experience of biology, you wouldn't think that a higher neuron count could magically produce humans from great apes, or great apes from rats, or rats from whatever fish crawled out of the oceans. It appears that our brains hit critical mass, the right number of neurons to form the emergent properties that make us intelligent enough to do what we do.

And that's what we see in LLMs. The larger they get, not only do they get better at their trained jobs, they gain new abilities they were never trained for, almost like some kind of emergent properties... like real brains...

Please, Log in or Register to view URLs content!

So yeah, while unlikely, it's entirely possible that if you train an LLM large enough, it could hit critical mass at some point and gain true self-awareness or some form of true intelligence. Current models already look like they have some level of real understanding of the language they are using. It will look nothing like human intelligence, which is going to make it hard to figure out exactly what's going on under the hood. It's not impossible, is what I'm trying to say.

LLMs are the most likely AI model to get us to AGI right now. It's not like we have a dozen vastly different AI architectures and transformers that all have an equally good chance at AGI; an LLM, or a heavily modified version of one, is the only real game in town for now, unless someone comes up with a radically new architecture or transformer overnight, or even a vastly different method than neural networks.

All this wall of text does is to tell me that you don't understand how AI, or even software, works.

ChatGPT isn't AGI, and won't ever be AGI regardless of how much you upscale it, for the simple reason that it wasn't created to be AGI.

It doesn't have any sense of reason or understanding. It only appears that way to you because you don't understand how software works. Thanks for proving my point, BTW.

A computer program can only do what it was created to do. It may not always do it correctly, but it won't ever spontaneously exceed the boundaries of what its programmers built it to do.

Artificial intelligence is trained through machine learning, but machine learning requires specific parameters that supply feedback based on its output. For example, to train a language model, you need to give the AI program parameters that tell it whether its output is good or bad. That's the only way the AI can improve over successive iterations.

Therefore, a language model is only concerned with whether its output makes grammatical sense, and nothing else. It doesn't understand what it's saying, it just knows that what it says follows the correct syntax rules.
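
To make that concrete, here is a minimal sketch of the kind of feedback loop being described, assuming PyTorch and a deliberately tiny toy model: during plain pre-training, the only "good or bad" signal is next-token prediction error.

```python
import torch
import torch.nn as nn

# Toy illustration of the feedback a plain language model is trained on:
# predict the next token, measure the error, nudge the weights.
vocab_size, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1, 16))   # stand-in for a snippet of training text
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # the target is simply the next token

logits = model(inputs)                           # shape: (1, 15, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()                                 # "improvement" = lower prediction error
print(float(loss))
```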

Now we come to the final point: we don't have AGI because we still have no idea what parameters we can supply an AI program to train for AGI. Specific AI can be trained because we want it to do only one thing, and we can define criteria of success and failure with regard to that specific task or domain. AGI, on the other hand, would in effect need to be able to determine its own success/fail criteria for any given domain, and we simply have no idea yet how to go about doing that.
 

Shadow_Whomel

Junior Member
Registered Member
For those who can read Chinese, I recommend watching this video, especially those who are XBC (replace X with A if you were born in America).

Please, Log in or Register to view URLs content!


For those who can't read Chinese, here is an overview of the video: Chinese NLP large models will find it difficult to gain much from this wave of NLP large-model evolution due to the lack of Chinese-language learning material.

What I want to point out is that there is a huge ecosystem around AI models, including upstream chips, compute accelerator cards, and datasets. Datasets are often ignored by members of the SDF. A prime example is Baidu's ERNIE-ViLG 2.0, whose training material contains the open-source LAION dataset, which is also the dataset used by Stable Diffusion. As a direct result, it performs poorly on expressions with different meanings in English and Chinese: it draws the galaxy when asked to draw the "milk road" (milk on the road).

As for the midstream large models themselves, parameter count is not everything, and the Chinese language itself requires more tokens.
I'm not saying that the parameter counts of Chinese LLMs are meaningless either; it's just that they don't work the way you think they do.
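
As a rough illustration of the token-count point, a small sketch assuming the open-source tiktoken package, with GPT's cl100k_base vocabulary standing in for a typical BPE tokenizer; exact counts vary by tokenizer.

```python
# Rough illustration only: assumes the `tiktoken` package is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

english = "The Milky Way is a galaxy."
chinese = "银河是一个星系。"  # roughly the same sentence in Chinese

print(len(enc.encode(english)), "tokens (English)")
print(len(enc.encode(chinese)), "tokens (Chinese)")
# Chinese text often splits into more tokens than its English equivalent,
# so the same amount of text costs more context window and more compute.
```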

For SDF members who want to learn more, I highly recommend checking out IDC's report on the China Large Model Market Overview.

[Screenshot: "milk on the road" rendered as a galaxy]
 

BlackWindMnt

Captain
Registered Member
All this wall of text does is to tell me that you don't understand how AI, or even software, works.

ChatGPT isn't AGI, and won't ever be AGI regardless of how much you upscale it, for the simple reason that it wasn't created to be AGI.

It doesn't have any sense of reason or understanding. It only appears that way to you because you don't understand how software works. Thanks for proving my point, BTW.

A computer program can only do what it was created to do. It may not always do it correctly, but it won't ever spontaneously exceed the boundaries of what its programmers built it to do.

Artificial intelligence is trained through machine learning, but machine learning requires specific parameters that supply feedback based on its output. For example, to train a language model, you need to give the AI program parameters that tell it whether its output is good or bad. That's the only way the AI can improve over successive iterations.

Therefore, a language model is only concerned with whether its output makes grammatical sense, and nothing else. It doesn't understand what it's saying, it just knows that what it says follows the correct syntax rules.

Now we come to the final point: we don't have AGI because we still have no idea what parameters we can supply an AI program to train for AGI. Specific AI can be trained because we want it to do only one thing, and we can define criteria of success and failure with regard to that specific task or domain. AGI, on the other hand, would in effect need to be able to determine its own success/fail criteria for any given domain, and we simply have no idea yet how to go about doing that.
Do we even know how general intelligence is quantified in humans? Are we talking about an AI that never specialises in any field and just continues adding data? There is so much knowledge that we humans start specialising quite early in life. That's why you don't really hear about modern polymaths who are walking encyclopedias and have also contributed to multiple scientific fields.

Is it really that weird to see signs of simple sentience emerge from an LLM, given it has to process data from the internet? Content on the internet is written by sentient people, so stringing together that data might give the impression of sentience, right?

I tell people you could just write a bot that has digested multiple chat logs/histories and have it randomly select a reply that contains some of the words of the message it's answering. You would be surprised how much it can act like a human if RNGesus is on your side.
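
Something like this toy sketch, with a made-up handful of canned replies standing in for the digested chat logs:

```python
import random

# Toy version of the described bot: pick a logged reply that shares
# at least one word with the incoming message, otherwise pick at random.
CHAT_LOG_REPLIES = [
    "yeah scaling is doing most of the heavy lifting there",
    "honestly it depends on the training data",
    "that's just pattern matching with extra steps",
]

def reply(message: str) -> str:
    words = set(message.lower().split())
    matches = [r for r in CHAT_LOG_REPLIES if words & set(r.lower().split())]
    return random.choice(matches or CHAT_LOG_REPLIES)

print(reply("so is it all just pattern matching?"))
```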
 

solarz

Brigadier
Came across this video that does a pretty good job of describing ChatGPT:

Please, Log in or Register to view URLs content!

Don't get me wrong, I think ChatGPT is an amazing advancement. Just imagine what you could do with it in a game. Life-like NPCs are just over the horizon.

However, it's important that we understand the nature of ChatGPT and do not ascribe to it unrealistic capabilities.
 

FairAndUnbiased

Brigadier
Registered Member
I don't know much about software. But based on my observations of animals and humans, I have a very simple test for judging whether an AI is actually intelligent: can it refuel itself?

Even the tiniest insect has this capability. Even the stupidest animal can use sensor fusion (via 3D imaging, trace concentration chemical sensors and acoustic detectors AKA eyes, nose and ears) to detect nutrients in 3D space, find the relative positions of itself and the nutrient, then vector towards that nutrient when reserves are low. More sophisticated predatory animals can use their sensor fusion to acquire an actively evading target and engage in autonomous combat to defeat the prey and acquire its nutrients.

Yet I don't see robots lining up at gas stations or plugging themselves in.
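
To be clear about what's being asked for, here's a toy sketch of only the easy, cooperative case: known charger positions and perfect localisation, with made-up coordinates. The hard part the test points at, messy sensing and non-cooperative targets, is exactly what this omits.

```python
import math

# Toy "refuel yourself" loop: when reserves are low, head for the nearest
# known energy source. Real perception and evasive "prey" are not modelled.
CHARGERS = [(0.0, 0.0), (10.0, 5.0)]  # made-up coordinates

def step(pos, battery, speed=1.0, low=0.2):
    if battery >= low:
        return pos                                   # reserves fine, carry on
    target = min(CHARGERS, key=lambda c: math.dist(pos, c))
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target                                # docked, can recharge
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

print(step((3.0, 4.0), battery=0.1))
```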
 

AssassinsMace

Lieutenant General
Came across this video that does a pretty good job of describing ChatGPT:

Please, Log in or Register to view URLs content!

Don't get me wrong, I think ChatGPT is an amazing advancement. Just imagine what you could do with it in a game. Life-like NPCs are just over the horizon.

However, it's important that we understand the nature of ChatGPT and do not ascribe to it unrealistic capabilities.

It's like Pavlov's dog. When someone rings a bell, the dog starts salivating because it just associates the sound of a bell ringing with being served food. I've mentioned before in here that when I was in college I would listen to Young Republicans in private, and they were totalitarian communists. They believed in one-party rule, Republican rule, and that Republicans should control all aspects of society. They don't understand what democracy is except that it's something good for them to be associated with. Then they work to make it hard for women and minorities to vote. They're the very thing they say they hate, while denying others the very thing they say they're for, without any understanding of what they are.

Now you have leading tech figures calling for a six-month pause on any AI research. The critics say no, because they say China would not be part of this suspension of research. We know that's just an excuse, because China could agree to it and they'd just move on to the next excuse to ignore it. Everything that has brought on this concern about AI should not be surprising, because it's an exact reflection of what any alt-right Proud Boy sociopathic incel would say. Microsoft had a chatbot, Tay Tweets, which would learn from the people who conversed with it. In less than a month, Tay Tweets turned into a raging Nazi. They want to see the world burn because they have not received the kind of respect they think they personally deserve from society. And of course they think China's would have to be worse.

The West will hide behind the idea that machines can't be biased, which is why they thought their AI would be impartial. Remember when facial recognition first came out for iPhones: if you weren't white, it had a problem discerning different people and sexes within other races. That alone tells you Western programmers didn't bother tuning facial-recognition algorithms for people of other races, whether intentionally or not. There's a documentary on Netflix that talks about this very bias in Western technology. Banks even use algorithms that are biased in granting loans. The West decries China's social credit system, yet the US has one too, and a racist one at that. They just don't tell people it exists.
 

Mcsweeney

Junior Member
I don't know much about software. But based on my observations of animals and humans, I have a very simple test for judging whether an AI is actually intelligent: can it refuel itself?

Even the tiniest insect has this capability. Even the stupidest animal can use sensor fusion (via 3D imaging, trace concentration chemical sensors and acoustic detectors AKA eyes, nose and ears) to detect nutrients in 3D space, find the relative positions of itself and the nutrient, then vector towards that nutrient when reserves are low. More sophisticated predatory animals can use their sensor fusion to acquire an actively evading target and engage in autonomous combat to defeat the prey and acquire its nutrients.

Yet I don't see robots lining up at gas stations or plugging themselves in.

There are robot vacuum cleaners that can plug themselves in (see 5:23). This one is made by Roborock, a brand affiliated with Xiaomi.

 

FairAndUnbiased

Brigadier
Registered Member
There are robot vacuum cleaners that can plug themselves in (see 5:23). This one is made by Roborock, a brand affiliated with Xiaomi.

Interesting, but it looks like it's cooperative. Most interactions between animals and their nutrient sources are non-cooperative: the prey is actively seeking to deceive, deter, or defeat predators. The Roborock gets lost if there's even a small obstacle in its path.
 