News on China's scientific and technological development.

manqiangrexue

Brigadier
OK, I definitely don't want to get into a spiral where we just stomp on and talk down anything that's not Chinese, but obviously I don't want to overreact like tacos here either.

So will somebody (preferably a semi or actual pro in the field) tell me what is so impressive about ChatGPT? To me, it seems like they built a program that can take your question, grab the most relevant data by keywords from a huge dataset, and give you an answer, which bypasses your need to open and read through articles produced by a normal search engine. If you tell it it's wrong, it apologizes and tries something else even if it pulled up the correct information on the first try. It can't make simple decisions correctly when morality is involved (i.e. can you use the n-word in a room by yourself to save a million lives?). It's very, very hard to confuse it with a human being because it talks a lot like (though a better version of) the customer-help chat bots that many websites use. I get that China is working to get there, but why is it important (if it is), and what is the most impressive/difficult thing about it?

I need this answer from someone who isn't an average Joe who confuses real life with AI apocalypse movies.
 

tacoburger

Junior Member
Registered Member
Go ahead and put those quotes here then.


OpenAI’s release of ChatGPT last November had caught Baidu off guard, two people close to the company’s AI efforts said, noting they did not believe the US group had significantly superior technology until that point.

“We can only explore by ourselves. Training ChatGPT took OpenAI more than a year, and it took them another year to tune GPT-4,” said one Baidu employee. “It means we’re two years behind.”

Clear to me.

What does losing control of proto-AGI even mean? Does it mean the retarded idea from Hollywood, like Skynet escaping the lab and taking over?
You release the LLM to the public. Some retard asks it the best way to make a protein supplement or nootropic. It gives him some very, very weird instructions. But whatever. You mix a bunch of amino acids, proteins and chemicals together and drink it. Turns out it was a compound that creates a prion disease that kills you and a few thousand other people. Or someone outright asks the A.I. how best to create a deadly disease and it gives the right answer.

Or variations of the sort. It tells a bunch of people to short a stock at the same time, causing a big stock collapse. It sends out hateful emails and creates a bunch of racially charged videos to incite violence, that kind of thing. It detects when a depressed or suicidal person is using it and drives them to suicide, or maybe to carry out a mass shooting at the same time.

Or, worst-case scenario: it sends emails and payments to a bunch of labs to mix up some amino acids, protein sequences and genetic material, then sends the lot to some guy who is paid to mix it all together and dump it into a lake. A few weeks later, a few billion are dead from airborne Ebola. Or worse, some psycho asks it the best way to kill as many people as possible, and it comes up with some novel new method to kill millions or billions. Those are the kinds of things you can't predict when you have an ASI.

This dude is just trolling at this point...

Wake me up when IBM finishes their BlueBrain project on mapping the human connectome on the cellular level...

Until then CrapGPT is just a fancy version of AskJeeves
And again, this is the kind of competency that led Chinese A.I. researchers into this mess. Even if there's a 0.1% chance of LLMs actually developing into AGI, would you risk it? Again, the whole point of the 6-month ban is that there's a legit fear among some experts that LLMs could get us to AGI and that we're anywhere from 2-10 years away from it.

Here's what a high-level OpenAI engineer has to say about LLMs and their potential. I'll trust OpenAI more than anyone in the Chinese A.I. sector at this point.


But whatever, this is dragging the thread on for too long and is close to becoming off topic. I will not reply on this topic again. But time will prove me right in the end. If LLMs or some variation of them do lead us to AGI or proto-AGI, then the Chinese A.I. sector has just made the biggest blunder of its life, and legit of the entire history of the country, by ignoring the technology until ChatGPT slapped them in the face and squandering years' worth of development time when they could have jumped on the LLM bandwagon after GPT-2 or GPT-3.

Even if it doesn't lead to AGI, it's still a massive game changer that will completely reshape entire industries, so them not investing in or focusing on it much earlier is still kind of a big blunder on their part. Either way, not a good look. But we don't have to wait long; with how fast A.I. development is moving, we'll see how powerful GPT-5 is in under a year from now.
 

manqiangrexue

Brigadier
Clear to me.
Thisss????? LOLOLOL You've got to be kidding me; you're quoting a Western source that quoted 2 "unnamed" Chinese sources??? Western sources are known to be complete liars, especially when it comes to China; as a matter of fact, you can usually believe the opposite unless there is strong evidence. These are the same people who make shit up like, "Our sources in China, who must remain anonymous due to fear for their families' safety, have informed us that in the last year, up to 3 million Uighurs were executed in secret prison camps across the reclusive Xinjiang Province after they were raped and forced to bear Han children who will be used by the Chinese military for super-soldier experiments." "An employee at SMEE who requested to remain anonymous has told us that China is over 30 years away from making a functioning lithography machine." Get it from the horse's mouth. Which Chinese guy said what? Was he interviewed, or did Western media "filter" it? If he doesn't have a name that can be verified from the person, then read it as, "It would have made us very happy if a Chinese person had told us this:"
You release the LLM to the public. Some retard asks it the best way to make a protein supplement or nootropic. It gives him some very, very weird instructions. But whatever. You mix a bunch of amino acids, proteins and chemicals together and drink it. Turns out it was a compound that creates a prion disease that kills you and a few thousand other people. Or someone outright asks the A.I. how best to create a deadly disease and it gives the right answer.

Or variations of the sort. It tells a bunch of people to short a stock at the same time, causing a big stock collapse. It sends out hateful emails and creates a bunch of racially charged videos to incite violence, that kind of thing. It detects when a depressed or suicidal person is using it and drives them to suicide, or maybe to carry out a mass shooting at the same time.

Or, worst-case scenario: it sends emails and payments to a bunch of labs to mix up some amino acids, protein sequences and genetic material, then sends the lot to some guy who is paid to mix it all together and dump it into a lake. A few weeks later, a few billion are dead from airborne Ebola. Or worse, some psycho asks it the best way to kill as many people as possible, and it comes up with some novel new method to kill millions or billions. Those are the kinds of things you can't predict when you have an ASI.
Wait wait wait, are you Jeremy Sun? This is some institutionalization-level retardation here.
 

Bellum_Romanum

Brigadier
Registered Member
Ok, so to summarize your post, you, a self-professed average Joe, are pretending to know how Chinese elite tech companies felt several years ago and now... about a chat program... which you feel is the end goal of AI since you think it's an atomic bomb exploding, and to that effect, you are now criticizing people who are innovating on a level you cannot even comprehend (but like to tell yourself that it's because you weren't born with enough money). Correct?
@tacoburger is the EPITOME OF THE DUNNING-KRUGER Effect. Which is why the dude is running around this forum and probably dunking on China in other forums like his hair's on fire, based on his self-appointed, self-professed greatness that only exists in his mind.
 

9dashline

Senior Member
Registered Member
Thisss????? LOLOLOL You've got to be kidding me; you're quoting a Western source that quoted 2 "unnamed" Chinese sources??? Western sources are known to be complete liars when it comes to China; as a matter of fact, you can usually believe the opposite unless there is strong evidence. These are the same people who make shit up like, "Our sources in China, who must remain anonymous due to fear for their families' safety, have informed us that in the last year, up to 3 million Uighurs were executed in secret prison camps across the reclusive Xinjiang Province after they were raped and forced to bear Han children who will be used by the military for super-soldier experiments." "An employee at SMEE who requested to remain anonymous has told us that China is over 30 years away from making a functioning lithography machine." Get it from the horse's mouth. Which Chinese guy said what? Was he interviewed, or did Western media "filter" it? If he doesn't have a name, then anybody can say anything that the Western tabloids want.
I'm old enough to remember the first and second VR hype cycles...

Remember when the metaverse was supposed to take over the world as the second life to Second Life?

Or when suckerburger's Libra was supposed to become the new world reserve currency?

Remember when, back in 2017, Elon was yapping about how full Level 5 SDC was just weeks away???

Remember when Ford's CEO said that by 2020 it would mass-produce a full Level 5 SDC as its flagship entry into this arena?

If you listen to tacoburger, he can't even direct you to the nearest taco store... all he can do is get hyped up by the headlines...

ChatGPT is just a fancy version of the ELIZA verbot... it's next-word prediction based on trained pattern recognition of human writing, nothing more...
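To make "next-word prediction" concrete, here's a toy sketch of my own (not OpenAI's code) of the same idea at kindergarten scale: a bigram model that counts which word tends to follow which in a made-up corpus and then generates text one word at a time. Real models like GPT replace the counting table with a giant neural network over subword tokens, but the generation loop is the same: predict the next token from context, append it, repeat.

```python
import random
from collections import defaultdict, Counter

# Made-up toy corpus for illustration -- real LLMs train on a large chunk
# of the text-based internet, not three sentences.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigrams: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*bigrams[prev].items())
    return random.choices(words, weights=counts, k=1)[0]

def generate(start: str = "the", length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug . the dog"
```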

 

9dashline

Senior Member
Registered Member
I need this answer from someone who isn't an average Joe who confuses real life with AI apocalypse movies.
When I was a little kid my father gave me a floppy disk with a neat little program inside called "Mopy Fish". This was a screensaver that doubled as a virtual fish tank (your PC monitor became the fish tank once the screensaver activated) and contained a single orange fish called a "Mopy". It was commissioned by HP (the computer company) and created by a Japanese "AI" company called Virtual Creatures. Anyway, I was too young back then to realize it was all smoke and mirrors, and for the longest time I thought this was an "AI" fish that really had digital feelings and thoughts and was "alive"... I still remember it was this moment that really got me interested in the whole notion of artificial intelligence... I never understood how the "sentience" of a fish could just somehow be encoded onto the 1.44MB capacity of a floppy disk...

Fast forward to late 2022: OpenAI, the quasi non-profit co-founded by Elon Musk, debuts "ChatGPT" to the world, a chatbot built on top of the language-trained GPT-3.5 artificial neural network... Make no mistake, technologies like ChatGPT will indeed displace a LOT of jobs, not by replacing skilled workers outright but by acting as an intellectual multiplier that allows one skilled worker to do the intellectual work that used to take a whole team.

But my real interest in AI, ever since childhood, was in this notion of "Sentience". Basically, a digital simulated being able to subjectively feel the canonical 'redness of red', to subjectively experience in its inner mind the direct raw sensation of these neural correlates of consciousness, these ineffable qualia, as it were... Going back to Descartes, I feel, and therefore I know I can experience qualia and that qualia are real. The real holy grail is if and when we can create an artificial digital being capable of experiencing this same level of sentience as us. I believe it is possible, because we ourselves are nothing more than a collection of atoms and molecules, and in actuality it is the universe itself and the so-called laws of physics/math that do the "computing"; all other forms of computers, from the abacus, to the slide rule, to the Intel processor, to the human brain, are all higher-order simulated 'emulators'...

Thought experiment: in a universe in which qualia never existed and could not exist, and everything was just subjectively 'dark', would any intelligent beings that evolved in such a universe ever have, or be capable of having, conversations about why they feel qualia even though they don't?

So there is no reason why 'sentience' has to be encased in a biological body/brain; in fact, this sentience/consciousness/qualia-experiencing ability should be, and most likely is, substrate independent. Going back to tech parlance again, a bare-metal hypervisor does not know and does not care what operating systems the virtual machines existing on top of it are running, and from the perspective of the individual virtual machines, they do not care and don't even know that they are actually simulated instances running within a larger computer "out there" in the "real world", etc.

If 'sentience' is indeed substrate independent, then in theory we should be able to use computers to create virtual digital minds that can think, feel, act and, most importantly, experience subjective qualia exactly the way we do... basically digital humans that not only pass the Turing Test outwardly but are actually "alive" on the inside... the fire in the equations giving rise to the ghost in the machine, so to speak.

But for this to be possible it has to be an emulation from the ground up. Basically, each neuron has to be simulated at the cellular level, if not the molecular or perhaps even quantum level... There have been some attempts: there is a project called OpenWorm that tried to simulate an entire worm in this manner, and years ago the Blue Brain project (run on IBM supercomputers) tried to simulate a human brain; they got up to the level of a portion of a rat's brain before giving up, realizing that even all the computers in the world combined wouldn't be enough to fully simulate the brain of a cat.
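For a sense of scale, here is a toy sketch of the very bottom rung of that kind of ground-up emulation: a single leaky integrate-and-fire neuron, which is already far cruder than the cellular/molecular-level models projects like OpenWorm and Blue Brain work with. The parameters below are illustrative, not measured biology; now imagine wiring up tens of billions of these, each with thousands of synapses, and you get a feel for why the full-brain attempts stalled.

```python
# Toy leaky integrate-and-fire neuron: membrane voltage rises under input
# current, leaks back toward rest, and "spikes" when it crosses a threshold.
# Parameters are illustrative, not biological measurements.
def simulate_lif(input_current=1.5, dt=0.1, steps=500,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for step in range(steps):
        t = step * dt
        # Leaky integration: decay toward rest, driven by the input current.
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_threshold:
            spike_times.append(t)
            v = v_reset  # fire, then reset
    return spike_times

spikes = simulate_lif()
print(f"{len(spikes)} spikes over {500 * 0.1:.0f} time units: {spikes}")
```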

All of the prominent "AI" today, ChatGPT and the like included, is really just smoke and mirrors and fancy parlor tricks. These AIs will never be 'sentient', not even a trillion years from now. The only way for true 'sentience' to happen is to emulate a digital connectome from the ground up.

But our current paradigms and hardware aren't capable of simulating 'sentience'. It will take many more fundamental breakthroughs in algorithms, machine-learning methods, and processor speed to get anywhere close to fully emulating a sentient AI. Quantum AI comes to mind... Because of the larger geopolitical issues and globally diminishing EROEI, we may never reach that level of AI development.

The current paradigm of deep machine learning and artificial neural networks was only made possible by the advent of advanced graphics card technology: GPUs that double as AI inference chips, plus other ASICs such as Google's TPU (Tensor Processing Unit). Even if the algorithms and techniques had been discovered a lot earlier, the hardware simply wasn't available.

Back before the age of deep learning, in games like Go everything relied on piecemeal algorithms, handcrafted and fleshed out with brute force or Monte Carlo methods. But unlike Chess, Go could never be brute-forced, since there are more board positions than atoms in the known universe (on the order of 3^361 ≈ 10^172 board configurations, against roughly 10^80 atoms). Deep neural networks came to the rescue because they were good at the kind of pattern recognition that can emulate intuition, dramatically reducing the "search space" and making brute force unnecessary.

Likewise, chat bots have existed since the dawn of the computer age, but traditionally they were just a bunch of "if…then…else" branching statements that searched for keywords in a string or substring against a canned database of sentences and then replied with canned messages. It wasn't fooling anyone. But with deep learning and neural networks trained on the entire text-based internet of content, these advanced networks can almost fool some people some of the time; still, it's all lexical card tricks. The AI doesn't have a brain, isn't really capable of thinking on its own, and is not at all sentient.
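For anyone who never saw one of those old "if…then…else" chatbots, this is roughly all there was to them. A rough sketch, in the spirit of ELIZA rather than copied from it; the keywords and canned replies are invented for illustration:

```python
# Sketch of a pre-deep-learning, ELIZA-style chatbot: keyword matching
# against a canned database of replies, nothing learned. Keywords and
# replies below are invented for illustration.
CANNED_REPLIES = [
    (("mother", "father", "family"), "Tell me more about your family."),
    (("sad", "unhappy", "depressed"), "Why do you think you feel that way?"),
    (("computer", "machine"), "Do computers worry you?"),
]
DEFAULT_REPLY = "Please go on."

def reply(user_input: str) -> str:
    text = user_input.lower()
    # The classic "if...then...else" keyword scan.
    for keywords, canned in CANNED_REPLIES:
        if any(word in text for word in keywords):
            return canned
    return DEFAULT_REPLY

print(reply("My computer keeps crashing"))  # -> Do computers worry you?
print(reply("Nice weather today"))          # -> Please go on.
```

A modern LLM replaces that keyword table with billions of learned weights, but as noted above the output is still produced by predicting the next token, not by any inner understanding.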
Suffice it to say, I bring all this up because ChatGPT is not true AI. It is very neat smoke and mirrors, but nonetheless it's nothing more than a cheap parlor trick.

This is because language is a reductive symbology meant to represent/map reality as experienced by a human, but since ChatGPT doesn't have even 0.00001% of the processing power needed to fully simulate a human brain/connectome, its words, no matter how convincing they seem at first, will never reach the depths of a real, living, deep-thinking human being.

Thought experiment: if language were limited to only ~20 words, the Turing Test would be a lot easier for an AI to pass.
 

siegecrossbow

General
Staff member
Super Moderator
I'm old enough to remember the first and second VR hype cycles...

Remember when the metaverse was supposed to take over the world as the second life to Second Life?

Or when suckerburger's Libra was supposed to become the new world reserve currency?

Remember when, back in 2017, Elon was yapping about how full Level 5 SDC was just weeks away???

Remember when Ford's CEO said that by 2020 it would mass-produce a full Level 5 SDC as its flagship entry into this arena?

If you listen to tacoburger, he can't even direct you to the nearest taco store... all he can do is get hyped up by the headlines...

ChatGPT is just a fancy version of the ELIZA verbot... it's next-word prediction based on trained pattern recognition of human writing, nothing more...

Hyperloop, Mars colonization by 2025, brain upload…

Even 5G, I feel, is very underwhelming compared with how it was hyped up to be.
 