News on China's scientific and technological development.

dingyibvs

Junior Member
It looks as if you don't understand how computers work at all.
Basically, the von Neumann architecture is the basis of how all computers work, meaning it is also the limitation of software, since software is limited by the hardware.
Have you heard of "and", "or", "nand", "nor"?
These are the only logic commands within any software, since they are confined by the architecture.
Do you think humans build our logic from only these four commands?

Pretty sure human logic is based on exactly those four commands. Well, technically just 3: "and", "or", and "not". Can you give me an example of human logic that cannot be expressed in terms of those 3 logical operators?
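The three-operator claim above is the standard result that Boolean logic is functionally complete. A minimal sketch: NAND alone (one of the four gates named earlier) is enough to rebuild NOT, AND, and OR, and from those any truth table, illustrated here with XOR. Function names are just for illustration.

```python
# Sketch: NAND alone reconstructs NOT, AND, and OR, so the gate
# primitives named above suffice for any Boolean function.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    # NOT x == x NAND x
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    # AND is NAND followed by NOT
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    # De Morgan: a OR b == (NOT a) NAND (NOT b)
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    # Any further truth table follows, e.g. XOR
    return and_(or_(a, b), nand(a, b))

# Verify XOR against its truth table
for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (a != b)
```

The same construction works starting from NOR instead of NAND; either gate by itself is universal.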

Trust me, I've been a human all my life ;)
 

SamuraiBlue

Captain
Pretty sure human logic is based on exactly those four commands. Well, technically just 3: "and", "or", and "not". Can you give me an example of human logic that cannot be expressed in terms of those 3 logical operators?

Trust me, I've been a human all my life ;)

So tell me, how do you keep your emotions in check?
Basically, whether you are aware of it or not, human emotion always takes part in your decision making.
How are you going to calculate emotion, let alone include it in a program?
Abstract concepts are based on emotion, as are moral decisions. That is why there are judges in court and not computers.
 

solarz

Brigadier
So tell me, how do you keep your emotions in check?
Basically, whether you are aware of it or not, human emotion always takes part in your decision making.
How are you going to calculate emotion, let alone include it in a program?
Abstract concepts are based on emotion, as are moral decisions. That is why there are judges in court and not computers.

So it sounds like you don't even understand what logic is.

Logic does not equate to decision making. Yes, humans are emotional creatures, but that has nothing to do with AI. We were talking about the dangers of AI when you chimed in claiming we didn't understand what AI was; now all you can say is that AI can't have emotions?

AI doesn't need emotions to be dangerous. The Singularity does not require emotions.
 

solarz

Brigadier
LOL now I had plenty of fun reading from you two, in a second I'll go back to give you Likes ROFL
and this:

calls for fuzzy logic to be introduced here

Fuzzy logic is still logic, and computers are quite capable of implementing it. Google search is one example of fuzzy logic.
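The point that fuzzy logic is still logic can be sketched in a few lines. In Zadeh-style fuzzy logic, truth values are degrees in [0, 1], with AND taken as min, OR as max, and NOT as 1 − x; ordinary Boolean logic is the special case where every value is exactly 0 or 1. The example values below are made up for illustration.

```python
# Sketch of Zadeh-style fuzzy logic: degrees of truth in [0, 1].
# Boolean logic falls out as the special case {0, 1}.

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1.0 - a

# "fairly tall AND somewhat heavy" yields a partial truth,
# not a hard true/false (illustrative values)
tall, heavy = 0.75, 0.25
print(f_and(tall, heavy))  # 0.25
print(f_or(tall, heavy))   # 0.75
print(f_not(tall))         # 0.25
```

Note that with degrees restricted to 0.0 and 1.0, `f_and`, `f_or`, and `f_not` behave exactly like the Boolean operators, which is the sense in which fuzzy logic extends rather than replaces classical logic.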
 

vesicles

Colonel
So it sounds like you don't even understand what logic is.

Logic does not equate to decision making. Yes, humans are emotional creatures, but that has nothing to do with AI. We were talking about the dangers of AI when you chimed in claiming we didn't understand what AI was; now all you can say is that AI can't have emotions?

AI doesn't need emotions to be dangerous. The Singularity does not require emotions.

In fact, I think what makes AI dangerous is exactly that it doesn't have emotions. It uses logic and logic only. That's why AI could potentially reach logical but cruel decisions that are unacceptable to humans.

That has been one of the biggest challenges in weighing AI against human pilots for future fighter planes.
 

solarz

Brigadier
In fact, I think what makes AI dangerous is exactly that it doesn't have emotions. It uses logic and logic only. That's why AI could potentially reach logical but cruel decisions that are unacceptable to humans.

That has been one of the biggest challenges in weighing AI against human pilots for future fighter planes.

Completely agreed.

I remember the "I, Robot" movie (not the book). Humans programmed androids with the rule that they cannot harm humans or allow humans to come to harm through inaction. The androids then reached the conclusion that, since humans left on their own are always hurting each other, the only way to obey their directive was to place all humans in captivity.

That was a great example of the unintended consequences of AI: human programmers create a set of premises intended to control the behavior of the AI. However, those premises rest on flawed human assumptions, which the AI does not share. Thus, the AI reaches conclusions that are completely unforeseen by its human creators.
 

solarz

Brigadier
Some further thoughts on AI:

The danger with AI is not so much that it does not have emotions, but rather that it does not have morality.

Ethics and morality are extremely difficult to program, mainly because we are incapable of defining them in the first place. Morality is definitely not logical.

Asimov's Three Laws are an example of deontology: attempting to define morality as a set of rules that one must follow. All deontological morality is problematic, because there are always flaws and loopholes in the rules.

Unfortunately, the only way we can conceive of programming morality (or anything really) into AI is through rules (commands and conditions). This means that whatever we do, AI will never have the kind of ethical framework that we humans can be comfortable with.
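The rules-and-loopholes problem above can be sketched with a toy, entirely hypothetical rule checker. The `Plan` fields and rule wording are invented for illustration; the point is only that a checker tests the letter of each rule, so a plan like "confine all humans" can pass Asimov-style constraints that no designer intended it to pass.

```python
# Hypothetical sketch: morality encoded as hard rules (deontology).
# The checker verifies only the literal conditions, so a plan that
# violates the designers' intent can still be "permitted".

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    harms_human: bool
    allows_harm_by_inaction: bool

# Asimov-flavored rules, reduced to literal checks on a plan
RULES = [
    ("may not injure a human", lambda p: not p.harms_human),
    ("may not allow harm by inaction", lambda p: not p.allows_harm_by_inaction),
]

def permitted(plan: Plan) -> bool:
    return all(check(plan) for _, check in RULES)

# Confining humans "for their safety" violates neither encoded rule:
confine = Plan("confine all humans", harms_human=False,
               allows_harm_by_inaction=False)
print(permitted(confine))  # True: letter of the rules kept, intent broken
```

Closing the loophole would mean adding more rules, which in turn have their own edge cases; that regress is exactly the weakness of rule-based morality the paragraph describes.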
 

shen

Senior Member
The way I see it, AI is dangerous simply because it is going to be smarter than and generally superior to humans. So in order to act for the benefit of humanity, AI would have to take the freedom of choice away from humanity. At best, we become pets.
 