I am trying to think up an intelligence test for a computer.
I am discounting things like computer vision and computer hearing, since a human who is blind or deaf can still be intelligent. What I wondered was: would a computer be considered intelligent if it could play a text adventure like Monkey Island?
For example consider this intelligence test: “You are in a room. You see a cat, a mouse and some cheese. What happens next?”
or “You are in a room. You see a river and a boat. On the other side is a picnic. You are hungry. What do you do?”
or “A woman drops $100 on the floor. What do you do?”
or “You wake up on the street. You have lost your memory. What happens next?”
Simple questions for a human. But is there a computer system in the world that could answer these? Not even Cortana or Siri could. Yet I don't think it would necessarily be too difficult to program: just add a lot of general knowledge, like Scribblenauts++.
BTW, if you are a human you can answer these questions to see what the answers should be. (Also try to explain your thought processes.) I just realised that these are the sort of questions they asked the androids in Blade Runner.
This is not a Turing test, because it doesn't matter whether the computer sounds like a human, only whether it is intelligent. As a second test you could allow the computer to ask questions to get more information.
Here is a more tricky one that requires generalisation:
“You see three apples. A monkey eats the first apple. It dies. A hippo eats the second apple. It dies. You are hungry. Do you eat the last apple?”
The standard bar for machine intelligence is called “The Turing Test”.
The fictional Voight-Kampff test in Blade Runner is actually based on the Turing Test.
The history of AI, though, is full of skeptics "moving the goal posts" every time a computer passes a test. For a while, people said a computer would be considered intelligent if it could play chess well. People invented a computer that could play chess well, and everyone changed their mind and said that it wasn't intelligent; a truly intelligent computer would have to play chess well enough to beat a master. People invented a computer that beat the greatest chess player in the world.

Everyone changed their mind and said that the whole chess thing was just a red herring, and that a truly intelligent computer would have to win at a more social game with natural language, like Jeopardy. People invented a computer that beat the best humans at Jeopardy. Everyone changed their mind and said that Jeopardy didn't really count after all, and that a truly intelligent computer had to write music better than Mozart, paint better than Picasso, and write prose better than Shakespeare.

The Turing Test is considered the current bar because it's pretty difficult to pass, but a few computers have passed it, and of course, as soon as they did, everyone said the Turing Test wasn't actually good enough. I'm pretty sure that most of the world will never be convinced that machines can be intelligent, even if we reach the point where no one can tell who is a Cylon and who isn't.
Phew… maybe that supercomputer that took part in Jeopardy, IBM's Watson.
'He' could maybe search through databases, check for the most likely thing to happen, and give that as an answer.
Otherwise the computer needs a whole lot of knowledge that an average human gains over a lifetime but probably isn't aware of in all its details.
Like: what is a cat? What do cats do? What animals do they hunt, and under what circumstances? What do they eat? What are they afraid of? What's a threat to them? The same for the mouse, and similar for the cheese. That's already a huge information network for just these three things.
Now "You are in a room": who is 'you'? What defines a room? What does it mean to be inside one? And that's just for the opening sentence.
But yeah, since IBM's Watson can beat humans at Jeopardy, even though Jeopardy is admittedly more a knowledge test than an intelligence test, I think this supercomputer could give pretty good answers to such questions.
And thinking about this, how would you go about testing a computer's intelligence? I think you would really be testing a software's intelligence.
I don't really like the Turing Test, since it can be fooled easily. For example, you can create a computer program that behaves like a drunk human. The test shouldn't be whether the person is a human or not but whether it is intelligent or not. Thus instead of "Is the person you're talking to human?" the question should be "Is the person you're talking to intelligent?"
It wouldn’t need that much knowledge. Think of the game Scribblenauts. It had quite a lot of knowledge in it.
Cat: Animal. Likes to chase mice. Furry. Enemy of dogs.
Mouse: Small animal. Likes to eat cheese.
Boat: Transport. Floats on water.
River: Long strip of water.
Stealing: Taking things that don’t belong to you: Bad
That’s about all you need for the first question. The funny thing is we all know “Dogs chase cats”. Except that isn’t actually true! It’s just a piece of false knowledge we all learn as children!
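The fact list above can be turned into a toy program. This is only a sketch of the idea under stated assumptions: the fact format, the entity names, and the relations are all invented for illustration, and a real system (like Scribblenauts) would need far more knowledge and inference.

```python
# A toy Scribblenauts-style knowledge base: a few hand-written facts
# plus a simple rule that fires when both sides of a relation are in
# the room. Every name and fact here is made up for illustration.

facts = {
    "cat":    {"type": "animal", "chases": "mouse"},
    "mouse":  {"type": "animal", "eats": "cheese"},
    "cheese": {"type": "food"},
}

def what_happens_next(things_in_room):
    """Predict simple events from pairwise relations between things present."""
    events = []
    for name in things_in_room:
        props = facts.get(name, {})
        for relation in ("chases", "eats"):
            target = props.get(relation)
            if target in things_in_room:
                events.append(f"the {name} {relation} the {target}")
    return events

print(what_happens_next(["cat", "mouse", "cheese"]))
# ['the cat chases the mouse', 'the mouse eats the cheese']
```

Even this tiny version answers the first question plausibly; the hard part, as discussed below, is scaling the fact network to cover everyday life.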
The IBM computer kind of cheated since it was just data mining.
If one day a computer could be programmed to recognise that a remark like "it was wonderful, the waiting line lasted just 15 minutes" is negative for a medical emergency and positive for air travel, well, that would be very good news. The thing is, IMHO, that it is very difficult to formalise this "general knowledge" in a way that can be exploited by computers; most often it makes sense only from human experience.
Well, that's kind of 20th-century AI. But it's not imagining what happens next in the situation, which is what humans can do. Like, halfway through the story you should ask "What happens next?" and the computer should say "Bilbo has to get rid of the ring, so maybe he goes to Mount Doom. Or maybe he pawns the ring and becomes a millionaire. Or what I would do is put the ring on, become invisible, and go and spy on lady hobbits."
A lot of human interaction is already formalised in games like The Sims.
If you can get a computer to imagine what happens next, it could also think about how what it's saying will be received, i.e. it becomes self-conscious.
Well as you say, Scribblenauts itself would successfully answer your first question. Does that mean Scribblenauts is intelligent? Or is it “just” data mining? If I pass a university exam by memorizing facts in a book, did I do it by using “real” intelligence, or was I just data mining?
No, Scribblenauts has some formalised knowledge, like "wheels roll". The IBM computer just looked for words like "Tom Cruise" and "film", searched for them, and came back with "Top Gun" purely from statistical analysis. It didn't really understand the questions. It could be compared to a human who just guessed at the answers or used Google.
The IBM computer might be considered intelligent if it knew what it was doing. Like if you asked it “How did you know it was Top Gun?” and it answered “I don’t know. I just Googled it.” I think self awareness is important.
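The kind of keyword matching described above can be caricatured in a few lines. To be clear, this is not how Watson actually worked; it is a deliberately crude sketch, and the mini "database" of film descriptions is invented for illustration.

```python
# A crude caricature of keyword-based retrieval: score each candidate
# answer by how many clue words appear in its hand-written description,
# then return the highest-scoring one. Real systems (like Watson) use
# far more sophisticated scoring over huge text corpora.

database = {
    "Top Gun":    "1986 film starring Tom Cruise as a Navy pilot",
    "Apollo 13":  "1995 film starring Tom Hanks about a space mission",
    "Casablanca": "1942 film starring Humphrey Bogart",
}

def best_guess(clue):
    clue_words = set(clue.lower().split())
    def score(entry):
        # entry is a (title, description) pair; count shared words
        return len(clue_words & set(entry[1].lower().split()))
    return max(database.items(), key=score)[0]

print(best_guess("Tom Cruise film"))  # prints: Top Gun
```

The point of the caricature is that nothing in it "understands" the clue; it just counts overlapping words, which is the objection being raised here.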
Imo that would be more like virtual creativity / imagination.
If I recall correctly, intelligence was defined as the ability to solve problems. Being able to solve more complex problems, and faster, means more intelligent.
Like, you do not really have to be intelligent to write a story; you just need to be good at making things up. Though… I'm not sure, it could be some other type of intelligence, like creative intelligence, if that exists.
But to solve problems like how to get across a river, given that you have some wood, an axe, and some rope, requires you to imagine building a boat. Personally I think creativity and imagination are very important to intelligence. Also, possibly, the ability to see connections and make generalisations (sometimes of abstract things that exist only in your imagination).
Like a chess computer, in a way, has to write the story of how it will win the game in its mind.
I think you're vastly underestimating all the work that Watson (the IBM computer) does. It's much more complicated than just googling something and pasting the first word it found. The idea that you can distinguish whether or not it actually "knew what it was doing" is basically the central argument of the philosophy of intelligence. How do you know for certain that anyone actually "knows what they are doing"? You can't see inside anyone's brain and watch the thoughts moving around. You can't know for sure whether or not they're actually humans or Terminators, or whether they're even real or you're just dreaming or hallucinating. The only way to know whether something is "actually" thinking is to observe how it acts and make an educated guess based on what you observe. That's the main idea of the Turing Test.
There are AIs that can do this, and it's not particularly complicated. If you know that your goal is "get to the other side of the river" and you have the facts "boats get across rivers", "I can make a boat from wood, rope, and an axe", and "I have wood, rope, and an axe", then you can use propositional logic to figure out that making a boat will reach your goal. The AIs in The Sims and Skyrim actually do this kind of thing. A Sim will know that it's hungry and follow a chain of reasoning: cooked food will reach its goal; it can make cooked food by putting uncooked food in an oven; it has an oven; it can get uncooked food from the refrigerator; it has a refrigerator; and so on.
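That chain of reasoning can be sketched as a tiny backward-chaining planner. This is only an illustration under assumed inputs: the rule table, the starting facts, and the goal names are all invented, and real game AI (e.g. goal-oriented action planning) is considerably richer.

```python
# Minimal backward chaining: a goal is achievable if we already have it,
# or if some rule lists preconditions that are all achievable in turn.
# Achieved sub-goals are appended to `plan` in the order they are taken.

rules = {
    "crossed_river": ["have_boat"],
    "have_boat": ["have_wood", "have_rope", "have_axe"],
}
have = {"have_wood", "have_rope", "have_axe"}  # starting facts

def achievable(goal, plan):
    """Return True if `goal` can be reached, recording steps in `plan`."""
    if goal in have:
        return True
    preconditions = rules.get(goal)
    if preconditions is None:
        return False  # no rule produces this goal
    if all(achievable(p, plan) for p in preconditions):
        plan.append(goal)  # all preconditions hold, so take this step
        return True
    return False

plan = []
print(achievable("crossed_river", plan), plan)
# True ['have_boat', 'crossed_river']
```

The Sim's hunger example maps onto the same shape: "be fed" depends on "have cooked food", which depends on "have uncooked food" and "have oven", and so on down to facts the Sim already has.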
Well I’m just going by the documentary I watched about Watson. It was disappointing to find out how it actually worked.
I suppose for something to "know what it's doing" it must have the symbolic capacity to describe what it's doing. Watson didn't have that capacity, so it couldn't communicate to itself what it was doing. It could say things, but it didn't know it said things because it didn't hear things. You need that feedback loop in order to think.
That's why I think adding a language capacity to The Sims would be amazing. Imagine the Sims communicating with each other: "Can you tell me the way to the shops?" "Certainly. It's the first on the left." It would be interesting to eavesdrop on their conversations. They would have the ability to lie too, but would refrain from lying to people they trust, like family and friends.
I don't know if the distinction between data mining and "real" intelligence is that clear. IMO data mining (as defined in the CRISP-DM standard, so also including data preprocessing, not just applying machine learning to "final" data) broadly encompasses human reasoning.
In the end it probably isn’t that important how you get the answer. But it’s probably more important to know how you got the answer. Such as “Yeah, I said Top Gun because it’s the only Tom Cruise film I’ve heard of.”
Like that philosopher said “The only thing we know is that we know nothing.” or something.
"There are known knowns. There are known unknowns. But there are also unknown unknowns."
heh
Like I said. Self awareness is the key.
Here’s my thought process for the boat example:
“I need to get across the river to get the food. I could use the boat or swim. They probably want the answer ‘boat’ or they wouldn’t have mentioned it. Perhaps they’re looking for a more imaginative answer like I saw on that documentary. I could say I would tunnel under the river with my hands like a mole. That would be funny. Maybe they’ll think I’m an idiot if I said that. I’m running out of time. I’ll just say ‘use the boat’.”
I agree. But I think for a lot of people, probably the majority, they will always find something to arbitrarily say that humans are intelligent and machines are not, regardless of whether it makes sense. Most people just instinctively want to believe that humans are special and maybe even magical, and that the things they do can’t be described by mere physical laws.
You’ll find the human brain is much the same. Just a series of electrical impulses.
Putting on my evolutionary biologist hat, you’ve started with the wrong question. Language is one of the higher orders of intelligence. It involves associating random sounds with random concepts. It also involves highly abstract concepts. Try explaining something as simple as ‘the’.
Point is, only one species on this planet has developed intelligence to the point of language. Asking a computer to duplicate one of our most sophisticated inventions (written language with an alphabet) is just plain unfair. And computers built to do this task will invariably fail at the simpler tasks associated with intelligence.
Set your goal much smaller. Build a computer that can duplicate the intelligence level of a bacterium (been there, done that). From there expand it to pass insect-level tasks, then move on to a small mammal.
Taking this approach might lead to a computer intelligent enough to conceptualise language on its own, without needing to be built to deal with a specific problem or test.
Yeah, that is a popular approach. But I don't necessarily think it would work. I can build a model of a toaster and make bigger and bigger models of more electrical systems until I end up with the iPhone, but that wouldn't shed any light on the iOS operating system. For all we know the brain is like computer hardware that loads an OS, encoded in our DNA, when we are born and throughout life.
Definition: "the #1" = "the thing that is a #1 that we were just talking about, or that is in front of us".