The Turing Test and the Limits of Science

In almost every area of computer science, the world has seen far more progress than we ever imagined possible back in 1950. Computer vision and control are so precise that we can launch a cruise missile from hundreds of miles away and choose which window it will use to enter a building. We edit documents with point-and-click, not to mention creating them with voice dictation. Anyone familiar with the original Star Trek series knows that our computers already sound far more human than, back in 1966, we expected computers to sound even centuries from now.

You are also reading this on a computer far more powerful than the ones that took humans to the moon, or even those that launched the space shuttle. The method by which all our computers are communicating is certainly not something envisioned back in 1950, or even in 1970, when the two supercomputers of “Colossus: The Forbin Project” needed hours to work out a common language.

In 1981, Bill Gates believed that providing 640K of RAM to users “would last a great deal of time” (he denies ever saying that “nobody will ever need more than 640K of RAM”). But it was only six years, he says, before people were clamoring for more, to match the needs of ever-more-powerful applications.

Now for the notable exception: Artificial Intelligence.

In 1950, Alan Turing devised a straightforward test for artificial intelligence: a person sitting at a terminal, engaged in conversation (what we would today call an Instant Messaging chat), is unable to determine whether he or she is conversing with a computer or with another human being. Turing also predicted that by the year 2000, after five minutes of such questioning, an average judge would have no better than a 70% chance of correctly identifying the machine (that is, the computer would fool the judge at least 30% of the time).

In 1968, when Stanley Kubrick’s “2001: A Space Odyssey” was released, the thinking computer named HAL (trivia: although each of the three letters of “HAL” precedes by one the corresponding letter of “IBM,” this was merely a coincidence) was considered a far more realistic possibility than Star Trek’s violation of Einstein’s relativity (the relationship between speed, matter and time appears to preclude faster-than-light space travel).

In 1984, when the sequel “2010” was released, the field of AI seemed to be making great progress. I recall that AI was the one course in my computer science studies that used the LISP language, because in LISP a program can construct additional code and then execute it — to, in effect, learn from events and write new procedures to respond to the same events in the future.
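
To make the idea concrete, here is the same code-writing-code trick sketched in Python rather than LISP (a toy illustration only; I am not claiming this is how the AI systems of that era actually worked):

    # Toy illustration of a program writing, then executing, new code in
    # response to an event -- the trick that LISP made natural.
    handlers = {}  # event name -> generated handler function

    def learn(event, observed_response):
        # Generate source code for a new handler, then compile and keep it.
        src = (
            f"def handler(data):\n"
            f"    # auto-generated response to '{event}' events\n"
            f"    return {observed_response!r} + ': ' + str(data)\n"
        )
        namespace = {}
        exec(src, namespace)  # the program runs the code it just wrote
        handlers[event] = namespace["handler"]

    def respond(event, data):
        if event in handlers:
            return handlers[event](data)  # use the learned procedure
        return "no handler yet"

    print(respond("greeting", "hello"))  # -> no handler yet
    learn("greeting", "reply politely")
    print(respond("greeting", "hello"))  # -> reply politely: hello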

Today we know there will be no further sequels to 2001 — not only because no such craft was launched in that year, but because we still have no thinking computer anywhere close to meeting the Turing standard. Since 1991, Dr. Hugh Loebner has sponsored an annual contest that hopes to award $100,000 to the first computer program to pass the Turing test. At the tenth contest, in 2000, 100% of the judges were able to recognize the computer within five minutes. Ditto five years later.

Furthermore, all of the entrants to date are programs that employ various tricks to make the judges think they are human — not one is an actual attempt to get a computer to think for itself. It is not merely that we don’t yet have a computer thinking like HAL; we don’t even have a prototype capable of entering the contest and beating out a collection of clever programming tricks for the $2,000 awarded each year for the “best attempt.”

My father-in-law was one of the first (if not the first) to note that the Turing test is remarkably close to the one provided by the Talmud. The Talmud relates that a sage once created a golem, a man of clay, and sent it to another sage. The second attempted to engage it in conversation, and, realizing that it was unable to speak or communicate, recognized it as a golem and returned it to dust. Chazal were certainly familiar with people who were deaf and mute, as well as with mental handicaps, so the test was obviously not one of mere speech but of cognition. And the commentaries say that our sages were not given the amount of Divine assistance necessary to create a being that could not merely walk and obey orders, but communicate intelligently as well.

There are certainly any number of Torah-observant individuals who believe that modern computer scientists might eventually meet this standard with artificial intelligence. I am — just as with evolution — less sanguine.

20 Responses

  1. Justin A says:

    I’ve been a professional programmer for nearly 20 years, and I’d never make this claim in public view. While it’s true that we will probably NEVER create a machine that actually perceives the world as we do, it is likely that we will create a machine that can fool us (or at least laymen) into thinking that it “thinks”.

    The problem is our capacity to measure. If we can’t define human intelligence/consciousness, then how can we say, “We’ve built a computer that thinks like a human”? It’s like trying to establish measurements in a house of mirrors.

    That’s why Turing set up the test the way he did: as an equal abstraction of both human and computer intelligence. It only tests a computer’s ability to emulate human communication; and since all judgments between observers are relative, so must be our judgments of thought.

    We already have AIs that “create” music and can translate human language, even AIs that (help to) design other computers. No doubt we will one day build a machine that can fool us into anthropomorphizing it, but I don’t doubt there will always be an “alienness” about any such machine.

    The problem that arises is that Chazal remain true, but not without room for Biblical Critics to point to such “evidence” in support of their fallacy, since, again, the realm of human perception leaves room for fuzzy distinctions and overlapping value judgments.

    ps. If I still can’t “get” Israeli humor, I doubt any machine will be a natural in human society.

  2. micha says:

    Yes, the Turing Test is flawed because it confuses the effects of intelligence with intelligence itself. Only a real diehard behaviorist would be happy with this assumption. However, I do not believe that it’s possible to successfully imitate intelligence, so in practice, a false positive wouldn’t come up.

    R’ Dr Moshe Koppel, in his book “Metahalakhah”, shows that intelligence involves a middle ground, something that is neither algorithmic nor random.

  3. JewishAtheist says:

    I’m also a professional programmer, and I work with some sub-fields of AI.

    Turing and everybody else vastly underestimated the complexity of the brain. Real AI won’t be accomplished simply by throwing more hardware at the problem; our brains have countless subsystems which are themselves incredibly complex. Just think about all that’s involved with interpreting the visual data the sensors in our eyes receive.

    If we ever do develop real AI (and I think we will, but not in my lifetime) it’ll be through some sort of evolutionary process, not through top-down design. Intelligence is just too complicated.
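
    To illustrate what an evolutionary process means in the small, here is a toy sketch in Python: random variation plus selection against a trivial fitness function. It evolves a bit-string, not a mind; the point is only that the design emerges bottom-up rather than top-down.

        import random

        TARGET = [1] * 20                    # stand-in for "fit" behavior
        POP, GENERATIONS, MUTATION = 30, 60, 0.05

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome):
            return [1 - g if random.random() < MUTATION else g for g in genome]

        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
        for gen in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            survivors = population[: POP // 2]                        # selection
            children = [mutate(random.choice(survivors)) for _ in survivors]
            population = survivors + children                         # variation

        print(max(map(fitness, population)), "of", len(TARGET))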

    None of this has any relevance to religion, of course, except to show how difficult it would be to design the human mind. Sure, an omnipotent designer could do it since an omnipotent designer could do anything, but it certainly appears that our brains evolved. There’s a whole string of progressively less intelligent brains from ours through the other primates, the other mammals, the other vertebrates, through the animals with nerve clusters instead of brains, to those organisms with no specialized thinking processors at all.

  4. Seth Gordon says:

    I am glad to see a posting on this blog to which I can respond in a less charif manner. 🙂

    As another practicing programmer, I think that in theory, given sufficient programming effort and technological development, we could maybe someday have a computer that passes the Turing Test. In practice, I don’t think it’s ever going to happen, because it’s not worth the effort. As The Economist once remarked, we have billions of human intelligences on the planet now, and making more is cheap and easy. A perfect simulation of a human intelligence would be a neat parlor trick, but not very useful.

    As a practical example: My employer markets a search engine that can analyze text documents and translate geographic references (“Paris”, “10 miles north of Baghdad”, etc.) into latitudes and longitudes, so that you can search for all documents that refer to locations in a certain area and see them plotted on our map. This requires some clever processing of the text, because “Paris” could refer to a city in France or a city in Texas, “Madison” could refer to a city in Wisconsin or a former U.S. President, there’s a place in New Caledonia called “The”, etc., etc. I’m not allowed to tell you how we (usually) pull it off, but I can tell you that we don’t try to understand exactly how the human brain recognizes geographic references and then translate that process into a computer program.
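
    To give the flavor of one obvious approach (emphatically not how we actually do it), here is a hypothetical sketch in Python: look the name up in a gazetteer, then score each candidate against nearby context words. Every name, weight, and clue below is invented for illustration.

        # Hypothetical sketch of geographic disambiguation; NOT my employer's
        # method. Candidates come from a gazetteer; context words decide.
        GAZETTEER = {
            "paris": [
                {"place": "Paris, France", "lat": 48.86, "lon": 2.35,
                 "clues": {"france", "seine", "louvre"}, "prior": 0.9},
                {"place": "Paris, Texas", "lat": 33.66, "lon": -95.56,
                 "clues": {"texas", "lamar", "county"}, "prior": 0.1},
            ],
        }

        def resolve(name, context_words):
            candidates = GAZETTEER.get(name.lower(), [])
            context = {w.strip(",.").lower() for w in context_words}
            def score(c):
                # population prior plus a point per supporting context word
                return c["prior"] + len(c["clues"] & context)
            return max(candidates, key=score) if candidates else None

        text = "We met in Paris, a short drive from Lamar County, Texas."
        print(resolve("Paris", text.split())["place"])  # -> Paris, Texas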

  5. Toby Katz says:

    I’ll confess I never totally understood the Turing test. Suppose you (a person) asked a computer or a robot, “What do you like better, vanilla or chocolate ice cream?” and the robot answers, “Vanilla.” Now you are convinced you are talking to a genuine intelligence, but in fact the robot has never tasted ice cream and doesn’t really “know” what ice cream is, though it may be able to tell you the exact chemical formula for ice cream as well as the temperature range in which it stays frozen.

    In fact I really don’t believe that intelligence is POSSIBLE without sensory input. Even if it had sensors that could tell you the temperature and sugar content of a spoon of ice cream placed on the sensor, it would STILL not be “tasting” ice cream and its stated preference would be either random, or the preference of the programmer — in either case, a trick to make the robot /seem/ intelligent, to make it act /as if/ it had intelligence.

    Until robots have five senses, and emotions as well, there will not be real intelligence and all the seeming intelligence will be actually the intelligence of the programmer. I’m pretty sure my “until” works out to “never.”

    You may wonder why I include emotion as a prerequisite. There was a fascinating article about this in TIME a couple of years back — it may have been based on a recently published book — about people who lost all affect as a result of brain damage and, even though their IQ remained unchanged, lost the ability to act intelligently. One example was a formerly brilliant and successful stockbroker who, as the result of an accident, had no emotions. With no fear of failure and no excitement at success, he took unreasonable risks and then didn’t care when he lost piles of money. His career took a nosedive.

    Even the most seemingly rational and emotionless person, say a physicist, a mathematician, a Yekke or a Litvak, feels SOME emotion and his emotional sense that “this is an esthetically pleasing formula” or “this is too messy and is not satisfying” contributes to his ability to think intelligently.

    Therefore I long ago concluded that only G-d could create a being with genuine intelligence — having the necessary prerequisites of sensory input and emotion. It is highly unlikely that G-d’s creatures — us humans — will ever be able to duplicate this feat. And a machine that could pass the Turing test would only be a very sophisticated trick.

  6. JewishAtheist says:

    Toby Katz,

    Your story implies that emotions are caused by the brain. The brain is a physical object. Physical objects can in theory be simulated or duplicated. Therefore, it’s theoretically possible to create a computer which has emotions.

    Unless our intelligence comes from some immaterial soul (and the fact that brain damage can get rid of our emotions or intellect seems to imply that it doesn’t), there is no reason we shouldn’t be able to duplicate it someday.

  7. Yaakov Menken says:

    Micha, I don’t think Turing was confusing intelligence vs. acting as if it were intelligent. What does it mean to “think”? It’s not something you can measure. “Turing’s reasoning was that, presuming that intelligence was only practically determinable behaviorally, then any agent that was indistinguishable in behavior from an intelligent agent was, for all intents and purposes, intelligent.” Or, here’s how he put it in his original paper: “The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

    If something acts intelligent, he says, then we’re going to say it “thinks.” The goal of AI was (and, in theory, remains) to create a computer / robot that can respond “intelligently” to unforeseen situations, experiences, and dialogues, the same way we learn from previous, similar experiences.

    Justin, I’m also a programmer; I just wear several hats (all black 🙂 ). Some of my recent work was mentioned here.

    I may well be proven wrong. But I don’t think so. The Turing test is itself very limited — a single five-minute dialogue. Yet the entrants are little more than collections of pre-coded responses and sentence-parsing algorithms. They are not true attempts to create a learning, growing intelligence.

    JewishAtheist, this is where we disagree. The ability of brain damage to ruin emotions or intellect hardly proves your contention — but Mrs. Katz makes a compelling case that we need both emotion and sensory input. This is very interesting, because the original goal in robotics was always to have something that would respond intelligently, but without emotion — and now we have, perhaps, learned that one without the other is impossible.

    Programming to learn is, at least, comprehensible — we can envision what we need. But how do you program emotion?

  8. JewishAtheist says:

    Yaakov Menken,

    It sounds like you are assuming that emotions are more than a result of the human brain. I can’t prove you’re wrong, but you sure can’t prove you’re right. Although Toby Katz’s story doesn’t prove it one way or the other, it does strongly imply that the brain is at least a necessary condition for emotions, and is probably a sufficient one as well.

  9. Yaakov Menken says:

    JA, that’s not what I mean. Those of us who believe in G-d do tend to believe that our intellects and emotions are far more than well-developed biology, and I (and from the comment above, Toby Katz) do believe that.

    But that wasn’t my point. From an AI perspective, and as a CS developer, I can at least envision what it means to program “intelligence.” I understand why Turing was under the misconception that it was achievable in 50 years. But emotion? I don’t see where it can begin or end — it’s like Toby Katz’s example of preferring one flavor of ice cream. Is your preference for one flavor over another merely random? If not, where does it come from? How would you program a computer to choose between them and change its mind when it got older? Any models we have for things like this are obviously too artificial.

  10. JewishAtheist says:

    Well, if you believe in evolution, you can hypothesize about the sources of our emotions. I don’t see why you couldn’t program a machine to “prefer” one thing over another. Emotions can be seen as “states,” where each state is vague and amorphous. When we’re angry, it’s really like our brains are in a “state” in which we are more likely to act aggressively and less likely to trust, for example. There’s no reason I can see that a machine couldn’t have similar characteristics. You can argue about whether a machine that acts emotionally does in fact have emotions, but that’s a philosophical argument I’m not sure can be resolved. For all we know, our brains are machines that act emotionally. (I understand that you and Toby Katz believe that our intelligence and emotion aren’t just biology, but you can’t possibly say that you know it for sure.)
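
    To sketch what I mean in code (a toy model, nothing more): an “emotion” can be implemented as a state variable that biases otherwise ordinary decisions. Whether the result deserves the word “emotion” is exactly the philosophical question.

        import random

        class Agent:
            def __init__(self):
                self.anger = 0.0  # 0.0 = calm .. 1.0 = furious

            def provoke(self, severity):
                # events push the state up; calm() lets it decay again
                self.anger = min(1.0, self.anger + severity)

            def calm(self):
                self.anger = max(0.0, self.anger - 0.1)

            def react(self, request):
                # the same input is handled differently in different states:
                # higher anger -> more likely to refuse, less likely to trust
                if random.random() < self.anger:
                    return "refuses: " + request
                return "complies: " + request

        a = Agent()
        print(a.react("lend me your stapler"))  # calm, so it complies
        a.provoke(0.9)
        print(a.react("lend me your stapler"))  # angry, so it probably refuses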

  11. Different River says:

    1) I used to say that “Artificial intelligence is for people without the real thing.” 😉

    2) I now say, “Artificial intelligence is whatever we can’t do yet.” I remember hearing in the early 1980s about one “artificial intelligence application” that looked at what a computer user was doing and, when he pressed the “help” key, brought up information relevant to that task instead of just a generic “help start page.” This is now called “context-sensitive help,” and it’s everywhere — but now that it exists, it’s not called artificial intelligence anymore. I guess it was too easy to be AI, since someone figured out how to do it.

    3) I doubt people use the Turing Test very often, but a Reverse Turing Test is quite common — this very blog uses one to fight comment spam! I refer, of course, to that “security code” you have to enter to post a comment. A Turing Test involves a human trying to distinguish between a fellow human and a computer; this is a computer trying to distinguish between a fellow computer and a human!
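
    Generating such a code is the easy half. Here is a minimal sketch in Python, assuming the PIL/Pillow imaging library (real systems distort far more aggressively); reading one back is the hard half, which is the whole point.

        import random
        import string
        from PIL import Image, ImageDraw

        def make_captcha(path="captcha.png", length=5):
            code = "".join(random.choices(string.ascii_uppercase, k=length))
            img = Image.new("RGB", (30 * length, 50), "white")
            draw = ImageDraw.Draw(img)
            for i, ch in enumerate(code):
                # jitter each character so OCR can't rely on a fixed grid
                draw.text((10 + 30 * i + random.randint(-3, 3),
                           15 + random.randint(-5, 5)), ch, fill="black")
            for _ in range(4):
                # extraneous lines across the text to frustrate simple OCR
                draw.line([(random.randint(0, 30 * length), random.randint(0, 50)),
                           (random.randint(0, 30 * length), random.randint(0, 50))],
                          fill="gray", width=2)
            img.save(path)
            return code  # store server-side; compare with what the visitor types

        print("expected answer:", make_captcha())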

  12. Jonathan says:

    You chose a silly example for measuring AI progress, and a “strong AI” strawman — a paradigm long discarded by the mainstream CS research community. In reality, most CS researchers and practitioners have long since moved on to data-driven/machine-learning techniques, potentially with some domain-specific heuristics thrown in. These systems often outperform humans on “intelligent” tasks, yet lay no claim to understanding or modeling the human brain — it is simply not necessary for “intelligent” output. One could argue that the MetaCarta example is not an intelligence task; here is another example: Google and other search engines.
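
    As a trivial illustration of the data-driven approach, here is a toy naive Bayes text classifier in Python: it “learns” to spot spam from labeled examples with no model of how a brain works. Real systems differ in scale, not in kind.

        import math
        from collections import Counter

        train = [
            ("win money now free prize", "spam"),
            ("free prize claim money", "spam"),
            ("meeting agenda for monday", "ham"),
            ("lunch on monday", "ham"),
        ]

        counts = {"spam": Counter(), "ham": Counter()}
        totals = {"spam": 0, "ham": 0}
        for text, label in train:
            for word in text.split():
                counts[label][word] += 1
                totals[label] += 1
        vocab = len(set(counts["spam"]) | set(counts["ham"]))

        def classify(text):
            def logprob(label):
                # add-one smoothing so unseen words don't zero everything out
                return sum(math.log((counts[label][w] + 1) / (totals[label] + vocab))
                           for w in text.split())
            return max(("spam", "ham"), key=logprob)

        print(classify("claim your free money"))    # -> spam
        print(classify("agenda for lunch monday"))  # -> ham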

    Many technology-challenged people (surely more than Turing’s arbitrary 70% threshold) think that a search engine somehow “understands” documents and a searcher’s query, returning exactly what they asked for. In fact, people think that search engines are somehow omniscient – there are many queries such as “when will I die” and “should I marry so-and-so.” Presumably these queries are a direct result of a human being impressed by the “intelligence” demonstrated in retrieving documents created by other people. Could better results be returned by a human reference librarian hiding behind a search engine’s query box? Often, probably not much better.

    My point is simply that Turing’s test is irrelevant – and should not be used to demonstrate the divine origin of the human brain, lest we invite another “Torah codes” embarrassment to our religion.

  13. Yaakov Menken says:

    DR and Jonathan are getting at the same point — that the skills learned in the search for AI are not worthless. We get context-sensitive help and computer vision out of AI, including OCR (optical character recognition) able to address wobbles in scanned text. Voice dictation, which I both referenced and use, is also an AI technology.

    Yet beneath that Jonathan has made a crucial error. The “discarded” paradigm offered by Turing does not depend on a computer actually duplicating the processes of the human brain. On the contrary, as Turing said, the computer merely needs to act in an intelligent fashion. And that is “discarded” only to the extent that it simply cannot be achieved at this point.

    Instead, AI is restricted to limited tasks with limited variables. Cruise missiles are not thrown off by wind blowing through a tree, voice dictation assembles the most likely phrase, and natural-language search engines determine what to look for. In all these situations, computers are being “trained” to respond correctly to greater degrees of chaos — but not to display thinking intelligence. [I have spec’d out an algorithm to address in a text search the fact that different people transliterate the same Hebrew word different ways. We’ll be implementing it on one of our sites, eventually. This, too, is AI under a very limited definition.]
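
    [Since I can’t describe that algorithm here, what follows is a purely hypothetical sketch of the general idea, with made-up rules: collapse the common variant spellings of Hebrew sounds to one canonical key, then index and search on the key rather than the raw spelling.]

        import re

        # Hypothetical sketch only -- not the algorithm mentioned above.
        RULES = [
            (r"ch|kh",   "h"),    # chet/khaf: "Chanukah", "Khanuka"
            (r"h\b",     ""),     # silent trailing h: "Chanuka(h)"
            (r"tz|ts",   "z"),    # tzadi: "mitzva" / "mitsva"
            (r"q|c|k",   "k"),    # kuf/kaf: "kiddush" / "qiddush"
            (r"[aeiou]", ""),     # vowels vary the most; drop them
            (r"(.)\1+",  r"\1"),  # collapse doubled letters
        ]

        def canonical(word):
            key = word.lower()
            for pattern, repl in RULES:
                key = re.sub(pattern, repl, key)
            return key

        for w in ["Chanukah", "Hanukka", "Khanuka", "Hanukah"]:
            print(w, "->", canonical(w))  # all four map to the same key, "hnk"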

    DR, the comment-spam-blocking “reverse Turing test” would be useless were computers closer to the Turing standard. A visitor to this web site may have never seen a graphic similar to the one displayed when leaving a comment, yet the visitor is able to read the characters (despite the varied backgrounds, distortions, and extraneous lines) and enter them correctly (most of the time!). While it is certainly true that any number of programs can find blog comments sections and enter spam comments, and other vision programs are actually capable of parsing the graphic, a program capable of addressing various roadblocks of this nature remains distant nonetheless (for which everyone with a comments section should be extremely grateful). And even this would be a tremendously limited version of “intelligence.”

    Jonathan, you are right that people are impressed with search-engine results. But what happens after they enter “should I marry so-and-so?” Obviously the answers are no longer impressive. Of course, we could envision a computer program able to analyze the people involved and ask a series of questions such that the questioner could think through the issue more thoroughly — we’ve had programs helping people think through psychological issues since the 1960s (Eliza).

    But Turing did not frame his prediction in terms of a search-engine query; he framed it in terms of a five-minute conversation. Here is the transcript of my conversation with Eliza:

    Hello, I am Eliza.
    hello eliza
    How are you today.. What would you like to discuss?
    how do i know you’re not human?
    What makes you think I am not human?
    it says you are a computer
    Perhaps you would like to be a computer?

    Elapsed time: under 30 seconds. Turing’s test is hardly irrelevant — first of all, the $100,000 prize is still out there. But the fact that people have turned aside to more specific tasks is precisely because the development of a computer able to act in an “intelligent” fashion is so difficult to achieve.
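
    The trick behind responses like “Perhaps you would like to be a computer?” is nothing but pattern substitution. Here is a minimal Eliza-style sketch in Python (the same trick, not Weizenbaum’s actual 1966 program):

        import random
        import re

        REFLECT = {"i": "you", "you": "I", "am": "are", "my": "your", "your": "my"}
        RULES = [  # (pattern, canned responses with a slot for the user's words)
            (r"i am (.*)",        ["Why do you say you are {0}?"]),
            (r".*\byou are (.*)", ["What makes you think I am {0}?",
                                   "Perhaps you would like to be {0}?"]),
            (r".*",               ["Please tell me more.",
                                   "What would you like to discuss?"]),
        ]

        def reflect(fragment):
            # swap pronouns so the user's own words can be echoed back
            return " ".join(REFLECT.get(w, w) for w in fragment.split())

        def eliza(line):
            text = line.lower().rstrip("?.!")
            for pattern, responses in RULES:
                m = re.match(pattern, text)
                if m:
                    return random.choice(responses).format(*map(reflect, m.groups()))

        print(eliza("it says you are a computer"))
        # -> "What makes you think I am a computer?" (or the "Perhaps..." line)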

    I think I was careful to state that I’m not certain we won’t get there. I just don’t think it’s likely. And I’m not so sure the “codes” debate is settled yet, either.

  14. Akiva says:

    > Bill Gates famously remarked that “nobody will ever need more than 640K of RAM!”

    No, he didn’t. The 640K limit was a design decision at IBM.

  15. Yaakov Menken says:

    Akiva, it is a truly famous quote. You can find it on this page between Ken Olson, founder and president of DEC (Digital Equipment Corporation, which was purchased by Compaq which then merged with HP), saying “There is no reason anyone would want a computer in their home,” and the Yale University management professor who graded Fred Smith’s paper proposing reliable overnight delivery service, and wrote “The concept is interesting and well-formed, but in order to earn better than a ‘C,’ the idea must be feasible” (Smith went on to found FedEx, which is currently worth over $29 billion).

  16. Akiva says:

    I’m sorry — it’s widely claimed that Gates said that — but check your IBM history: the 640K limit was a design limitation. The original chip could only address 1 MB of memory, and IBM reserved the upper portion of that address space for I/O, video, etc.

    (Note: I’ve been doing Micro hardware and software since 1976…and I owned an original 16K IBM PC.)

    http://news.zdnet.com/5208-1009-0.html?forumID=1&threadID=6527&messageID=131873&start=1

    Akiva

  17. tzura says:

    I am a neurobiologist, and this has always been a topic of interest to me. It is indeed true that the reason/emotion dichotomy traditionally assumed in psychology is breaking down, but this is largely because the concept of emotion is becoming better defined as a function of the physiology of the brain (and viscera), and therefore more amenable to computational modeling. In fact, there is an emerging field called Affective Neuroscience: the study of the neural mechanisms of emotion. Books by Dr. Joseph LeDoux and Dr. Antonio Damasio (look them up on Google or Amazon) are good places to start.

  18. Toby Katz says:

    You have reminded me that the TIME article I read about emotion being an integral part of intellect was indeed based on Damasio’s book, which I really must read one of these days.

  19. Micha says:

    Yaakov, you are correct. Turing saw that we could not discuss “can a machine think” in any scientific manner (at least, not until scientists have a methodology for dealing with first-hand private experience), and therefore set out to transvalue the word “think”. Different spin than what you put on it, but the same point. The “confusion” I was speaking of is in then going back and saying that if we got a machine to exhibit “intelligence,” it was really AI in the original sense of the word intelligence.

    But, as I said, I don’t think it’s possible to fake intelligence. Intelligence, when explored long enough, will show signs of free will; i.e., it is neither random nor algorithmic. And so nothing isomorphic to a Turing Machine (i.e., something that runs algorithms, with or without an added random-number generator) will ever achieve AI.
