AI Week, Day 1: Now you’re speaking my language

One of the fundamental tests of Artificial Intelligence is something called the Turing Test. It’s a blind test where an operator types questions into a computer screen and receives answers from either a person or an AI. An AI is said to pass the test if the person asking the questions cannot reliably determine whether they are talking to a machine or a person.

Understanding language is one of the prerequisites of passing this test, and a subject we could easily spend the week discussing. For now, we’ll approach it Minsky-like, with a couple of short sections exploring different aspects of the problem.

Think In French!

I took five years of French in middle school and high school and don’t remember much of it at all. My teacher used to tell us to “think in French”; that way we would really understand the language and be able to speak it without having to parse each word. I never came close to that level of proficiency, but the advice does raise an interesting question about language.

Do I think in English?

Some thoughts certainly come through as complete sentences, such as things I’m about to say or write, but what about emotions, desires, and needs? A question I often ask my wife when we are trying to decide what we want for dinner is “what food-shaped hole is missing in your stomach?” (i.e. I have a Chipotle-shaped hole). The higher-level thought, what am I hungry for, may be in English, but the evaluation of that thought may not be. When you think of a Chipotle burrito, you think of more than just the words; you think of the sensations. Maybe you think of how full your stomach is after eating one, or the blend of tastes that comes from biting into one. This sense memory is not expressed in words, and yet it is understood and used to evaluate the question: what do I want to eat?

Language is not something we know from birth, but something we learn. A baby gets hungry, or wants a nap, or needs to be changed, and is able to communicate this without words. Yet, at the same time, simply through the act of listening to people talk, it learns to say its first words and thus opens up a wider world of communication.

Do we program an AI with specific knowledge of language or teach it as it goes, like a baby?

Elementary my dear, Watson

Computers already understand language well enough to beat the top players in Jeopardy. The IBM computer, Watson, bested Ken Jennings and Brad Rutter in February of last year. If you watched the telecast, it wasn’t even close, prompting Jennings to welcome “our new computer overlords.” Watson did more than just look up the right answer: it actually read the question, understood what was being asked, and provided a correct response.

It did this through sheer parallel processing. Watson is in fact not a single computer, but a cluster of 90 servers with a total of 16 terabytes of RAM (thousands of times the average computer), and nearly 3,000 processor cores (think dual or quad core times 1,000). It had access to 4 terabytes of digitized text (about 8 million books), more than you could read in 100 lifetimes. Watson used a variety of proven text-analysis algorithms to determine probable answers, then compared the results, tabulating a likelihood for each candidate answer being correct. If enough different algorithms produced the same response, it would answer, usually correctly.

Does Watson “understand” the question it’s being asked? Watson functions like Minsky’s society of autonomous “unintelligent” agents. It uses thousands of different programs, run in parallel, to parse a question and determine its most likely answer. In this sense it “understands,” in that it is correctly able to parse the meaning of a question. But “understanding” as we mean it is a different question.

Juliet is a giant ball of gas

In his book Imagine: How Creativity Works, Jonah Lehrer discusses research into brain injuries, particularly those to the right hemisphere of the brain. It was thought for a long time that injuries to the right side of the brain were not as serious, that the fundamental centers of language and meaning were all located in the left brain. But those with right-brain injuries ceased to be able to understand jokes, sarcasm, and metaphor, even though they still understood language.

Lehrer states that the left brain is responsible for the denotation (dictionary meaning) of language, whereas the right is responsible for connotation (contextual meaning). Lehrer uses the example of “Juliet is the sun” from Shakespeare to illustrate this problem. We know that Romeo does not mean that Juliet is a big ball of gas, but rather that she is radiant and affects him in ways that the sun affects the Earth. Without both denotation and connotation, language is not fully understood.

Does Watson understand connotation?

By sheer volume, IBM’s Watson has access to more digitized text than any single person could absorb. Jeopardy is a game of word play, which requires some understanding of the different ways words are used. Watson does this by having a large databank of word usages to compare individual questions to. Is this how our minds work? Are we confused the first time we see “Juliet is the sun” until we can cross reference it with other information?
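The cross-referencing idea can be made concrete with a tiny sketch. This is not how Watson actually works, just a minimal illustration of inferring a word’s connotations from a corpus of usages (the corpus, window size, and stopword list are all invented for the example):

```python
import re
from collections import Counter

def connotations(word, corpus, window=5):
    """Guess a word's connotations by counting which words
    tend to appear near it across many example sentences."""
    stopwords = {"the", "a", "is", "and", "was", "like", "over", "of"}
    counts = Counter()
    for sentence in corpus:
        tokens = re.findall(r"[a-z]+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok == word:
                nearby = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                counts.update(t for t in nearby if t not in stopwords)
    return counts.most_common(3)

corpus = [
    "The sun is warm and radiant.",
    "Her radiant smile was like the sun.",
    "The warm sun rose over the earth.",
]
print(connotations("sun", corpus))
```

With enough such sentences, “sun” becomes associated with words like “warm” and “radiant,” which is exactly the contextual meaning Romeo is trading on; a system with only a dictionary definition of “sun” would miss it.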

Is language necessary?

We want to talk to our computers, and at some point AI needs to be able to communicate with us in order for us to work together. But language is only a framework for ideas. Just as language triggers specific ideas, sensations, and feelings in us, so does computer code trigger specific electrical impulses in a computer circuit. For useful understanding, an AI must parse its inputs (senses) and produce the correct response. Early AIs may be more like children, lacking a sophisticated understanding of language like Watson’s, but still able to tell us what they need.

See you tomorrow! Check out some of Brian’s thoughts on AI methodology over here!

2 Comments

Filed under Trube On Tech

  1. Watson was pretty slick, but he’d have a long way to go before passing the Turing test. 😉 But I agree that the mind stores ideas separate from the words for those ideas.
