
Fancy Pants Language

The purpose of writing is to convey meaning, to communicate.

This communication can be anything. It could be a thought, an image, an idea, an emotion, or even just a list. But the bottom line is that writing is meant to be understood by others.*

So why do we so often obfuscate (render obscure, unclear, or unintelligible) our writing?

“Eschew obfuscation, espouse elucidation” or, put in plainer terms, “avoid being unclear, support being clear” (thanks, Wikipedia).

Sometimes we use language as a way to communicate that we are smart, or at least smarter than we may appear. Sometimes we use language to gain an advantage over others, to make them feel dumber and less knowledgeable. And sometimes we simply can’t help ourselves.

So here are some guidelines for when to use the five dollar word, and when to try something with less than three syllables:

1) Know your audience: Certain audiences expect certain language. In a business meeting, for instance, never miss an opportunity to use the word utilize instead of the word use.

2) Know the word you’re using: Unsure of a word’s precise meaning? Look it up or don’t use it.

3) Does this word convey a specific meaning not conveyed by the shorter or more commonly known word? If not, examine why you need to use it.

4) What is the purpose of this piece of writing? There is no one-size-fits-all approach. For technical writing, precise, correct words are desired. For a blog post meant to be read in 3-4 minutes with your morning coffee, maybe keep it simple.

5) Explain yourself: It’s not necessarily patronizing to provide a definition. In a blog post this can be done with a link, or between commas or parentheses, something the reader who knows the word can skip (or check to make sure you know what you’re talking about).

6) Writing clearly and concisely does not mean you’re not smart: In fact, it tends to mean just the opposite. If you really understand something, you can explain it to others at whatever level they’re coming in with.

7) Use different words: You’d be amazed how many ways there are to say the same thing. Take the definition of obfuscation from above (obscure, unclear, unintelligible). Or how about convey, teach, explain, etc.? Avoiding repetition communicates a command of the language better than flowery words do.

What’s the most obscure word you use all the time? What is the most recondite word you utilize habitually?

*Excluding diaries, private writing, etc.


Filed under Writing

AI Week, Day 1: Now you’re speaking my language

One of the fundamental tests of Artificial Intelligence is the Turing Test. It’s a blind test in which an operator types questions into a terminal and receives answers from either a person or an AI. The AI is said to pass the test if the operator cannot reliably determine whether they are talking to a machine or a person.

Understanding language is one of the prerequisites of passing this test, and a subject we could easily spend the week discussing. For now, we’ll approach it Minsky-like, with a few short sections exploring different aspects of the problem.

Think In French!

I took five years of French in middle school and high school and don’t remember much of it at all. My teacher used to tell us to “think in French”; that way we would really understand the language and be able to speak it without having to parse each word. I never came close to that level of proficiency, but the advice does raise an interesting question about language.

Do I think in English?

Some thoughts certainly come through as complete sentences, such as things I’m about to say or write, but what about emotions, desires, and needs? A question I often ask my wife when we’re trying to decide what we want for dinner is: what food-shaped hole is missing from your stomach? (i.e., I have a Chipotle-shaped hole.) The higher-level thought, what am I hungry for, may be in English, but the evaluation of that thought may not be. When you think of a Chipotle burrito, you think of more than just the words; you think of the sensations. Maybe you think of how full your stomach feels after eating one, or the blend of tastes that comes from biting into one. This sense memory is not expressed in words, and yet it is understood and used to evaluate the question: what do I want to eat?

Language is not something we know from birth, but something we learn. A baby gets hungry, or wants a nap, or needs changing, and is able to communicate this without words. Yet, at the same time, simply through the act of listening to people talk, it learns to say its first words and thus opens up a wider world of communication.

Do we program an AI with specific knowledge of language or teach it as it goes, like a baby?

Elementary my dear, Watson

Computers already understand language well enough to beat the top players at Jeopardy. The IBM computer Watson bested Ken Jennings and Brad Rutter in February of last year. If you watched the telecast it wasn’t even close, prompting Jennings to welcome our new computer overlords. Watson did more than just look up the right answer: it actually read the question, understood what was being asked, and provided a correct response.

It did this through sheer parallel processing. Watson is in fact not a single computer, but a cluster of 90 servers with a total of 16 terabytes of RAM (about 4,000 times the average computer) and nearly 3,000 processor cores (think a dual- or quad-core machine times 1,000). It had access to 4 terabytes of digitized text (about 8 million books), more than you could read in 100 lifetimes. Watson used a variety of proven text-recognition algorithms to determine probable answers, then compared the results, tabulating the likelihood of each answer being correct. If enough different algorithms produced the same response, it would answer, usually correctly.

Does Watson “understand” the question it’s being asked? Watson functions like Minsky’s society of thousands of autonomous “unintelligent” agents. It uses thousands of different programs, run in parallel, to parse a question and determine its most likely answer. In this sense it “understands”: it is correctly able to parse the meaning of a question. But whether it “understands” as we mean it is a different question.
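That answer-by-consensus idea can be sketched in a few lines of Python. To be clear, this is only a toy illustration of pooling votes from many independent scorers, not IBM’s actual DeepQA pipeline; the candidate answers, confidence numbers, and threshold below are all invented for the example.

```python
from collections import defaultdict

def aggregate_answers(candidates, threshold=0.5):
    """Pool (answer, confidence) votes from many independent scorers.

    Support for each answer is summed across all scorers, and we only
    "buzz in" if the leading answer holds at least `threshold` of the
    total support; otherwise we stay silent.
    """
    if not candidates:
        return None, 0.0
    support = defaultdict(float)
    for answer, confidence in candidates:
        support[answer] += confidence
    total = sum(support.values())
    best = max(support, key=support.get)
    share = support[best] / total
    return (best, share) if share >= threshold else (None, share)

# Three hypothetical scorers: two agree strongly, one dissents weakly.
votes = [("Toronto", 0.2), ("Chicago", 0.9), ("Chicago", 0.8)]
print(aggregate_answers(votes))  # "Chicago" wins with ~89% of the support
```

If the scorers split evenly, no answer clears the threshold and the function declines to answer, which is roughly why Watson sometimes stayed quiet on clues it was unsure about.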

Juliet is a giant ball of gas

In Jonah Lehrer’s book, Imagine: How Creativity Works, he discusses research into brain injuries, particularly those to the right hemisphere of the brain. It was long thought that injuries to the right side of the brain were not as serious, that the fundamental centers of language and meaning were all located in the left brain. But those with right-brain injuries ceased to be able to understand jokes, sarcasm, and metaphor, even though they still understood language.

Lehrer states that the left brain is responsible for the denotation (dictionary meaning) of language, whereas the right is responsible for the connotation (contextual meaning). Lehrer uses the example of “Juliet is the sun” from Shakespeare to illustrate the problem. We know that Romeo does not mean that Juliet is a big ball of gas, but rather that she is radiant and affects him the way the sun affects the Earth. Without both denotation and connotation, language is not fully understood.

Does Watson understand connotation?

By sheer volume, IBM’s Watson has access to more digitized text than any single person could absorb. Jeopardy is a game of word play, which requires some understanding of the different ways words are used. Watson manages this by having a large databank of word usages against which to compare individual questions. Is this how our minds work? Are we confused the first time we see “Juliet is the sun” until we can cross-reference it with other information?

Is language necessary?

We want to talk to our computers, and at some point an AI needs to be able to communicate with us in order for us to work together. But language is only a framework for ideas. Just as language triggers specific ideas, sensations, and feelings in us, so does computer code trigger specific electrical impulses in a computer circuit. For useful understanding, an AI must parse its inputs (senses) and produce the correct response. Early AIs may be more like children, without Watson’s sophisticated understanding of language, but still able to tell us what they need.

See you tomorrow! Check out some of Brian’s thoughts on AI methodology over here!


Filed under Trube On Tech