Tag Archives: Featured

DTD: New Blog Round-Up

Thanks to all of you who have been following and stopping by. We just crossed 3000 overall site views and are on our way to our 100th post and 50th follower (one of us, one of us). I try to read other blogs whenever I can, and I thought I’d feature some links from blogs I discovered just last week (plus a couple of old favorites):

  • So You Want To Be A Writer Eh? – A great post from “An Author’s Life” on the ups and downs of trying to do this writing thing full time, and some good tips on both the creative and re-visionary stages. I particularly like her “ride low” thought.
  • Confessions of a struggling blogger – Blogging every day can be tough, especially after the first few months, and it takes courage to admit our struggles and to try to turn them around.
  • A letter to Facebook – And also from Afrozy Ara, an open letter to Facebook, trying to cheer it up after last week’s IPO.
  • The Uprising Of Punctuation – If it ever came to war, my manuscripts have a lot to be angry about.
  • The Sentry – A great pic of the supernatural creatures guarding Mr. Buckley’s house.

Have a good Memorial Day. See you tomorrow!

Filed under Round-Ups

AI Week, Day 5: The Measure Of A Man

For our last day of AI week I thought I’d cover a little game my parents and I played while walking out of the theater after seeing A.I., Steven Spielberg’s exploration of whether a robot boy can love. While the film attempts to cover some of the questions raised by artificial life, we felt that it fell short. Many of the themes and philosophical questions it raises have been explored in episodes of Star Trek: The Next Generation through the character of Commander Data. Given the film’s length (146 minutes), we decided to pick 3 episodes of TNG that contained the same themes but explored them in a deeper and more emotional way (and that were also more fun to watch). Below is our list:

Season 2: Episode 9 – “The Measure Of A Man”

Is Data a sentient being or the property of Starfleet? How do we define sentience? These questions and more are explored in this early TNG episode. A Starfleet robotics expert, Commander Maddox, is seeking permission to disassemble Data to learn how to make more androids like him. Data determines that Maddox’s research and methodology are not far enough along to warrant such a procedure and refuses, resigning from Starfleet to prevent himself from being transferred to Maddox’s lab. Maddox challenges Data’s right to resign, claiming that he is a machine, the property of Starfleet and subject to the orders of his superiors against his will. Because of limited judicial staff, Captain Picard is called to defend Data’s sentience, and Commander Riker is forced to prosecute. Some moments of this episode are hard to watch, particularly when Riker shuts off Data; Picard’s final impassioned defense is another highlight.

Season 3: Episode 16 – “The Offspring”

Can an android love or be a good parent? After returning from a cybernetics conference, Data has been working in secret on building an android child, both to procreate and to continue his existence. As the only android known to exist, if he were lost, sentient androids would be lost with him. Starfleet quickly learns of this new android and wants to raise the child at its own facility instead of with her father, Data. This conflict over her destiny causes a cascade failure in her positronic brain, which eventually results in her death. This episode is an exploration of what it would be like to raise a child AI, slowly growing in intelligence, using the previous generation of machines to design the next. It was the first episode to be directed by a cast member, Jonathan Frakes (Riker), and it features some of the funniest and saddest moments in the show. The final moment, when the officer who has been trying to take Data’s daughter away walks out describing how Data tried to save her, brings a tear to my eye just thinking about it.

Season 4: Episode 25 – “In Theory”

Like so many men, Data tried to procreate before ever trying to fall in love (and I don’t just mean Tasha). Just kidding. As you might expect, this episode explores whether Data can fall in love and be in a romantic relationship. The main subplot is enjoyable as well, with the Enterprise caught in a dark matter nebula that Picard has to help pilot them out of, but I digress. Data consults the crew on romantic relationships in general and his situation in particular, and chooses to try a relationship with Lt. D’Sora, who made her intentions quite clear by kissing him in the torpedo bay. Data constructs a romantic subroutine and attempts to be a good boyfriend. At times I can’t decide if he’s dense because he’s a man or because he’s an android (we could all probably learn a few tips from this episode). I love the little moment with Spot at the end.

That’s all for AI week! Hope you had fun. And be sure to read all the posts over at the Buckley Blog. He covers his own troubles with creating an AI, the Singularity, a wonderful short story about robots finding religion, and a critical analysis of the 3 laws of robotics.

Are you liking these theme weeks? Let us know in the comments section. I’m thinking about doing one on creativity in a month or two. I’ve been reading Jonah Lehrer’s book Imagine, and it has a number of provocative ideas about writing, creativity, and the like. In the spirit of AI week I pose one final thought:

Some brain studies have suggested a correlation between clinical depression and creativity, specifically the kind of rigor required to revise a sentence until it is perfect. Moments of intuition, on the other hand, seem to require a more manic state, one that forms distant connections between ideas. Lehrer goes on to suggest that the bipolar or manic-depressive personality is one well suited to the creative arts. If we wanted to make an AI that could write a poem, paint a picture, or compose music, would we need to make it bipolar? What would an AI with a mental disorder be like, and how would we achieve it? Would a thousand monkeys at a thousand typewriters stand a better chance of writing Shakespeare?

Enjoy that little puzzle and everything else your weekend has to offer. See you Monday!

Filed under Round-Ups, Trube On Tech

AI Week, Day 1: Now you’re speaking my language

One of the fundamental tests of Artificial Intelligence is something called the Turing Test. It’s a blind test where an operator types questions into a computer screen and receives answers from either a person or an AI. An AI is said to pass the test if the person asking the questions cannot reliably determine whether they are talking to a machine or a person.
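
For the curious, here’s a rough sketch of the test’s structure in Python. It is purely illustrative: the human_reply, machine_reply, and judge_guess functions are stand-ins I’ve invented, not any real chatbot or judge. The point is simply that the judge never knows which respondent they got, and the machine “passes” if the judge can’t do better than a coin flip.

```python
import random

def human_reply(question):
    # Placeholder: in a real test this would be a live human typing answers.
    return "I'd have to think about that one."

def machine_reply(question):
    # Placeholder: in a real test this would be the AI under evaluation.
    return "I'd have to think about that one."

def judge_guess(transcript):
    # Placeholder judge: a real judge would read the transcript and decide.
    return random.choice(["human", "machine"])

def run_trial(questions):
    """One blind trial: the judge interrogates a hidden respondent."""
    respondent_is = random.choice(["human", "machine"])
    reply = human_reply if respondent_is == "human" else machine_reply
    transcript = [(q, reply(q)) for q in questions]
    guess = judge_guess(transcript)
    return guess == respondent_is  # True if the judge identified it correctly

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Explain a joke you like."]
    trials = 1000
    correct = sum(run_trial(questions) for _ in range(trials))
    # If the judge can't do better than ~50%, the machine is said to pass.
    print(f"Judge identified the respondent correctly in {correct/trials:.0%} of trials")
```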

Understanding language is one of the pre-requisites of passing this test, and a subject we could easily spend the week discussing. For now, we’ll approach it Minsky-like, with a couple of short paragraphs exploring different aspects of the problem.

Think In French!

I took five years of French in high-school and middle-school and don’t remember much of it at all. My teacher used to tell us to “think in French”; that way we would really understand the language and be able to speak it without having to parse each word. I never achieved anything close to that level of proficiency, but it does introduce an interesting question about language.

Do I think in English?

Some thoughts certainly come through as complete sentences, such as things I’m about to say or write, but what about emotions, desires, needs? A question I often ask my wife when we are trying to decide what we want for dinner is “what food-shaped hole is missing in your stomach?” (i.e. I have a Chipotle-shaped hole). The higher-level thought, what am I hungry for, may be in English, but the evaluation of that thought may not be. When you think of a Chipotle burrito, you think of more than just the words; you think of the sensations. Maybe you think of how full your stomach is after eating one, or the blend of tastes that comes from biting into one. This sense memory is not expressed in words, and yet it is understood and used to evaluate the question: what do I want to eat?

Language is not something we know from birth, but something we learn. A baby gets hungry, or wants a nap, or needs changing, and is able to communicate this without words. Yet, at the same time, simply through the act of listening to people talk, it learns to say its first words and thus opens up a wider world of communication.

Do we program an AI with specific knowledge of language or teach it as it goes, like a baby?

Elementary my dear, Watson

Computers already understand language well enough to beat the top players in Jeopardy. The IBM computer Watson bested Ken Jennings and Brad Rutter in February of last year. If you watched the telecast, it wasn’t even close, prompting Jennings to welcome our new machine overlords. Watson did more than just look up the right answer; it actually read the question, understood what was being asked, and provided a correct response.

It did this through sheer parallel processing. Watson is in fact not a single computer but a cluster of 90 servers with a total of 16 terabytes of RAM (about 4,000 times the average computer) and nearly 3,000 processor cores (think dual or quad core times a thousand). It had access to 4 terabytes of digitized text (about 8 million books), more than you could read in 100 lifetimes. Watson used a variety of proven text-recognition algorithms to determine probable answers, then compared the results, tabulating the likelihood of each answer being correct. If enough different algorithms produced the same response, it would answer, usually correctly.
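
The real Watson pipeline is far more sophisticated, but the basic “many algorithms vote, only buzz in when confident” idea can be sketched in a few lines of Python. The scorer functions, candidate answers, clue, and threshold below are all made up for illustration:

```python
from collections import defaultdict

def scorer_keyword_match(question):
    # Hypothetical scorer: each one returns {candidate answer: confidence 0..1}.
    return {"Toronto": 0.2, "Chicago": 0.7}

def scorer_category_match(question):
    return {"Chicago": 0.6, "New York": 0.3}

def scorer_date_lookup(question):
    return {"Chicago": 0.5}

SCORERS = [scorer_keyword_match, scorer_category_match, scorer_date_lookup]

def answer(question, threshold=0.5):
    """Run every scorer, combine their confidences, and only buzz in
    when the best candidate's average confidence clears the threshold."""
    totals = defaultdict(float)
    for scorer in SCORERS:
        for candidate, confidence in scorer(question).items():
            totals[candidate] += confidence
    # Average over all scorers, so agreement between algorithms is rewarded.
    best, score = max(totals.items(), key=lambda kv: kv[1])
    score /= len(SCORERS)
    return best if score >= threshold else None  # stay silent when unsure

if __name__ == "__main__":
    print(answer("U.S. CITIES: Its largest airport is named for a World War II hero"))
```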

Does Watson “understand” the question it’s being asked? Watson functions like a Minsky-style society of thousands of autonomous “unintelligent” agents. It uses thousands of different programs, run in parallel, to parse a question and determine its most likely answer. In this sense it “understands”: it is correctly able to parse the meaning of a question. But “understanding” as we mean it is a different question.

Juliet is a giant ball of gas

In Jonah Lehrer’s book, Imagine: How Creativity Works, he discusses research into brain injuries, particularly those to the right hemisphere of the brain. It was long thought that injuries to the right side of the brain were not as serious, that the fundamental centers of language and meaning were all located in the left brain. But patients with right-brain injuries ceased to be able to understand jokes, sarcasm, and metaphor, even though they still understood language.

Lehrer states that the left brain is responsible for the denotation (dictionary meaning) of language, whereas the right is responsible for connotation (contextual meaning). Lehrer uses Shakespeare’s “Juliet is the sun” to illustrate the difference. We know that Romeo does not mean that Juliet is a big ball of gas, but rather that she is radiant and affects him in the way the sun affects the Earth. Without both denotation and connotation, language is not fully understood.

Does Watson understand connotation?

By sheer volume, IBM’s Watson has access to more digitized text than any single person could absorb. Jeopardy is a game of word play, which requires some understanding of the different ways words are used. Watson does this by having a large databank of word usages to compare individual questions to. Is this how our minds work? Are we confused the first time we see “Juliet is the sun” until we can cross reference it with other information?

Is language necessary?

We want to talk to our computers, and at some point an AI needs to be able to communicate with us in order for us to work together. But language is only a framework for ideas. Just as language triggers specific ideas, sensations, and feelings in us, so computer code triggers specific electrical impulses in a computer circuit. For useful understanding, an AI must parse its inputs (senses) and produce the correct response. Early AIs may be more like children, without Watson’s sophisticated understanding of language, but still able to tell us what they need.

See you tomorrow! Check out some of Brian’s thoughts on AI methodology over here!

Filed under Trube On Tech

Bonus Friday Post (Chaos Ensues)

The Chaos Game Revisited

For those of you who enjoyed fractal week, or just like to see pretty animations, here are a couple of videos of the Chaos Game played on the Sierpinski Triangle and Hexagon with a little twist.

While moving towards our vertices, we also rotate each point we play (more on the method some other time). I made both of these videos in high-school using Visual Basic 5/6.
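
If you want to play the game yourself, here’s a rough Python sketch of the basic chaos game with a small per-step rotation thrown in to approximate the twist in the videos. Since I’m not spelling out the original method here, the rotation angle and the choice to rotate about the chosen vertex are guesses, not the actual Visual Basic code:

```python
import math
import random
import matplotlib.pyplot as plt

def chaos_game(n_sides=3, n_points=50000, ratio=0.5, twist_degrees=5.0):
    """Jump partway toward a randomly chosen vertex each step, then rotate
    the new point slightly about that vertex (the 'twist' is a guess at the
    effect in the videos, not the original method)."""
    vertices = [(math.cos(2 * math.pi * k / n_sides),
                 math.sin(2 * math.pi * k / n_sides)) for k in range(n_sides)]
    theta = math.radians(twist_degrees)
    x, y = 0.0, 0.0
    xs, ys = [], []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)
        # Move partway toward the chosen vertex (0.5 gives the Sierpinski triangle).
        x, y = x + ratio * (vx - x), y + ratio * (vy - y)
        # Rotate the point about that vertex by the twist angle.
        dx, dy = x - vx, y - vy
        x = vx + dx * math.cos(theta) - dy * math.sin(theta)
        y = vy + dx * math.sin(theta) + dy * math.cos(theta)
        xs.append(x)
        ys.append(y)
    return xs, ys

if __name__ == "__main__":
    # Try n_sides=6 for the hexagon (a move fraction closer to 2/3 looks better there).
    xs, ys = chaos_game(n_sides=3)
    plt.scatter(xs, ys, s=0.1, color="black")
    plt.axis("equal")
    plt.show()
```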

Mandelbrot – The Early Years

And while we’re taking a trip down memory lane, here’s another animation made in Visual Basic, some early Mandelbrot work also from high-school (I apologize for the low resolution; it was the best I could render at the time).
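
And for anyone who wants to recreate those renders without digging up Visual Basic, here is a bare-bones escape-time Mandelbrot in Python, printed as ASCII so it runs anywhere. The grid size, viewing window, and iteration count are arbitrary choices, not anything from the original program:

```python
def mandelbrot_ascii(width=78, height=30, max_iter=60):
    """Classic escape-time test: c is in the set if z -> z*z + c stays bounded."""
    chars = " .:-=+*#%@"
    for row in range(height):
        line = []
        for col in range(width):
            # Map the character grid onto a window of the complex plane around the set.
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z = 0j
            for i in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:
                    break
            # Points that never escape end at the highest iteration count and
            # get the densest character; fast escapers get the lightest.
            line.append(chars[min(i * len(chars) // max_iter, len(chars) - 1)])
        print("".join(line))

if __name__ == "__main__":
    mandelbrot_ascii()
```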

Coming Soon

Next week is AI week here and at the Buckley Blog. We’ll be exploring ideas from pioneers and current thinkers, as well as pop-culture and video games. We’ll have a special edition of AGFV on Thursday featuring three of the more interesting portrayals of AI in video games (“Look at you, hacker”), and wrap it up with a little Star Trek fun.

Enjoy your weekend!

Filed under Round-Ups, Trube On Tech