Tag Archives: Featured

DTD: New Blog Round-Up

Thanks to all of you who have been following and stopping by. We just crossed 3000 overall site views and are on our way to our 100th post and 50th follower (one of us, one of us). I try to read other blogs whenever I can, and I thought I’d feature some links from blogs I discovered just last week (plus a couple of old favorites):

  • So You Want To Be A Writer Eh? – A great post from “An Author’s Life” on the ups and downs of trying to do this writing thing full time, with some good tips on both the creative and revision stages. I particularly like her “ride low” thought.
  • Confessions of a struggling blogger – Blogging every day can be tough, especially after the first few months, and it takes courage to admit our struggles and to try to turn them around.
  • A letter to Facebook – And also from Afrozy Ara, an open letter to Facebook, trying to cheer it up after last week’s IPO.
  • The Uprising Of Punctuation – If it ever came to war, my manuscripts have a lot to be angry about.
  • The Sentry – A great pic of the supernatural creatures guarding Mr. Buckley’s house.

Have a good Memorial Day. See you tomorrow!

Filed under Round-Ups

AI Week, Day 5: The Measure Of A Man

For our last day of AI week I thought I’d cover a little game my parents and I played while walking out of the theater after seeing A.I., Steven Spielberg’s exploration of whether a robot boy can love. While the film attempts to cover some of the questions raised by artificial life, we felt it fell short. Many of its themes and philosophical questions had already been explored in episodes of Star Trek: The Next Generation, through the character of Commander Data. So, given the film’s 146-minute length, we decided to pick three TNG episodes (about the same total runtime) that contained the same themes but explored them in a deeper, more emotional way (and that were also more fun to watch). Below is our list:

Season 2: Episode 9 – “The Measure Of A Man”

Is Data a sentient being or the property of Starfleet? How do we define sentience? These questions and more are explored in this early TNG episode. A Starfleet robotics expert, Commander Maddox, seeks permission to disassemble Data to learn how to make more androids like him. Data determines that Maddox’s research and methodology are not far enough along to warrant such a procedure and refuses, resigning from Starfleet to prevent himself from being transferred to Maddox’s lab. Maddox challenges Data’s right to resign, claiming that he is a machine, the property of Starfleet, and subject to the orders of his superiors against his will. Because of limited judicial staff, Captain Picard is called to defend Data’s sentience, and Commander Riker is forced to prosecute. Some moments of this episode are hard to watch, particularly when Riker shuts off Data, and Picard’s final impassioned defense.

Season 3: Episode 16 – “The Offspring”

Can an android love, or be a good parent? After returning from a cybernetics conference, Data works in secret on building an android child, both to procreate and to continue his existence: as the only android known to exist, if he were lost, sentient androids would be lost with him. Starfleet quickly learns of this new android and wants to raise the child at one of its facilities instead of with her father, Data. The conflict over her destiny causes a cascade failure in her positronic brain, which eventually results in her death. This episode is an exploration of what it would be like to raise a child AI, slowly growing in intelligence, using the previous generation of machines to design the next. It was the first episode to be directed by a cast member, Jonathan Frakes (Riker), and features some of the funniest and saddest moments in the show. The final moment, when the officer who has been trying to take Data’s daughter away walks out describing how Data tried to save her, brings a tear to my eye just thinking about it.

Season 4: Episode 25 – “In Theory”

Like so many men, Data tried to procreate before ever trying to fall in love (and I don’t just mean Tasha). Just kidding. As you might expect, this episode explores whether Data can fall in love and be in a romantic relationship. The main subplot, in which the Enterprise is caught in a dark-matter nebula that Picard must help pilot them out of, is enjoyable as well, but I digress. Data consults the crew on romantic relationships in general, and his situation in particular, and chooses to attempt a relationship with Lt. D’Sora, who made her intentions quite clear by kissing him in the torpedo bay. Data constructs a romantic subroutine and attempts to be a good boyfriend. At times I can’t decide if he’s dense because he’s a man or because he’s an android (we could all probably learn a few tips from this episode). I love the little moment with Spot at the end of this episode.

That’s all for AI week! Hope you had fun. And be sure to read all the posts over at the Buckley Blog. Brian covers his own troubles with creating an AI, the Singularity, a wonderful short story about robots finding religion, and a critical analysis of Asimov’s Three Laws of Robotics.

Are you liking these theme weeks? Let us know in the comments section. I’m thinking about doing one on creativity in a month or two. I’ve been reading Jonah Lehrer’s book Imagine, and it has a number of provocative ideas about writing, creativity, and the like. In the spirit of AI week I pose one final thought:

Some brain studies have suggested a correlation between clinical depression and creativity: a depressive state lends itself to the rigor required to revise a sentence until it is perfect, while moments of intuition call for a more manic state, which forms the distant connections between ideas. Lehrer goes on to suggest that the bipolar, or manic-depressive, personality is one well suited to the creative arts. If we wanted to make an AI that could write a poem, paint a picture, or compose music, would we need to make it bipolar? What would an AI with a mental disorder be like, and how would we achieve it? Would a thousand monkeys at a thousand typewriters stand a better chance of writing Shakespeare?

Enjoy that little puzzle and everything else your weekend has to offer. See you Monday!

Filed under Round-Ups, Trube On Tech

AI Week, Day 1: Now you’re speaking my language

One of the fundamental tests of artificial intelligence is the Turing Test. It’s a blind test in which an operator types questions at a computer terminal and receives answers from either a person or an AI. The AI is said to pass the test if the questioner cannot reliably determine whether they are talking to a machine or a person.

Understanding language is one of the prerequisites of passing this test, and a subject we could easily spend the whole week discussing. For now, we’ll approach it Minsky-style, with a handful of short sections exploring different aspects of the problem.

Think In French!

I took five years of French in middle school and high school and don’t remember much of it at all. My teacher used to tell us to “think in French”; that way we would really understand the language and be able to speak it without having to parse each word. I never came close to that level of proficiency, but the advice does raise an interesting question about language.

Do I think in English?

Some thoughts certainly come through as complete sentences, such as things I’m about to say or write, but what about emotions, desires, needs? A question I often ask my wife when we’re trying to decide what we want for dinner is, “What food-shaped hole is missing in your stomach?” (e.g. I have a Chipotle-shaped hole). The higher-level thought, what am I hungry for, may be in English, but the evaluation of that thought may not be. When you think of a Chipotle burrito, you think of more than just the words: maybe how full your stomach is after eating one, or the blend of taste sensations that comes from biting into one. This sense memory is not expressed in words, and yet it is understood and used to evaluate the question, what do I want to eat?

Language is not something we know from birth, but something we learn. A baby gets hungry, or wants a nap, or needs changing, and is able to communicate this without words. Yet, at the same time, simply through the act of listening to people talk, it learns to say its first words and thus opens up a wider world of communication.

Do we program an AI with specific knowledge of language or teach it as it goes, like a baby?

Elementary my dear, Watson

Computers already understand language well enough to beat the top players at Jeopardy! The IBM computer Watson bested Ken Jennings and Brad Rutter in February of last year. If you watched the telecast it wasn’t even close, prompting Jennings to welcome “our new computer overlords.” Watson did more than just look up the right answer: it read the clue, understood what was being asked, and provided a correct response.

It did this through sheer parallel processing. Watson is in fact not a single computer but a cluster of 90 servers, with a total of 16 terabytes of RAM (about 4,000 times the average computer of the day) and nearly 3,000 processor cores (think a dual- or quad-core machine, times a thousand). It had access to 4 terabytes of digitized text (about 8 million books), more than you could read in 100 lifetimes. Watson used a variety of proven text-analysis algorithms to determine probable answers, then compared the results, tabulating the likelihood that each answer was correct. If enough different algorithms produced the same response, it would answer, usually correctly.
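
To make that voting scheme concrete, here’s a toy sketch in C++. This is my simplification for illustration, not IBM’s actual DeepQA pipeline, and the candidate answers are made up: several independent “answerers” each propose a response, and we only “buzz in” when enough of them agree.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical candidate answers from several independent
    // text-analysis algorithms run against the same clue.
    std::vector<std::string> candidates = {
        "Toronto", "Chicago", "Chicago", "Chicago", "Springfield"};

    // Tally how often each answer appears across the algorithms.
    std::map<std::string, int> votes;
    for (const auto& c : candidates) ++votes[c];

    // Find the most popular candidate.
    std::string best;
    int bestVotes = 0;
    for (const auto& [answer, count] : votes)
        if (count > bestVotes) { best = answer; bestVotes = count; }

    // Only answer if the agreement clears a confidence threshold.
    double confidence = double(bestVotes) / candidates.size();
    if (confidence > 0.5)
        std::cout << "What is " << best << "? (confidence "
                  << confidence << ")\n";
    else
        std::cout << "Not confident enough to buzz in.\n";
}
```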

Does Watson “understand” the question it’s being asked? Watson functions like Minsky’s society of thousands of autonomous “unintelligent” agents: it uses thousands of different programs, run in parallel, to parse a question and determine its most likely answer. In this sense it “understands,” in that it is able to correctly parse the meaning of a question. But whether it “understands” as we mean it is a different question.

Juliet is a giant ball of gas

In his book Imagine: How Creativity Works, Jonah Lehrer discusses research into brain injuries, particularly those to the right hemisphere of the brain. It was long thought that injuries to the right side of the brain were not as serious, since the fundamental centers of language and meaning were believed to be located in the left brain. But patients with right-brain injuries ceased to be able to understand jokes, sarcasm, and metaphor, even though they still understood language.

Lehrer states that the left brain is responsible for the denotation (dictionary meaning) of language, whereas the right is responsible for connotation (contextual meaning). He uses Shakespeare’s “Juliet is the sun” to illustrate the distinction. We know that Romeo does not mean that Juliet is a big ball of gas, but rather that she is radiant and affects him in the ways the sun affects the Earth. Without both denotation and connotation, language is not fully understood.

Does Watson understand connotation?

By sheer volume, IBM’s Watson has access to more digitized text than any single person could absorb. Jeopardy! is a game of word play, which requires some understanding of the different ways words are used. Watson manages this by having a large databank of word usages against which to compare each clue. Is this how our minds work? Are we confused the first time we see “Juliet is the sun,” until we can cross-reference it with other information?

Is language necessary?

We want to talk to our computers, and at some point an AI needs to be able to communicate with us in order for us to work together. But language is only a framework for ideas. Just as language triggers specific ideas, sensations, and feelings in us, computer code triggers specific electrical impulses in a computer circuit. For useful understanding, an AI must parse its inputs (senses) and produce the correct response. Early AIs may be more like children, without Watson’s sophisticated understanding of language, but still able to tell us what they need.

See you tomorrow! Check out some of Brian’s thoughts on AI methodology over here!

Filed under Trube On Tech

Bonus Friday Post (Chaos Ensues)

The Chaos Game Revisited

For those of you who enjoyed fractal week, or just like to see pretty animations, here are a couple of videos of the Chaos Game played on the Sierpinski Triangle and Hexagon with a little twist.

While moving towards our vertices we also rotate each point we play (more on the method some other time). I made both of these videos in high school using Visual Basic 5/6.
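
If you’d like to play the Chaos Game yourself, here’s a minimal C++ sketch of the standard version, without the rotation twist (whose details I’m saving for that future post): pick a random vertex of the triangle and move halfway toward it, over and over. Plot the printed points and the Sierpinski Triangle emerges.

```cpp
#include <cstdio>
#include <cstdlib>

int main() {
    // Vertices of an equilateral triangle.
    const double vx[3] = {0.0, 1.0, 0.5};
    const double vy[3] = {0.0, 0.0, 0.866};
    double x = 0.25, y = 0.25;  // arbitrary starting point

    for (int i = 0; i < 100000; ++i) {
        int v = std::rand() % 3;   // choose a vertex at random
        x = (x + vx[v]) / 2.0;     // move halfway toward it
        y = (y + vy[v]) / 2.0;
        if (i > 20)                // skip the first few transient points
            std::printf("%f %f\n", x, y);
    }
}
```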

Mandelbrot – The Early Years

And while we’re taking a trip down memory lane, here’s another animation using Visual Basic: some early Mandelbrot work, also from high school (I apologize for the low resolution, the best I could render at the time).

Coming Soon

Next week is AI week here and at the Buckley Blog. We’ll be exploring ideas from pioneers and current thinkers, as well as pop-culture and video games. We’ll have a special edition of AGFV on Thursday featuring three of the more interesting portrayals of AI in video games (“Look at you, hacker”), and wrap it up with a little Star Trek fun.

Enjoy your weekend!

Filed under Round-Ups, Trube On Tech

Bonus Friday Post (Fractals You Can Build)

Thanks to all of you who have been following Fractal Week! Today I’ll show you how to make a 3D Sierpinski triangle from nothing but toothpicks and mini-marshmallows! Plus we have a comic from guest artist Brian Buckley!

First stage: the base of one of the four pyramids that make up the full-size triangle. Our cat Dax is already curious.

Build pyramids in the three corners, leaving the middle open. Some left-over fuel to the left.

A complete mini-pyramid. The full-size pyramid is made up of four of these: three for the base and one on top. We built the base a few days ahead of time, allowing it time to harden. (The Little Red-Haired Girl also suggested reinforcing it with hairspray, which seems to have done a good job.)

The final product with one of my wife’s birthday gifts to me guiding the final construction.

View from one side. As you can see, the whole middle section is removed, as it would be in the real triangle. It may come as a surprise to some of you, but fractals with parts removed are not the most stable, hence the reinforcement.

Most 3D Sierpinski triangles I’ve seen are computer generated and have four sides. I felt this wouldn’t work for our shape for two reasons. One, triangles, even ones with parts removed, are stronger than squares. And two, this shape has a Sierpinski triangle on every side, even the base.

Several hundred toothpicks and about half a bag of mini-marshmallows went into the construction of this fractal (some to fuel the builder).

The view from inside.

Friend of the blog Brian Buckley was kind enough to draw another “versus” picture for this week.

Have a great weekend; we’ll be back to our regularly scheduled programming next week.

The Fractals You Can Draw posts are collected in ebook form, available now on Amazon.

Filed under Uncategorized

Fractals You Can Draw (The Hilbert Curve or What The Labyrinth Really Looked Like)

Our last fractal of the week is the Hilbert Curve, a variation of the Peano curve, first described by David Hilbert in 1891. While this is our oldest fractal of the week, its uses and derivation have implications for modern technology and modeling.

Drawing the Hilbert curve is best learned by seeing the various stages:

Initial Stage:

This is the start of our Hilbert curve, or its Axiom (a term we’ll come back to later). The second stage is as follows:

Stage 2:

Each stage of the Hilbert curve is constructed from the previous one by a mirroring and two rotations:

1) Make a mirror image of the previous stage, placed to its right (there’s a gap between each new section, which we’ll get to in a moment).

2) Make a copy of our left object and rotate it counter-clockwise 90 degrees.

3) Make a copy of our right object and rotate it clockwise 90 degrees.

You now have four U shapes with a gap between them equal to the length of one of their line segments. Rather than leaving these pieces separate, we connect the four shapes with three new lines. (For the programmers, a code sketch of this construction follows below.)
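
Here’s a compact recursive sketch of the construction in C++. It’s a standard textbook formulation (not necessarily how you’d do it on paper): each call divides its square cell into four sub-cells holding rotated and mirrored copies of the previous stage, which is exactly the mirror-and-rotate recipe above, and prints the points of the curve in the order they’re visited.

```cpp
#include <cstdio>

// (x0, y0) is a corner of the current cell; (xi, xj) and (yi, yj) are the
// cell's side vectors, whose orientation encodes the mirroring/rotation.
void hilbert(double x0, double y0, double xi, double xj,
             double yi, double yj, int depth) {
    if (depth <= 0) {
        // Emit the midpoint of this cell; connecting successive
        // points in order traces the curve.
        std::printf("%f %f\n", x0 + (xi + yi) / 2, y0 + (xj + yj) / 2);
        return;
    }
    // Four copies of the previous stage: the first and last mirrored
    // (side vectors swapped or negated), the middle two upright.
    hilbert(x0,               y0,               yi/2,  yj/2,  xi/2,  xj/2,  depth - 1);
    hilbert(x0 + xi/2,        y0 + xj/2,        xi/2,  xj/2,  yi/2,  yj/2,  depth - 1);
    hilbert(x0 + xi/2 + yi/2, y0 + xj/2 + yj/2, xi/2,  xj/2,  yi/2,  yj/2,  depth - 1);
    hilbert(x0 + xi/2 + yi,   y0 + xj/2 + yj,   -yi/2, -yj/2, -xi/2, -xj/2, depth - 1);
}

int main() {
    hilbert(0, 0, 1, 0, 0, 1, 3);  // a Stage 3 curve in the unit square
}
```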

This is repeated for each stage:

Stage 3:

As you can see Stage 3 contains 4 copies of Stage 2.

Stage 4:

Stage 4 contains 4 copies of Stage 3.

Here’s a drawing of Stage 5:

Hilbert Curve (Hand-Drawn)

The easiest way to draw the Hilbert curve is to keep a copy of the previous stage on another sheet of paper.

The Hilbert Curve has a fractal dimension of 2, like the Dragon Curve, and has some interesting properties. At Stage 6 (below), the curve is constructed of 4095 segments of equal length, contained in a square just 63 lengths on a side. This means the length of the curve is 65 times greater than one side of the square containing it.

This kind of compression is practical in antenna construction. The longer the antenna, the weaker the signals it can pick up. Curves like the Hilbert curve allow an incredibly long line to be confined to a small area.
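
Here’s a quick sanity check of that compression claim, under my assumption that Stage n of the curve visits every point of a 2^n-by-2^n grid:

```cpp
#include <cstdio>

int main() {
    for (int n = 1; n <= 6; ++n) {
        long points = 1L << (2 * n);   // 4^n grid points visited
        long side   = (1L << n) - 1;   // square side, in segment lengths
        long length = points - 1;      // segments in the curve
        std::printf("Stage %d: %ld segments in a %ld x %ld square (ratio ~%ld)\n",
                    n, length, side, side, length / side);
    }
}
```

For Stage 6 that gives 4095 segments in a 63-by-63 square, the numbers quoted above.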

Going Deeper (L-Systems):

I have a confession to make. There’s an easier way to construct the Hilbert curve (with the aid of computers) using a technique called Lindenmayer Systems (L-Systems for short). L-Systems are an algorithmic way of describing the formation of an object; they are used in plant modeling, and can generate all of the fractals we’ve drawn this week.

There are three parts to an L-System, the axiom, the instructions, and the translation rules.

The axiom is a string of characters that describes the initial state of the fractal. For the Hilbert curve the axiom is the following:

Axiom: A

For each L-System there is a set of characters that provide drawing instructions. Typically ‘F’ means draw a line forward, ‘+’ means rotate left, and ‘-’ means rotate right. Other characters (like the ‘A’ and ‘B’ below) appear in the axiom and the rules to dictate the growth of the shape, but are disregarded once drawing begins.

To draw the Hilbert curve there are two rules:

A → - B F + A F A + F B -

B → + A F - B F B - F A +

In this case we rotate 90 degrees with every turn. To use an L-System, start with the axiom and apply every rule that matches. In this case we get:

- B F + A F A + F B -

If we stop here and remove all characters that are not instructions we get the following:

-F+F+F-

Which draws our initial stage:

If however we apply the translation rules one more time, the resulting string is:

- + A F - B F B - F A + F + - B F + A F A + F B - F - B F + A F A + F B - + F + A F - B F B - F A + -

If we again strip all non-instructions we get:

-+F-F-F+F+-F+F+F-F-F+F+F-+F+F-F-F+-

Which draws our second stage.
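
Here’s a minimal C++ sketch of that rewriting process (my own illustration, not production code): apply every matching rule to every character simultaneously, repeat, then strip out the characters that aren’t drawing instructions. Running it reproduces the second-stage string above, and swapping in the rule sets below generates the other fractals’ instruction strings too.

```cpp
#include <iostream>
#include <map>
#include <string>

// Apply every rule to every character of the string, 'iterations' times.
std::string expand(std::string s, const std::map<char, std::string>& rules,
                   int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : s) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        s = next;
    }
    return s;
}

// Keep only the characters that carry drawing instructions.
std::string stripNonInstructions(const std::string& s) {
    std::string out;
    for (char c : s)
        if (c == 'F' || c == '+' || c == '-' || c == '[' || c == ']')
            out += c;
    return out;
}

int main() {
    // The Hilbert curve rules from above.
    std::map<char, std::string> hilbert = {
        {'A', "-BF+AFA+FB-"}, {'B', "+AF-BFB-FA+"}};
    // Prints -+F-F-F+F+-F+F+F-F-F+F+F-+F+F-F-F+-  (our second stage)
    std::cout << stripNonInstructions(expand("A", hilbert, 2)) << "\n";
}
```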

Let’s try another one for the Sierpinski Triangle:

Axiom: A

A → B-A-B

B → A+B+A

In this case we rotate 60 degrees with every turn, and A and B both mean draw a line forward. Here’s what we get after 2, 4, 6, and 8 stages:

Koch Curve L-System:

Axiom: F++F++F

F → F-F++F-F

Use 60 degree turns. Result after 4 iterations:

Dragon Curve L-System:

Axiom: FX

X → X+YF

Y → FX-Y

Use 90 degree turns. Result after 10 iterations.

Other L-Systems:

Quadratic Fractal:

Axiom: F+F+F+F

F → F+F-F

Use 90 degree turns. Result after 4 iterations.

Koch Curve Variant:

Axiom: F

F → F+F-F-F+F

Use 90 degree turns, result after 6 iterations:

Fractal Plant:

Axiom: X

X → F-[[X]+X]+F[+FX]-X

F → FF

Use 25 degree turns. When you encounter a ‘[’, save the current position and angle, and restore them when you see ‘]’. This is an example of a branching (bracketed) L-System. Result after 5 iterations:

All of the above images were generated using C++ code, rendered to SVG then converted to PNG. There are more L-Systems than I can list here, but all follow the same basic construction rules.
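
For completeness, here’s a sketch of the turtle-interpreter half of that pipeline. This is my own minimal version, not the exact renderer used for these images, and it prints line segments rather than writing SVG: it walks the instruction string, drawing on ‘F’, turning on ‘+’ and ‘-’, and using a stack to save and restore state for ‘[’ and ‘]’.

```cpp
#include <cmath>
#include <cstdio>
#include <stack>

const double PI = 3.14159265358979;

struct Turtle { double x, y, heading; };

void draw(const char* instructions, double turnDegrees) {
    const double turn = turnDegrees * PI / 180.0;
    Turtle t{0, 0, PI / 2};       // start at the origin, facing "up"
    std::stack<Turtle> saved;

    for (const char* p = instructions; *p; ++p) {
        switch (*p) {
            case 'F': {           // draw one unit forward
                double nx = t.x + std::cos(t.heading);
                double ny = t.y + std::sin(t.heading);
                std::printf("line (%f, %f) -> (%f, %f)\n", t.x, t.y, nx, ny);
                t.x = nx; t.y = ny;
                break;
            }
            case '+': t.heading += turn; break;  // rotate left
            case '-': t.heading -= turn; break;  // rotate right
            case '[': saved.push(t); break;      // save position and angle
            case ']': t = saved.top(); saved.pop(); break;  // restore them
        }
    }
}

int main() {
    draw("-F+F+F-", 90);   // the Hilbert curve's initial stage
}
```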

*Whew!* A lot to cover today, but I hope it was fun. Feel free to leave any questions in the comments section. Tomorrow we’ll take a look at some non-conventional fractal construction methods.

Want some fractals you can color? You might like my new Adult Coloring Book: Fractals.

Filed under Uncategorized

Fractals You Can Draw (The Dragon Curve or The Jurassic Fractal)

For Day Three of Fractal Week we have our youngest fractal, first described in the mid-sixties. This fractal curve goes by many names: “The Harter-Heighway Dragon,” “The Dragon Curve,” or “The Jurassic Park Fractal.” That last name refers to the fractal’s best-known appearance, on the section title pages of the book Jurassic Park, where it iterates a little further in each section. This is probably the toughest fractal of the week to draw, and it features our guest artist “The Little Red-Haired Girl.”

Step One: Draw a line. Rotate a copy of this line 90 degrees (clockwise) and attach it to the end of the first line.

The black line is our starting place, and the red line is the line we’ve rotated.

Step Two: We now have an L-shaped line. Rotate a copy of this ENTIRE line 90 degrees, and attach it to the end as in Step 1.

The number of lines it takes to draw this fractal doubles with each iteration. It is important to always rotate a copy of the entire previous shape and attach it to the end of the previous iteration.

Step Three: We now have a backwards question mark. Rotate a copy of this ENTIRE line 90 degrees, and attach it to the end of the previous step.

Hopefully now you’re beginning to get the idea. This next iteration is where it gets a bit tricky.

Step Four: Repeat Step Three for the ENTIRE new line we’ve constructed.

As you can see, our rotated segment meets the previous step to form a box. While the Dragon Curve never crosses itself, it does touch itself at corners like this one in a number of places.

Step Five: Repeat Step Three for the ENTIRE new line we’ve constructed.

By five iterations we’re starting to get an idea of the whole shape. As in all of the previous drawings, the section in red is the rotated copy, and the black is the previous stage. Let’s see a couple more iterations:

Step Six: Repeat Step Three for the ENTIRE new line we’ve constructed.

Step Seven: Repeat Step Three for the ENTIRE new line we’ve constructed.

Here’s a drawing with two more repetitions or iterations (Stage 9 of the fractal, mirror image):

Dragon Curve (Hand-Drawn)

This fractal turned out to be very difficult to draw because of the number of times it curves back on itself. While the final product took only about 20-30 minutes to draw, both of us took a crack at it for hours. Two things are important if you try this yourself:

1) Draw the new stage in a different color or on a different piece of paper to always have the previous shape as a reference.

2) Remember you are rotating the entire previous step, and always attaching it to the end (like uncurling a ribbon). You want to maintain the same rotation and direction.

Here are a couple more iterations using a computer:

13 Iterations:

16 Iterations:
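
If you’d like to generate these yourself, here’s a short C++ sketch of the rotate-and-append construction from the steps above (one way to do it; I’m using complex numbers for the 90-degree rotation): each iteration rotates a copy of the whole curve clockwise about its endpoint and attaches it there, like uncurling the ribbon.

```cpp
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    using pt = std::complex<double>;
    std::vector<pt> curve = {pt(0, 0), pt(1, 0)};  // Step One's first line

    const int iterations = 13;
    for (int i = 0; i < iterations; ++i) {
        pt pivot = curve.back();
        // Walk the existing curve backwards (skipping the pivot itself) so
        // the rotated copy attaches end to end, like uncurling a ribbon.
        std::vector<pt> rotated;
        for (auto it = curve.rbegin() + 1; it != curve.rend(); ++it)
            rotated.push_back(pivot + (*it - pivot) * pt(0, -1));  // 90 deg CW
        curve.insert(curve.end(), rotated.begin(), rotated.end());
    }

    // Connect successive points in order to see the dragon.
    for (const auto& p : curve)
        std::printf("%f %f\n", p.real(), p.imag());
}
```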

Going Deeper (Dimension and Self-Similarity)

The Dragon Curve is an example of a space-filling curve and has a fractal dimension of 2. This essentially means the dragon is a shape, a 2D object, despite being a curve that never crosses itself and whose ends never meet.

Also, as you can see from our illustrations above, each new stage of the fractal contains two copies of the previous stage. This is a slightly different kind of self-similarity from the Sierpinski Triangle and Koch Snowflake, which look exactly the same no matter how far you zoom in.

Observe on the Koch Snowflake (image source: Wikipedia):

The Dragon Curve, while not exactly the same at all resolutions, is still constructed from copies of its previous iterations and maintains the same basic shape.

Sidebar: Jeff Goldblum’s character in the movie Jurassic Park is a mathematician who specializes in Chaos Theory, related to the Chaos Game we played on Monday. While the Dragon Curve is a good choice for a book about dinosaurs, it is completely deterministic and not chaotic at all: the dragon comes out exactly the same every time you construct it. The Chaos Game, on the other hand, yields slightly different results each run, which nonetheless resolve into the recognizable shape of the Sierpinski Triangle.

One more fractal to go: our oldest, the Hilbert Curve. Any questions?

Want some fractals you can color? You might like my new Adult Coloring Book: Fractals.

Filed under Uncategorized