Wednesday, June 23, 2010

Researching A.I.

In order to get a feel for what I should be setting out to accomplish with the artificial intelligence for Flames of Redfield, I decided to read a few books and articles related to the subject, namely The Universe Within by Morton Hunt, Artificial Life by Steven Levy, The Quest for Machine Intelligence by Harold Kimball and The True Value of Socially Constructive Gaming by Michael Powell.

When I read for informational purposes, I take notes, and when I take notes, I like to create diagrams, pictures, and charts to help me better understand what I just read. I know some people out there learn the same way, so I’ll post the drawings that help me understand my research.

The first diagram summarizes my research from chapter one of The Universe Within:

When we explore the interior universe, we find that the human mind can perform an extraordinary amount of tasks, including: word recognition, remembering, recognizing absurdity, handling directions, word searching, language translation, problem solving, logical reasoning, and comprehension of fiction.

I narrowed these aspects of the brain down to the ones I deemed most important for artificial intelligence in gameplay: Recognition, Remembering, and Comprehension.
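
Since I think best in concrete terms, here is a rough sketch (in Python, purely for illustration) of how those three faculties might map onto a game NPC. Every name in it is a placeholder of my own invention; none of this comes from the books or from the actual Flames of Redfield code.

# A rough sketch of how recognition, remembering, and comprehension might
# map onto a game NPC. All names are hypothetical placeholders, not anything
# taken from the books or from the real Flames of Redfield code.

class NPCMind:
    def __init__(self):
        self.memory = []  # remembering: a running log of recognized events

    def recognize(self, stimulus):
        """Recognition: classify a raw game event into something the NPC understands."""
        if stimulus.get("type") == "sound" and stimulus.get("volume", 0) > 5:
            return "loud_noise"
        if stimulus.get("type") == "sight" and stimulus.get("tag") == "player":
            return "player_spotted"
        return "nothing_of_interest"

    def remember(self, percept):
        """Remembering: store what was recognized so it can shape later decisions."""
        self.memory.append(percept)

    def comprehend(self):
        """Comprehension: draw a simple conclusion from the accumulated memories."""
        recent = self.memory[-5:]
        if recent.count("player_spotted") >= 2:
            return "pursue_player"
        if "loud_noise" in recent:
            return "investigate"
        return "patrol"


# One example tick: perceive, store, then decide.
mind = NPCMind()
percept = mind.recognize({"type": "sound", "volume": 8})
mind.remember(percept)
print(mind.comprehend())  # -> "investigate"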

However, these three traits on their own only capture the observable side of intelligence, and some would argue that is not intelligence at all. An artificial intelligence that merely displays them would be a façade of intelligence.

So, how can we push for a more respectable intelligence? Kimball in The Quest for Machine Intelligence states that Jeff Hawkins has found a way.

Older artificial intelligence projects aimed to create a set of rules for the computer to follow so that it would display the observable behavior of intelligence, but Hawkins developed a new approach to the problem: instead of making the computer follow concrete rules, it should develop internal representations within a hierarchy (a method he applied to his computer vision system).

What can be taken from these developments? A machine intelligence should mimic not the observable behavior of intelligence, but the inner cognitive workings that lead to these behaviors.
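
To make that distinction concrete for myself, I wrote a small toy in Python. It is emphatically not Hawkins's actual algorithm, just a sketch of the spirit of the idea: rather than hand-writing a rule for every situation, the machine builds an internal model from the sequences it observes and uses that model to predict what comes next. The event names are made up.

# A toy illustration of the idea (NOT Hawkins's real algorithm): the machine
# builds an internal representation from experience instead of being handed
# a rule table, and uses it to predict what follows what.

from collections import defaultdict

class SequenceLearner:
    """Learns which observation tends to follow which, then predicts."""

    def __init__(self):
        # Internal representation: counts of "b followed a" transitions.
        self.transitions = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        for current, following in zip(sequence, sequence[1:]):
            self.transitions[current][following] += 1

    def predict(self, current):
        followers = self.transitions[current]
        if not followers:
            return None  # nothing learned yet for this observation
        return max(followers, key=followers.get)


learner = SequenceLearner()
learner.observe(["door_opens", "footsteps", "gunfire", "silence"])
learner.observe(["door_opens", "footsteps", "shouting"])
print(learner.predict("door_opens"))  # -> "footsteps", learned rather than hard-coded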

Computers have an enormous amount of processing power, allowing them to do things humans simply cannot: checking thousands of chess moves in less than a second, evaluating which will be most beneficial, and finally deciding on the most advantageous move. Humans simply do not think in that manner. We quickly narrow our search to a few moves, check those moves, and pick one because of our time limitations.

Therein lies the problem of machine intelligence: the need to actually figure out how the brain works, how it creates this simple decision tree, and how to mimic these internal thought processes in computer code.
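
Here is a hypothetical sketch of that "narrow first, then think hard" style of search. The evaluation functions are stand-ins I made up; a real game would plug in its own cheap heuristic and its own expensive deep search.

# A sketch contrasting brute-force search with human-style narrowing.
# The scoring functions are stand-ins for a real heuristic and a real deep search.

def cheap_heuristic(move):
    """Fast, rough guess at a move's worth (stand-in)."""
    return move["rough_score"]

def deep_evaluation(move):
    """Slow, careful evaluation (stand-in for an expensive deep search)."""
    return move["rough_score"] + move["hidden_bonus"]

def brute_force_choice(moves):
    # Computer style: deeply evaluate every single move.
    return max(moves, key=deep_evaluation)

def human_style_choice(moves, shortlist_size=3):
    # Human style: quickly narrow to a few promising moves,
    # then spend the expensive thinking only on those.
    shortlist = sorted(moves, key=cheap_heuristic, reverse=True)[:shortlist_size]
    return max(shortlist, key=deep_evaluation)


moves = [
    {"name": "advance pawn", "rough_score": 2, "hidden_bonus": 0},
    {"name": "trade knights", "rough_score": 5, "hidden_bonus": 1},
    {"name": "castle", "rough_score": 4, "hidden_bonus": 3},
    {"name": "sacrifice queen", "rough_score": 1, "hidden_bonus": 9},
]
print(human_style_choice(moves)["name"])  # deep-evaluates only the top few candidates

Of course, the top-level loop is the easy part. The hard part, and the one these books keep circling back to, is how the brain builds that shortlist in the first place.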
