One of the Magic blogs I watch is from a guy writing a Magic program called Forge. It’s all written in Java using Swing, and for just one guy, it’s pretty impressive. It’s got a fairly large card library and decent AI, with lots of wingdings and features like drafting and a quest mode. Unlike a lot of Magic programs that depend on the players to enforce the rules, it actually has a concept of phases and the flow of the game. Props.
He (I’m assuming he; sorry if that’s incorrect) made a fairly interesting post today about two types of AI. The first he describes is one where each card has certain rules associated with it, and based on those bits, the AI comes up with a plan. The other is a mathematical approach where various cards are weighted and plays are evaluated (like I described in earlier posts). Forge uses the former, while Duels of the Planeswalkers uses the latter. I figured I would expand on this some.
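To make the contrast concrete, here’s a minimal sketch in Java (the language Forge is written in). Every class name, method, and weight below is invented purely for illustration; this isn’t Forge’s or Duels of the Planeswalkers’ actual code.

```java
import java.util.List;

// Hypothetical types standing in for a real game engine -- not any program's actual API.
interface GameState {
    boolean hasCreatureInCombat();
    int myLife();
    int myCreaturePower();
    int opponentLife();
}

interface Play {
    GameState resultingState(GameState current);  // simulate the play's outcome
}

// Approach 1: rule-based. Each card carries its own hand-written play logic.
interface CardLogic {
    boolean shouldPlay(GameState state);
}

class GiantGrowthLogic implements CardLogic {
    public boolean shouldPlay(GameState state) {
        // A card-specific blurb: only pump when a creature is in combat.
        return state.hasCreatureInCombat();
    }
}

// Approach 2: evaluation-based. Score every legal play with a weighted sum
// of game features and pick the highest-scoring one.
class Evaluator {
    double score(GameState s) {
        // Weights are made up for illustration.
        return 2.0 * s.myLife() + 3.0 * s.myCreaturePower() - 2.5 * s.opponentLife();
    }

    Play choose(List<Play> legalPlays, GameState current) {
        Play best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Play p : legalPlays) {
            double s = score(p.resultingState(current));
            if (s > bestScore) { bestScore = s; best = p; }
        }
        return best;
    }
}
```

The first approach needs someone to write a blurb like GiantGrowthLogic for every card; the second only needs a simulator and a scoring function, and the cards come along for free.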
What he’s describing is actually a fairly important divide in AI right now. AI can be split into two general categories: symbolic AI and statistical AI, with symbolic covering the specific, hand-coded knowledge and statistical covering the weighting of cards. Right now, the field is heavily skewed towards statistical AI, mostly because it works. Symbolic AI matches better with how we think humans work: it has concepts and some knowledge coded in. It hasn’t delivered, however. The big successes in AI recently have come from a statistical approach. For example, web search (like Google) is an AI problem: given a set of query words, try to send back meaningful links. It would be very difficult to actually come up with specific meanings for the crazy number of websites out there. Instead, methods like data mining can apply statistical techniques to draw connections from millions of examples.
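As a toy illustration of the statistical flavor (and nothing like what a real search engine does), you can rank pages just by counting how often the query words appear in them, with no hand-coded meaning anywhere:

```java
import java.util.*;

// Toy example only: rank "pages" by how many query-word hits they contain.
// Real search engines use far more signals, but the flavor is the same --
// statistics over lots of text instead of a meaning coded in per page.
class ToySearch {
    static List<String> rank(List<String> pages, Set<String> queryWords) {
        Map<String, Integer> scores = new HashMap<>();
        for (String page : pages) {
            int score = 0;
            for (String word : page.toLowerCase().split("\\W+")) {
                if (queryWords.contains(word)) score++;
            }
            scores.put(page, score);
        }
        List<String> ranked = new ArrayList<>(pages);
        ranked.sort((a, b) -> scores.get(b) - scores.get(a));
        return ranked;
    }
}
```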
Of course, the boundary isn’t strict. Take the “related videos” on YouTube. They’re mostly generated by analyzing tons of videos, figuring out what people watch after their current video, relative ratings, and so on. An important part of that, though, is determining the presence of tags and similarities between videos. For example, the tags “batman” and “superman” are more likely to indicate similarity between two videos than “batman” and “carrots.” Of course, this analysis is also part of the huge mathy cog that is data mining, but at some level, it’s backed by some representation of knowledge, even if it’s just a word entered by the user to describe the video.
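One simple way to turn tags into a number (just a sketch of the idea, not how YouTube actually computes relatedness) is the Jaccard overlap between two videos’ tag sets:

```java
import java.util.*;

// Rough sketch: tag overlap (Jaccard index) as a similarity signal.
// Two videos sharing "batman" and "superman" score higher than two sharing nothing.
class TagSimilarity {
    static double jaccard(Set<String> tagsA, Set<String> tagsB) {
        Set<String> intersection = new HashSet<>(tagsA);
        intersection.retainAll(tagsB);
        Set<String> union = new HashSet<>(tagsA);
        union.addAll(tagsB);
        return union.isEmpty() ? 0.0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> a = new HashSet<>(Arrays.asList("batman", "superman", "comics"));
        Set<String> b = new HashSet<>(Arrays.asList("batman", "superman", "movies"));
        Set<String> c = new HashSet<>(Arrays.asList("carrots", "cooking"));
        System.out.println(jaccard(a, b));  // higher: shared superhero tags
        System.out.println(jaccard(a, c));  // lower: nothing in common
    }
}
```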
I’m not yet willing to bet on whether one is better than the other. So far, statistical AI has proven very useful, but I actually spent last summer working with a cognitive architecture to generate football plays. It’s amazing how logical inference and reasoning can take something simple, such as moving a player around, and demonstrate some idea of trying to find gaps. One of the big obstacles is demonstrating general intelligence instead of relying on domain-specific knowledge. Like the Forge programmer mentioned, he writes blurbs specifically for Giant Growth or Shock. The first step for a reasoning agent would be to generalize Shock logic to Incinerate. Using that to play chess is another matter entirely, but small steps might one day yield some crazy emergent qualities, and that AI would be something that truly understands what it’s doing.
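A rough sketch of what that generalization step might look like (card names are real; everything else is made up): describe the whole family of “deal N damage” spells once, and Shock and Incinerate fall out of the same rule.

```java
// Hypothetical sketch: instead of a separate blurb per card, describe a class
// of cards ("deal N damage to a target") and write one rule against that.
class DamageSpell {
    final String name;
    final int damage;

    DamageSpell(String name, int damage) {
        this.name = name;
        this.damage = damage;
    }

    // One rule covers Shock, Incinerate, and any other burn spell:
    // kill a creature if the damage is lethal, otherwise hit the opponent.
    String chooseTarget(int biggestCreatureToughness) {
        return damage >= biggestCreatureToughness ? "creature" : "opponent";
    }
}

class GeneralizationDemo {
    public static void main(String[] args) {
        DamageSpell shock = new DamageSpell("Shock", 2);
        DamageSpell incinerate = new DamageSpell("Incinerate", 3);
        System.out.println(shock.chooseTarget(3));       // opponent: 2 damage isn't lethal
        System.out.println(incinerate.chooseTarget(3));  // creature: 3 damage is lethal
    }
}
```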