Archive for the ‘AI’ Category

On the Application of Bayesian Analysis to Magic by Chris D

Sunday, December 13th, 2009

(Here’s one of the more technical papers I got. Very, very interesting analysis of Magic AI. Of course, the people out there who actually do Magic AI have a better idea of how true these things are, but I think there are a bunch of good observations. -Kevin)

Recently, deterministic computer systems have been shown to be highly effective at various games of strategy previously thought to be the exclusive province of human thought.  Chess and chess-like games are a primary example: the ability of computer systems to play them has surpassed that of all human players.  Indeed, some simpler games, such as checkers, have been “solved” outright – there exists an algorithm that, along with a corresponding database of predetermined calculations, can provably determine the best move in any position in a relatively short amount of time.  On the other hand, many other strategy games have not been successfully played by computers.  This lack of success can be attributed to a variety of difficulties, depending on the characteristics of the game being attempted.

Chess is a particularly auspicious candidate for computerized solutions because it is: (1) open in the information domain (a game of perfect information), that is, all aspects of the game state are known to all players at all times; (2) highly tactical, that is, many if not most variations involve lines of play in which each player must choose from among a very small number of good-looking moves, while all other moves can be clearly shown to lose, which decreases the branching factor of the game tree; (3) simple to define, that is, each piece moves according to basic geometric rules and legal moves tend to remain legal over long periods of time; (4) drawish, that is, games tend toward draws and wins always involve capitalizing on the opponent’s errors.  These factors combined make chess a good candidate for computer play.

Magic: The Gathering, on the other hand, is poorly suited to being played by an artificial intelligence, for many reasons.  It is: (1) closed in the information domain, that is, portions of the game state, such as the contents of the players’ hands, are known to only one player; additionally, the order of cards within the players’ decks, which represents most of the information content of the game state, is hidden from both players.  This will present problems to any computer trying to play a good game of Magic, although it is mitigated somewhat by the fact that the order of cards within the deck has much less impact on the outcome of the game than the information that is available to the players.  Magic is also: (2) highly strategic, that is, players are seldom forced to do anything, and there are many points within each “turn” where players may act or react (but usually choose not to), further increasing the time complexity of searching the game tree; (3) difficult to define, that is, with thousands of Magic cards in print, many of which have eccentric and difficult-to-implement rules that alter or add to the game state, it is difficult to represent and effectively analyze the game state; (4) random in outcome, that is, games are often won or lost on random chance, and in many cases strategy matters less than drawing the correct card.

Many of the ways in which Magic is difficult to implement on a computer can be resolved, if only partially, through aggressive use of Bayesian analysis.  In particular, Bayesian analysis allows the program to make decisions based on only partial knowledge of the game state, and to update its decision algorithms as new information becomes available.  However, since the expected behavior of a Magic game, considered as a system, is highly dependent on the behavior of the opponent, it is difficult to use Bayesian analysis alone to determine the correct play in a given situation.
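To make the updating step concrete, here is a minimal sketch in Python of the kind of inference involved; the scenario (inferring a combat trick from an aggressive attack) and every probability in it are invented for illustration.

    # Toy Bayesian update: does the opponent hold a combat trick?
    # Every number here is a made-up prior/likelihood for illustration.
    p_trick = 0.30                  # prior: opponent holds a trick
    p_attack_given_trick = 0.90     # chance they attack into a bigger blocker with a trick
    p_attack_given_no_trick = 0.15  # chance they attack into it as a bluff

    # Observation: the opponent attacked into our bigger creature.
    p_attack = (p_attack_given_trick * p_trick
                + p_attack_given_no_trick * (1 - p_trick))
    p_trick_given_attack = p_attack_given_trick * p_trick / p_attack
    print(round(p_trick_given_attack, 2))   # 0.72: a trick is now much more likely

The same mechanism applies to any hidden part of the game state: start with a prior derived from the decklist or the metagame, then update it every time the opponent acts or declines to act.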

A computer should be able to manage Magic data from multiple perspectives.  In general, we can consider the effects of a play as an Aggro, Combo, or Control action.  We can also attempt to quantify the costs and benefits of an action in terms of variables such as: (1) life lost/gained, (2) damage dealt, (3) field advantage, (4) card advantage, (5) mana acceleration, and (6) information gained/lost.  An example of a card which gives card advantage is Divination, which draws two cards at the cost of a single card.  An example of a card which causes information loss is Time Spiral, since it causes cards from the hand and graveyard to be shuffled back into the library.
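A sketch of how those six axes might be carried around in code follows; the axes come straight from the list above, while the class, the weights, and the score method are my own illustrative additions.

    from dataclasses import dataclass

    @dataclass
    class ActionValue:
        life_delta: int    # (1) life lost/gained
        damage: int        # (2) damage dealt
        field_delta: int   # (3) field advantage (net permanents, roughly)
        card_delta: int    # (4) card advantage (net cards)
        mana_delta: int    # (5) mana acceleration
        info_delta: int    # (6) information gained/lost

        def score(self, w):
            # Collapse the six axes into one number; the weights would need
            # tuning (or learning) per deck archetype.
            return (w["life"] * self.life_delta + w["damage"] * self.damage
                    + w["field"] * self.field_delta + w["card"] * self.card_delta
                    + w["mana"] * self.mana_delta + w["info"] * self.info_delta)

    # Divination: one card spent, two drawn, so a net card advantage of +1.
    divination = ActionValue(0, 0, 0, +1, 0, 0)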

When designing a deck, it is important to keep a balance between many different factors.  For example, in decks which include mana acceleration, it is critical to have exactly the right number and kinds of cards which provide it.  Even in decks which do not focus on card advantage, it is often beneficial to include some.  For example, I included two Sign in Blood cards in my class tournament deck to provide card advantage at critical times, as well as two Gravediggers for the same purpose.  In general, I found this added element of card advantage invaluable in defeating certain kinds of Aggro decks within the environment, specifically those which would go full Aggro for the first few turns of the game and then run out of cards to play.  Against these sorts of decks, I can trade my creatures one for one (or in many cases one for two) and, by gaining card advantage from such cards, still have one or two extra cards to play later on when my opponent has run out.  Similarly, any computer program that attempts to play Magic must be aware of elements such as card advantage, field advantage, and mana acceleration, which, although possibly orthogonal to its main strategy, provide the additional push needed to dominate a similarly situated opponent.

One major barrier to the performance of a Magic: The Gathering engine is the lack of professional Magic games recorded in a computer-readable format.  In comparison, chess has hundreds of years of history over which the rules of the game have remained basically the same, and millions of recorded games played throughout that history.  The nature of chess also lends itself to being recorded and analyzed by computer programs.  Magic, on the other hand, has rules that change every few months, and its games are rarely recorded in any complete form, partly because so much of the game state is hidden from the players.

Several elements of strategy which I found useful during the tournament could, under the right conditions, be adapted to a computer algorithm.  I found that it was generally best to get out a creature on turn one, and to use the most powerful creature available when doing so.  For example, I started many games with my tournament deck by playing a Swamp and casting a Vampire Lacerator, a 2/2 with a drawback of life loss for its controller.  If I am going second, this is especially useful because it discourages attacks from 1/1 creatures my opponent may have played on his or her first turn.  If I am going first, then I can usually attack on turn two for 2 damage, and then play an appropriate two-drop creature, such as a Rathi Trapper (1/2) or a Putrid Leech (2/2), both of which can block enemy one-drop attackers.  A computer would also need to know when to trade its creatures for its opponent’s creatures.  For example, if I played a Vampire Lacerator on turn one, and my opponent played a Goblin Guide (a 2/2) and swung with it, I would probably block and trade my Lacerator for his Goblin.  On the other hand, if instead I was playing green and had out a Llanowar Elves (1/1), and my opponent played a Raging Goblin (1/1 with haste) and swung, I would probably not block, and the computer ought not to block either.  There may also be special conditions under which these actions would not be correct.  For example, if I controlled a Lacerator and my opponent swung with a Goblin Guide, and I noticed a Child of Night and two Feast of Bloods in my hand, then I might decline the block so that I can play my Child of Night on turn two and then control two Vampires, satisfying the cast condition for Feast of Blood.  So this situation is really more complicated than simple rules can determine; a sketch of the simple-rule version appears below.  This is why Magic is such a complicated game and is hard for computers to play well.
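Here is that simple-rule blocker in Python.  The structure is mine; the comparisons are just the power/toughness arithmetic from the examples above, and the needed-elsewhere flag stands in for plan-dependent exceptions like the Feast of Blood line.

    from dataclasses import dataclass

    @dataclass
    class Creature:
        power: int
        toughness: int

    def should_block(attacker, blocker, blocker_needed_elsewhere=False):
        survives = blocker.toughness > attacker.power
        kills = blocker.power >= attacker.toughness
        if blocker_needed_elsewhere:   # e.g. keeping two Vampires for Feast of Blood
            return False
        if survives:                   # a free block is always acceptable here
            return True
        return kills                   # otherwise, block only if it trades

    # Lacerator (2/2) blocking an attacking Goblin Guide (2/2): a trade, so block.
    print(should_block(Creature(2, 2), Creature(2, 2)))   # True

Note that this rule happily trades a Llanowar Elves for a Raging Goblin, the very block argued against above; the mana the Elves produce is worth more than a one-for-one trade, and that is exactly the kind of context a rule this simple cannot see.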

The Icarus Cognitive Architecture

Thursday, September 3rd, 2009

In my last post, I talked mostly about Knowledge Representation (KR), an area of AI where the intelligence is logical inference on statements. In an example there, I showed how First-Order Logic provided the symbols to have a computer infer that a creature was tapped. KR is very good at making inferences about and understanding its environment, but perception is half the battle for intelligence. You also need to act, and that’s where cognitive architectures come in.

Cognitive architectures attempt to provide an integrated system for all sorts of intelligent behavior. How this happens is fairly broad, but generally, you need some components for recognition and some for action. Some of the better known ones include ACT-R, Soar, and Prodigy. The one that I know about is Icarus. I’ll give you a quick run-down about how it works, and then show it in action. You can read the manual if you want a more complete picture (it’s fairly well written and not too technical), but I’ll try to pick out the important parts.

Icarus is a goal-driven architecture. When it starts, you give an agent a certain goal, and it attempts to use various skills to work towards that goal until it recognizes that the goal has been completed. An agent will work at this goal for a given number of cycles (or ticks). On each of these cycles, it first looks to see which percepts are satisfied. For example, you might have a percept that recognizes a creature that either has haste or has been under your control since the beginning of your upkeep. On top of these percepts, Icarus can make inferences to come to beliefs about the world. We say that the agent has “concepts” of what potential “beliefs” there are. An example of this might be (blocker ?creature), which specifies that a creature should be used as a blocker because it is ready, your opponent has a creature, and it has much higher toughness than power. As it is, (blocker ?creature) doesn’t mean a lot, but it might be leveraged later to determine an action.

So percepts and beliefs are our grounding in KR. We also have a goal, which is itself a belief, so that we can recognize when the goal has been completed. The last part is the set of skills that our agent has. These skills have start conditions and functions that are invoked to make things happen. These functions are the actions; they can actually change the environment (usually by manipulating the variables representing the environment). If an agent sees that its goal has not been satisfied, because the goal is not in the set of percepts or beliefs for this cycle, it will invoke the skill that should cause the goal to be reached. This becomes even more powerful because Icarus has hierarchical skills as well, meaning that skills can have subgoals which invoke other skills that must be satisfied first. For example, the (cast ?spell) skill might have subgoals of (mana ?cost) and (target ?creature).
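In Python-flavored pseudocode (this is my sketch of the cycle just described, not Icarus syntax, and every method name here is invented), one pass of the architecture might look like this:

    def run_cycle(agent, environment):
        # 1. Perception: which percepts match the current environment?
        percepts = {p for p in agent.percepts if p.matches(environment)}
        # 2. Inference: derive beliefs from percepts (and other beliefs).
        beliefs = agent.infer(percepts)
        # 3. Goal check: stop once the goal shows up among the beliefs.
        if agent.goal in beliefs:
            return "done"
        # 4. Skill selection: pick a skill whose effects include the goal,
        #    pushing any unmet subgoals first (hierarchical skills).
        skill = agent.skill_achieving(agent.goal, beliefs)
        for subgoal in skill.unmet_subgoals(beliefs):
            agent.push_goal(subgoal)   # e.g. (mana ?cost) before (cast ?spell)
        skill.execute(environment)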

So that’s a lot of junk. I have no idea if that makes any sense (actually, tell me if it doesn’t, because it means I need to refine this for the class), but I do have some examples of this at work. The code I’ve linked to is Icarus code. It’s definitely not like C because it’s all written in Lisp, a very popular language among AI researchers. I’d like to hope it makes sense on its own, but I might have lived in code for long enough that code looks like plain English to me (that’s a tragic thought). An important thing to note in all of the code is that in Icarus, anything that has a question mark in front of it, like ?creature, is a variable, and most other stuff is a constant. And anything with a semicolon in front of it is a comment, absolutely in plain English so I can try to explain what’s going on. So you can take a look at my very simple Icarus agent that determines whether a creature should attack, and how much it needs to pump before it’ll attack with it.

Magic Icarus Agent Code

So we can see the output from running this agent here. You can see its goals, percepts, beliefs, and what skills it’s trying to execute on each cycle. In this case, on cycle 1, it sees its goal is attack-set. It gets the skill for that. On cycle 2, it sees that it needs to satisfy one of its subgoals, so it executes attack. Note that it already knows that craw wurm is the strongest, so it can skip that subgoal. On cycle 3, it sees that it has satisfied all of the subgoals of attack-set, and on cycle 4, it declares attack-set completed. Fun.

But maybe that’s not really that impressive, so I ran it again. In this run, you can see that the first thing I do is add a new creature to the enemy side: a Kalonian Behemoth. The big difference here is that it first sees that our creature isn’t the strongest. To make this belief true, it has to use both of its subgoals. Maybe it doesn’t sound so impressive, but I think it’s neat that we have a single agent that behaves differently based on its environment.

So hopefully that gives you a sense of how cognitive architectures work. Of course, I realize that my agent isn’t nearly powerful enough to actually play Magic. I do think, though, that something very impressive could be built out of it. Compared to our minimax AI, I think it’s very clear that this “understands” Magic far better. It recognizes parts of the environment and has a relatively clear line of deliberate actions and skills to help it reach the goal of winning.

But your likely skepticism is warranted. There are a few major drawbacks here. One, there’s a lot of code to do this. As I mentioned last time, it might take hundreds or thousands of predicates (or percepts or concepts) to encompass Magic, and although it might be more true to life, that’s a lot of code that has to be hand-written. And that’s one of the real problems for researchers interested in cognitive architectures: a lot of the knowledge has to be encoded by hand. And if it’s done by hand, you’re hard-pressed to argue that your agent is intelligent; the intelligence is really that of the person who programmed the agent. Thus, a big part of research in this area is to provide generalizable knowledge that extends past single domains. For example, an agent that could play Magic would be good, but if that agent could play Yu-Gi-Oh based entirely on its own reasoning without additional code, that would be tremendous.

Another drawback is that these agents are bad with numbers. It’s hard to imagine computers being bad with numbers, but you can kind of see it with this agent. For example, let’s say you wanted to figure out how much to pay for your firebreathing. You might test it with 3, then 4, then 5. That’s hard for this kind of agent because it doesn’t support numbers as variables in the state. Or maybe you want your goal to be to minimize the damage you take this turn; it doesn’t deal with that too well, either. These, however, would be easy for a statistical algorithm: it would just run a for-loop over those values for firebreathing and see which had the highest resulting evaluation.
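That for-loop really is a one-liner. In this sketch, evaluate and pump_firebreathing are hypothetical stand-ins for an evaluation function and a state update, not real APIs:

    # Brute-force the firebreathing payment: try each amount of red mana
    # and keep whichever resulting position scores best.
    best_x = max(range(0, 6), key=lambda x: evaluate(pump_firebreathing(x)))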

So that’s a look at using a cognitive architecture to play Magic. It’s likely more difficult than just programming it straight up, but for cognitive architectures, Magic is just one of many domains they might try to tackle. Large, statistical algorithms are definitely more popular now, but maybe we just need a good presentation of a generally intelligent agent to prove that symbolic AI has some promise.

And by the way, big Zendikar spoilers coming out as I write:

http://mtgcast.com/?p=2365

Symbolic AI and Knowledge Representation

Monday, August 31st, 2009

I have no excuse for why I haven’t written. Know that it isn’t because we aren’t working on the class, though, because that has been progressing nicely.

I mentioned in a previous post that I’ve spent some time working with a cognitive architecture, and it seemed like a good place to test out some Magic logic. Specifically, I was working with the Icarus cognitive architecture, designed by Pat Langley and maintained by his lab. Cognitive architectures aim to model the logic and behavior of a thinking system as a path towards AI. Notably, they take a symbolic, big-picture approach to AI. Although Google’s search algorithm and alpha-beta pruning are both products of AI, neither really matches how people think. We don’t know to go to mlb.com for baseball news because a million people linked to it; we know because we understand what mlb represents and its connection to other ideas.

And that’s one of the major “religious” goals here: learn a lot from a single example. Data mining requires condensed information from thousands of examples. Cognitive architectures require detailed information from a single example. One isn’t necessarily better than the other; they are different approaches to learning. As an analogy, data mining is like heavy playtesting for tweaking. When you know how to play your deck, you might play many, many matches to learn whether you need 3 or 4 Lightning Bolts in your deck, and you can learn a lot about your deck that you couldn’t from playing one match. But you’re also learning a great deal in just that first game. For example, I was watching a Standard B/G Elf deck play against a Jund Aggro deck a couple weeks ago. I understood that Chameleon Colossus was good, but only when watching the Jund player squirm did I realize how well it works, with Lightning Bolt, Terminate, and Maelstrom Pulse all being useless against it. I didn’t need to watch 1000 games to know that the Elf deck would do better with more Colossuses in; I saw and understood that in one go. How? There wasn’t nearly enough statistical data to do that sort of analysis. For cognitive architecture people, maybe it’s logic.

If you’re not familiar with First-Order Logic (FOL), that’s not really a problem. Just know that it gives us a very good way of formalizing a wide variety of statements. It takes a bunch of statements, each evaluated to be either true or false, and chains them together into a much larger and more powerful language. Logic is definitely philosophy stuff, but in AI, we have Knowledge Representation, or KR. KR attempts to encode our knowledge into logical statements and make inferences based on that data.

Here’s an example involving what happens when creatures attack. First, we need to define several predicates. Predicates are similar to functions but return only true or false depending on the values passed to them. Let

attack(x, t) be a predicate that means that creature x attacked on turn t, so attack(“GrizzlyBears”,3) means that you attacked with a Grizzly Bears on turn 3

tapped(x, t, p) be a predicate that means that creature x is tapped during turn t in phase p, so tapped(“ElvishArchdruid”, 4, “1stmain”) means that the Elvish Archdruid was tapped during the first main phase of your 4th turn

after(x,y) be a predicate that means that phase x is after phase y, so after(“combat”, “upkeep”) means that combat comes after upkeep.

As a matter of convention, all of the single letters are variables, and the constants are put in quotes. This scheme isn’t really standard but will work well enough for our purposes.

So, we, of course, know that when you attack with a creature, you have to tap it. We could encode this in First-Order Logic as follows:

∀c∀t∀p ((attack(c,t) ∧ after(p, “combat”)) → tapped(c,t,p))

That looks a little weird (and might be wrong, so someone double-check me on that), but let’s break it down:

  • ∀c∀t∀p means “For all c, t, and p”. c, t, and p are all variables (just like in algebra and programming), so this says that the following statement will apply for all values of c, t, and p.
  • (attack(c,t) ∧ after(p, “combat”)) means that it is true both that creature c attacked on turn t and that phase p comes after combat. The ∧ means “and”, which is why both the first part AND the second part must be true.
  • → means that something implies something else. In this case, it means that the bit we just looked at implies the rest. We can say that something implies something else if either the implied part is true, or the implying part is false. It can also be read as “If A, then B”. Maybe this will make more sense when we put it together.
  • tapped(c,t,p) means that creature c is tapped on turn t during phase p.

So when we take that all together, the sentence says:

For all c, t, and p, if creature c attacked during turn t, and p is after combat, then c is tapped during phase p.

This should make sense and is generally true. We could probably consider this one of our known truths (axioms) in Magic. Of course, things like vigilance, dying, untap abilities, and more don’t necessarily fit this, but we could extend our set of predicates and axioms to include those. And it’s somewhat vague, as there might be multiple combat phases and such, but this is just a model.

So why does this matter? Because this sort of sentence allows us to make inferences about a situation. For example, let’s say we know that

attack(“RagingGoblin”, 1)

is true, so a Raging Goblin attacked during turn 1. If we assert that

∀c∀t∀p ((attack(c,t) ∧ after(p, “combat”)) → tapped(c,t,p))

and we know

after(“end”, “combat”)

is true, then we can infer that

tapped(“RagingGoblin”, 1, “end”)

So we know that the Raging Goblin is tapped at the end of turn 1. You were waiting for something more insightful, eh?
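That inference is mechanical enough to sketch directly. Here is a toy forward-chaining step in Python, not a real KR engine; the fact encoding follows the predicates defined above.

    # Facts we know, stored as (predicate, args) tuples.
    facts = {("attack", ("RagingGoblin", 1)),
             ("after", ("end", "combat"))}

    # The axiom: for all c, t, p: attack(c,t) ∧ after(p,"combat") → tapped(c,t,p)
    def apply_attack_axiom(facts):
        derived = set()
        for (_, (c, t)) in (f for f in facts if f[0] == "attack"):
            for (_, (p, q)) in (f for f in facts if f[0] == "after"):
                if q == "combat":
                    derived.add(("tapped", (c, t, p)))
        return derived

    print(apply_attack_axiom(facts))
    # {('tapped', ('RagingGoblin', 1, 'end'))}

A real KR system generalizes this: many axioms, applied repeatedly until no new beliefs appear, which is in the spirit of the inference step a system like Icarus runs on every cycle.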

So this might seem pretty simple, but you can imagine that if you add enough predicates, it can do more and more complex inference. We might be talking hundreds or maybe even thousands of predicates and axioms (without even trying to encode individual cards), each with tens of arguments, but when you think about it, this is all stuff that you know as a player. When you learn Magic, someone has to tell you that there’s a main phase before and after combat, that all your creatures untap at the beginning of your turn, and that creatures can’t attack or use tap abilities unless you’ve controlled them since the beginning of your upkeep.

I’ll get around to talking about the advantages and disadvantages of this approach more fully later, but it might help to put things in perspective if we take a moment to compare it to Alpha-Beta Pruning and Minimax (I’ll use ABP to represent this class of algorithms, though that’s probably misleading and wrong), a statistical approach to AI. Read that post if you haven’t, but quickly: we can represent Magic as a set of positions connected by actions, and we’re trying to search over that tree of positions to find the path that gets us the biggest reward (likely a position where you win). It’s a little tricky, since the tree has to have different levels and links for your actions and your opponent’s actions, and there are a lot of positions in Magic, but alpha-beta pruning can speed that up.

Anyways, note that in KR, the computer actually “understands” what’s happening. Using →, it has axioms that see the consequences of tapping a land or drawing a card. For ABP, it can see how the game changes, but even that doesn’t matter that much to it as it just collapses all of that with the evaluation function. While the resulting behavior can certainly be the same, KR seems much more similar to how we as humans think, and intentionally so. If you were to see a printout of ABP evaluation, you’d probably get a bunch of indexes and see it total various positions. If you were to see a printout of KR evaluation, you’d see various values being plugged into logical sentences, trying to reason out what else is true.

Hopefully by now, I’ve proven to you that KR can do a lot in understanding a game and determining what’s true and false. Understanding the rules of the game and what is true is much different from knowing how to play the game, though. Although knowing that your Raging Goblin will be tapped after attacking is a necessary prerequisite to playing well, the game is about determining whether it should be attacking and how you’re going to reach the goal of winning the game. That’s where Icarus comes in; it’s a goal-driven cognitive architecture with beliefs and skills and actions and goals to take KR and turn it into a decider.

If you’re still skeptical that KR will give us a complete basis for playing Magic, though, I promise you, you’re absolutely right. There are some pretty gaping holes that, even if not impossible to fix, are amazingly confusing and overly complex to work around. I’ll probably address that at the end of this series, but definitely comment on the limitations of KR that you see. It would be really helpful to me to know how much of the idea of KR I’ve conveyed here, and it’s a good exercise to think about what is possible. I’ll probably try to address those limitations next time as well, when I talk about Icarus.

Alpha-Beta Pruning

Sunday, July 12th, 2009

Two weeks ago, I wrote about search in a search tree. The idea was that a game (like Magic) can be broken up into a series of positions, and a good way to determine the best strategy is to evaluate all of these positions and find a path that gets you to a winning state, even when your opponent makes their best move as well. I left a few holes in that article, including how one turns the evaluation function into rules about what to do and what other shortcuts one can find in searching over this very large tree. So let’s finish that.

I briefly mentioned minimax at the end of the last article, and that’s where I’m going to start. Minimax is a strategy for evaluating these trees that should be fairly intuitive. How do players usually make decisions? Based on which outcomes are most likely to allow you to win. When you can look at the entire depth of the tree from your current position (as in, you can see every possible way that you win or lose), you’re trying to reach one of those winning states. To have that happen, you need to make the right choice at the second-to-last position. But you can also assume that your opponent, one position before that, is going to make the choice that does the most to keep you from winning. What this means is that you can guess how a game is going to go, and how good a position is, by propagating the alternating best and worst positions for you up the tree.

Let’s make that more concrete. You are at 2 life, have 2 Ashenmoor Gougers and a summoning-sick Mogg Fanatic in play before your combat step, and no cards in hand.  Your opponent is tapped out with one Foul Imp untapped, one Foul Imp tapped, no cards in hand, and is at 5 life*. What do you do?

It should be fairly obvious that you swing with both Gougers and then pop the Fanatic for the kill. So let’s see how you came to that.

Your current position is listed above and you’re in your declare attackers step. Your choices are to attack with 0, 1, or 2 Gougers, each of which represents a different branch from your current position. If you attack with 0 Gougers, nothing happens. The next major choice (I cheat a lot, since Magic has tons of intermediary positions for phase changes and such) is your opponent’s attack step. We assume your opponent is rational and makes his best move, which is to kill you. Technically, he could’ve done nothing during his attack step, but we won’t count on that. So you lose. We can propagate losing up a level to say that not attacking results in you losing.

The same applies to attacking with 1 Gouger. You can do that, so the next position to evaluate is how your opponent blocks. He can either block with his untapped Imp, or not. Since he’s rational, he looks ahead one move to where you pop your Fanatic and determines that the result of not blocking is losing, while blocking is winning. So he should choose to block, and this position can be counted as a loss for you.

But if you attack with both, all moves for him result in losing. Thus, that can be propagated to say that your current position is a winning one, because the best outcome out of all the possible next positions is a win.

So that’s minimax in its complete form. As I mentioned in the previous post, though, it’s often infeasible to enumerate the entire tree to the end of the game. Instead, one might look maybe 5 moves ahead and consider all of those possibilities. Since these don’t have definite win/loss states, we can instead use the evaluation function to give them numerical values. We can use the same propagation method to assign values to positions 4 moves ahead by considering the values of the 5-ahead states they’re linked to. We can continue to work backwards until we’ve figured out the values for the next move, then pick the best one.
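Depth-limited minimax is only a few lines. In this sketch, evaluate, legal_moves, and position.apply are placeholders for whatever position representation and evaluation function you’ve chosen, not real APIs:

    def minimax(position, depth, my_turn):
        # At the depth limit (or the end of the game), fall back on the
        # evaluation function instead of searching further.
        if depth == 0 or position.game_over():
            return evaluate(position)
        values = [minimax(position.apply(m), depth - 1, not my_turn)
                  for m in legal_moves(position)]
        # I pick my best option; my rational opponent picks my worst.
        return max(values) if my_turn else min(values)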

Even so, this is slow. So here’s where alpha-beta pruning comes in. Alpha-beta pruning applies a heuristic to minimax to avoid looking at entire subtrees. The basic idea is that we don’t need to consider an entire subtree (all of the positions following a given position) if we can already tell that nothing in it will change our decision. From any position, you’re basically doing a lookahead 2 positions to see if you’re in a better spot after your opponent makes their choice. Since we can track maximums and minimums along the way, certain subtrees don’t matter, as they’re already less optimal. That was ambiguous; let’s take a trivial example.

Let’s say your opponent has a Prodigal Pyromancer in play, but with summoning sickness. It’s your turn, you have 1 life, and you know your opponent has a Goblin Piker on top of their library (because you Excommunicated it) and no cards in hand. You have a Lightning Bolt and a Goblin Piker in hand with only enough mana for one of the spells, and your opponent is at 6 life. From this position, you have 4 choices: bolt the Pyromancer, bolt your opponent, play your own Piker, or do nothing. So we can consider each of these as a different path down the search tree.

The first is to bolt the Pyromancer. If you do this, your opponent’s board is clear. Next, it’s your opponent’s move. Their only options are to either play or not play their Piker. If they play it, your evaluation function puts the position at -10, while if they don’t play it, you’re at -5. Since your opponent is trying to minimize your evaluation function, they will play it, so the action of bolting the Pyromancer has a net value of -10.

The next is to bolt your opponent. If you do this, your opponent is now at 3. On their turn, they can poke you with the Pyromancer or not, and play the Piker or not. If they poke you, you die, resulting in an evaluation of -1000. Here’s where alpha-beta pruning comes in. At this point, you know that the action of bolting the Pyromancer results in a worst case of -10. Bolting your opponent results in a worst case of -1000 or worse. Other actions your opponent could take might be even worse (probably not), but that doesn’t matter: one of your opponent’s replies to bolting them is already worse than the worst case of another of your actions. So you don’t need to bother considering the cases where they play the Piker and so forth. This act of not considering the rest of the subtree is the pruning.

This example is a little trivial, but it’s hopefully very clear. Alpha-beta pruning can be used at multiple levels. For example, the position we discussed above might be 3 levels deep in another tree rooted at what you’re actual position is, and the result of that will propagate into a larger situation.
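The minimax sketch above extends to alpha-beta with two extra parameters; evaluate, legal_moves, and position.apply are again placeholders, not real APIs.

    def alphabeta(position, depth, alpha, beta, my_turn):
        if depth == 0 or position.game_over():
            return evaluate(position)
        if my_turn:
            for m in legal_moves(position):
                alpha = max(alpha, alphabeta(position.apply(m), depth - 1,
                                             alpha, beta, False))
                if alpha >= beta:   # opponent already has a better line elsewhere: prune
                    break
            return alpha
        else:
            for m in legal_moves(position):
                beta = min(beta, alphabeta(position.apply(m), depth - 1,
                                           alpha, beta, True))
                if alpha >= beta:   # we already have a better line elsewhere: prune
                    break
            return beta

    # Initial call: alphabeta(current_position, 5, -float("inf"), float("inf"), True)

In the example above, bolting the Pyromancer sets alpha to -10; while evaluating the bolt-the-opponent branch, the -1000 reply drives beta below alpha, and the remaining replies in that branch are skipped.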

Hopefully, this all seems intuitive. Indeed, we as humans actually prune a lot more deeply than this; in a game, it would never have crossed my mind to not bolt the Pyromancer, but those are other heuristics. Maybe that’ll come up in a future post. In the meantime, you can google around for alpha-beta pruning and get more rigorous explanations of how it works and why it’s actually called alpha-beta pruning.

*You’ll note I cheated majorly and made sure that no one has any tricks because minimax, in its most naive form, assumes perfect information. Maybe I’ll deal with this later

Two Kinds of AI (extended)

Thursday, July 2nd, 2009

One of the Magic blogs I watch is from a guy writing a Magic program called Forge. It’s all written in Java using Swing, and for just one guy, it’s pretty impressive. It’s got a fairly large card library and decent AI, with lots of wingdings and features like drafting and a quest mode. Unlike a lot of Magic programs that depend on the players to enforce the rules, it actually has a concept of phases and the flow of the game. Props.

He (I’m assuming he; sorry if that’s incorrect) made a fairly interesting post today about 2 types of AI. The first he describes is where each card has certain rules associated with it, and based on these bits, the AI comes up with a plan. The other is a mathematical approach where various cards are weighted and plays are evaluated (like I described in earlier posts). Forge uses the former, while Duels of the Planeswalkers uses the latter. I figured I would expand on this some.

What he’s describing is actually a fairly important divide in AI right now. AI can be split into two general categories: symbolic AI and statistical AI, with symbolic being the specific card knowledge and statistical being the weighting of cards. Right now, the field is actually heavily skewed towards statistical AI, mostly because it works. Symbolic AI matches better with how we think humans work: it has concepts and some knowledge coded in. It, however, hasn’t delivered. The big successes in AI recently have come from the statistical approach. For example, web search (like Google) is an AI problem: given a set of query words, try to send back meaningful links. It would be very difficult to actually come up with specific meanings for the crazy number of websites out there. Instead, methods like data mining can apply statistical techniques to draw connections from millions of examples.

Of course, the boundary isn’t strict. Take the “related videos” on YouTube. They’re mostly generated by analyzing tons of videos, figuring out what people watch after their current video, relative ratings, and so on. An important part of that, though, is determining the presence of tags and similarities between videos. For example, the tags “batman” and “superman” are more likely to indicate similarity between 2 videos than “batman” and “carrots.” Of course, this analysis is also part of the huge mathy cog that is data mining, but at some level, it’s backed by some representation of knowledge, even if it’s just a word inputted by the user to describe the video.

I’m not yet willing to bet on whether one is better than the other. So far, statistical AI has proven very useful, but I actually spent last summer working with a cognitive architecture to generate football plays. It’s amazing how logical inference and reasoning can take some simple things, such as moving a player around, and demonstrate some idea of trying to find gaps. One of the big obstacles is trying to demonstrate general intelligence instead of relying on domain-specific knowledge. As the Forge programmer mentioned, he writes blurbs specifically for Giant Growth or Shock. The first step for a reasoning agent would be to generalize Shock logic to Incinerate. Trying to use that to play chess is another matter entirely, but small steps might one day yield some crazy emergent qualities, and that AI would be something that truly understands what it’s doing.

Search in a Search Tree and “Decision-Point Theory”

Wednesday, June 24th, 2009

Just this morning, I figured I should pick up the starcitygames feed to make sure that I was seeing the more important articles. That was a great decision, because I already have an article to write about.

In my last post, I talked about how a Magic game can be formalized into a series of positions, each with its own set of values representing the entirety of the game. We can take all these positions and put them into a search tree, where each position is linked to others depending on what actions lead from one to another. With this, we can now talk a little more formally about how search through that tree works (if you’re trying to follow along in the original dailymtg article, we’re still in the first section, “Basic Concept”).

The search algorithm that a computer would use on a tree like this finds the best path of transitions between positions leading from the current position to a winning position (there’s some ambiguity in defining “best,” which I’ll address later). All that really means is that the computer’s algorithm for winning is to determine, at each step, the move that gives it the best chance of winning from where it is now. That doesn’t sound nearly as nerdy.

Because that’s exactly what we as humans do, and Gavin Verhey just posted 2 articles* on starcitygames discussing that exact topic. Basically, he says that Magic is a series of “decision points,” and at each one, you have new choices and decisions to make, and you can make the right decision by looking ahead down each road and seeing what options are there. This shouldn’t sound surprising; it’s both intuitive and very similar to what Patrick Buckland had to say in the dailymtg article.

I guess the real trick here is determining which choice is the best and doing that relatively quickly. There are obviously tradeoffs between these two goals, but I’ll see if I can walk through the intuition for how this is done step by step.

So let’s temporarily assume that we have infinite time. You’re, say, trying to figure out whether you should chump-block a Shambling Remains with a Mogg Fanatic now, or wait and hope that you don’t die before you can use the Mogg Fanatic more effectively, at a point when you can deal 1 more point of damage to the Shambling Remains. Since we have infinite time, we can just see exactly where both of these decisions lead. Even though the trees below both are still huge, they’re finite, so it can be done in infinite time (maybe think of 5 monkeys with typewriters and infinite time writing the entire works of Shakespeare?). This way, you can determine which is better by, say, taking the ratio of wins versus losses in each subtree and picking the more favorable one. So if blocking results in 1 million wins to 500,000 losses and not blocking is 2 million wins to 1.9 million losses, let the Fanatic die now.

Your first instinct here is probably that just taking the ratio isn’t a good way to determine which path is better: certain paths are more likely (have a greater probability) to happen, and maybe not all of those outcomes are equal. Which is true. In that case, maybe we can weight each outcome by the probability that it would occur. In the end, it seems like our best method is to somehow quantify (probably numerically, since this is going to be a computer algorithm) the value of various paths and pick the option that has the better summed-up value in its subtree.

Unfortunately, we don’t have infinite time. Because our finite time allows us to examine far less than the full size of the tree, we can only look a few decisions and positions ahead (perhaps even as few as 1). In this case, we take shortcuts, or heuristics. One of these is to evaluate the goodness of any position and compare those values. We can do that using an evaluation function, which is just a fancy term for taking a bunch of important factors, weighting them, and getting an end value. For example, our 2 choices may result in something like:

2 * 1 (for having the mogg fanatic) + .1 * 16 (being at 16 life) + .1 * 1 (for having just dealt 1 dmg to your opponent with the fanatic’s ability) + …other stuff = 5

2 * 0 (no mogg fanatic) + .1 * 20 (20 life) + .1 * 0 (didn’t pop the fanatic) + …other stuff = something less than 5

(Ignore the 2 most obvious faults that popping the fanatic is another decision and that there isn’t a linear value to life)
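As a sketch of that kind of evaluation function (the features and the 2/0.1/0.1 weights are the same toy numbers as above, and just as made-up; position is a placeholder object):

    def evaluate(position):
        # Weighted sum of hand-picked features; the weights are invented.
        value = 2.0 * position.count("Mogg Fanatic")    # having the Fanatic
        value += 0.1 * position.my_life                 # life total
        value += 0.1 * position.damage_dealt_this_turn  # popped the Fanatic for 1
        # ...plus terms for cards in hand, board presence, and other stuff.
        return value

The real work in a function like this is picking the features and tuning the weights.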

So it seems like, in the position immediately resulting from blocking or not blocking, it’s more valuable to block. This, however, is obviously short-sighted. Maybe your opponent can immediately do something else (like, say, play a Figure of Destiny that you now no longer have a Mogg Fanatic to kill before it pumps). So what you really want to do is look as far as you can into the future and evaluate that position, assuming that your opponent made intelligent decisions all along the way. What actually ends up happening is that you alternately look through states where your evaluation function is maximized (for when you made the best choice) and states where your evaluation function is minimized (for when your opponent made a choice and tried to screw you). This is a rule in game theory called “minimax,” where you’re trying to maximize your own evaluation function in a game with some adversary.

These are getting really long. Anyways, I’ll continue to talk more about this in another post. In that, hopefully I’ll finally get to talking about alpha-beta pruning and more exciting, AI-y sorts of things.

* I should make a couple comments about these articles. 1) I honestly thought they were kind of dumb, but clearly, I appreciated them enough to link to them. In some sense, he’s doing exactly what I’m doing: taking a more complex idea and explaining it simply. So maybe I should like them more? 2) Decision-point theory, I’m pretty sure, is a term he came up with. I’ve never heard of it before, and it doesn’t really mean anything. It could either be part of game theory or AI, but it definitely isn’t legit on its own. 3) The best point I thought he made was that pros often play complex decks that require a lot of decisions, and good playskill often shows in choosing options that give you more options down the road. This actually relates to another part of search algorithms called “constraint-satisfaction problems”. There, “forward checking” is something of a way of doing lookahead, and “least-constraining variable” is a heuristic for making choices. But that’s all another post, if it comes up again.

Position space of Magic

Tuesday, June 23rd, 2009

Yay lunch break!

One of the topics I had decided to put into the syllabus was AI, mostly because I like it myself and have a decent enough fundamental breadth in the field to talk about it. I was thinking I would need to pull from a lot of other games and try to work it into Magic, but coincidentally, Wizards has just released a game for Xbox Live Arcade called “Duels of the Planeswalkers,” a mini Magic simulator including a computer player. At first, this seems really lucky, because it gives me something concrete to talk about. On the other hand, I feel like some of my thunder has been stolen. Alas.

Anyways, just yesterday, the feature article of the week on the Wizards site was posted, and it’s all about the AI in “Duels of the Planeswalkers.” I first thought about just writing briefly about what he says, but I don’t know if I have a lot to add. It’s very well written and takes some very complicated stuff and reduces it to very simple terms (which is somewhat necessary because it’s a Magic site, not an AI site). I guess what I can do is take what he says and give a little more depth about how it works in a less hand-wavy manner.

He starts out talking about how the game is really just a tree of possibilities. This is pretty much what it sounds like: game trees are directed graphs that connect a bunch of game positions with various transitions. Any position is the exact, parameterized description of the game, including everything like life totals, which creatures are on the table, cards in hand, whether lands are tapped, etc. The transitions are the actions you can take in the game, such as playing a card or tapping a land, and also any game changes, such as going from one phase to another. Together, these are supposed to represent all possible Magic games at all positions. So somewhere, there’s a massive, massive tree that holds all of these. Fortunately, most of them never need to be considered, since a given game where the AI runs doesn’t have to worry about positions involving cards that aren’t in the game and such, but know that they exist.
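A sketch of what one node of that tree might look like in code (the fields come from the list above; a real engine would track far more, and all the names here are mine):

    from dataclasses import dataclass

    @dataclass
    class Position:
        # An exact, parameterized description of the game at one moment.
        life: dict            # e.g. {"me": 20, "opponent": 20}
        battlefield: list     # creatures and lands on the table, with tapped flags
        hands: dict           # cards in each player's hand
        phase: str            # e.g. "declare_attackers"

        def transitions(self):
            # Each legal action (play a card, tap a land, pass to the next
            # phase, ...) maps this position to a successor position; these
            # are the edges of the game tree.
            ...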

In case that didn’t make a lot of sense, take a Rubik’s cube as an analogy. At this level, these 2 games are pretty similar: you have a starting position, many possible positions in between you and a win, and ways to move from position to position. The trick to being smart (either solving the cube or winning the Magic game) is to find a way to get from your starting position to the win position without getting stuck in a non-winning position (such as losing the Magic game), and hopefully in a fast way. So the various positions correspond to all the permutations of the Rubik’s cube. You can link together all of the various positions by considering the transition actions of spinning the faces of the cube. When you do that, you end up in a new position, which is an arrow from the previous position to the new one.

So I realize I actually have a lot more to say about this than I want to write in one post. What I guess I can leave you with is a funny point about Magic. Often, you hear people talk about how complicated games like chess are. I have no idea whether it’s immediately apparent or not, but Magic is so much more complicated than that. As a point of scale, first think about the Rubik’s cube. The concept of the toy is relatively simple, but there are so many possible positions it can be in. According to Wikipedia, there are 43,252,003,274,489,856,000 possible positions. That’s a huge tree.

Now let’s take Magic.  According to Gatherer, there are about 9000 cards printed so far (I tried to Wolfram Alpha that, but it failed miserably. Too bad for hardcoding + Mathematica). The normal Magic deck has 60 cards. If you just count combinations of 60 distinct cards for a deck like that, there are on the order of 10^155 possible deck builds. Granted, most of those are very unfeasible (you probably wouldn’t have enough lands), and you’d have to adjust that for having multiples of certain cards, having 2 players, and then multiply by the number of possible positions in a game with a given pair of decks. That’s a lot. Again, for comparison, according to Wolfram Alpha, there are only about 10^80 atoms in the universe. That’s a lot.
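For what it’s worth, that 10^155 figure checks out; this just counts 60-card subsets of 9000 distinct cards, ignoring duplicates and everything else, as above:

    import math

    decks = math.comb(9000, 60)           # distinct 60-card subsets of ~9000 cards
    print(math.floor(math.log10(decks)))  # 155, i.e. on the order of 10^155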

Look out for more on this topic.

(edit: from a few sources, it looks like the proper term for the nodes in the search tree is “positions” not “states”. I’ve edited all of these, because it’s early enough and I should try to be consistent with the literature in my writing)