When I wrote the AI for Galactic Civilizations, everything was hard-coded in C++. That was nice for me but not so nice for anyone who might want to tweak things later. It also makes it easy for things to become a mess over time, since improvements rely on my memory of which technique I used for a given thing.
Yesterday and this morning, I officially began my work on the Elemental AI. What’s in the game presently isn’t AI, it’s just a few IF/THEN statements. My job is to design and code a structure that allows the computer players to play intelligently and provide a challenge without having to cheat.
So…how do you start?
The first step is with XML. You can do an awful lot just by defining lots and lots of “stuff” in the XML.
First, I start by defining AI personalities. My starting point is AI_General which is, as the name implies, the general AI personality that I’ll start from. When I’m done, modders will be able to go in and define their own AI personalities that behave completely differently.
Here’s an example that I whipped up last night.
I define some Priorities (Economy, Farming) and I define some things I want the AI to avoid (War, Construction Times, etc.). Now, as I get better at this, I’ll tie together these definitions throughout all the game data.
So for instance, items that are tagged as Economy and Farming by the “AI General” will get their priority multiplied by 3.0X and 5.0X respectively. Someone else might come along and make AIPersonality=”RavenX” and have a totally different set of priorities. When the game starts, it’ll eventually look for all the AIPersonalities and grab them for the different AI. We might even let people from within the game pick which AI personality they want (not in 1.0 but eventually).
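To make the multiplier idea concrete, here is a tiny Python sketch of how tagged items could be scored against a personality's multipliers. The data structure, tag names, and numbers are my own illustration, not the actual Elemental data format.

# Hypothetical sketch: scaling item priorities by an AI personality's multipliers.
AI_PERSONALITIES = {
    "AI_General": {
        "priorities": {"Economy": 3.0, "Farming": 5.0},
        "avoid":      {"War": 5.0, "ConstructionTimes": 2.0},
    },
    "RavenX": {
        "priorities": {"War": 4.0, "Magic": 2.5},
        "avoid":      {"Economy": 1.5},
    },
}

def weighted_priority(base_priority, tags, personality_name):
    """Multiply an item's base priority by every matching personality multiplier."""
    personality = AI_PERSONALITIES[personality_name]
    value = base_priority
    for tag in tags:
        value *= personality["priorities"].get(tag, 1.0)
    # The "avoid" entries would feed a similar calculation for things
    # the personality steers away from.
    return value

# An improvement tagged Economy+Farming gets 1.0 * 3.0 * 5.0 = 15.0 for AI_General.
print(weighted_priority(1.0, ["Economy", "Farming"], "AI_General"))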
I’m a n00b
I’ve been using XML for years for skinning purposes but I’ve never tried to data-drive an AI before. It’s not hard, but there is no substitute for experience. Over time it’ll get better and better, so that less and less of the AI is actually hard-coded, and what does require “real” coding will be exposed through Python.
Eventually, we will also have all the AI difficulty levels clearly defined so that players can adjust them far beyond what you could provide in the game UI.
BTW, feel free to shoot off your own ideas on this. My “AI skillz” largely revolve around producing macro-level systems for intelligent artificial players. Implementation is not something I am passionate about, so don’t feel like you’re stepping on my toes by suggesting different implementation techniques.
What about combining the 2 -- when you get to [randomized] turn X, change priorities (your #1) unless Event Y happens (your #2) first? That way players can't indirectly affect AI players by doing/not doing something that would trigger a certain, undesired, AI response.
I think most of you forget that your opinions are all irrelevant since you all will soon be replaced by your Korean equivalents.
Wait...I thought the Chinese were taking over the world...?
Yah man, you need to read up; the Chinese have some amazing hackers. But then again, I've been to Korea and they are some dedicated gamers.
What about people who are already Korean, will they turn into some frightening Super-Korean?
Thanks. : )
I think we're in good hands. Frogboy has said he will nerf any feature if the AI can't use it too. That's a strong incentive to make sure the AI can handle the new feature! : )
From the AIs I have seen so far, the greatest problem has always been achieving high complexity while still keeping a decent response time.
Since good/high complexity always needs a fairly long evaluation period, I am wondering if it is possible (or even already done in some games) to start evaluating the current situation during the player's turn, so some things can be checked and some "bad" strategies eliminated before the player hits the "next turn" button.
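For what it's worth, here is a rough Python sketch of that idea, assuming the game loop can tell us when the human turn starts and ends: run the evaluation in a background thread and harvest whatever finished when "next turn" is pressed. All names are hypothetical, and the results are only approximate since the player keeps changing the state during their turn.

import threading

class TurnPreEvaluator:
    def __init__(self, evaluate_strategy, strategies):
        self._evaluate = evaluate_strategy      # scoring function supplied by the game
        self._strategies = strategies
        self._scores = {}
        self._stop = threading.Event()
        self._thread = None

    def start_during_player_turn(self, game_state):
        """Call when the human turn begins; evaluation runs in the background."""
        self._stop.clear()
        self._scores = {}
        self._thread = threading.Thread(target=self._run, args=(game_state,), daemon=True)
        self._thread.start()

    def _run(self, game_state):
        for strategy in self._strategies:
            if self._stop.is_set():
                break
            self._scores[strategy] = self._evaluate(game_state, strategy)

    def results_at_end_of_turn(self):
        """Call when 'next turn' is pressed; returns whatever has been scored so far."""
        self._stop.set()
        if self._thread is not None:
            self._thread.join(timeout=0.1)
        return dict(self._scores)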
And yes, I really like the idea of the AI being as accessible to the players/modders as possible.
CharonJr
Nah, Starcraft 2 is our secret weapon against the Koreans.
North Korean I hope. Every sovereign you create has only one template-
Strengths- Genius, superhuman physique, weather control, beloved by all.
Weakness- THE DEAR LEADER HAS NO WEAKNESSES. IT IS ALL CAPITALIST LIES.
As already mentioned by others, changing behaviour on some fixed turn seems extremely limiting. While adding other conditions like cities owned etc. might make it more interesting, it still lacks a bit, I think. I'd prefer it if behaviours were stackable, with their start/end depending on certain events.
An example:
The default behaviour would be finding a nice spot and building a first city. Once that has happened the behaviour ends (the ending condition is something like 'cities owned >= 1') and upgrading the city starts: building huts, gardens, etc. Additionally, the event 'research possible' has occurred, so we start researching a bit.
Then, some time into the game, another kingdom is discovered. This too could trigger a new event: for the aggressive AIs this would mean boosting military production and starting to attack; more peaceful nations will try to get treaties out of their neighbor, and still other kingdoms might just start building walls.
That means the XML should be a list of behaviours, each able to influence the multipliers that weight what the AI will do. These behaviours are stackable and have starting and ending conditions.
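Something like this, maybe -- a minimal Python sketch of stackable behaviours; the behaviour names, conditions, and multipliers are just made up to show the stacking:

class Behaviour:
    def __init__(self, name, starts_when, ends_when, multipliers):
        self.name = name
        self.starts_when = starts_when      # callable(state) -> bool
        self.ends_when = ends_when          # callable(state) -> bool
        self.multipliers = multipliers      # e.g. {"Military": 3.0}
        self.active = False

BEHAVIOURS = [
    Behaviour("FoundFirstCity",
              starts_when=lambda s: s["cities_owned"] == 0,
              ends_when=lambda s: s["cities_owned"] >= 1,
              multipliers={"Settle": 5.0}),
    Behaviour("NeighbourDiscovered",
              starts_when=lambda s: s["kingdoms_known"] > 0,
              ends_when=lambda s: False,        # stays active for the rest of the game
              multipliers={"Military": 3.0, "Walls": 2.0}),
]

def priority_multiplier(state, category):
    """Active behaviours stack multiplicatively on a priority category."""
    value = 1.0
    for b in BEHAVIOURS:
        if not b.active and b.starts_when(state):
            b.active = True
        if b.active and b.ends_when(state):
            b.active = False
        if b.active:
            value *= b.multipliers.get(category, 1.0)
    return value

print(priority_multiplier({"cities_owned": 0, "kingdoms_known": 0}, "Settle"))   # 5.0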
On another note: difficulty. Right now the only way to influence difficulty is to make an AI focus on something that doesn't pay off. That way it won't be as strong (in terms of tech and unit count) as the human player. I don't really like this, as the AI doesn't get 'dumber', its choices are just less effective. A dream would be if the intelligence (difficulty) score influenced decisions directly: an extremely easy opponent will attack the first unit it sees, no matter its strength, while an insanely clever AI will try to find the unit it can hurt the most with the least risk. The latter already has to be implemented for tactical battles anyway, so just make it possible for each of the 'smarter' things an AI could do to be disabled or replaced by a stupid counterpart.
Just some ideas I had while desperately trying to avoid having to do actual work (for God's sake, I'm an underpaid programmer, not an author... I hate writing specs! *g*)
ora
I'm reading a book about Clausewitz and the book he wrote about war.
What would be useful for the game is this: at every level you have goals and ways to achieve them. Politics is the topmost "thinker"; politics chooses the strategic goal. In game terms it could be "how do I win the game?"
Then, to win the game, you choose a way to do it: you want a military victory, and to get it you can choose to build an edge in magic or civilization or whatever, but your goal is still military. Then you break down each way: you want a military victory, you choose magic to achieve that, and then you choose ways to enhance magic, but always in a way that serves the military win. Each goal you have "inherits" from the former goal.
The more I read that book about Clausewitz, the more it makes me think about object inheritance.
If a goal was mis-chosen, then you go one level up in the goal hierarchy and try something else (and recreate a "ways" tree).
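If it helps, here is a small Python sketch of that goal/ways hierarchy with the "go one level up on failure" rule. All the class and goal names are hypothetical, not anything from the game:

class Goal:
    def __init__(self, name, ways=None):
        self.name = name
        self.ways = ways or []      # alternative sub-goals, in order of preference
        self.tried = set()

    def plan(self):
        """Return the chain of goals from this one down to a concrete leaf goal."""
        for way in self.ways:
            if way.name not in self.tried:
                return [self] + way.plan()
        return [self]               # leaf goal, or every way already failed

    def report_failure(self, failed_way):
        # Mark the way as tried so the next plan() falls back to an alternative.
        self.tried.add(failed_way.name)

win = Goal("WinTheGame", [
    Goal("MilitaryVictory", [Goal("EdgeInMagic"), Goal("EdgeInProduction")]),
    Goal("DiplomaticVictory"),
])

chain = win.plan()                       # WinTheGame -> MilitaryVictory -> EdgeInMagic
win.ways[0].report_failure(chain[-1])    # EdgeInMagic didn't work out
print([g.name for g in win.plan()])      # falls back to EdgeInProduction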
Expanding on my previous irrelevant opinion from my reply 41 (not repeated here for brevity):
Data (not John_Hughes' android DATA; the non-capitalized, numerical kind) -- some is referenced in reply 41:
Strength is determined by # of units, their 'hit points', armor, weapons, strengths&weaknesses relative to attacker's units, etc.
Significant attack failure = more than half the attacker's strength lost in a failed attack; anything less counts as a lesser failure.
Upon significant attack failure, the strength of the next attack force created is doubled; a lesser failure would increase it by 50%.
And say the formula to determine which target to attack (attackValue) is (defenderStrength x 1) + (distance x 3) -- the lowest value is chosen [better to hit the nearer, stronger target -- up to a point].
And let's say there's 2 possible targets (cities for example):
target1: defenderStrength of 30 and distance of 10 (attackValue of 30x1 + 10x3 = 60)
target2: defenderStrength of 10 and distance of 20 (attackValue of 10x1 + 20x3 = 70)
Target1 is chosen as it's the lower attackValue.
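As code, that first pick could look something like this -- a sketch only; the function and variable names are mine, not any game API:

def attack_value(defender_strength, distance, strength_weight=1.0, distance_weight=3.0):
    # Lower is better: prefer nearer targets, penalized by defender strength.
    return defender_strength * strength_weight + distance * distance_weight

targets = {
    "target1": {"defender_strength": 30, "distance": 10},
    "target2": {"defender_strength": 10, "distance": 20},
}

chosen = min(targets, key=lambda name: attack_value(**targets[name]))
print(chosen)   # target1: 30*1 + 10*3 = 60 beats 10*1 + 20*3 = 70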
So let's say the first attack was sent out and failed significantly. The AI would learn from this, modifying the next attack as defined above. But let's add in a few more options.
1- Next attack force would be doubled in 'strength' (as mentioned above)
2- A second possibility would be to give defenderStrength a different weight. Let's increase it to 2x (the strategy being to avoid the defender's strong points and hit them where they're weakest, even if further away), all else unchanged, so recalculating attackValues:
target1: defenderStrength of 30 and distance of 10 (attackValue of 30x2 + 10x3 = 90)
target2: defenderStrength of 10 and distance of 20 (attackValue of 10x2 + 20x3 = 80)
Target2 is now chosen for the second attack. It's possible that in other situations this won't result in a different target being chosen (I fudged the numbers to get this result.)
We can even modify this a bit more by adding in consideration of distance and attack force composition -- if distance is over a certain value (dependent upon map size) then the attack force composition would shift to faster units (cavalry, or units with movement bonus items, etc.).
3- a third possibility is to combine both 1 and 2 -- a distraction attack, or feint, or an attempt to split the defender's forces. This would entail the AI waiting long enough to construct both attack forces (a disadvantage, but one countered by the advantage of a double attack).
Let's give choice 1 a weight of .5, 2 a weight of .4, and 3 a weight of .1 (weights total 1.0).
Recapping, after the significant failure of the first attack, the AI would learn and respond by:
-50% chance of hitting the same target with a doubled force
-40% chance of bypassing target1 to hit target2 with a doubled force (as the AI's initial assessment of what is required to defeat a certain defenderStrength was insufficient, it learned and adjusted for all subsequent attacks, not just repeats of the initial, failed attack -- and remember that significant success would result in attackStrength being reduced so it's not just a one-way adjustment). This doubled attack force would be comprised of faster units to try to catch the defender off-balance, unable to reinforce the new target in time.
-10% chance of doing both
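The 50/40/10 pick itself is just a weighted random choice; a minimal Python sketch using the standard library, with the response names invented for illustration:

import random

responses = ["repeat_with_doubled_force", "switch_target_reweighted", "double_attack_feint"]
weights   = [0.5, 0.4, 0.1]

def choose_response(rng=random):
    # Weighted random selection among the learned responses described above.
    return rng.choices(responses, weights=weights, k=1)[0]

print(choose_response())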
==============
I don't see this requiring extra files or some special noggin to store and retrieve all that learned knowledge, just modify existing priorities/algorithms/weights/data values. The original values would be kept, and there could be checks made against them to keep the modified values from getting too far out of whack (to prevent unforeseen interactions) and to 'start over' the next game (or the modified values could be kept to allow the AI to improve from game to game, as the human player's skill increases).
This involves no 'cheating' by the AI, just the AI adjusting according to successes/failures -- i.e., learning.
An idea I once had for a MoM AI was to use a tree of smaller AIs, where a higher-level AI gives orders to lower-level AIs.
For example: a wizard having bad negotiations with another player decides that we should focus on war. The order passes to the AI below, which could be, for example, city management, which will now switch its build queue to military buildings and produce more units.
The only thing you have to watch is to make sure the AI has a memory. When it takes a decision, like in a company, it should generally stick to it until that decision gets revised. That prevents the bug where the AI changes its focus every 5 turns, which makes it accomplish nothing meaningful.
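A bare-bones Python sketch of that order-passing tree with memory; the class names and orders are invented for illustration:

class CityManagerAI:
    def __init__(self):
        self.current_order = "grow_economy"     # remembered until explicitly revised

    def receive_order(self, order):
        self.current_order = order

    def plan_turn(self):
        if self.current_order == "prepare_for_war":
            return ["build_barracks", "train_soldiers"]
        return ["build_market", "build_farm"]

class WizardAI:
    def __init__(self):
        self.city_manager = CityManagerAI()

    def on_diplomacy_failed(self):
        # The strategic decision propagates down the tree and is then remembered below,
        # so the lower AI doesn't flip focus every few turns.
        self.city_manager.receive_order("prepare_for_war")

wizard = WizardAI()
wizard.on_diplomacy_failed()
print(wizard.city_manager.plan_turn())   # ['build_barracks', 'train_soldiers']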
Oh, to be sure the current Civ 4 AI is still lacking and has to rely on substantial bonuses to remain a challenge. I'm focusing more on the improvement I saw in the AI over time, thanks to community involvement; it got a heck of a lot more challenging with the late BTS patches than it was when Civ 4 was first released. Contrast that with the Total War series, which tends to have terrible AIs that are never improved.
I wonder what Brad's view is on AI bonuses. Surely it's OK for the AI to receive some bonuses; the better the AI, the smaller the bonuses needed, but I don't think there's any shame in the AI using some bonuses to be a challenge. This isn't chess, after all.
Normally, AIs are developed before anyone plays. In the case of our way early beta, and the early multiplayer beta, we could take advantage of this early input to make the AI extra competitive. My idea would be to identify some top players from the multiplayer beta, and to have them document their decision trees and justification for each move on each turn. I'm sure some good players wouldn't mind doing it to make the game AI stronger. This information could be used to help shape the game AI. I think having this type of information would both help in development of the algorithms and to test the performance of the AI.... Just a thought
A flaw I see in many TBS games is that the AI is too tactics-centered, i.e., see threat there, go squash threat. I really think there has to be both a strategic decision tree and a tactical one. Since the strategic AI is what I normally see missing, I'm going to give a few of my thoughts.
Strategically, the AI should consider its goals, approach, and situational awareness.
= Goals = I think each AI player should have a goal for the type of victory it wants to achieve. This will greatly impact its decision making throughout the game. These are not short term objectives, but more of overarching goals like military victory, diplomatic victory, etc.. There should also be exit criteria, where the goals should change. It is ideal if you can stick with a single goal throughout the game, but sometimes factors make that particular goal no longer achievable. At that point, a new goal with a seemingly better outcome should be chosen. For each goal, there should be a series of objectives for the AI to achieve. Let's say we are looking for a military victory: then the AI would need to get some level of scouting to know its opponents' strengths, it would need to build an economic engine to support its military aims, it would need to build a relatively strong military, it would need to choose an initial target, it should look for allies to secure its non-conflict borders, etc.. Each of these objectives would need to be prioritized and have lots of variables we could change in the XML to tweak it.
= Approach = When I think of approach I think of play style: aggressive, expansionist, economically driven, defensive, etc.. If different AIs have different approaches and different goals, then how they play should be very different. Having different approaches to the game will make the AI more unpredictable. You will not be sure if it is building up for an aggressive military campaign or building up its economy to take a late game victory. Once again, ideally, an AI could change its approach if it sees it is no longer beneficial. If I was planning on being aggressive militarily but I have two fronts at war with aggressive enemies, I should probably start playing more defensively and diplomatically. If the AI is predictable, it is a fail.
= Situational Awareness = There should be both internal and external situational awareness that the AI uses in its decision making. It should keep track of how strong it thinks its opponents are, how likely they are to attack, how much threat opponents are displaying by having units in war position, location of enemy units, state of its economy, relative strength of its military, location of its units, its defensive position, its ability to withstand counterattack, the amount of information that it knows, etc... These are the types of factors that humans use when making decisions in a TBS. For the AI to be competitive, it will need to factor them into its thinking.
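To illustrate how these three pieces could hook together, here is a hypothetical Python sketch: a situational-awareness snapshot feeds each goal's exit criteria, and the AI switches goals when its current one stops being viable. The thresholds, goal names, and awareness fields are placeholders, not a real design:

def military_victory_still_viable(awareness):
    # Exit criterion: give up on a military win if we're badly outmatched.
    return awareness["own_military_strength"] >= 0.5 * awareness["strongest_enemy_strength"]

GOALS = {
    "military_victory":   {"viable": military_victory_still_viable, "approach": "aggressive"},
    "diplomatic_victory": {"viable": lambda a: True,                "approach": "defensive"},
}

def pick_goal(current_goal, awareness):
    if GOALS[current_goal]["viable"](awareness):
        return current_goal
    # Exit criteria hit: fall back to the first goal whose criteria still hold.
    return next(g for g, spec in GOALS.items() if spec["viable"](awareness))

awareness = {"own_military_strength": 20, "strongest_enemy_strength": 80}
print(pick_goal("military_victory", awareness))   # -> diplomatic_victory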
For me AI is the single most important feature in this game and I'm really encouraged by all the emphasis it is getting and all the great past experience of the dev team.
What resources and restrictions will there be for AI community modifications?
In detail :
1.- Will I be limited to changing numbers in that XML file, with the actual AI implementation being inaccessible in the game executable? Or will I have full access, in Python, to the game state visible to each AI player and control over all possible actions an artificial player can take?
2.- Will I be able to read and write external files from Python? Which could be used for opening moves, for example.
3.- Will we have a game solver? That is, the game with zero graphics or UI where I could configure with command line options a set of AI players and have it run until one wins.
4.- Will there be a game replay feature? Where output from the game solver or a normal game could be loaded into the graphical game and replayed for review.
5.- Will there be any restriction in the distribution of 3rd party AI mods? Like Blizzard's "You can't make profit on mods for our games". Or "all your mods are belong to us".
A wild goose says:
No, Yes
No,
Yes,
Yes.
I will be honest here before I start: I have yet to touch XML stuff. I'm 4 months into learning Visual Basic at college and that's only given me the most basic knowledge, enough to create a VERY basic AI for a game of Cluedo I made when I was bored, so take this advice as you want...
Anyway, the idea of grouping things by their priorities like this
<Avoid> <War>5.0</War> </Avoid>
is a good idea to me; however, it seems to give the AI a very fixed structure for how it acts as a whole, and in a sense it doesn't feel like the AI is thinking strategically so much as ticking boxes throughout the game.
I believe that creating some parameters underneath these would be useful to make the AI more dynamic, while, if possible, keeping them from becoming a necessity for people developing their own AI structure (yes, it will be less dynamic, but then less effort would have been put into the mod, so I doubt people can complain in that regard).
Anyway, the modification I'd suggest would be something like this:
<Avoid>
  <War> <Standard>5.0</Standard>
    <AgainstEmpire> <Standard>2.0</Standard> <WeakDefence>0.5</WeakDefence> </AgainstEmpire> </War>
</Avoid>
In this example the AI would put a 5x priority on avoiding war. However, under certain circumstances it would lower (or, in another scenario, raise) its priority relative to that group (in this example the priority would drop to 2.0 against an empire faction the AI had noticed, making it more willing to go to war with them, and drop even further if it knows that empire player has little to no defences).
What this would do is change the priorities based on the relationship the AI has with that group and/or what information it has gained, giving the AI a more human-like feel; you could code in a lot of these parameters to further "humanify" the AI and give the player the sense that they are creating a mindset that fits them.
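To show that such nested modifiers are straightforward to evaluate, here's a hypothetical Python sketch (standard library only) that reads the War example above and picks the most specific value whose conditions currently hold. The tag names follow my sketch, not any real Elemental schema:

import xml.etree.ElementTree as ET

XML = """
<Avoid>
  <War>
    <Standard>5.0</Standard>
    <AgainstEmpire>
      <Standard>2.0</Standard>
      <WeakDefence>0.5</WeakDefence>
    </AgainstEmpire>
  </War>
</Avoid>
"""

def war_avoidance(conditions):
    # Start from the standard value, then let more specific conditions override it.
    war = ET.fromstring(XML).find("War")
    value = float(war.findtext("Standard"))
    if conditions.get("against_empire"):
        empire = war.find("AgainstEmpire")
        value = float(empire.findtext("Standard"))
        if conditions.get("weak_defence"):
            value = float(empire.findtext("WeakDefence"))
    return value

print(war_avoidance({"against_empire": True, "weak_defence": True}))   # 0.5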
For example, if you joined enough of these categories you could create an AI like this:
<Avoid> <War> <Standard>5.0</Standard>
  <Exempt> <LargeForce>2.5</LargeForce> <InTrouble>2.0</InTrouble> </Exempt>
  <AgainstEmpire> <GoodTrade>6.5</GoodTrade> <Intimidating>3.5</Intimidating> </AgainstEmpire>
</War> </Avoid>
(Now you could specify that the earlier a condition appears in the section, the more weight that priority carries, as a form of conditional control.)
With this little bit of XML you have an AI that will try to avoid war in general; however, if it believes it has a large enough force (which could be specified in another section of the XML) or that it's in dire straits, it would warm to the idea of forcing its way to some extra land. If, however, none of these are at a level of worry, then it would check things like whether that group is an empire or not, whether the trade is worth keeping them around, or whether their forces are giving it the "creeps".
A lot of this would probably need some comparisons (for example, you would need the AI to compare whether its idea of a "large force" is actually "large" compared to the mega army over yonder), and that might be impossible! I don't know, this isn't something I've touched yet (unfortunately).
Also, alongside the early/mid/late game categories, a <Varied> category that the game may take priorities from at any point during the game would add a bit of a "chaos" factor and make the AI less predictable -- something else to consider.
Anyway, there's my food for thought; eat it if you wish, I do not promise protection against food poisoning!
-Dingbat ( now going to sleep at 1 AM after writing this + deep thought - apologies for any typos!)
I'm in, though I can only do it during the summer! (No really, I am interested if someone will help me, and I do have the beta.)
"I spilled the coffee"
"Changing behavior relative to turn counters is probably not the best way to manage behavior changes. I think that there's actually another number on which you base the choice to switch from economy building to military building, and that's economic strength to support the military. There might be other things that factor into this, such as how much of a lead your opponents have on you in technology or military. You might, for example, want to send a small attack to slow down your opponent who is trying to research technology quickly and neglecting his military. You wouldn't want to give a player a 50 turn respite to get a technology advantage with no fear of reprisal."
If we are still speaking of an AI opponent, then how is it to gain this knowledge without "cheating"? A human can't do that unless they spend time and resources taking said tech by force, or facing said military, thus gaining the knowledge, again by force of arms.
Changing behaviors relative to turn counters may not be the best, but in a turn-based game it is something reliable that will always be readily available. Using "events" will certainly work, as long as those "events" actually happen. Stacking of multiple options would surely be great as well.
Perhaps scouting may provide some of that intel about the enemy's military strengths at certain game points, but unless you engage in diplomacy, known enemy techs would be more difficult to uncover. Use espionage? Well, they would obviously also have espionage, it would just be called counter-espionage etc. etc.
Since making an AI that thinks may be a tad lofty, making an AI that can make assumptions (if/then's) (and/or) (this/that) or whatever the code allows, based on any intel discovered, may not be so hard.
We don't want to get away from the FUN part either. I have no doubt, a very smart AI can be had, but trying to have FUN while getting outsmarted by a machine, for whatever reason, doesn't always fit that bill.
I do really like the idea of players recording/noting things done in game and then doing some detailed AARs based on how/why/when they did certain things, as well as what drove them to do those things at those times. This assumes there will be no "game recording facility" built in, of course.
If the Dev can see that as useful then having a recording device would be totally perfect for that cause. Record, then transcribe.
I doubt very much a very smart AI can be had. Artificial intelligences today are very artificial and hardly intelligent at all.
Only recently has a master-level chess AI been achieved. And chess has a "small" search space estimated around 10^50. Go AIs, which have to deal with an estimated search space of 10^170, still have trouble with dedicated amateurs.
So, it's only to be expected that games with complex rules, hundreds of piece types/spells/turns and a total lack of academic study get such bad AIs. In just the initial phase of Elemental (around 50 turns), the search space is bigger than that of the game of Go.
AIs for computer turn based strategy games at non-cheat levels are very bad.
MoM's AI is a laugh.
Civ4's Noble difficulty, the one where the AI has almost no bonus, is so easy that I was winning consistently from my 2nd game onwards. Which means you can win it even when you still don't have a full grasp of the game.
GalCiv 2's Intelligent difficulty is a challenge at least for some games but not too many of them.
And Elemental's AI will most likely be easy to beat without AI bonuses.
So don't worry about FUN when related to the AI. The better the AI is, the more fun it will be. Because an opponent whose difficulty comes from clever play is way more fun to play against than one whose difficulty comes from cheaty bonuses.
Here are some random thoughts by a weird guy who will be replaced by his Korean counterpart, and who has no idea of AI programming:
I sympathize with the idea of AIs that learn. Maybe once a smart AI programmer (like the ppl at Stardock) has identified several numerical parameters (like the xml values we just saw), the learning AI could accumulate data from many many games and try to analyze it using statistics in order to optimize these parameters. The computer would then have to store only a manageable list of parameter values and outcomes.
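A very rough Python sketch of what that accumulation could look like: log the parameter values used and the outcome of each game, then pick the value with the best observed win rate. The file name, format, and parameter names are entirely made up:

import json, os
from collections import defaultdict

LOG_FILE = "ai_parameter_log.json"   # hypothetical file, one entry per finished game

def record_game(parameter_values, won):
    history = []
    if os.path.exists(LOG_FILE):
        with open(LOG_FILE) as f:
            history = json.load(f)
    history.append({"params": parameter_values, "won": won})
    with open(LOG_FILE, "w") as f:
        json.dump(history, f)

def best_value(parameter_name):
    """Return the logged value of this parameter with the highest win rate."""
    if not os.path.exists(LOG_FILE):
        return None
    with open(LOG_FILE) as f:
        history = json.load(f)
    stats = defaultdict(lambda: [0, 0])              # value -> [wins, games played]
    for game in history:
        value = game["params"].get(parameter_name)
        if value is not None:
            stats[value][0] += int(game["won"])
            stats[value][1] += 1
    return max(stats, key=lambda v: stats[v][0] / stats[v][1], default=None)

record_game({"WarAvoidance": 5.0}, won=False)
record_game({"WarAvoidance": 2.0}, won=True)
print(best_value("WarAvoidance"))                    # -> 2.0 on this tiny sample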
Yes, let's learn Python!
It would help me out at work, so doing it with Elemental in mind makes it a lot more fun. My work is not focused on programming though, and I can really give no advice on how to begin with Python. But I'm really interested in learning!