Patch 1.23

Patch 1.23 came out for Supreme Commander 2 last week and brought with it the infinite build queue system. This system lets players queue up as many build commands as they want, whether or not they have the resources for them. Players no longer have to wait for resources to be available before placing a building, and no longer have to check their factories to see if they have paused themselves. I think that was worth calling a "big change", don't you?
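
For those who like to see things in code, here is a tiny Python sketch of the idea. The game is not written in Python, and every name and the toy resource model below are made up purely for illustration:

    from collections import deque

    # A minimal sketch of an infinite build queue, assuming a toy
    # resource model. None of these names come from the actual game.
    class Economy:
        def __init__(self, mass):
            self.mass = mass

        def can_afford(self, order):
            return self.mass >= order["cost"]

        def spend(self, order):
            self.mass -= order["cost"]

    class BuildQueue:
        def __init__(self):
            self.orders = deque()

        def enqueue(self, order):
            # No affordability check here -- queue as much as you want.
            self.orders.append(order)

        def update(self, economy):
            # Called every sim tick; an order only starts once the
            # resources are actually available.
            while self.orders and economy.can_afford(self.orders[0]):
                order = self.orders.popleft()
                economy.spend(order)
                print("construction started:", order["name"])

    queue = BuildQueue()
    queue.enqueue({"name": "factory", "cost": 200})
    queue.enqueue({"name": "turret", "cost": 150})
    queue.update(Economy(mass=250))  # only the factory starts this tick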

Patch 1.23 also included the new AI neural networks. These have given the AI's platoons a much needed boost to their intelligence. So much so that they may have become a little too timid. Previously, the AI's platoons would move back and forth because of a bug in the neural networks that fed them bad data and led them to make bad decisions. Now the neural networks have become smart enough to recognize bad decisions, which makes the platoons unwilling to take any risks. They always want to have the advantage.

The solution, in my opinion, is the yappy dog approach. You know, that annoying small dog that thinks it is a doberman. We can take a cue from the small yappy dog and make the AI's platoons think they are bigger than they really are. When we gather the platoon and enemy data for the neural network, we can adjust the platoon's numbers by a multiplier, say 25%. So, if the platoon had 100 DPS, we would raise that to 125 and send that data to the neural network. That way, the AI's platoons still get valid data back from the neural network and will still run if they are greatly outnumbered, but if it is close, they will attack. This will give players the sense that the AI is being more aggressive and will push the AI to take more chances. Now, there is no guarantee that this change will make it out in a patch; that isn't up to me.
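
To make the math concrete, here is a hypothetical Python sketch of the tweak. The field names and the exact 1.25 factor are illustrative, not shipping code:

    # Hypothetical sketch of the yappy dog tweak: inflate the platoon's
    # own numbers before they reach the neural network. Field names and
    # the 1.25 factor are stand-ins for illustration.
    YAPPY_MULTIPLIER = 1.25

    def inflate_platoon_stats(stats):
        # Only the friendly side gets inflated; enemy data stays
        # accurate, so the network still returns valid decisions.
        return {key: value * YAPPY_MULTIPLIER for key, value in stats.items()}

    print(inflate_platoon_stats({"dps": 100.0, "hp": 4000.0}))
    # -> {'dps': 125.0, 'hp': 5000.0}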

I spent most of last week bringing all of my recent AI changes over to Kings and Castles in preparation for full production. I can't wait to get started. I will be the Lead AI Engineer on the project, so this will truly be my baby, and I hope to get the opportunity to really push the limits of RTS AI design.

Lately, I have been trying to become more active in the AI community and hope to utilize AIGameDev.com as a resource to help me take RTS AI to the next level. So far, I think Supreme Commander 2's AI is a great starting point.

Neural nets in more detail

Since so many of you have requested more info on how the neural nets in Supreme Commander 2 work, I have decided to write another blog entry going into them in more detail. Actually, I am a bit surprised I haven't written one about them already.

Supreme Commander 2 contains 4 neural networks, each consisting of 3 layers of neurons (input, hidden, and output) and learning via backpropagation. There is one network each for Land, Naval, Fighter, and Bomber/Gunship. The networks drive the fight-or-flight decisions used by the bulk of the AI's combat platoons.
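
For the curious, here is a minimal Python sketch of what one of these networks boils down to. The layer sizes, sigmoid activations, and learning rate are my assumptions for the example, not the game's actual values:

    import numpy as np

    # Minimal sketch of one of the four networks: input, hidden, and
    # output layers trained with backpropagation. Sizes and activations
    # are assumptions for illustration.
    class FightOrFlightNet:
        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
            self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

        @staticmethod
        def _sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def forward(self, x):
            # Feed the gathered ally/enemy data forward through the net.
            self.x = np.asarray(x, dtype=float)
            self.h = self._sigmoid(self.x @ self.w1)
            self.y = self._sigmoid(self.h @ self.w2)
            return self.y  # one value in [0.0, 1.0] per possible action

        def backprop(self, target, lr=0.1):
            # Standard gradient step for a squared-error loss.
            d_y = (self.y - target) * self.y * (1.0 - self.y)
            d_h = (d_y @ self.w2.T) * self.h * (1.0 - self.h)
            self.w2 -= lr * np.outer(self.h, d_y)
            self.w1 -= lr * np.outer(self.x, d_h)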

When a platoon is formed, it contacts the AI's strategic manager and requests a place to attack. The strategic manager looks at the map and chooses a place to attack based on a risk versus reward ratio. It also checks for pathability. Once it has chosen a spot, it generates a path using good old A* and returns the path to the platoon. The platoon then sets up a series of move orders to take it to the attack location.
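
A rough sketch of that selection logic might look like this. The scoring formula and every name here are stand-ins, and a real A* search is assumed behind the astar callback:

    # Hypothetical sketch of the strategic manager's choice: score each
    # candidate by reward versus risk, skip unreachable spots, and hand
    # back a path for the platoon to turn into move orders.
    def choose_attack_target(candidates, start, is_pathable, astar):
        best_spot, best_score = None, float("-inf")
        for spot in candidates:
            if not is_pathable(start, spot["pos"]):
                continue  # pathability check
            score = spot["reward"] / max(spot["risk"], 1e-6)  # risk vs. reward
            if score > best_score:
                best_spot, best_score = spot, score
        if best_spot is None:
            return None  # nothing reachable to attack
        return astar(start, best_spot["pos"])

    path = choose_attack_target(
        [{"pos": (40, 10), "reward": 500, "risk": 120},
         {"pos": (5, 60), "reward": 300, "risk": 30}],
        start=(0, 0),
        is_pathable=lambda a, b: True,
        astar=lambda a, b: [a, b],  # stand-in for the real A* search
    )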

Up until now the neural networks have not even come into play (although that is one thing I would like to change in KnC). Once the platoon encounters an enemy, it gathers information about the enemies in the area, along with information about allies in the area. Note the difference between live play and training: in training, the platoon only gathers information about itself, while in a live game it gathers information about all allies (including player-controlled units) in the area.
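
In sketch form, that difference is just a branch on a training flag. All of the names and the summary fields below are my own for illustration:

    # Illustrative sketch of the input-gathering difference between
    # training and a live game; function and field names are made up.
    def summarize(units):
        # Collapse a unit list into the numbers the input neurons
        # expect, e.g. total DPS and total hit points.
        return [sum(u["dps"] for u in units), sum(u["hp"] for u in units)]

    def gather_inputs(platoon_units, nearby_allies, nearby_enemies, training):
        # Training: the platoon only counts itself.
        # Live: it counts every ally in the area, player units included.
        allies = platoon_units if training else nearby_allies
        return summarize(allies) + summarize(nearby_enemies)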

It then takes all of that information, feeds it into the input neurons of the neural network, and feeds the network forward. It then gathers the outputs from the neural network and evaluates them. Each output corresponds to an action that the platoon can take, such as attack structure from range or attack highest value target. Each output has a value between 0.0 and 1.0, with below 0.5 being a bad decision and above 0.5 being a good decision. If the neural network returns no good decisions, the platoon runs away. After a small delay the enemy and ally data are gathered up again, fed to the neural network, and a new decision is made.
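
Here is what that evaluation amounts to in sketch form. The two action names come from above; everything else is illustrative:

    # Sketch of the output evaluation: each output maps to an action,
    # above 0.5 counts as a good decision, and if nothing clears the
    # bar the platoon flees.
    ACTIONS = ["attack structure from range", "attack highest value target"]

    def pick_action(outputs):
        best = max(range(len(outputs)), key=lambda i: outputs[i])
        if outputs[best] <= 0.5:
            return "run away"  # no good decisions
        return ACTIONS[best]

    print(pick_action([0.31, 0.72]))  # attack highest value target
    print(pick_action([0.20, 0.41]))  # run away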

The neural networks are trained by having the AI fight it out repeatedly and backpropagating the results. At the end of a match the game writes the new neural networks out to a file. The game runs using a few special command-line options which do the following (a rough sketch of the resulting training loop follows the list):

  • Set the game to run at +/- 50 sim speed.
  • Automatically restart the game when it ends.
  • Enable neural network training.
  • Set which map to play on.
  • Set up the AIs.
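
Here is a toy stand-in for what that loop amounts to, reusing the FightOrFlightNet sketch from earlier. The outcome function is invented purely so the example runs; the real thing is the game itself playing matches at high sim speed:

    import numpy as np

    # Toy training loop: pick actions at random, record the result,
    # and backpropagate it. Only the chosen action's output is trained.
    def simulate_outcome(inputs, action):
        # Made-up rule: action 0 works when the first input is high.
        return (inputs[0] > 0.5) == (action == 0)

    rng = np.random.default_rng(1)
    net = FightOrFlightNet(n_in=4, n_hidden=8, n_out=2)

    for match in range(1000):            # "automatically restart the game"
        inputs = rng.random(4)           # gathered enemy/ally data
        action = int(rng.integers(2))    # choose an action at random
        outputs = net.forward(inputs)
        target = outputs.copy()
        target[action] = 1.0 if simulate_outcome(inputs, action) else 0.0
        net.backprop(target)             # backpropagate the result
    # ...and at match end the game writes the networks out to a file.
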
These options let me run neural network training 24 hours a day on a dedicated machine. For Kings and Castles I want to look at having several computers running training at once and merging the results. The more iterations the neural networks get, the better they can be. This is because, during training, the platoons choose actions at random and record the results, and it can take a long time to test every action in a large set of circumstances.
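
Since merging is still just something I want to look at, here is pure speculation on one way it could work, assuming nets shaped like the earlier sketch: average the weight matrices of networks trained in parallel.

    import numpy as np

    # Speculative merge step: average the weights of nets trained on
    # different machines. This works best if every machine starts from
    # the same initial network; otherwise the hidden neurons of
    # different nets may not line up with each other.
    def merge_nets(nets):
        merged = nets[0]
        merged.w1 = np.mean([n.w1 for n in nets], axis=0)
        merged.w2 = np.mean([n.w2 for n in nets], axis=0)
        return merged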

Hopefully this answers most, if not all, the questions you all had.