Monday 25 April 2016

Continuing AI Implementation & Bug-fixing

Following on from my last post, Further AI Implementation, I have continued with the enemy AI, now implementing some of the desired Not in Range decisions from the decision tree in Reading: 'Artificial Intelligence for Games', and AI Planning. The Enemy AI will now:

  • Judge whether it is in range of a User Unit
  • Check its health and inventory, to find out if it needs healing and has a health vial to do so
  • Flee from User Units that have it in range, moving in the opposite direction so it ends up one tile out of range
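The flow above can be sketched roughly as follows. This is a plain-C# sketch of the decision order only; all names here (EnemySketch, DecideOutOfRangeAction, and so on) are illustrative placeholders, not the project's actual identifiers:

```csharp
using System;

// Rough sketch of the "not in range" branch of the decision tree.
public class EnemySketch
{
    public int health;
    public int maxHealth;
    public bool hasHealthVial;

    public string DecideOutOfRangeAction(bool userUnitCanReachUs)
    {
        // Low on health and carrying a vial? Heal before anything else.
        if (health < maxHealth / 2 && hasHealthVial)
            return "UseHealthVial";

        // A User Unit can reach us: retreat until one tile out of range.
        if (userUnitCanReachUs)
            return "FleeOneTileOutOfRange";

        return "Wait";
    }
}
```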
Before reaching this point, I discovered and amended a few bugs:

Current Unit Index
I have been using an int, currentUnitIndex, to cycle through units in the units list. If the index hits units.Count, it is reset to 0, starting the cycle again. I discovered that when a unit died and its turn came around, this threw off the index, producing errors. Thankfully, I managed to come up with a simple solution. In the gameManager Update method, I added a single if statement:


        if (currentUnitIndex >= units.Count)
        {
            currentUnitIndex = 0;
            pathfinding.target.transform.position = units[currentUnitIndex].currentTile._pos;
            pathfinding.FindPath();
        }

This means that if the index attempts to go past the count, it is automatically knocked back down to 0, and the pathfinding target is reset to the current unit's current tile position.

Movement Issues
Having thought I had handled all movement issues, I soon realised that I hadn't, when I could move the first user unit but the second would not draw a path. I realised that the path target was not resetting, so the path itself was never being drawn. I handled this by adding a single line to my Move() method:

    public void Move()
    {
        if (units[currentUnitIndex].moving == false)
        {
            units[currentUnitIndex].moving = true;
            units[currentUnitIndex].attacking = false;
        }
        else
        {
            units[currentUnitIndex].moving = false;
            units[currentUnitIndex].attacking = false;
            pathfinding.target.transform.position = units[currentUnitIndex].currentTile._pos; // the added line
        }
    }

By adding the final line in the else branch, the path target is reset and the path is found correctly again once the mouse starts to move over tiles.

AI Implementation
Having tackled the two bugs above on the previous Friday, on Saturday I started with the actual implementation again, adding the functions listed at the top. Once I had implemented these features, I decided to try out a couple of potential actions the AI could perform, leading to this:
When Enemy A attacks, Enemy B attacks too
First I tried two units in range, to ensure that pathfinding and AI decision making were still working for the in-range case. After this, I made a horrific discovery:
If Enemy A retreats, Enemy B's movement failed
If Enemy A retreats, then when Enemy B goes to attack, it never finishes its movement. At this point I was aghast, having spent the previous day ensuring correct movement. Concerned that I might be breaking the code further, I called it a day, and went to my lecturer on Monday to discuss the issue and see if he could point out its source.

We discussed what I had done, and I demonstrated the issue. Upon debugging the moveCurrentPlayer co-routine, we discovered that, instead of following the path one tile at a time, as intended, it was making large jumps within the path: moving one tile, then two at once, then three, and coming to a stop there (in this case, two tiles away from the User Unit).

I had set up the move method as an IEnumerator co-routine, to make use of yield. This is what allowed units to "walk" from tile to tile. It had led to some issues previously, and I had realised at an earlier point in the project that it might cause issues further down the road, as I considered the approach rather cheap and dirty.

When the IEnumerator was changed to a standard void method, with the yield removed, units would teleport to the destination tile, but, using the above scenario again, Enemy B would successfully move and attack. My lecturer deduced that the yield may have been causing the issues.
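The difference between the two approaches can be sketched in plain C# (no Unity types; Mover and its members are illustrative stand-ins for the project's unit class, not the actual code):

```csharp
using System.Collections.Generic;

public class Mover
{
    public int position;

    // Coroutine-style walk: control is yielded after every step, so other
    // game logic runs between steps and can retrace the path mid-walk,
    // which is the kind of interference that made Enemy B stop short.
    public IEnumerator<int> WalkPath(List<int> path)
    {
        foreach (int tile in path)
        {
            position = tile;
            yield return tile; // hand control back until the next "frame"
        }
    }

    // Direct version: the whole move happens in one call, so nothing can
    // change the path underneath it; the unit "teleports" to the end.
    public void MoveInstantly(List<int> path)
    {
        if (path.Count > 0)
            position = path[path.Count - 1];
    }
}
```

Both end on the same tile; the coroutine version just exposes its intermediate state to everything else running between yields.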

For now, I have changed it permanently to a void method. I intend to finish the rest of the AI, and attempt to implement the Unique Support Mechanic before the project is finished, with my AI being made a priority. Should time allow, I will then attempt to add the visual movement back into the project, using a cleaner method.

I realise now that I took a risk using the IEnumerator. It was the quickest option at the time, and did what was required, but had I put extra time into creating my own movement system, or one based around Time, it could have saved me time at this point in the project. On the other hand, I could have got stuck working that out, which might have delayed me further. Regardless, this has made me consider risk/reward trade-offs in coding to a greater extent, and it is something I will have to start factoring into my working practices.

Below you can find all code relating to the project up to the 25th of April, 2016.

Saturday 16 April 2016

Further AI Implementation

Following on from my last post, AI Implementation, I have now discovered where I was going wrong with the path lengths, and furthered the AI decision making to encompass one side of my decision tree.

With the path lengths issue, I now realise that my basic understanding of Unity was flawed, and this is where the issue stemmed from. While I was changing the target in the turn update, the path was not being retraced. I had believed that Update() methods ran in parallel, meaning all Update methods executed at the same time. My lecturer explained that Unity instead runs through the game objects, calling the Update method on each one sequentially, in an order determined by the engine.

To deal with this issue, I was told that I would need to retrace the path after the target had moved. This was done by creating a public FindPath() method in pathfinding.cs, which is then called every time the target has been moved, returning the correct values for the path count.
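The ordering requirement can be shown with a small stand-alone sketch. PathfinderSketch is not the project's pathfinding.cs; its trace is a trivial stand-in (a straight run of tiles up to the target) rather than real pathfinding, but it demonstrates why the count is stale until FindPath() is rerun:

```csharp
using System.Collections.Generic;

public class PathfinderSketch
{
    public int targetTile;
    private readonly List<int> path = new List<int>();

    // Must be called after every target move, or PathCount reflects the
    // previous target.
    public void FindPath()
    {
        path.Clear();
        for (int i = 0; i <= targetTile; i++)
            path.Add(i);
    }

    public int PathCount { get { return path.Count; } }
}
```

Moving targetTile without calling FindPath() leaves PathCount at the old value, which is exactly the stale-count behaviour seen in the turn update.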

Once this was working correctly, I could move onto the first point of my decision tree: 

Is the enemy in range?
As I could now work out the path lengths, I could determine which User Units were in range of the AI. For each User Unit, the target is positioned on them, the path is retraced, and path.Count - 1 is compared against the AI's moveLimit. The minus 1 is there because the AI does not need to be on the same square as the user unit, only an adjacent one; the user unit's own node does not need to be counted. If path.Count - 1 is less than or equal to the moveLimit, that user unit is added to the inRange list.
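The range test can be sketched like this. Here pathCounts stands in for the result of positioning the target on each User Unit and retracing the path; the real code does that per unit rather than taking a ready-made dictionary:

```csharp
using System.Collections.Generic;

public static class RangeCheck
{
    public static List<string> UnitsInRange(Dictionary<string, int> pathCounts, int moveLimit)
    {
        var inRange = new List<string>();
        foreach (var entry in pathCounts)
        {
            // Minus 1: the AI stops on a tile adjacent to the unit, so the
            // unit's own tile is not part of the distance to cover.
            if (entry.Value - 1 <= moveLimit)
                inRange.Add(entry.Key);
        }
        return inRange;
    }
}
```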

Is there more than one? and Does my weapon beat any of theirs?
Originally I had intended, once the inRange list was populated, to check whether there was more than one unit in it, and if so, to check through the list to see whether any user unit's equippedWeapon weaponType matched the AI's equippedWeapon posType. If true, they would have been added to the posWeapType list.

Instead of this, I run an if statement after a unit has been added to inRange, checking the weapon type. If it returns true, that unit is also added to the posWeapType list. So, if there is only one unit in range, it becomes the target, but if there are multiple and one falls in the posWeapType list, that one defaults to the target.
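That single-pass version can be sketched as below; field and parameter names here are illustrative, and the string comparison stands in for the real weaponType/posType check:

```csharp
using System.Collections.Generic;

public class TargetSelector
{
    public List<string> inRange = new List<string>();
    public List<string> posWeapType = new List<string>();

    public void AddIfInRange(string unit, string theirWeaponType, string ourPosType)
    {
        inRange.Add(unit);
        // Weapon matchup is checked immediately, rather than in a second
        // pass over inRange once it is fully populated.
        if (theirWeaponType == ourPosType)
            posWeapType.Add(unit);
    }

    // A unit our weapon beats defaults to the target; otherwise the
    // first unit in range is attacked.
    public string PickTarget()
    {
        if (posWeapType.Count > 0) return posWeapType[0];
        return inRange.Count > 0 ? inRange[0] : null;
    }
}
```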

While working on the AI, I started to notice some issues with the movement:

  • When a user unit moved, the path would remain constant, from the initial starting point to the target, until the unit had reached the end of the path.
  • At the same time, when the AI moved, the path was being retraced, shrinking with the movement of the AI and throwing off the numbers, meaning the AI would often be one or two tiles behind where it should have been.
  • I changed a lot of moveCurrentPlayer's if statements, in an attempt to streamline them and ensure it worked without errors both for a target set beyond the current unit's move limit and for a target whose path is shorter than the current unit's move limit.
  • The main issue stemmed from the fact that I was setting the pathfinding.cs bool pathTraced to false, so the path was getting retraced when it shouldn't have been. I placed the FindPath() call at the end of the function, and this seemed to sort out all movement issues.


Currently, the AI defaults to attacking the first unit in the relevant list: the posWeapType list if the weapon-type check returned true for any unit, otherwise just the units inRange.

My next steps will be to implement the AI decisions for no enemies in range, and then tidy up the AI coding, streamlining it as much as possible.

Thursday 7 April 2016

AI Implementation

Following my last post, Reading: 'Artificial Intelligence for Games', and AI Planning, I have now started attempting the implementation of the pseudo-code created.

To start with, I have now given units a tile variable, allowing them to know specifically what tile they are on at all times. I had done this initially for the IndexOf value of the node they were on, as specified in my pseudo-code from my previous post:


This is the code that has been implemented so far. The main issue I have been facing with this is incorrect values for the path lengths:

I added a simple input, where pressing 'z' will give me the path count.
As seen at the bottom, '22' was the value returned for the topmost User Unit.

For the User Unit to the right, '6' was the value returned for the path count.
Finally, '4' was the value returned for the left User Unit.
The issue is, the results generated by the current AI code are very far from this (with the if statement disabled for clarity):
As can be seen, one loop has all path counts returning '36', with every loop afterwards returning a value of '6'. All grid positions shown are correct.
From what I can understand, once the initial loop has run, the target is left on the right-hand User Unit. While the names and grid positions cycle through correctly, there seems to be an issue with the target itself, though I cannot see any problem in the code, other than perhaps that the change in positioning is unable to keep up with the loop it is in.

With this issue at hand, I will have a discussion with my lecturer, to clarify that my current code and pseudo-code are viable, and to find out whether the issue I am facing stems from efficiency problems or incorrect code.

Wednesday 6 April 2016

Reading: 'Artificial Intelligence for Games', and AI planning

As I have now implemented the inventory system into my game, following my previous post The Inventory System, I have put time into researching AI development for games. At this point, I have never tackled AI to a high standard, with most of my attempts consisting of AI repeating an action, or homing in on a player character. To learn more, and hopefully find a starting point, I read Artificial Intelligence for Games (Millington & Funge, 2009). Below are the notes I have taken from my reading:


Artificial Intelligence for Games

Millington, I & Funge, J

1.2 Model of Game AI

The AI Model provided within the book.

Model splits AI tasks into three groups: Decision Making, Movement, and Strategy.
Decision Making & Movement contain algorithms working on a per-character basis.
Strategy operates on a whole team.

Board games (Chess, Risk) only require Strategy level.
Platform games (Jak and Daxter, Oddworld) only require Decision Making and Movement level.

1.2.1 Movement

If a character is attacking in melee, they will home in on the player, only activating the attack animation at a certain range.
Movement can be more complex; e.g. Splinter Cell, if the player is seen by a guard, they may attempt to locate the closest wall-mounted alarm, which can involve navigation over long distance.

1.2.2 Decision Making

“Involves a character working out what to do next.”
Will typically have a range of behaviours to choose from; the decision making system needs to work out the most appropriate behaviour at the given time.
Zelda games; farm animals stand still, move a small distance if bumped into.
Half-Life 2; enemy AI attempt multiple strategies to reach the player.

1.2.3 Strategy

Most action-based 3D games use only Decision Making and Movement.
“Strategy refers to an overall approach used by a group of characters.” In context of the book.
Will often still require characters to have their own Decision Making and Movement, with Decision Making being influenced by the group Strategy.

1.2.4 Infrastructure

To build an AI, a whole set of additional infrastructure is required.
Movement requests must be turned into action in the game.
AI requires game information to make sensible decisions, sometimes called “perception”; “working out what information the character knows”.
“World Interfacing is often a large proportion of the work done by an AI programmer”, and often the majority of AI debugging.
AI systems need to be managed to use the correct processor time and memory.

1.2.5 Agent-Based AI

Focused on “producing autonomous characters that take in information from the game data, determine what actions to take based on the information, and carry out those actions.”
Can be seen as a bottom-up design:
  • Work out how each character will behave, and the AI required to support it
  • Overall behaviour of the game is a function of how character behaviours work together
  • Decision Making and Movement make up the AI for an in-game agent

Non-Agent-Based AI act oppositely, from top to bottom. E.g. Pedestrians and traffic in Grand Theft Auto 3. Traffic and pedestrian flow is calculated via the time of day and the region of the city, and is only turned into singular cars and pedestrians when in view of the player.
“A good AI developer will mix and match any reliable techniques that get the job done, regardless of approach”.

1.3 Algorithms, Data Structures, and Representations

1.3.1 Algorithms

They are step-by-step processes, generating a solution to problems faced by AI.
Data structures hold data that allow the algorithm to quickly manipulate it and reach a result.
Data structures must often be tuned for one specific algorithm.
Set of elements must be understood to implement and refine an algorithm:
  • The problem that the algorithm tries to solve
  • A general description of how the solution works, including diagrams where needed
  • Pseudo-code presentation of the algorithm
  • An indication of the data structures required to support the algorithm, including pseudo-code where required
  • Particular implementation notes
  • Analysis of the algorithm's performance: execution speed, memory footprint, scalability
  • Weaknesses in the approach

5.2 Decision Trees

“Fast, easily implemented, and simple to understand.”
Simplest technique discussed in book, with the potential to become quite sophisticated.
Very modular and easy to create.

5.2.1 The Problem

Using a set of knowledge, corresponding actions must be generated from a set of possible actions.
Mapping between input and output can become complex.
Same action may be used for multiple inputs, but a change in one input value can turn that action from sensible to stupid.
Require a method capable of easily grouping lots of input together under one action, while allowing the significant input values to control the output.

5.2.2 The Algorithm

Decision tree is made up of connecting points, starting at a “root” decision, with ongoing options being chosen thereafter.
Example of a decision tree

Choices are made based on the character's knowledge, often referring directly to the global game state rather than a personal representation.
Algorithm continues along the tree until there are no more decisions to consider. An action is attached to each leaf. Once a leaf is reached, the relating action will be carried out.
Tree decisions will often be simple, with only two responses.
NOTE: Even just at this point, I feel a lot more confident pushing into the AI, structuring a tree in my mind that will be put onto paper. Though I have seen tree diagrams and flowcharts used in many ways previously, it never occurred to me that it could be implemented into AI Decision Making.
Common for object-oriented engines to allow the tree to access methods of instances.
To AND two decisions, they are placed in series in the tree; "If A AND B, then carry out Action 1, otherwise carry out Action 2."
To OR two decisions, they are placed in the opposite series in the tree; "If A OR B, then carry out Action 1, otherwise carry out Action 2."
An AND and OR tree
As decisions are built into the tree, the number actually considered in any one run will be much smaller than the number within the tree.
Can be built in stages; simple tree can be initially implemented, with additional decisions added on as needed.
Can be given more than two options to choose from;
Guard in a military facility, decisions based on current alert status; green, yellow, red, or black. Using binary decisions, the alert value must be checked three times. Using multiple branching, the value can be checked once for the decision.

Implementations of multiple branches are not as easily optimised as binary ones. The majority of decisions will still be binary.
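The traversal described in 5.2.1-5.2.2 can be sketched as a minimal binary decision tree. The class names here are mine, not the book's, and the game state is reduced to a dictionary of booleans for illustration:

```csharp
using System.Collections.Generic;

// Internal nodes test a boolean from the game state; leaves hold actions.
// ANDing two tests means chaining them in series down the "true" branch.
public abstract class TreeNode
{
    public abstract string Decide(Dictionary<string, bool> state);
}

public class ActionLeaf : TreeNode
{
    private readonly string action;
    public ActionLeaf(string action) { this.action = action; }
    public override string Decide(Dictionary<string, bool> state) { return action; }
}

public class Decision : TreeNode
{
    private readonly string test;
    private readonly TreeNode ifTrue, ifFalse;
    public Decision(string test, TreeNode ifTrue, TreeNode ifFalse)
    {
        this.test = test;
        this.ifTrue = ifTrue;
        this.ifFalse = ifFalse;
    }
    public override string Decide(Dictionary<string, bool> state)
    {
        return state[test] ? ifTrue.Decide(state) : ifFalse.Decide(state);
    }
}
```

"If A AND B, then carry out Action 1, otherwise Action 2" then becomes a Decision on A whose true branch is a Decision on B, with both false paths leading to the Action 2 leaf.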


Following on from this reading, I felt like I had a much clearer understanding of how to tackle my AI, and went about creating a simple decision tree for my AI:
Using this tree as a basis, I have gone on to pseudo-code the process:


While I believe there may be further tweaking required once implementing the code, the process of creating the decision tree, and writing some basic pseudo-code for the process, has greatly helped my learning process, and eased many concerns I had towards AI.

Tomorrow I shall begin implementation of the code, and hopefully have a level of working AI before the week is out.

Bibliography

Millington, I. & Funge, J. (2009). Artificial Intelligence for Games. 2nd ed. Burlington, MA: Morgan Kaufmann. pp. 8-12, 295-300.