AI Programming
Overview
As lead AI programmer, it was up to me to create the enemies of Guncaster. With everything else on my plate, it would have been impossible to hand-author a rigid AI system for each enemy. The end product is a highly customizable and expansive system that can be applied in almost any way imaginable.
Features
-
Utilizing the Flyweight pattern with ScriptableObjects, we define well-tuned settings that multiple agents can share. Settings cover a wide variety of behavior:
Pathing Location
Movement
Rotation
Animation
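The real system lives in Unity/C#; the sketch below uses Python with made-up names (MovementSettings, Agent) purely to illustrate the Flyweight idea of many agents referencing one shared settings asset:

```python
from dataclasses import dataclass

# One shared, immutable settings object (a ScriptableObject in Unity)
# referenced by many agents instead of copied per agent.
@dataclass(frozen=True)
class MovementSettings:
    walk_speed: float
    run_speed: float
    turn_rate: float

class Agent:
    def __init__(self, settings: MovementSettings):
        # Agents hold a reference to the shared data; only per-agent
        # state (position, health, ...) lives on the instance.
        self.settings = settings
        self.position = 0.0

grunt_settings = MovementSettings(walk_speed=2.0, run_speed=5.0, turn_rate=120.0)
squad = [Agent(grunt_settings) for _ in range(50)]

# All fifty agents share one settings object rather than fifty copies.
assert all(a.settings is grunt_settings for a in squad)
```

Tuning `grunt_settings` once retunes every agent that references it, which is exactly why the pattern pays off for large enemy counts.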
-
All agents utilize the observer pattern to subscribe to update events with custom tick speeds. This cuts down on per-frame cost and allows for complex behaviors across a good number of agents.
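A minimal Python sketch of that subscription model (the class and field names here are hypothetical, not the project's actual API): agents register a callback with an interval, and the manager only invokes them when enough time has accumulated.

```python
# Central ticker: agents subscribe with their own tick interval instead
# of every agent updating every frame.
class UpdateManager:
    def __init__(self):
        self._subscribers = []  # each entry: [interval, accumulator, callback]

    def subscribe(self, interval, callback):
        self._subscribers.append([interval, 0.0, callback])

    def tick(self, delta_time):
        for sub in self._subscribers:
            sub[1] += delta_time
            if sub[1] >= sub[0]:      # enough time accrued for this agent?
                sub[1] -= sub[0]
                sub[2]()

calls = []
manager = UpdateManager()
manager.subscribe(0.1, lambda: calls.append("fast"))  # near, important agent
manager.subscribe(0.5, lambda: calls.append("slow"))  # distant, cheap agent

for _ in range(60):          # simulate one second at 60 fps
    manager.tick(1 / 60)
```

Distant or off-screen agents can be given long intervals, so the bulk of the population costs almost nothing per frame.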
-
AI behavior is organized into states (conditions) and reactions (actions).
There's no hard limit on how many of either an agent can have, and since everything is event-driven, the system is efficient with little overhead.
Each state defines when it should be active, and can contain as many reactions as needed.
Because states follow a hierarchical structure, higher-priority states can override lower ones, allowing for fluid, dynamic decision-making.
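The priority-override idea can be sketched as follows (Python stand-in for the C# system; State, select_state, and the flag names are illustrative only): states are checked from highest priority down, and the first whose condition holds wins.

```python
# States bundle a condition with any number of reactions; higher-priority
# states override lower ones when both conditions are true.
class State:
    def __init__(self, name, priority, condition, reactions):
        self.name = name
        self.priority = priority
        self.condition = condition   # callable -> bool, cheap flag check
        self.reactions = reactions   # as many reactions as needed

def select_state(states, agent):
    # Highest priority first; event-driven flags keep condition() cheap.
    for state in sorted(states, key=lambda s: s.priority, reverse=True):
        if state.condition(agent):
            return state
    return None

agent = {"health": 20, "sees_player": True}
states = [
    State("Chase", 1, lambda a: a["sees_player"], ["move_to_player"]),
    State("Flee", 5, lambda a: a["health"] < 25, ["run_to_cover", "scream"]),
]
active = select_state(states, agent)

# Flee outranks Chase even though both conditions currently hold.
assert active.name == "Flee"
```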
-
Context-Aware State types that activate based on:
Distance (close or far)
Line of Sight
Health Thresholds
Timed
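One way to picture these condition types is as small factories that build the check a state runs against, shown here as a hedged Python sketch (the factory names and agent fields are invented for illustration):

```python
# Each factory returns a callable that a state can use as its condition.
def distance_condition(max_distance):
    return lambda agent: agent["distance_to_player"] <= max_distance

def line_of_sight_condition():
    return lambda agent: agent["has_line_of_sight"]

def health_condition(threshold):
    return lambda agent: agent["health"] <= threshold

def timed_condition(duration):
    return lambda agent: agent["time_in_state"] >= duration

agent = {
    "distance_to_player": 4.0,
    "has_line_of_sight": True,
    "health": 80,
    "time_in_state": 3.0,
}

close = distance_condition(5.0)   # "close" variant; a far variant flips the comparison
hurt = health_condition(25)
assert close(agent) and not hurt(agent)
```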
-
Expanded reaction types that can either be committed to or simply allowed to play out:
Combat
Pathing
Timed
-
States and reactions have token costs. AI agents are given tokens, which they can spend on stronger or rarer behaviors.
This encourages resource-based decision making. Not every option is always available, adding depth and variation.
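A minimal sketch of that token economy, assuming a simple spend-to-unlock model (TokenAgent and the reaction names are hypothetical):

```python
# Agents are given a token budget and "buy" into stronger or rarer behaviors.
class TokenAgent:
    def __init__(self, tokens):
        self.tokens = tokens

    def can_afford(self, cost):
        return self.tokens >= cost

    def try_buy(self, cost):
        # Spend tokens on success; the caller only runs the behavior if True.
        if self.can_afford(cost):
            self.tokens -= cost
            return True
        return False

reactions = {"jab": 1, "combo": 3, "leap_slam": 5}
agent = TokenAgent(tokens=4)

# Only behaviors the agent can pay for are on the table this decision.
available = [name for name, cost in reactions.items() if agent.can_afford(cost)]
assert available == ["jab", "combo"]
assert agent.try_buy(reactions["combo"]) and agent.tokens == 1
```

Because the budget shrinks as it is spent, the same agent naturally mixes cheap and expensive behaviors over a fight.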
-
When choosing among available reactions, the system uses weighted randomness. Reactions with higher weights are more likely to be chosen, but less common ones still have a chance.
This makes behavior less predictable but still logical.
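The cumulative-weight selection described above can be sketched like so (Python for illustration; the reaction names and weights are invented):

```python
import random

# Roll once in [0, total weight), then walk the cumulative sums:
# higher-weight reactions claim a larger slice of the roll space.
def weighted_pick(options, rng=random):
    total = sum(weight for _, weight in options)
    roll = rng.uniform(0, total)
    cumulative = 0.0
    for name, weight in options:
        cumulative += weight
        if roll <= cumulative:
            return name
    return options[-1][0]  # guard against floating-point edge cases

random.seed(7)  # seeded only so the sketch is reproducible
reactions = [("jab", 5.0), ("combo", 3.0), ("leap_slam", 1.0)]
picks = [weighted_pick(reactions) for _ in range(1000)]
# Over many draws "jab" dominates, but "leap_slam" still shows up.
```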
-
If an agent takes enough damage in a short period, it can trigger a Poise Break, a kind of special event. This could be a stagger, a scream, a ragdoll, you name it.
It's a flexible trigger system for memorable and dynamic moments.
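One plausible shape for that trigger, sketched with an assumed rolling damage window (PoiseTracker and its thresholds are hypothetical, not the shipped values):

```python
# Damage inside a rolling time window accumulates; crossing the threshold
# fires every registered callback (stagger, scream, ragdoll, ...).
class PoiseTracker:
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.window = window      # seconds to remember hits
        self.hits = []            # (timestamp, damage) pairs
        self.on_break = []        # observer callbacks

    def take_damage(self, amount, now):
        self.hits.append((now, amount))
        # Forget hits that fell out of the rolling window.
        self.hits = [(t, d) for t, d in self.hits if now - t <= self.window]
        if sum(d for _, d in self.hits) >= self.threshold:
            self.hits.clear()
            for callback in self.on_break:
                callback()

events = []
tracker = PoiseTracker(threshold=50, window=2.0)
tracker.on_break.append(lambda: events.append("stagger"))

tracker.take_damage(30, now=0.0)
tracker.take_damage(30, now=1.5)   # 60 damage inside 2 s -> poise break
tracker.take_damage(30, now=10.0)  # window long expired; no break
assert events == ["stagger"]
```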
-
Agents can enter and exit ragdoll states, adding fun and realism when hit or defeated. Great for humor, variety, or drama.
-
Agents use root motion and animation events to control when attacks fire, particles fly, sounds are played, anything really.
-
Custom animation curves allow you to fine-tune how the AI moves or jumps. This controls things like movement speed relative to distance, easing to a stop at a destination, even how high an agent can jump.
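In Unity these are AnimationCurve assets sampled at runtime; a rough Python equivalent with linear keyframe interpolation (the keyframe values below are made up) looks like this:

```python
# Sample a piecewise-linear curve by a normalized parameter, e.g. distance
# to the destination mapped into [0, 1].
def evaluate_curve(points, t):
    # points: sorted (time, value) keyframes.
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + (v1 - v0) * alpha

# Ease to a stop at the destination (t=0), full speed when far (t=1).
speed_curve = [(0.0, 0.0), (0.2, 2.0), (1.0, 6.0)]
assert evaluate_curve(speed_curve, 0.0) == 0.0
assert evaluate_curve(speed_curve, 1.0) == 6.0
assert abs(evaluate_curve(speed_curve, 0.6) - 4.0) < 1e-9  # halfway up the ramp
```

The same sampling idea applies to jump arcs: a height curve over normalized jump time gives designers direct control without touching code.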
-
Stop-motion-style snappy movement for a stylized feel, which doubles as an efficiency buffer.
Dissolving enemies when dying
Distinct silhouettes and poses for personality or combat types
Iteration 1
The initial AI system for Guncaster was built around a basic finite state machine (FSM). This version featured five fixed states (Attack, Flee, Chase, Pursue, and Patrol), shared uniformly across all enemy types. Each agent strictly followed this rigid state structure, limiting variety and adaptability. Combat behavior was binary: agents operated on a single Boolean flag (isAttacking), offering minimal nuance or reaction to player actions.
Despite its simplicity, this version did include a patrol state, though it was eventually deprecated. As the game’s focus shifted more toward direct combat rather than exploration, patrol logic became unnecessary.
Notably, this first iteration also introduced flying enemies. These units were decoupled from the NavMesh system and instead used independent movement logic, ignoring Y-axis constraints. Their behavior was minimal but functional, relying on basic flee and seek states, with supplemental arrive and avoidance mechanics to simulate aerial maneuvering. Sadly, due to their complexity, these were also deprecated.
Lastly, AI behavior in this version was handled using a fully component-driven architecture. All agent values were tightly bound to Unity components, which limited flexibility and made scalability a challenge in later versions.
Iteration 2
The second AI iteration introduced a Hierarchical Finite State Machine (HFSM) with improved structure and modularity. Agents still operated within four core behaviors (Attack, Flee, Chase, Pursue), but could now hold multiple attacks and commit to actions via override functions.
Animation control was vastly improved. Agents could use any Animator parameter type, not just a single Boolean. Parameters were managed through regular expressions, offering quick setup but causing performance issues at scale.
Other upgrades included:
Location-based states enabling world-aware behaviors.
A new state probability system requiring fallback logic.
Enter/exit events for states and reactions (lightly used).
A cleaner, component-based architecture for easier setup and debugging.
This version balanced flexibility and control, laying groundwork for more dynamic agents while exposing bottlenecks to address in future iterations.
Iteration 3
The third iteration prioritized efficiency, scalability, and reuse. A custom Update Manager was implemented to regulate agent tick rates, minimizing unnecessary computations and improving runtime performance across the board.
This phase also introduced the Flyweight design pattern by converting nearly all agent settings, such as movement parameters and animation data, into ScriptableObjects. These lightweight, shared data assets allowed multiple agents to reference common configurations without duplicating memory, streamlining both performance and setup.
A major pain point from the previous iteration, the use of regular expressions to control animator parameters, was eliminated. Instead, hashed IDs enabled direct and efficient access to both animator parameters and states, allowing agents to transition instantly when needed.
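Unity exposes this as Animator.StringToHash; the gist, sketched language-agnostically in Python (the parameter names are placeholders), is to hash each name once at startup and do all runtime access through the cached ID:

```python
# Hash parameter names once at startup (Animator.StringToHash in Unity),
# then use the integer ID for every runtime access: no string scanning,
# no regular expressions on the hot path.
PARAMS = ["isAttacking", "moveSpeed", "jumpHeight"]
param_ids = {name: hash(name) for name in PARAMS}  # computed once

animator_values = {}  # stand-in for the animator's parameter table

def set_float(param_id, value):
    animator_values[param_id] = value

move_speed_id = param_ids["moveSpeed"]  # cached at init, reused every frame
set_float(move_speed_id, 4.5)
assert animator_values[move_speed_id] == 4.5
```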
Key advancements included:
Full integration of the token system, enabling agents to dynamically "buy" into states and reactions.
ScriptableObject-driven animation commands, enhancing responsiveness and customization.
Early work toward condition-based state selection, replacing the rigid four-state structure.
This iteration marked a significant leap in both architectural cleanliness and runtime performance, preparing the system for more complex, reactive AI behaviors.
Final Iteration
The final iteration focused on refinement, scalability, and expressive behavior. It was around this time that I received a break from other duties and could hunker down and actually write some solid logic. This version introduced a truly modular and event-driven architecture, allowing agents to manage any number of states through lightweight Boolean checks instead of expensive distance or health evaluations. Hierarchical prioritization ensured higher-level behaviors could cleanly override lower ones, enabling more dynamic decision-making.
Agent configuration reached peak flexibility with fully separated ScriptableObject settings for movement, rotation, location targeting, and animation. This not only improved data organization but also enhanced fine-tuning per behavior type.
Key enhancements included:
Unlimited event-driven states, each toggled by simple Boolean flags for performance.
Split and specialized ScriptableObjects for better reusability and customization.
Flair and Personality:
Ragdoll systems for dramatic deaths or recoveries.
Animation curves for jumps and movement speed, adding natural motion.
Poise-based events that triggered new behaviors from prolonged damage.
Improved token and probability systems:
Tokens could be toggled off for efficiency.
State selection now used cumulative weighted probability, recalculating only when needed.
Flexible state types replaced the old four-state model.
Dual-state machine architecture, enabling phase-based AI (think bosses with Phase 1 and Phase 2).
Advanced movement behaviors:
Leap mechanics using OffMeshLinks or dynamic points.
Fleeing using distance-based evaluation and predefined path nodes.
More intelligent and expressive navigation across the vertical plane.
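The dual-state-machine idea from the list above can be sketched as one machine stacked on another, shown here as a hedged Python illustration (the boss, phase names, and threshold are invented):

```python
# Phase-based AI: an outer decision picks which inner state machine is
# live, so Phase 1 and Phase 2 can carry entirely different state sets.
class StateMachine:
    def __init__(self, states):
        self.states = states  # name -> condition callable

    def active(self, agent):
        for name, condition in self.states.items():
            if condition(agent):
                return name
        return None

phase_one = StateMachine({"Chase": lambda a: True})
phase_two = StateMachine({"Enrage": lambda a: True})

def boss_brain(agent):
    # The outer machine swaps the inner one at a health threshold.
    machine = phase_two if agent["health"] < 50 else phase_one
    return machine.active(agent)

assert boss_brain({"health": 80}) == "Chase"
assert boss_brain({"health": 30}) == "Enrage"
```

Because each phase is a full machine, the second phase can reuse, replace, or extend the first phase's states without special-case logic.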