Organic AI
(Disclaimer: Heavy WIP)
The What
TL;DR
Human-like AI, parallel decision-making, data-oriented (ECS-ready), modular (component-based, no dependencies), easy controls for designers
The Why
The problem with Behavior Trees is that you end up duplicating whole trees for slightly different sets of circumstances. The issue with Finite State Machines is the difficulty of processing parallel behaviors. A purely score-based system like Utility AI incurs computing overhead, since it needs to constantly evaluate everything, while also overwhelming designers with endless number-tweaking and hacks to elicit desired actions.
But what if we take everything and use only the good parts?
This is my attempt at an organism-emulating AI system that is motivation-driven (a matrix of needs), sees the world through its own lens (individual perception), is capable of evaluating multiple implications of multiple stimuli (memory and preference interaction), and does what’s most logical or easiest at the moment, all while being performant.
I give huge credit to the work of AI researcher Dave Mark, whose GDC talks and input to the community across different platforms have solidified my ideas. Another is Bobby Anguelov, one of the AI programmers on Hitman, whose absolute gem of an AI talk has prevented me from coding myself into a corner.
The High Level
On the crust, we combine state machines and score-based systems so that the AI chooses a task to act on based on the level of stress it experiences, e.g., how hungry it is. It then evaluates the difficulty of lowering that stress (eating something) by checking whether it has ever seen any food, how far away that food is, etc. If the task is not too urgent (say, it's not too hungry), it proceeds to check if it has other needs. If it does, and those are easily resolvable, it'll go and do those first. For example, if it starts to get a bit thirsty while looking for food, and there's a glass of water nearby, it'll drink the water before proceeding to access the burger across the room.
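To make that loop concrete, here's a minimal C++ sketch of the selection logic; FNeedTask, Urgency, EaseOfCompletion, and the 0.8 cutoff are illustrative placeholders, not the actual Blueprint implementation.

```cpp
#include "CoreMinimal.h"

// Hypothetical per-need task state; names and ranges are illustrative.
struct FNeedTask
{
    FName Name;                    // e.g. "Hunger", "Thirst"
    float Urgency = 0.f;           // current stress level, 0..1
    float EaseOfCompletion = 0.f;  // how cheap it is to resolve right now, 0..1
};

// Pick the task to act on: an urgent need (starving) preempts everything;
// otherwise knock out whichever remaining need is easiest to resolve first,
// e.g. the glass of water next to us before the burger across the room.
const FNeedTask* SelectTask(const TArray<FNeedTask>& Tasks, float UrgencyCutoff = 0.8f)
{
    const FNeedTask* MostUrgent = nullptr;
    const FNeedTask* Easiest = nullptr;

    for (const FNeedTask& Task : Tasks)
    {
        if (!MostUrgent || Task.Urgency > MostUrgent->Urgency)
        {
            MostUrgent = &Task;
        }
        if (!Easiest || Task.EaseOfCompletion > Easiest->EaseOfCompletion)
        {
            Easiest = &Task;
        }
    }

    return (MostUrgent && MostUrgent->Urgency >= UrgencyCutoff) ? MostUrgent : Easiest;
}
```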
Pros:
- We’re only evaluating necessary information each frame
- It’s a relatively simple state machine where things are easily changeable.
The Mid Level
On the upper left, we have curves defining the AI’s response to certain stimuli. Below them are the controls for the AI’s memory capacity, i.e., how long it takes for a certain stimulus to fade from memory, or how prominent a stimulus should be when perceived for the first time.
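As a rough illustration of those two knobs, here is a sketch combining a designer-authored response curve with a fading memory entry. UCurveFloat and GetFloatValue are the stock Unreal curve API; FStimulusMemory, InitialProminence, and FadeTime are placeholder names of my own.

```cpp
#include "CoreMinimal.h"
#include "Curves/CurveFloat.h"

// Hypothetical per-stimulus memory entry; names are illustrative.
struct FStimulusMemory
{
    float Prominence = 0.f;  // how strongly the stimulus registers right now
    float FadeTime = 10.f;   // seconds until it fades from memory entirely
};

// Map a raw stimulus intensity through a designer-authored response curve
// (e.g. a "fear of fire" characteristic), then seed the memory with it.
void PerceiveStimulus(const UCurveFloat& ResponseCurve, float RawIntensity,
                      float InitialProminence, FStimulusMemory& OutMemory)
{
    OutMemory.Prominence = ResponseCurve.GetFloatValue(RawIntensity) * InitialProminence;
}

// Called every update: linearly fade the stimulus out of memory over FadeTime.
void TickMemory(FStimulusMemory& Memory, float DeltaSeconds)
{
    Memory.Prominence = FMath::Max(0.f, Memory.Prominence - DeltaSeconds / Memory.FadeTime);
}
```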
On the right is where we lay down every step the AI needs to take in order to meet its need. Notice how we’re homogenizing the tasks so that their actual actionable steps become similar, i.e., a “Thirst” task will look almost identical to the “Hunger” task, while an “Attack” task would also include Find (the target), Access (the target’s last known position), and Do (Shoot). The actual Data Table is shown below.
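In C++ terms, a homogenized row in that table might look roughly like the sketch below; FOrganicTaskRow and its fields are my guesses at the shape of the table, built on Unreal's stock FTableRowBase so it can back a Data Table asset.

```cpp
#include "CoreMinimal.h"
#include "Engine/DataTable.h"
#include "OrganicTaskRow.generated.h"

// Hypothetical Data Table row: "Hunger", "Thirst", and "Attack" all share
// the same Find / Access / Do shape, differing only in target and difficulty.
USTRUCT(BlueprintType)
struct FOrganicTaskRow : public FTableRowBase
{
    GENERATED_BODY()

    UPROPERTY(EditAnywhere) FName TaskName;               // e.g. "Hunger"
    UPROPERTY(EditAnywhere) FName TargetTag;              // what to Find, e.g. "Food"
    UPROPERTY(EditAnywhere) float FindDifficulty = 1.f;   // locating the target
    UPROPERTY(EditAnywhere) float AccessDifficulty = 1.f; // reaching its (last known) position
    UPROPERTY(EditAnywhere) float DoDifficulty = 1.f;     // the final action, e.g. Eat or Shoot
};
```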
The next step of this project is to implement the AI’s response to emergent stimuli by inserting steps or adding tasks at runtime.
Pros:
- We’ve created an interface between the designers and the AI in a data-oriented way.
- Designers can easily dictate an AI’s reaction to a stimulus by tweaking the AI’s preference towards it (curves as characteristics), the AI’s interest in it (memory weight), and the AI’s evaluation of the stimulus’s implications (step difficulties).
- The AI knows how close it is to completing each task, and can therefore choose the correct task when presented with equally urgent ones.
- Ease for designers: All AI tasks are centralized in one Data Table that is pulled OnBeginPlay.
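For concreteness, pulling that table at startup could look like the following sketch. AMyAIController, TaskTable, and RegisterTask are hypothetical names; GetAllRows is the stock UDataTable API.

```cpp
// Sketch: load every task definition from the centralized Data Table once,
// on BeginPlay. TaskTable is assumed to be a UPROPERTY-exposed UDataTable*.
void AMyAIController::BeginPlay()
{
    Super::BeginPlay();

    TArray<FOrganicTaskRow*> Rows;
    TaskTable->GetAllRows<FOrganicTaskRow>(TEXT("OrganicAI"), Rows);

    for (const FOrganicTaskRow* Row : Rows)
    {
        RegisterTask(*Row); // hypothetical helper that builds the runtime task list
    }
}
```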
The Low Level, the devil and the challenges
The lack of a cone trace in Unreal has been a great inconvenience across projects. So I’ll just build my own, I thought, and proceeded to get burned, for the fifth time, by external modeling software using a different front axis: everything ends up out of whack and you don’t know why, even though you’ve double-checked your math so many times. The dark side of integrating marketplace assets (even officially endorsed ones) is a story collectively untold.
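For reference, a common way to approximate a cone trace in Unreal is a sphere overlap followed by a dot-product angle test; the sketch below assumes that approach rather than documenting my exact implementation, and the collision channel is just an example.

```cpp
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "Engine/OverlapResult.h"

// Approximate cone trace: overlap a sphere of radius Length, then keep only
// the hits that lie within HalfAngleDegrees of the forward direction.
void ConeTrace(UWorld* World, const FVector& Origin, const FVector& Forward,
               float Length, float HalfAngleDegrees, TArray<AActor*>& OutActors)
{
    TArray<FOverlapResult> Overlaps;
    World->OverlapMultiByChannel(Overlaps, Origin, FQuat::Identity,
                                 ECC_Pawn, FCollisionShape::MakeSphere(Length));

    const float CosHalfAngle = FMath::Cos(FMath::DegreesToRadians(HalfAngleDegrees));
    const FVector ForwardNorm = Forward.GetSafeNormal();

    for (const FOverlapResult& Hit : Overlaps)
    {
        AActor* Actor = Hit.GetActor();
        if (!Actor)
        {
            continue;
        }

        // Inside the cone if the angle to the target is within the half-angle.
        const FVector ToTarget = (Actor->GetActorLocation() - Origin).GetSafeNormal();
        if (FVector::DotProduct(ForwardNorm, ToTarget) >= CosHalfAngle)
        {
            OutActors.Add(Actor);
        }
    }
}
```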
As an exercise in following the data-oriented paradigm, all data lives in struct form on a centralized Blueprint (the GameState). While the main benefit lies in the boosted performance from removing UObject overheads (and allowing further threading optimizations to approach ECS efficiency), it created a lot of hurdles that could easily be solved by doing things the object-oriented way. The biggest challenge is the forced use of maps (a.k.a. Dictionaries/2D arrays) with important actors/smart objects as the Key to their own data Value, combined with the current limitation of Blueprints where you can’t set elements within maps by reference.
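Schematically, the layout amounts to something like the sketch below (struct and field names are placeholders, not the project's actual data). Note how C++ sidesteps the by-reference limitation: TMap::Find hands back a pointer to the stored value, which is exactly what Blueprint maps currently won't give you.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/GameStateBase.h"
#include "OrganicGameState.generated.h"

// Hypothetical per-actor AI data stored as a plain struct (no UObject overhead).
USTRUCT()
struct FSmartObjectData
{
    GENERATED_BODY()

    UPROPERTY() float LastSeenTime = 0.f;
    UPROPERTY() FVector LastKnownLocation = FVector::ZeroVector;
};

// Centralized store: important actors / smart objects keyed to their own data.
UCLASS()
class AOrganicGameState : public AGameStateBase
{
    GENERATED_BODY()

public:
    UPROPERTY() TMap<TObjectPtr<AActor>, FSmartObjectData> SmartObjectData;

    void UpdateLastSeen(AActor* Actor, float Now)
    {
        // In C++ we can mutate the stored value in place via the returned
        // pointer; Blueprint maps force a copy-out / set-back round trip.
        if (FSmartObjectData* Data = SmartObjectData.Find(Actor))
        {
            Data->LastSeenTime = Now;
        }
    }
};
```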
Another problem I ran into was that, since we’re dealing with individual AIs’ memories and perceptions, things could actually be destroyed (but not yet garbage-collected) while the AI still thinks they exist. Compounded by the fact that I was building my own data structure that needs to account for all data updates, it was one hell of a development ordeal to switch the Keys of the data themselves to Soft Object references.
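A minimal sketch of what that soft-keyed memory might look like (FRememberedTarget and ResolveLiveTarget are placeholder names; TSoftObjectPtr and its IsValid() check are standard Unreal):

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "UObject/SoftObjectPtr.h"

// Placeholder per-target memory entry.
struct FRememberedTarget
{
    FVector LastKnownLocation = FVector::ZeroVector;
};

// Memory keyed by soft references, so entries survive the referenced actor
// being destroyed while the AI still "remembers" it.
TMap<TSoftObjectPtr<AActor>, FRememberedTarget> Memories;

// Returns the live actor if it still exists; a destroyed-but-remembered
// target fails the IsValid() check, so the AI falls back to its memory.
AActor* ResolveLiveTarget(const TSoftObjectPtr<AActor>& Key)
{
    return Key.IsValid() ? Key.Get() : nullptr;
}
```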
Needless to say, it’s a miracle that everything works as intended now, and I’m proud of this framework.