Tool Creation | Human Agents
Ordering food at a counter, on the other hand, would require the agent to first check if there is a lineup. If there is, the agent would need to find the location of the end of the line and move there. It would then have to calculate a new position each time the line updates until it reaches the front of the line. Once it is the agent's turn, it would need to define the location of the counter, move to it, and start playing an animation portraying the action of ordering food.
1. Check if there is a lineup for the counter.
2. Get a location depending on whether there is a lineup:
   a. If there is a lineup, get the location of the end of the line.
   b. If there is no lineup, get the location of the counter.
3. Walk to the location.
4. Play an animation depending on whether there is a lineup:
   a. If there is a lineup, play the wait animation and check once per second whether the line is moving forward. If it is, update the location for the agent to walk to. Once the agent is at the front of the line and the counter is free, go to step 2b.
   b. If there is no lineup and the agent is at the counter, play the food-ordering animation and loop it for a randomized time interval.
5. After the agent finishes ordering, set AgentState to another task (such as going to food pickup or finding a table).
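The steps above can be sketched as a small routine over the queue positions. The `Agent` and `Counter` classes and all method and state names below are placeholder assumptions for illustration, not the thesis tool's actual API:

```python
class Agent:
    """Minimal stand-in for the simulation agent; all names here are
    illustrative assumptions, not the tool's actual interface."""
    def __init__(self):
        self.position = (0.0, 0.0)
        self.animations = []          # log of animations played, in order
        self.state = "ObjectInteract"

    def walk_to(self, location):
        self.position = location

    def play(self, animation):
        self.animations.append(animation)

class Counter:
    """A food counter with an optional queue of positions, front to back."""
    def __init__(self, location, queue=()):
        self.location = location
        self.queue = list(queue)

def order_at_counter(agent, counter):
    # 1. Check if there is a lineup for the counter.
    if counter.queue:
        # 2a. Get the location of the end of the line; 3. walk there.
        agent.walk_to(counter.queue[-1])
        # 4a. Play the wait animation while queueing.
        agent.play("Wait")
        # Each time the line moves forward, update the location the agent
        # walks to, stepping through the queue positions toward the front.
        for position in reversed(counter.queue[:-1]):
            agent.walk_to(position)
        # Front of the line reached and the counter free: go to step 2b.
    # 2b. Get the location of the counter; 3. walk to it.
    agent.walk_to(counter.location)
    # 4b. Play the food-ordering animation (the randomized loop duration
    # would be handled in-engine; elided here).
    agent.play("OrderFood")
    # 5. Set AgentState to another task once ordering finishes.
    agent.state = "FindTable"
```

The per-second polling of step 4a is collapsed into a simple loop here; in the behavior tree itself it would run as a repeating service.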
As can be seen, all these scenarios follow a similar logic in which the agent requires a location, a direction, and an animation. Using these three attributes within the behavior tree gives us an intuitive method of portraying agents performing various tasks, and with them we can set up a series of nodes that accommodates any type of object interactivity within the simulation environment (Fig. 3.3.21). It is of course unrealistic to account for every scenario within the scope of this thesis; the best course of action at this stage of tool creation is therefore to establish a generic base object with definable parameters that works with a wide range of use cases and can easily be modified for future use cases and object typologies.
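One way to picture such a generic base object is as a small record holding exactly the three attributes named above plus bounds for the randomized loop interval. The field names and default values below are assumptions for illustration, not the tool's actual parameters:

```python
import random
from dataclasses import dataclass

@dataclass
class InteractableObject:
    """Generic base object for agent interaction; field names are
    illustrative assumptions, not the thesis tool's parameters."""
    location: tuple        # world position the agent walks to
    direction: float       # heading the agent aligns to on arrival (degrees)
    animation: str         # animation played while interacting
    min_duration: float = 2.0   # bounds for the randomized animation
    max_duration: float = 8.0   # loop interval, in seconds

    def interaction_time(self):
        """Pick a randomized duration for looping the animation."""
        return random.uniform(self.min_duration, self.max_duration)

# Very different object typologies share the same three attributes:
chair = InteractableObject(location=(4.0, 2.5), direction=180.0,
                           animation="Sit")
screen = InteractableObject(location=(1.0, 7.0), direction=90.0,
                            animation="Touch",
                            min_duration=5.0, max_duration=20.0)
```

New typologies would then only need new parameter values (or light subclassing) rather than new behavior-tree nodes.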
Object Interact State
This state defines what the agent does when it chooses to interact with an object. This process undoubtedly depends on the typology of the object: if the object is a book, the agent might go and read it; if the object is a touch screen, the agent might go and touch it; if the object is a chair, the agent might go and sit on it.
In all these cases, we would need to break these tasks down into their individual logical actions. With the chair example, the agent would first need to calculate the location of the chair. It would then need to walk to that location and align its body with the chair. Only when all these steps are taken would the agent be able to sit on the chair. Once seated, the agent may then wait for a randomized length of time before deciding to get up again. Translating this into machine logic, we would have the following steps:
1. Get the location of the front of the object.
2. Walk to the location.
3. Align the agent's body with the chair.
4. Play the sitting-down animation.
5. Play the sit animation and loop it for a randomized time interval.
6. Play the standing-up animation.
7. Set AgentState to Default State.
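As a sketch, the seven steps map one-to-one onto calls on the agent. The `Agent` and `Chair` stubs, method names, and the 10-60 second sit interval below are all assumptions for illustration:

```python
import random

class Agent:
    """Minimal stand-in for the simulation agent (illustrative names)."""
    def __init__(self):
        self.position = (0.0, 0.0)
        self.heading = 0.0
        self.animations = []      # log of animations played, in order
        self.state = "ObjectInteract"

    def walk_to(self, location):
        self.position = location

    def face(self, heading):
        self.heading = heading

    def play(self, animation, duration=0.0):
        self.animations.append(animation)

class Chair:
    def __init__(self, front_location, facing):
        self.front_location = front_location  # spot in front of the seat
        self.facing = facing                  # heading that aligns with it

def interact_with_chair(agent, chair):
    agent.walk_to(chair.front_location)   # 1-2. Get the location, walk to it.
    agent.face(chair.facing)              # 3. Align the body with the chair.
    agent.play("SitDown")                 # 4. Sitting-down transition.
    agent.play("Sit",                     # 5. Loop the sit animation for a
               duration=random.uniform(10.0, 60.0))  # randomized interval.
    agent.play("StandUp")                 # 6. Standing-up transition.
    agent.state = "Default"               # 7. Back to the Default state.
```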
Naturally, we would require a version of this state for every typology of object within the simulation. Interacting with a bookshelf, for example, would require the agent to calculate a location in front of the bookcase, turn towards it, and choose a book. The agent might then quickly skim the book before choosing another, and continue doing so until it finds the right book or loses interest.
1. Get a random location in front of the object (depending on the size of the bookcase, this can vary considerably).
2. Walk to the location.
3. Turn towards the bookshelf.
4. Play the retrieve-book animation.
5. Play the reading animation and loop it for a randomized time interval.
6. Play the animation for keeping the book OR putting the book back on the bookshelf (depending on what the agent chooses).
7. Set AgentState to Default (or another task) OR repeat step 1 to find another book from the bookshelf.
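The repeat in step 7 makes this a loop back to step 1 rather than a straight sequence. A sketch, with the `Agent`/`Bookshelf` stubs, animation names, and the 50/50 keep-or-return choice all assumed for illustration:

```python
import random

class Agent:
    """Minimal stand-in for the simulation agent (illustrative names)."""
    def __init__(self):
        self.position = (0.0, 0.0)
        self.heading = 0.0
        self.animations = []      # log of animations played, in order
        self.state = "ObjectInteract"

    def walk_to(self, location):
        self.position = location

    def face(self, heading):
        self.heading = heading

    def play(self, animation):
        self.animations.append(animation)

class Bookshelf:
    def __init__(self, front_span, front_y, facing):
        self.front_span = front_span  # (x_min, x_max) usable width in front
        self.front_y = front_y        # line the agent stands on
        self.facing = facing          # heading that faces the shelf

def browse_bookshelf(agent, shelf, keeps_book=lambda: random.random() < 0.5):
    while True:
        # 1. Get a random location in front of the object; the usable
        #    span depends on the size of the bookcase.
        x = random.uniform(*shelf.front_span)
        agent.walk_to((x, shelf.front_y))   # 2. Walk to the location.
        agent.face(shelf.facing)            # 3. Turn towards the bookshelf.
        agent.play("RetrieveBook")          # 4. Retrieve-book animation.
        agent.play("Read")                  # 5. Looped reading animation.
        if keeps_book():                    # 6. Keep the book, or...
            agent.play("KeepBook")
            break
        agent.play("ReturnBook")            # ...put it back and repeat (7).
    agent.state = "Default"                 # 7. Back to Default.
```

Passing `keeps_book` in as a callable keeps the random choice testable; in the behavior tree it would be an ordinary decorator or service check.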