Presentation Feedback
Recently I gave a presentation to two of my tutors at the University of Suffolk to update them on the progress of my final project. To begin the presentation, I spoke about the milestones from my project proposal that I had completed, which mainly covered researching the various AI techniques and Line of Sight (LoS), and beginning to implement a Behaviour Tree (BT) and some behaviours for my Guard AI.
I then gave a rough breakdown of how a BT functions, before discussing some issues I have been having with the project and in my personal life, and how they have affected the timeline, leaving the project around two weeks behind the intended schedule. I then went on to explain the future of the project: what I intend to have implemented by the deadline, and what stretch goals I plan on implementing should there be enough time.
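Since the breakdown in the presentation was only a rough one, the sketch below illustrates the core idea: a BT is a tree of nodes that are "ticked" each frame, with composite nodes (selectors and sequences) deciding which child behaviours run and leaf nodes performing the actual actions. This is a minimal C++ illustration of the general technique rather than code from my project, and all of the names are generic:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Every node "ticks" and reports a status back to its parent.
enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// Selector: tries children in priority order until one succeeds or is running.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children) {
            Status s = child->tick();
            if (s != Status::Failure) return s;
        }
        return Status::Failure;
    }
};

// Sequence: runs children in order, failing as soon as any child fails.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children) {
            Status s = child->tick();
            if (s != Status::Success) return s;
        }
        return Status::Success;
    }
};

// Leaf: wraps a concrete behaviour such as "patrol" or "chase player".
struct Leaf : Node {
    std::function<Status()> action;
    Status tick() override { return action(); }
};
```

A guard's root node might then be a Selector whose children are "chase player", "investigate noise" and "patrol", so the highest-priority behaviour that can run wins each tick.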
After the presentation, the tutors were given a chance to ask questions about the project. I didn't receive much in terms of questions, however I was advised to try and find a few more journals around the topic. The video of the presentation can be found below.
I was also directed to watch a GDC talk by Martin Walsh about the AI in Splinter Cell: Blacklist and how they implemented their AI systems. Walsh (2018) talks about AI awareness and perception models and the characteristics involved in these models. The first point Walsh makes is that the AI needs to be fair when detecting the player, as players might get frustrated with the game if they feel cheated by the AI. The second point is that the AI needs to provide constant feedback to the player, such as through animations, speech etc., so that the player can understand what the AI is about to do. The final characteristic Walsh talks about is showing intelligence, because "if your opponents feel dumb, then you really get no satisfaction in beating them" (Walsh, 2018).
Walsh explains that there are four common awareness and perception models:
- Visual
- Audio
- Environmental
- Social and contextual
The first model he speaks about is visual, and how the AI's vision was set up in Blacklist. Walsh starts by explaining how most games use a standard "vision cone", as I have done with my game; in Blacklist, however, they use multiple different strategies to imitate real vision, which covers around 180 degrees.
In Blacklist the AI is set up with multiple detection zones. Close to the player there is the standard vision cone, with nested layers, so "the player would obviously get spotted faster in the inner most cone" (Walsh, 2018). Alongside the vision cone, the AI also has nested box- or coffin-shaped vision zones; this imitates the way you can see things in the distance directly in front of you, but maybe not off to the side.
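As a rough illustration of the layered-zone idea, the sketch below checks a target against a list of nested vision cones and returns the detection rate of the tightest zone the target falls inside. The structure, field names and ordering assumption (innermost, fastest zone first) are my own interpretation of the concept, not Blacklist's actual implementation:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// One nested vision layer: a cone with its own range and detection speed.
struct VisionZone {
    float range;          // how far this cone reaches
    float halfAngleCos;   // cosine of the cone's half-angle
    float detectionRate;  // awareness gained per second while inside
};

// Returns the detection rate of the innermost zone containing the target, or
// 0 if the target is outside every cone. Zones are ordered innermost first.
float DetectionRate(const Vec3& guardPos, const Vec3& guardForward,
                    const std::vector<VisionZone>& zones, const Vec3& targetPos) {
    Vec3 toTarget{targetPos.x - guardPos.x, targetPos.y - guardPos.y,
                  targetPos.z - guardPos.z};
    float dist = Length(toTarget);
    if (dist <= 0.0f)
        return zones.empty() ? 0.0f : zones.front().detectionRate;
    Vec3 dir{toTarget.x / dist, toTarget.y / dist, toTarget.z / dist};
    float facing = Dot(guardForward, dir); // cosine of the angle to the target
    for (const VisionZone& zone : zones)
        if (dist <= zone.range && facing >= zone.halfAngleCos)
            return zone.detectionRate;
    return 0.0f; // outside all zones
}
```

The box- and coffin-shaped zones Walsh describes would slot into the same pattern, just with a different containment test per zone shape.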
Alongside the detection zones, once the player is within a zone the AI will then raycast to 8 different bones on the protagonist's (Sam Fisher's) body, and depending on the stance of the player, a different number of bones have to be visible. For example, standing up requires the fewest bones to be visible to be detected, while crouching behind cover requires the most.
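That stance-dependent rule could be sketched roughly as below. The threshold values are illustrative assumptions of mine rather than Blacklist's actual tuning, and the per-bone raycast results are assumed to be supplied by the engine:

```cpp
#include <array>

enum class Stance { Standing, Crouched, CrouchedInCover };

// How many of the 8 sampled bones must be unoccluded before detection begins.
// Standing needs the fewest visible bones; crouched behind cover needs the
// most. (Illustrative thresholds, not the game's real numbers.)
int VisibleBonesRequired(Stance stance) {
    switch (stance) {
        case Stance::Standing:        return 2;
        case Stance::Crouched:        return 4;
        case Stance::CrouchedInCover: return 6;
    }
    return 8;
}

// boneVisible[i] would be filled in by raycasting from the guard's eyes to
// each of the 8 bones on the player's skeleton (head, shoulders, hips, ...).
bool PlayerDetectable(const std::array<bool, 8>& boneVisible, Stance stance) {
    int visible = 0;
    for (bool canSeeBone : boneVisible)
        if (canSeeBone) ++visible;
    return visible >= VisibleBonesRequired(stance);
}
```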
While this would be a very good system for me to implement, and it would give the player a much better chance of sneaking around within the scene, I will stick with the standard vision cone system while I work to complete my milestones, as my project is focused mainly on the BT and how that functions. If there is enough time I would definitely enjoy trying this method of line of sight out within my project as a stretch goal, although Walsh (2018) says that tuning the system "was a nightmare". That was probably the system as a whole, including the bone raycasts and detection times, rather than just the detection zones.
The other points that Walsh talks about are also very interesting, explaining how the guards move around using positional nodes and choke points in order to act intelligently and to guard doorways. They are also able to notice changes to the environment, such as a door that was initially shut but is now open, and react accordingly.
The final point Walsh (2018) talks about is social and contextual awareness, where the NPCs might be having a conversation. Some games just break out of the conversation if it is interrupted and carry on doing something else, while others rejoin the conversation at set points, "although this sounds robotic" (Walsh, 2018). I have just implemented conversations into my project, however I am only simulating the AI having a conversation, so there will be no dialogue or animations; a rough sketch of the idea is shown below.
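As a sketch of the interruption behaviour Walsh describes, the hypothetical class below steps through a scripted conversation one line at a time and, when interrupted, resumes from the line it was paused on rather than restarting from the top. None of this is taken from Blacklist or from my own code; it is just the set-point idea in miniature:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// A scripted conversation that can be paused and resumed at the current line.
class Conversation {
public:
    explicit Conversation(std::vector<std::string> lines)
        : lines_(std::move(lines)) {}

    void Interrupt() { interrupted_ = true; }   // e.g. the guard hears a noise
    void Resume()    { interrupted_ = false; }  // rejoin where it left off

    // Advances one line per call; returns false once the conversation ends.
    bool Tick() {
        if (next_ >= lines_.size()) return false;  // already finished
        if (interrupted_) return true;             // paused, not finished
        ++next_; // a full implementation would play dialogue/animation here
        return next_ < lines_.size();
    }

private:
    std::vector<std::string> lines_;
    std::size_t next_ = 0;
    bool interrupted_ = false;
};
```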
Reference List
Walsh, M. (2018). Modeling AI Perception and Awareness in Splinter Cell: Blacklist. [online] YouTube. Available at: https://www.youtube.com/watch?v=RFWrKHM0vAg [Accessed 4 Mar. 2019].