
Artificial Intelligence in the Real and Virtual Worlds

Posted by Josh Townsend on April 25, 2017.

Artificial Intelligence and Games

While most of the tech media’s ‘next big thing’ buzz is still focused on VR, that technology is beginning to settle into its role, and gaming and technology blogs will soon be looking for a new source of excitement. Judging by some recent products and demonstrations, it seems almost certain that artificial intelligence will be next. Advances in the field have been on the fringe of media awareness for a good while now, but AI is poised to kick up a storm, and it seems to be diverging in several directions.

Boston Dynamics has been making impressive showings with its range of ambulatory robots for several years now. The new version of the Atlas model demonstrates an uncanny ability to keep its balance, interact with objects and move around its environment independently. While bipedal navigation might not be the first thing that springs to mind when ‘artificial intelligence’ is mentioned, it still requires some very sophisticated methods for the robot to understand and navigate its surroundings. However, to perform more complex interactions with its environment, like picking up an object or opening a door, the Atlas appears to need specialised images resembling QR codes to interpret what it needs to do.

The Atlas, like others among Boston Dynamics’ ambulatory robots, might start to look familiar to some gamers when the researchers push and trip it. Atlas stumbles and tries to regain balance, responding to the unexpected kinetic force in much the same way as characters in games that use the Euphoria physics engine, developed by NaturalMotion. First used in Rockstar’s GTA IV in 2008, the Euphoria engine predates robots like the Atlas by several years. That doesn’t mean Euphoria was a direct influence on the Atlas, but it’s possible that both drew on the same underlying advances.

For the most part, however, video gaming has been much more sluggish to advance AI than other technologies such as physics, lighting and special effects. Some of the most highly regarded AI systems in gaming come from titles like F.E.A.R., Halo and Black & White – all more than ten years old.

A document detailing the AI in 2004’s Killzone demonstrates some of the techniques and logic used to program the enemy combatants. The result was quite effective for the time, with enemies responding to the player’s tactics and approach by using cover, positioning, grenades and so on, but the document reveals that the logic behind this is less complicated than it seems. While certainly a huge leap beyond the AI of earlier first-person shooters – where any enemy seeming to act intelligently was almost certainly performing a pre-scripted routine – Killzone’s algorithms are closer to a computerised chessboard than to anything resembling human decision-making. Once the developers annotate certain pieces of the environment as elements the AI can interpret (defensive cover here, line of sight there, and so on), it becomes much easier to program chess-like responses to potential player tactics on the virtual battlefield, as the sketch below illustrates.
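The sketch below is a deliberately simplified illustration of that annotated-environment idea, not Killzone’s actual code: designers tag positions in the level, and the agent simply scores each tagged point against the player’s last known location and picks the best one. All names and weights here are invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class CoverPoint:
    x: float
    y: float
    breaks_line_of_sight: bool  # designer-placed annotation, not computed at runtime

def score(point, npc_pos, player_pos):
    """Higher is better: prefer nearby points that hide the agent from the player."""
    value = 0.0
    if point.breaks_line_of_sight:
        value += 10.0                                        # safety dominates the decision
    value -= 0.5 * math.dist((point.x, point.y), npc_pos)    # don't run across the whole map
    value += 0.1 * min(math.dist((point.x, point.y), player_pos), 20)  # keep some distance
    return value

def choose_cover(points, npc_pos, player_pos):
    return max(points, key=lambda p: score(p, npc_pos, player_pos))

if __name__ == "__main__":
    annotated_level = [CoverPoint(2, 3, True), CoverPoint(8, 1, False), CoverPoint(5, 9, True)]
    print(choose_cover(annotated_level, npc_pos=(1, 1), player_pos=(6, 6)))
```

Everything the agent “knows” comes from those annotations; take them away and the same code has no idea what cover even is, which is why this approach feels more like moving chess pieces on a prepared board than genuine reasoning.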

This kind of AI system was still very ambitious and impressive in 2004 but, although gaming has made great strides in presentation technology since then, the decision-making of computer-controlled agents is still not much more sophisticated than it was twelve or thirteen years ago. Especially with artificial intelligence gathering speed in other fields, it’s incongruous that gaming, usually such a technologically forward industry, has stagnated for so long in this area.

The Breakthrough of Conversational AI

It may be that no-one has yet designed a modern game which would benefit from more advanced AI. In shooters like Killzone and F.E.A.R., the enemy AI needs to be sophisticated enough to be entertaining, but ultimately exists only as an obstacle for the player. Advanced machine learning would not only be wasted on characters usually seen for just a few seconds, but could add unnecessary frustration if they became consistently able to predict, outwit and outmanoeuvre the player. Less combat-oriented games have more potential for making use of complex AI, but do not have the same kind of market share as action-based titles. However, whether the gameplay is violent or not, many games are making more use of emotive, character-based storytelling, and one particular field of artificial intelligence could, with a little more advancement, play into this very strongly.

Already poised to make some grand changes in both the tech and gaming worlds is what is currently referred to as ‘conversational AI’. Widely popularised by entertainment chatbots like Cleverbot, conversational AI takes a language-based approach to intelligence, using natural language processing to make a machine capable of understanding, interpreting and responding to language. The approach achieved some notoriety with Microsoft’s failed ‘Tay’, a semi-autonomous AI-based Twitter account. Regarded now as an ambitious but disastrous experiment, Tay was designed to learn conversational patterns and topics from other Twitter users, and eventually even to formulate her own opinions. To a certain extent Tay worked as intended, but the unregulated nature of the experiment led Microsoft to pull the account and heavily modify Tay’s code after Twitter users intentionally exposed her to controversial sentiments and iconography, pro-Nazi material in particular. The episode highlighted conversational AI’s main pitfall: while machine intelligence has reached an impressive ability to parse, learn and implement human language, Tay showed how difficult it is to make a machine that can emulate empathy or exercise judgement about what is or isn’t acceptable material.

Currently the most visible example of conversational AI is the Google Assistant. On the surface a simple refinement of virtual assistant apps such as Apple’s Siri and Microsoft’s Cortana, the Google Assistant offers one small but important improvement: the ability to read context and ‘remember’ earlier remarks or commands in the same exchange, along the lines of the sketch below. A CNET demonstration shows that a series of voice commands can come eerily close to a conversation with an actual intelligence, although the Assistant still makes some very computer-like misunderstandings.
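As a minimal sketch of that idea (invented names and logic, not Google’s implementation), the assistant only needs to remember the subject of the previous turn for a follow-up like “How tall is it?” to become answerable:

```python
import re

class ConversationContext:
    """Toy context tracker: remembers the last proper noun mentioned so a
    later pronoun can be resolved against it. Purely illustrative."""

    def __init__(self):
        self.last_subject = None

    def interpret(self, utterance):
        # Remember the last run of capitalised words (e.g. "Eiffel Tower"),
        # ignoring question words at the start of the sentence.
        stop_words = {"what", "who", "how", "where", "when", "why"}
        matches = re.findall(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", utterance)
        candidates = [m for m in matches if m.lower() not in stop_words]
        if candidates:
            self.last_subject = candidates[-1]

        # Resolve a bare "it" against whatever subject we remembered earlier.
        if self.last_subject:
            return re.sub(r"\bit\b", self.last_subject, utterance, flags=re.IGNORECASE)
        return utterance

ctx = ConversationContext()
print(ctx.interpret("What is the Eiffel Tower?"))  # subject remembered
print(ctx.interpret("How tall is it?"))            # -> "How tall is Eiffel Tower?"
```

A real assistant has to do far more than this, of course: parse intent, handle speech-recognition errors and track multiple entities. But it is this kind of carried-over state that distinguishes the Assistant’s behaviour from one-shot command matching.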

Gaming and AI’s Future Importance

Video games and technology have gone hand in hand since the earliest days of Pong prototypes and the first arcade machines, and the relationship is often symbiotic. Gaming has always needed technological advancements to expand the power available to it, but the demand for bigger, more impressive, better-looking games has itself become a significant driving force for advances in processing power and programming. Even the earliest games may have had an influence on the switch from text-based operating systems like MS-DOS to the graphical user interfaces of Windows and Apple’s System 1.

If gaming is to provide a similar impetus to the advancement of AI, there’s certainly some catching up to do first. However, if video games can adopt the technology and techniques of conversational AI and machine learning, and once gamers start to see the potential for AI to enhance the entertainment experience, an advanced, natural-feeling AI could become a much stronger selling point for games. The selling point is an important factor: it’s the primary motivator for larger studios to devote their impressive budgets to research and development, and for smaller, more individualistic studios to apply the ingenuity and creativity of their talent.

Gaming has already shown that the demand for quality and innovation can drive some impressive technological advances, and it’s also worth noting that technology with video game applications tends to get more widespread media exposure. Most of VR’s coverage has centred around either VR games or other forms of entertainment, and it was a video game – Pokémon Go – which brought AR more prominently into the public eye.

Currently, AI at the consumer level is still in a rudimentary state. Most advances in the technology are geared towards simulating intelligence rather than creating something capable of deductive reasoning or any kind of autonomous reaction; a ‘smart’ device may be able to read out a shopping list or remind you of appointments, but it’s simply responding to the user’s data according to predetermined behaviours. Human-like robots such as Hanson Robotics’ Sophia may show impressively lifelike facial responses and cross-reference emotional expressions with language cues, but in terms of artificial thought they still run along the same processes as a standard chatbot. The Google Assistant and other conversational AI may be a bridging point, and once AI starts to play a bigger role in dynamic, interactive experiences it is likely to develop and evolve rapidly.



Submitted in: Expert Views, Josh Townsend