I have used intelligent agents in many of my past projects.

Each agent has its own Neural Network (NN), internally defined as an object. The NN inputs usually correspond to sensors, and the outputs typically control triggers, game states, or other decision-making values. In the chess example, the agent evaluates a specific zone on the board: the NN inputs are defined as locations on the board, each holding the value of the chess piece currently occupying that square. An internal topology of (hidden) connections carries adjustable "weights"; the value of each weight determines how strongly that particular connection influences the value passed from the previous connection to the next, all the way to the output, which in this example holds a score for how well the chess piece performs within this pattern. This is a very general description; in reality there are many layers of NN objects, more complexity in the scoring function, and the weights are adjusted in the mutation function when a child is created from a parent to be included in the population for testing in the next generation.
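The structure described above can be sketched in a few lines of Python. This is a minimal illustration, not my production code: the class name, layer sizes, and the Gaussian mutation step are assumptions for the example, and a real system would have many more layers and a richer scoring function.

```python
import random

class NeuralNetwork:
    """A minimal single-hidden-layer NN: board-square inputs -> one score output."""

    def __init__(self, num_inputs, num_hidden):
        # One weight per (input -> hidden) and (hidden -> output) connection,
        # initialized to small random values.
        self.hidden_weights = [[random.uniform(-1, 1) for _ in range(num_inputs)]
                               for _ in range(num_hidden)]
        self.output_weights = [random.uniform(-1, 1) for _ in range(num_hidden)]

    def evaluate(self, inputs):
        """inputs: piece values on the squares of the zone being evaluated."""
        hidden = [sum(w * x for w, x in zip(row, inputs))
                  for row in self.hidden_weights]
        # The single output holds a score for how well this pattern performs.
        return sum(w * h for w, h in zip(self.output_weights, hidden))

    def mutate(self, step=0.1):
        """Create a child whose weights are perturbed copies of the parent's."""
        child = NeuralNetwork(len(self.hidden_weights[0]), len(self.hidden_weights))
        child.hidden_weights = [[w + random.gauss(0, step) for w in row]
                                for row in self.hidden_weights]
        child.output_weights = [w + random.gauss(0, step)
                                for w in self.output_weights]
        return child
```

The `mutate` method is where the evolutionary step happens: the child inherits the parent's topology but with slightly perturbed weights, ready for the next generation's competition.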
Agents, aka: Intelligent Agents, Non-Player Characters (NPCs)

In artificial intelligence (AI), an intelligent agent (aka software agent or NPC) is a programmed autonomous entity that receives data through various sensors and has a set of behaviors governing how it reacts to those inputs. Within a software simulation, agents are set up and behaviors are defined, which may consist of short-term and long-term goals; a scoring system may be implemented to compare relative performance. There are several classes of agents (see: Russell & Norvig, Artificial Intelligence: A Modern Approach, Pt. 1, Ch. 2): simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Most games use only the first two or three types and do not include any form of adaptive learning capability, due to the complexity of the programming. My focus in programming agents has been to extend traditional pre-programmed capabilities by adding adaptive learning through artificial Neural Network (NN) objects, Fuzzy Logic, and the remarkably effective optimization of Evolutionary Computation (EC).
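To make the simplest of those agent classes concrete, here is a hedged sketch of a simple reflex agent, the kind most games stop at: it maps the current percept directly to an action via condition-action rules, with no internal model and no learning. The percept and action names are invented for illustration.

```python
class SimpleReflexAgent:
    """Maps the current percept directly to an action via condition-action rules."""

    def __init__(self, rules):
        # rules: a dict of {percept: action}, e.g. {"player_visible": "attack"}.
        self.rules = rules

    def act(self, percept):
        # No memory, no model of the world: the same percept always
        # produces the same action, with a default when no rule matches.
        return self.rules.get(percept, "idle")
```

A guard NPC built this way never improves; adding NNs and EC on top of such a rule set is what turns it into a learning agent.

```python
guard = SimpleReflexAgent({"player_visible": "attack",
                           "noise_heard": "investigate"})
guard.act("player_visible")  # "attack"
```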
Generally, several agents are set up in the simulation (I usually start with 100) to compete against each other; opponents can be chosen either randomly or sequentially. In chess, I selected all agents sequentially, so that agent #1 competed against agent #2, then agent #3, and so on until every agent had been selected for competition. After all agents had competed 10 times, their scores determined the portion of the population that was the "best fit" for that round of competition.
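One reading of that selection scheme can be sketched as a round-robin tournament followed by truncation selection. This is an assumption-laden illustration: the function names, the `play_match` callback, and the kept fraction are mine, not taken from the original project.

```python
def round_robin_selection(agents, play_match, rounds=10, keep_fraction=0.5):
    """Pair every agent against every other, tally scores over several
    rounds, and return the best-fit portion of the population."""
    scores = [0] * len(agents)
    for _ in range(rounds):
        # Sequential pairing: agent 0 meets agent 1, then agent 2, ...
        for i in range(len(agents)):
            for j in range(i + 1, len(agents)):
                # play_match returns +1 if the first agent wins,
                # -1 if it loses, 0 for a draw.
                result = play_match(agents[i], agents[j])
                scores[i] += result
                scores[j] -= result
    # Keep the best-scoring fraction as parents for the next generation.
    ranked = sorted(range(len(agents)), key=lambda k: scores[k], reverse=True)
    keep = max(1, int(len(agents) * keep_fraction))
    return [agents[k] for k in ranked[:keep]]
```

The survivors would then be mutated to refill the population before the next generation competes.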
The wonderful thing about using this approach is that you can add NNs and EC to an existing expert system: you can take a system already programmed as well as a human expert can manage, and the adaptive learning will continue to attempt to discover further optimizations of the agent's performance. Dynamic learning can also be turned off in order to create agents with specific levels of difficulty for a human player to compete against.
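That hybrid idea can be sketched as a thin wrapper around a hand-written evaluation function, with an evolved correction term and a learning toggle. The class name, the single-scalar correction, and the mutation step are simplifying assumptions for illustration; a real system would evolve full NN weights rather than one bias.

```python
import random

class HybridEvaluator:
    """Wraps a pre-programmed expert evaluation with a learned correction.
    Freezing learning locks the agent at a fixed difficulty level."""

    def __init__(self, expert_eval, correction=0.0, learning=True):
        self.expert_eval = expert_eval  # the hand-coded expert function
        self.correction = correction    # evolved adjustment on top of it
        self.learning = learning

    def score(self, state):
        # Start from the human expert's score, then apply what evolution
        # has discovered on top of it.
        return self.expert_eval(state) + self.correction

    def mutate(self, step=0.05):
        # With learning turned off, the agent stops adapting entirely.
        if not self.learning:
            return self
        return HybridEvaluator(self.expert_eval,
                               self.correction + random.gauss(0, step),
                               learning=True)
```

Creating several evaluators, evolving them for different lengths of time, and then freezing each one is one simple way to produce a ladder of opponent difficulties.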
Agents can be set up to do many things in many different types of software, and you may already be familiar with those used in games such as combat (shooters), flight, racing, and other simulations. Other examples include agents that learn to play Chess and Checkers better than the human who programmed them, perform pattern recognition to verify a real-world flight path in real time, discover the best-performing path and/or sequence of operations to survive and reach a goal (used in U.S. military UAVs), discover the best match to mask a sound or broadcast signal, discover the best molecule to fit damaged DNA (cancer research), discover "coding" sequences of miRNA (micro RNA), and other things that I'm not at liberty to discuss but that you can imagine for yourself.

Copyright (c) 2014 Touch Play Games, all rights reserved