Notes 03/10/2014 Software Engineering II
Review of topics related to artificial intelligence
Based upon Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
http://aima.cs.berkeley.edu/
and The Uses of the Slime Mold Lifecycle as a Model for Numerical Optimization, p. 17-19
http://www.cs.okstate.edu/~monismi/dissertation/MonismithDissertation.pdf

Last time we discussed AI and Machine Learning.
We determined that an AI provides the appearance of human-like behavior or a sense of intelligence.
We have seen examples of this in real life before. These include:
  Recommendation engines
  Search engines
  Spell checkers
  Natural language recognition tools
  Agents within games, such as non-player characters
  etc.

We also discussed intelligent agents briefly.
  These are things that can perceive an environment using sensors.
  Agents can act upon the environment with effectors.
  In software, an agent could be a class or group of classes that act together to sense an environment and act upon it.
  Agent programs may act upon a percept using a lookup table.
  They may also interpret inputs as a state using a set of rules called situation-action (if-then) rules.
  After interpreting an input (percept), a state may be matched with a rule, and then an action may be carried out based upon that rule.

Many agents are goal-based. These may need some goal information to accomplish an action.
  Ex. using a GPS to get from Maryville to Kansas City
    Inputs: starting and ending locations
    Percepts: current location, current speed, etc.
    Actions: displaying the current location on a map, telling the driver where to go, etc.
  Achieving the goal will make the agent "happy".
  The search and planning areas of AI are devoted to achieving these goals.

Agents may also be utility-based.
  Goals alone do not give quality behavior:
    Achieving the goal => happy
    Not achieving the goal => unhappy
  We would prefer to have a tool that maps a state or group of states to a value representing happiness (i.e. utility).
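The percept -> state -> rule -> action cycle described above can be sketched as a minimal situation-action agent. The thermostat domain, percept fields, and rule names below are hypothetical, chosen purely for illustration; they are not from the notes.

```python
# Minimal sketch of a situation-action (if-then) agent.
# The "thermostat" domain, its percepts, and its rules are hypothetical examples.

def interpret_input(percept):
    """Interpret a raw percept (here, a temperature reading) as a state."""
    temp = percept["temperature"]
    if temp < 18:
        return "too_cold"
    elif temp > 24:
        return "too_hot"
    return "comfortable"

# Situation-action rules: a lookup table mapping each state to an action.
RULES = {
    "too_cold": "turn_heater_on",
    "too_hot": "turn_cooler_on",
    "comfortable": "do_nothing",
}

def agent_program(percept):
    """Match the percept's state with a rule and carry out that rule's action."""
    state = interpret_input(percept)
    return RULES[state]

print(agent_program({"temperature": 15}))  # turn_heater_on
print(agent_program({"temperature": 21}))  # do_nothing
```

A lookup-table agent would instead map raw percepts directly to actions; interpreting percepts as states first keeps the rule table small.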
Such a tool is called a utility function. This will give us the ability to make a trade-off when attempting to achieve many goals, or when a goal may only be achieved partially.

Environments may also be divided into types:
  Accessible or inaccessible
  Deterministic or non-deterministic - is the next state determined exactly by the previous one or not?
  Episodic or non-episodic - e.g. game levels; does the agent need to think ahead or not?
  Static or dynamic
  Discrete or continuous - chess vs. GPS

Need a definition of a problem.
  Problem - the information an agent needs to do its job. We need:
    the initial state of the problem
    the set of possible actions the agent may take
    which actions to carry out next based upon the previous action
  These define the state space of the problem:
    all states reachable from the initial state in any course of solving the problem.
  We may have multi-state problems, too. These require a state space set.

For searching, we know many problem solutions:
  Depth-First Search
  Breadth-First Search
  A*
  Many others
What if these are too slow? What if we are ok with a sub-optimal solution?

Consider the Genetic Algorithm for solving single-objective numerical optimization problems.
  State space: a bounded n-dimensional space
  Agents: individuals within the population, each with a location and a fitness function evaluator
  Actions:
    Crossover - combining an agent's data with another agent's
    Mutation - slightly modifying an agent's data
    Population update - replace or supplement the old population with the new one
  Rules: how to perform crossover and mutation
    Elitist
    Culling
    Aging
    Many others
  Utility function: defined by evaluation of f(x); smaller f(x) => better solution
  The algorithm needs a starting point: initial random locations for each individual within the search space.

Sample Genetic Algorithm:
  Initialize a random location and evaluate the function value at that location for each individual in a parent population of size NP.
  Archive (save) the best individual in the parent population.
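The crossover and mutation actions described above might look like the following for real-valued individuals. One-point crossover and per-gene replacement mutation are common illustrative choices; the notes do not prescribe particular operators, and the bounds and rate below are hypothetical.

```python
import random

# Sketch of GA variation operators for real-valued individuals in a bounded
# n-dimensional space. Operator choices and parameters are illustrative only.

def crossover(parent_a, parent_b):
    """One-point crossover: combine one agent's data with another agent's."""
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

def mutate(individual, lower, upper, rate=0.005):
    """Slightly modify an agent's data: with probability `rate` per gene,
    replace the gene with a fresh value drawn from the bounded search space."""
    return [random.uniform(lower, upper) if random.random() < rate else gene
            for gene in individual]

a, b = [1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]
c1, c2 = crossover(a, b)
print(c1, c2)  # each child mixes a prefix of one parent with a suffix of the other
```

Because the crossover point is chosen between 1 and len-1, each child always keeps at least one gene from each parent.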
  For the number of generations,
    For i = 1 to floor(NP/2),
      Select a pair of parents to form children using tournament selection (give parents with high fitness a high chance of being selected).
      Create a pair of children by performing recombination on the parents with a fixed probability (e.g. 60%-95%).
      If a pair of children was created,
        Perform mutation on the children with a fixed probability (e.g. 0.5%).
        Evaluate the function values of the children.
        Add the children to the child population.
      Else,
        Add the parents to the child population.
      End if.
    End for.
    For i = 1 to NP,
      If child[i].fitness < parent[i].fitness,
        parent[i] <- child[i]
      End if.
    End for.
    Archive (save) the best individual in the parent population.
  End for.

Keep software engineering in mind when using these methods.
  GAs should only be used when there are no better alternatives.
  The requirement to iterate over a number of generations may make the algorithm slow to converge.

Search strategies should consider:
  Completeness - does it always find a solution?
  Time complexity - time taken
  Space complexity - memory used
  Optimality - does it always find the best solution?
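The pseudocode above can be sketched end-to-end in Python. The objective (the sphere function), the 2-D bounds, and the parameter values (NP, generation count, probabilities) are hypothetical stand-ins, not values prescribed by the notes; since smaller f(x) means a better solution, "high fitness" is implemented as low f(x).

```python
import random

# Sketch of the sample genetic algorithm, minimizing a hypothetical objective
# (the sphere function) over a bounded 2-D space. All parameters illustrative.

def f(x):
    return sum(xi * xi for xi in x)  # smaller f(x) => better solution

def tournament(pop, fits, k=2):
    """Pick the fitter (lower f) of k randomly chosen individuals."""
    i = min(random.sample(range(len(pop)), k), key=lambda j: fits[j])
    return pop[i]

def run_ga(n=2, NP=20, generations=50, lo=-5.0, hi=5.0,
           p_cross=0.9, p_mut=0.05):
    # Initialize a random location for each individual and evaluate it.
    parents = [[random.uniform(lo, hi) for _ in range(n)] for _ in range(NP)]
    fits = [f(p) for p in parents]
    best = min(parents, key=f)  # archive the best individual
    for _ in range(generations):
        children = []
        for _ in range(NP // 2):  # floor(NP/2) pairs
            pa, pb = tournament(parents, fits), tournament(parents, fits)
            if random.random() < p_cross:
                # Recombination: one-point crossover.
                point = random.randint(1, n - 1) if n > 1 else 0
                ca = pa[:point] + pb[point:]
                cb = pb[:point] + pa[point:]
                # Mutate each gene with a small fixed probability.
                ca = [random.uniform(lo, hi) if random.random() < p_mut else g for g in ca]
                cb = [random.uniform(lo, hi) if random.random() < p_mut else g for g in cb]
                children += [ca, cb]
            else:
                children += [pa[:], pb[:]]
        # Replace parent[i] when child[i] is fitter (lower f).
        for i in range(NP):
            if f(children[i]) < fits[i]:
                parents[i], fits[i] = children[i], f(children[i])
        best = min([best] + parents, key=f)  # archive the best so far
    return best

random.seed(0)
best = run_ga()
print(best, f(best))
```

Note how the archive step makes the reported result monotonically non-worsening across generations, even though individual parents may be replaced; this is the elitist flavor mentioned under the rules above.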