Title: Genetic algorithms applied to robotics.
Subject(s): GENETIC algorithms; NOLFI, Stefano
Source: Electronic Engineering Times, 5/13/96 Issue 901, p37, 2p, 1c
Author(s): Johnson, R. Colin
Abstract: Reports that according to Italian researcher Stefano Nolfi, genetic algorithms evolve better robotic controllers than humans. Use of neural networks in solving automatic pattern recognition problems in artificial intelligence; Experiment conducted by Nolfi which used an autonomous robot that classifies objects into different shapes.
AN: 9605190227
ISSN: 0192-1541
Database: Academic Search Elite

Section: TECHNOLOGY

TECHNIQUE CLAIMED TO YIELD MORE ROBUST CONTROLLERS THAN TRADITIONAL AI

GENETIC ALGORITHMS APPLIED TO ROBOTICS

Rome -- Genetic algorithms evolve better robotic controllers than humans, according to a researcher with Italy's National Research Council.

Researcher Stefano Nolfi observed that neural networks solve one thorny problem of artificial intelligence (AI)--automatic pattern recognition--but that robotics AI applications also require intelligent controller design. It appears that genetic algorithms might be the ideal solution to that problem.

"By following the evolutionary approach, a simpler, more robust solution can be obtained," Nolfi said.

Robotics designers have worked around the shortcomings of traditional AI by harnessing neural networks to learn tasks that were difficult to program and by assigning the easier tasks to separate modules. The approach, sometimes called behavior-based robotics, isolates the robotic-controller function in independent modules; some of the modules are neural networks, but the rest use conventional AI programming techniques.

Nolfi contends that the decomposition process (whereby the various functions needed are isolated in independent modules) and the integration process (whereby the modules are made to work together) should also be automated. "Decomposition and integration should be an adaptation process, not the decision of an experimenter who has at his disposal only a few trials," he said.

Older, manual design methods can only try a few controller designs, since an engineer must decompose each one into modules, debug the modules and then integrate them back into a working system for testing. Nolfi postulated that a genetic algorithm could automate those design steps, making thousands of trials before settling on the best one, and that it would evolve a simpler, more robust solution.

To test his claim, Nolfi crafted an experiment with an actual, autonomous robot that classifies objects into different shapes. Both the decomposition/integration design method and the genetic-algorithm design method were then implemented and analyzed.

The robot used was a Khepera measuring just 55 mm in diameter x 30 mm high and weighing only 70 g (see Feb. 12, page 43). A Khepera balances on two wheels with dc stepper motors and two small Teflon balls serving as "training wheels."

The Khepera is equipped with eight infrared sensors--two on the back--though Nolfi used only the front six in his experiment. The Khepera's environment was a rectangular, open-topped box measuring 60 x 35 cm with 3-cm-tall walls all covered with white paper. The target object was a 2.3-cm-diameter cylinder that was the same height as the walls (3 cm) and was also covered with white paper.

The robot controller's task was to recognize the target object, go to it and stay nearby. The six sensors enabled the Khepera to "see" the target object only when it was at a certain intermediate distance--not too close, not too far--and when facing the target object at a suitable angle.

The Khepera software simulator was used to mock up all of the experiments, after which the real robots acted out the scenarios that proved most successful on the simulator. "No significant differences were found between simulated robots and real robots in their environment," said Nolfi.

The decomposition step identified the modules needed to solve the problem and overcame the "brittleness" of AI solutions by allowing one module to be a learning neural network. Deciding which modules to use was straightforward, since the Khepera already has many behavior-based modules freely available for downloading. The four modules chosen included one to look for the target object, another to avoid objects, a third to home in on and stay near the target object once recognized, and a fourth to recognize the target object.
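The sketch below (Python) illustrates how such a behavior-based decomposition might be wired together. It is a minimal illustration, not Nolfi's actual controller code: the class name, the module functions and the sensor thresholds are all hypothetical stand-ins.

```python
# Hypothetical sketch of a behavior-based controller built from the four
# modules described in the article: explore, avoid, home and recognize.

class BehaviorBasedController:
    def __init__(self, explore, avoid, home, recognize):
        self.explore = explore      # wander until something is sensed
        self.avoid = avoid          # steer away from walls and obstacles
        self.home = home            # approach and stay near the target
        self.recognize = recognize  # neural net: is the sensed object the target?

    def step(self, ir_readings):
        """Pick one module per control cycle and return (left, right) wheel speeds."""
        if max(ir_readings) < 0.1:          # nothing sensed: keep exploring
            return self.explore(ir_readings)
        if self.recognize(ir_readings):     # target seen: home in on it
            return self.home(ir_readings)
        return self.avoid(ir_readings)      # otherwise treat it as an obstacle

# Toy usage with placeholder behaviors (values are illustrative only).
controller = BehaviorBasedController(
    explore=lambda ir: (0.5, 0.5),          # drive straight ahead
    avoid=lambda ir: (0.5, -0.5),           # turn away
    home=lambda ir: (0.1, 0.1),             # creep toward the target
    recognize=lambda ir: sum(ir) > 2.5,     # stand-in for the trained net
)
print(controller.step([0.0] * 6))           # nothing sensed -> explore
```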

The first three modules were available virtually off-the-shelf, but the recognition module went beyond the ability of conventional AI techniques. Consequently, the recognition task was assigned to a back-propagation neural network.

The neural network used the six frontal sensors as its inputs and had just one output to designate whether the robot was "seeing" the target object. Three variations of the back-propagation neural architecture were tried: one with no hidden neurons (input neurons were connected directly to output neurons via variable synaptic weights); one with four hidden neurons to connect input neurons to output neurons; and a third with eight hidden neurons (more hidden neurons than input neurons).

Each neural network was then trained from 7,200 different input patterns recorded from the infrared sensors of the real robot at various distances and angles with regard to the target object. The neural network with the four hidden neurons correctly recognized the target 34 percent of the time, compared with 22 percent for the net with no hidden neurons. The network with eight hidden neurons did no better than the one with four.
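The following NumPy sketch shows a recognizer of the same general shape: six sensor inputs, one sigmoid output, and zero or more hidden neurons trained by back-propagation. It is an assumed reconstruction, not Nolfi's code, and the training data here is random placeholder data standing in for the 7,200 recorded sensor patterns.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_recognizer(X, y, n_hidden=4, lr=0.5, epochs=500, seed=0):
    """Back-propagation recognizer: 6 IR inputs -> optional hidden layer -> 1 output."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    if n_hidden == 0:
        # Perceptron-style variant: inputs wired straight to the output unit.
        W = rng.normal(scale=0.1, size=(n_in, 1))
        for _ in range(epochs):
            out = sigmoid(X @ W)
            W += lr * X.T @ ((y - out) * out * (1 - out)) / len(X)
        return lambda x: sigmoid(x @ W)
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # hidden-layer activations
        out = sigmoid(h @ W2)                # "target seen" estimate
        d_out = (y - out) * out * (1 - out)  # output error signal
        d_h = (d_out @ W2.T) * h * (1 - h)   # error back-propagated to hidden layer
        W2 += lr * h.T @ d_out / len(X)
        W1 += lr * X.T @ d_h / len(X)
    return lambda x: sigmoid(sigmoid(x @ W1) @ W2)

# Random placeholder patterns standing in for the 7,200 recorded sensor readings.
X = np.random.rand(7200, 6)
y = np.random.randint(0, 2, size=(7200, 1)).astype(float)
net = train_recognizer(X, y, n_hidden=4)
print(net(X[:5]))   # values near 1.0 mean "target object seen"
```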

Unfortunately, even after designing all the modules and integrating them, the resulting system performed poorly. An analysis of the system revealed that the "dead zones" in front of the robot from which it could not "see" the nearby target were the major cause of its poor performance.

Nolfi speculates in retrospect that he could have improved the decomposition/integration design by adding yet another module. The fifth module would have detected the dead zones, thereby allowing the robot to continue forward even when it could not see the object.

The second robotic solution eliminated all the manual programming steps of separate modules and evolved a neural network that was simpler--with no hidden neurons--yet more robust. More significant was that the genetic algorithm turned a disadvantage into an advantage: It evolved a novel navigation technique that actively perceived the dead zone as a guide to finding the target object.

With the genetic-algorithm design, the neural network had the same six inputs from infrared sensors, but its outputs directly controlled the wheels, instead of leaving that task to manually designed AI modules.
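A minimal sketch of that direct sensor-to-motor mapping follows. The article does not specify the output encoding, so the two outputs (one per wheel), the sigmoid squashing and the gene layout are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wheel_speeds(ir_readings, genotype):
    """Map six IR readings straight to two wheel speeds (no hidden layer).

    genotype: assumed layout of 14 genes -- a 6x2 weight matrix plus 2 biases.
    """
    W = genotype[:12].reshape(6, 2)
    b = genotype[12:]
    # Squash to [0, 1], then rescale to a symmetric speed range of [-1, 1].
    return 2.0 * sigmoid(np.asarray(ir_readings) @ W + b) - 1.0
```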

Family tree

The genetic population consisted of 100 individual neural-network "genotypes." The original "Adam/Eve" population was created by setting each individual's neural-network connection weight matrix to random values. Individuals were allowed to "live" for five epochs, each consisting of 500 actions that were basically pulses to the stepper motors. At the end of their lives, the five best individuals were allowed to procreate 20 offspring each, yielding 100 new individuals, and the process was repeated.
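The generational scheme just described can be written compactly. The sketch below assumes the 14-gene genotype layout from the previous snippet; the fitness function is a placeholder for the actual arena evaluation (five epochs of 500 actions), and the mutation-only reproduction, mutation strength and generation count are assumptions the article does not specify.

```python
import numpy as np

GENES = 14               # assumed: 6x2 weights + 2 biases from the sketch above
POP_SIZE = 100           # 100 individuals per generation
PARENTS = 5              # the five best individuals reproduce
OFFSPRING_EACH = 20      # each parent yields 20 offspring (5 x 20 = 100)
MUTATION_STD = 0.1       # assumed mutation strength

def evaluate(genotype):
    """Placeholder fitness; the real measure would score 5 epochs x 500 actions."""
    return -np.sum(genotype ** 2)           # stands in for time spent near the target

rng = np.random.default_rng(0)
population = rng.normal(size=(POP_SIZE, GENES))   # random "Adam/Eve" weights

for generation in range(100):                     # generation count is assumed
    fitness = np.array([evaluate(g) for g in population])
    parents = population[np.argsort(fitness)[-PARENTS:]]   # keep the top five
    # Each parent spawns 20 mutated copies, restoring the population to 100.
    population = np.concatenate([
        p + rng.normal(scale=MUTATION_STD, size=(OFFSPRING_EACH, GENES))
        for p in parents
    ])
```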

The evolved robot performed better by quickly finding the target and hovering nearby. The evolved robot was also simpler, using just one module--consisting of a neural network with no hidden neurons--whereas the decomposition/integration design comprised a neural network with hidden neurons for recognition; three traditional AI modules, for exploring, avoiding and homing; and a fifth, hypothetical "dead-zone detecting" module, to improve performance.

Analysis of the genetic algorithm also revealed a common strategy evolved by the robots, which used the dead zones as guideposts. They actively sought the boundary between the target and the dead zone. They then shuttled back and forth along that boundary until they found the minimum distance from which they could still see the target--at the edge of the dead zones on the left or right of the target--and hovered there, as if to follow the target object if it moved.

"This solution has also been proven to scale up," Nolfi claimed, citing an experiment with his mentor, Domenico Parisi, in which the evolved robots not only found objects but also could pick them up and carry them to the edge of the environment.

PHOTO (COLOR): Researcher Stefano Nolfi


By R. COLIN JOHNSON

