Title: The bad boy of robotics.
Subject(s): BROOKS, Rodney; MASSACHUSETTS Institute of Technology (Cambridge, Mass.). -- Artificial Intelligence Laboratory; ROBOTS
Source: Popular Science, Jun95, Vol. 246 Issue 6, p88, 5p, 1 diagram, 1c
Author(s): Langreth, Robert
Abstract: Profiles Rodney Brooks, a professor at the Massachusetts Institute of Technology's Artificial Intelligence Lab in Cambridge. Educational background; Career history; Criticism of Brooks' work in robotics; Focus on his project, Cog, the humanoid robot.
AN: 9505263251
ISSN: 0161-7370
Database: Academic Search Elite

 

THE BAD BOY OF ROBOTICS

The world's most controversial roboticist aims to create the first conscious android.

You know you've found Rodney Brooks' office when you read what's on the door. It's covered with damning reviews of his research by other scientists.

One reads: "This paper is an extended, wandering complaint that others do not view the author's work as the salvation of mankind. There is no scientific content here."

Another says: "No doubt [Brooks' robots] amuse the MIT graduate students, but they have a number of fundamental limitations: no non-trivial communication, no memory, no intelligence, and no inference capability. All you get is dumb, blind locomotion."

The object of all this scientific wrath is a shaggy-haired Australian roboticist who looks a decade younger than his 41 years. Rushing in late for a morning interview, the stocky professor first apologizes ("I was working until 2:30 in the morning; that's the only time I can get any quiet"); then, with a little prompting, he launches into a commentary on his door-side critics. That "wandering complaint" of a paper, he reports with a mischievous grin, was eventually published in a prominent journal. "The editors asked me to expand it, too." He points to another disparaging quote. "I found out who wrote that one," he goes on, prancing about the foyer as he speaks. "He lost his job a couple of years ago...nothing to do with this, of course."

A professor at the Massachusetts Institute of Technology's Artificial Intelligence Lab, Rodney Brooks has made a career of bucking the scientific establishment. He's done it by building robots that violate the most sacred tenet of artificial intelligence (AI): Before a robot can do anything useful (like navigate through a cluttered room), it first must be able to reason. Nonsense, Brooks began arguing in the early 1980s. To prove his point, Brooks spent the following years building a series of wheeled and insectlike robots that, without any reasoning abilities, did everything from stealing soda cans off desks to traversing boulder-strewn fields. Brooks' success--not to mention his unflinching candor--has made him one of the most controversial figures in his field.

There's only one problem, says Brooks, whose students gave him a T-shirt labeled, "The Bad Boy of Robotics": These days, too many people agree with him. His insect robots, once scorned, are now all the rage. Everyone seems to be building one, whether they credit Brooks or not. "He's very innovative," trumpets mind researcher John Haugeland of the University of Pittsburgh. "Everyone in [Brooks' lab] is breathtakingly smart," agrees Tufts University's Daniel Dennett. Dennett, a philosopher and author of 1991's Consciousness Explained, often collaborates with Brooks.

So to remain on the cutting edge, Brooks has moved on to a much more radical project: Cog, the humanoid robot. On a tabletop in a lab one floor above his office, Brooks' graduate students are assembling Cog's skeletal body. Cog already has eyes, a torso, one arm, and a brain; soon students will attach the other arm, hands, ears, and a mouth, everything but legs and a nose. After enough of Cog's hardware is completed, the robot will gradually be turned on over a period of several years. It will be "born" without reasoning abilities, just a set of pre-programmed "desires." But Brooks hopes it will learn, just like a human infant, how first to move, then to interact with humans, and finally to listen and speak. Brooks even debates with his graduate students over whether the robot will be conscious.

Cog, in sum, is the most ambitious automaton ever attempted. And Brooks likes it that way.

Rodney Allen Brooks mastered unorthodox methods early in life. As a science-fiction-inspired computer buff growing up in remote Adelaide, Australia, he had to: There were no computers in Adelaide in the 1960s. So the precocious Brooks built his own.

When he was 12, he created a tic-tac-toe-playing machine with old telephone switchboard switches, light bulbs, and soldering wire; the gadget mystified Brooks' parents, who couldn't understand why the computer won so often. (The computer always went first.) Later, Brooks concocted a simple "learning machine" from ice cube trays filled with liquid copper sulfate, a conductor. When he passed electric currents through different ice cube slots, copper would solidify along the route of the current, essentially "burning" that pattern in the gadget's memory.

At Flinders University in South Australia, Brooks gained access to the school's lone computer, an IBM mainframe, and promptly reprogrammed its entire operating system because he "didn't trust" how IBM had done it. Other users never figured out why the computer suddenly started booting up more efficiently. After he completed a master's degree in mathematics, it dawned on him that he "was never going to make it on the world stage with a Ph.D. from Flinders." Luckily, Brooks' thesis impressed admissions officers at Stanford ("They didn't understand it, so they figured that I must be smart"), and in 1978 Brooks shipped off to California.

It was here that the young Aussie developed one of his basic research strategies. "I figure out the fundamental, unstated assumption in everyone else's work, then negate that assumption." At the time, everyone else was struggling to teach mobile robots how to recognize obstacles so they could then avoid them; Brooks concluded it would be much simpler to train them how to identify open spaces.

At Stanford, and then as a junior researcher at MIT in the early 1980s, Brooks grew disenchanted with the direction in which AI research was going. Since the early days of robotics, vision had remained a problem. Tasks humans do without thinking--like recognizing a coffee cup on a cluttered desk, then picking it up--turned out to be amazingly difficult for robots.

Over the years, practitioners of "traditional AI" had developed increasingly elaborate "mental maps" to help robots understand their surroundings. Problem was, automatons relying on mental maps would often spend minutes, or even hours, "thinking" about what they saw before they made a single move. Even so, most couldn't meander through unfamiliar surroundings without crashing into things.

Real-world intelligence can't work this way, Brooks thought. While flying back home from a conference in 1984, he scanned a scientific paper on carrier pigeons. Pigeons obviously don't maintain human-style maps of where they have been, yet somehow they deploy a variety of instincts to find their way home.

An idea fermented in Brooks' head. The concept was to eliminate robots' mental maps and replace them with a hierarchy of simple preprogrammed behaviors, or intuitions. Like baby animals, robots would react to the world based on these instincts. Brooks dubbed his idea "intelligence without reason."

Over the next several years, Brooks developed a series of insect robots with then-student Colin Angle to demonstrate this principle. These robots didn't think, they just traveled. Take the six-legged beast called Genghis, Brooks' favorite robot: Its primary behavior was to chase after anything that moved. If it bumped into an object while in hot pursuit of, say, a person who happened to be walking by, a lower-level behavior--such as "step over the obstacle"--would take over. And if the object was too large to step over, an even lower-level behavior would direct the robot to back off and try another direction. Like ants wandering across a forest floor, Genghis and the other mechanical arthropods could march adroitly through a rock-strewn space without the benefit of planning or reasoning.
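
In spirit, that layered arbitration fits in a few lines of code. The sketch below (in Python) merely illustrates the idea as described above; it is not Brooks' actual subsumption-architecture software, and the behavior names, sensor readings, and step-height threshold are all invented for the example.

    # A toy version of Genghis' layered control: reflexes are checked
    # first and take over from the default chasing behavior whenever
    # their trigger conditions are met. All names and values here are
    # hypothetical.
    MAX_STEP = 5  # centimeters; an arbitrary illustrative threshold

    def back_off(senses):
        """Lowest-level reflex: the obstacle is too big, so retreat."""
        if senses["bumped"] and senses["obstacle_height"] > MAX_STEP:
            return "back up and try another direction"
        return None

    def step_over(senses):
        """Next level up: a leg bumped something small enough to clear."""
        if senses["bumped"] and senses["obstacle_height"] <= MAX_STEP:
            return "lift leg and step over"
        return None

    def chase(senses):
        """Top-level behavior: head toward anything that moves."""
        if senses["motion_seen"]:
            return "walk toward motion"
        return None

    BEHAVIORS = [back_off, step_over, chase]  # most reflexive first

    def control_step(senses):
        for behavior in BEHAVIORS:
            action = behavior(senses)
            if action is not None:
                return action
        return "stand still"

    print(control_step({"motion_seen": True, "bumped": False,
                        "obstacle_height": 0}))   # walk toward motion
    print(control_step({"motion_seen": True, "bumped": True,
                        "obstacle_height": 12}))  # back up and try another direction

Note that nothing here plans a route; each control cycle simply picks the first behavior whose trigger fires, which is all the "intelligence" Genghis needed.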

The concept shook up the AI establishment--and for good reason. "If what I'm arguing [about the nature of intelligence] turns out to be true," Brooks explains, then some other roboticists' careers "will have been a waste of time . . . . But that's the way it always is in science." Nor did Brooks stop at insects: He claimed his behavior-based theory of robotics could be used to create higher intelligence as well. Intelligence, he argued, arises not so much from abstract reasoning, as AI researchers had usually assumed, as from interacting with, reacting to, and learning from the physical world. If one forged robots capable of complicated enough interactions with their environment, intelligence would eventually happen. "Humans," Brooks states, "may not be a whole lot smarter than dogs."

The roboticist's original plan was to progress from robot insects to automatons modeled on increasingly advanced animals: first a mechanical lizard, then perhaps a dog, and finally a human. But all that changed on January 12, 1992. For sci-fi neophytes, that's the day in the movie 2001: A Space Odyssey that the renegade computer HAL is switched on. Brooks was showing the film at his house to celebrate. Suddenly, "it started eating away at me: We don't have a space station like in the movie . . . we don't have a moon base . . . and it looked like we weren't even going to have HAL by 2001."

Over the next few days, "I started thinking, 'Gee, I want to build HAL.'" Deeper motivations also goaded him: Critics had attacked his insect robots for not doing anything useful; designing a humanoid would prove once and for all that his theories applied to higher intelligence. A few weeks later he shocked his graduate students by announcing that they would do just that.

"I like to build cool things," Brooks says. "What could be cooler than the great science-fiction dream of a human-equivalent robot?"

It is late afternoon. Brooks is meeting with his team of about 12 students to discuss what it means to be conscious. How will one be able to tell if Cog is conscious? Is that even an answerable question? A visiting philosopher does most of the talking, responding to students' questions with terms like "token physicalism" and "homunculus" in a heavy Germanic accent. "If it walks like a duck and quacks like a duck, then it is a duck," Brooks volunteers at one point.

The meeting drags on, like a late-night PBS documentary that hasn't been sufficiently edited. Brooks is restless. He twirls an orange pen around his finger. He pulls a curl of hair across his forehead. He grips his left shoulder with his right hand. He tugs at the puffy bags under his eyes. Finally, he pipes up again. "All of this speculation is way too premature. Maybe by building something we can open up another can of worms."

Crafting Cog is something that Brooks finds little time to do these days. Between speeches, classes, supervising grad students, and "you guys"--Brooks points accusingly at a reporter--"I can't get anything done." Spending a day with Brooks means watching a mad rush to complete these tasks and free a few precious minutes alone at the end of the day.

Still, Cog is behind schedule. This isn't merely because Brooks is short on time, or money. (No major U.S. funding agency dares support the radical project directly.) It's also because of Cog's sheer complexity. Brooks' theory that robots learn from interacting with the real world requires Cog to mimic humans as much as possible. And that means hardware of unprecedented sophistication. For example, Cog's forearms will contain springs that sense resistance, to prevent them from breaking things.
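
The physics behind those springs is just Hooke's law: the farther a spring in the joint twists, the harder the arm must be pushing against something. Here is a minimal sketch of that logic in Python, with made-up names, stiffness, and torque values; it illustrates the principle, not the actual Cog design.

    # Hypothetical spring-in-series joint: the torque felt at the joint
    # is proportional to how far the spring is twisted between the motor
    # and the arm link. All constants are invented for illustration.
    SPRING_STIFFNESS = 2.0  # newton-meters per radian (assumed)
    TORQUE_LIMIT = 1.5      # give up before breaking things (assumed)

    def joint_torque(motor_angle, link_angle):
        """Torque at the joint, from the twist across the spring (radians)."""
        return SPRING_STIFFNESS * (motor_angle - link_angle)

    def move_arm(trajectory):
        """Step through (motor, link) angle readings, stopping on resistance."""
        for motor_angle, link_angle in trajectory:
            if abs(joint_torque(motor_angle, link_angle)) > TORQUE_LIMIT:
                return "blocked: stop and try something else"
        return "motion completed"

    # The spring twists more and more as the arm pushes on an obstacle.
    print(move_arm([(0.10, 0.10), (0.50, 0.30), (1.00, 0.20)]))  # blocked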

Like a lenient orchestra conductor, Brooks draws the best from the team's individual strengths. Each player is given a different part to play (in this case, assembly and design of Cog's body parts), but Brooks lets them determine how best to play those parts. The result is a sort of intellectual polyphony, with lines of research going in many directions at once. "Rod doesn't like telling people what to think," explains Cog team member Joanna Bryson.

But it's Brooks, with MIT colleague Lynn Stein, who'll oversee Cog's progression from wired-together chunks of hardware to a functioning humanoid. How this will happen remains rather hazy. Brooks and company will use what is known about infant development as a guide. Cog will start with simple tasks, like moving its hands; then it will learn to play with simple objects and toys. Several years from now, it might learn to understand simple speech.

Like humans, Brooks says, "Cog will have hormones" built into its brain. In this case, though, the brain is a collection of computer chips mounted on a rack next to its body. For example, Cog will probably recognize someone in the lab as its mother and try to attract her attention. Cog might develop a pain-avoidance instinct after someone hits it in the face a few times.

Beyond that, the plans get vague fast. A paper Brooks and Stein wrote on Cog devotes 11 pages to hardware but offers just a few short paragraphs on Cog's mental development. But Brooks isn't worried. "I will call the project successful," he says, "if we can get to a point where we can leave Cog switched on for 24 hours, and Cog knows it has been on for 24 hours."

Not surprisingly, many critics give Brooks little chance of success--most prominently, AI godfather and fellow MIT professor Marvin Minsky. On the surface, Minsky seems an unlikely antagonist, since his theory of intelligence bears a striking similarity to Brooks' own. But Minsky, like Brooks, loves verbal sparring, and apparently hates sharing the spotlight--something he's had to do more often recently.

One day last spring, Minsky wandered into a seminar where one of Brooks' students was describing Cog, and began attacking the project as a waste of time. Hearing of the commotion, Brooks rushed to the room, and a no-holds-barred shouting match raged for an hour, according to students present. Things settled down when the two agreed to a formal debate.

Although the debate didn't happen (it was canceled when the media found out about it), Brooks--who never wanted a fight with his boyhood idol--insists the dispute is history. "Really, we're friends now," he maintains earnestly. Then, like a child testing how much he can get away with, he drops a bombshell: "Marvin is going to join the project." But Minsky himself, accosted at an MIT cocktail party, flatly denies the claim. "I'll help out whenever they have an interesting idea," Minsky says mockingly. "That could happen aaa-ny day now." He adds, "Cog is not a project, it's a press release."

The latter comment hits closer to the mark. While most people fret about rocking the boat, Brooks worries about not rocking the boat enough. "When too many people agree with me, I worry I'm not trying something radical enough."

With Cog, Brooks has accepted the ultimate challenge. "I know I'm not going to, but I'd like to get to the bottom of what it is to be human," he says. Perhaps, along the way, Cog might also shed some light on the outrageous nature of one particular human--the Bad Boy himself.

THE WORLD ACCORDING TO COG

APPEARANCE. To encourage humans to interact with it, Cog will wear a plastic mask with molded humanoid features somewhat reminiscent of E.T. Tactile sensors on the mask will tell Cog when someone is touching it.

VISION. Small color cameras can tilt or pan independently. Like human eyes, they'll be able to focus on one object while retaining wide-angle peripheral vision.

FLEXIBILITY. "Intelligence is in the whole body," is how Cog's chief builder, graduate student Cynthia Ferrell, phrases Rodney Brooks' theory of intelligence. Consequently, Cog's waist, shoulders, elbows, wrists, and neck contain enough joints and actuators to twist and turn with close to humanlike flexibility.

MOBILITY. Even if Cog someday gets fed up with its creator, it can never run away: Its torso is bolted to a tabletop. "The project's hard enough without legs," quips Ferrell. In any case, a big red emergency-off button sits prominently on the wall a few feet from Cog.

HEARING. Eventually Cog will learn to associate sounds with objects. Several ears--sensitive microphones with earlike sound guides--will record incoming sounds, while a digital signal processor will filter out background noise and determine the direction and other important characteristics of each sound.
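
The article doesn't specify how that direction finding will work, but a textbook method uses the slight difference in the time a sound takes to reach each ear. The Python sketch below shows that time-difference idea; the microphone spacing and sample rate are assumed values, not Cog specifications.

    # Estimate a sound's bearing from the arrival-time difference
    # between two microphones (a generic method, not necessarily Cog's).
    import numpy as np

    SPEED_OF_SOUND = 343.0  # meters per second at room temperature
    MIC_SPACING = 0.15      # meters between the two "ears" (assumed)
    SAMPLE_RATE = 16000     # samples per second (assumed)

    def bearing_from_two_mics(left, right):
        """Angle of a source, in degrees; negative means to the left."""
        # The lag of the cross-correlation peak is the delay, in samples,
        # between the sound reaching one microphone and the other.
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)
        delay = lag / SAMPLE_RATE
        # Convert the delay to an angle relative to straight ahead.
        sin_theta = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_theta))

    # Toy test: the same click arrives 5 samples later at the right ear,
    # so the source must be off to the left.
    click = np.zeros(400)
    click[100] = 1.0
    print(bearing_from_two_mics(click, np.roll(click, 5)))  # about -46 degrees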

ARMS. Most robots smash clumsily into things when they try to wave their arms. When it has both of its arms attached, Cog will be a bit more sensitive: Torsional springs built into the joints will sense resistance so Cog can stop and then try something else.

BRAINS. Talk about disembodied: Cog's brain sits exposed on a computer rack (not shown) next to its body. It consists of several dozen parallel processing units that can do different things at the same time. Groups of processors will simultaneously coordinate the various parts of Cog's body. A bank of TV monitors next to the brain allows researchers to visualize activity in each of the processors.

PHOTO (COLOR): Wanted: Professor Rodney Allen Brooks, creator of the renegade robot Cog.

~~~~~~~~

By ROBERT LANGRETH

