Intelligent Robotics

Navigation behaviors represent some of the most intriguing examples of complex and intelligent strategies visible in nature. As outlined below, we used digital evolution to investigate the navigation strategies that emerge in central place foraging environments, such as those experienced by striped hyenas and honeybees. As we show, evolution, unburdened by the biases of human engineers, can discover non-obvious and efficient approaches to solving environmental challenges. When applied to the engineering of autonomous navigation controllers, this capacity enables the discovery of effective, deployable and lightweight algorithms that, due to their simplicity, can provide cost-, time- and resource-saving solutions to otherwise complex engineering problems.

As an example of an application of such controllers, survey and maintenance of underwater oil wells currently require extremely expensive remotely operated underwater vehicle (ROV) contractors, including a ship, its crew, the ROV, and its operator. In many situations, it would be ideal, instead, to deploy autonomous underwater vehicles (AUVs) in simple ‘program, go, return’ missions: e.g. have an AUV launch from the nearest platform, independently navigate to the targeted location and, once there, conduct surveys (or fix a problem). Moreover, current AUVs are completely dependent on sophisticated and very expensive sensor arrays. Consequently, developing robust, lightweight, agile and truly autonomous navigation algorithms that use simple, low-cost sensory systems could revolutionize AUV robotics.

The project below shows that computational evolution can facilitate the development of deployable naturalistic navigation algorithms when those efforts are biologically well informed, and the systems tractable. Critically, the evolved behaviors were readily transferred to a simple robotic system. Furthermore, because they are responsive to, not dependent on, the details of the environment, the behaviors are robust to the issues of noise and scale that commonly plague such attempts (the so-called ‘reality gap’). Finally, our robot navigates using only a simple magnetic compass and ultrasonic sensor, sensory limitations that would cripple currently deployed AUV control systems. Thus, for engineering, a critical value of this approach is that it could greatly simplify software and hardware requirements, with associated reductions in development costs and massive reductions in deployment costs.



Above: An Avida population evolved to travel from a central birth den (large black circle) out to distant food resources (large grey and white circles), returning home to reproduce. Colors of the organisms (small points) reflect evolved food preferences. Shading of the food resources reflects the current balance between consumption and resource regrowth. While the organisms can evolve to look for and process information about food resources, the den is not detectable from afar.

While other studies have attempted to evolve complex navigation strategies de novo, the resulting algorithms have generally not reflected natural patterns. In contrast, we evolved strategies that are highly congruent with those seen in nature: organisms integrate periods of directed travel, fixed-pattern search, cue response, and recalibration and reorientation (when outcomes do not match expectations) into a single foraging strategy – just as bees, hyenas and even humans do.
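To make this composition concrete, the sketch below shows one way the four components could be wired together as a small finite-state foraging loop. This is a hand-written illustration in the spirit of the evolved strategies, not the evolved program itself; the mode names, sensor fields and transition tests are our own assumptions.

```python
# Hand-written sketch, not the evolved program: one way to compose directed
# travel, fixed-pattern search, cue response and recalibration into a single
# foraging loop. Mode names, sensor fields and transition tests are assumptions.

from dataclasses import dataclass

@dataclass
class Senses:
    food_visible: bool   # any food resource currently in view
    at_food: bool        # standing on a food resource
    at_den: bool         # standing on the den
    fed: bool            # already eaten on this trip

def step(mode: str, s: Senses) -> tuple[str, str]:
    """Map (current mode, current senses) to (next mode, action)."""
    if mode == "outbound":                    # directed travel toward food
        if s.at_food:
            return "feed", "eat"
        return "outbound", "advance_on_outbound_heading"
    if mode == "feed":                        # cue response at the resource
        return ("homing", "turn_homeward") if s.fed else ("feed", "eat")
    if mode == "homing":                      # directed travel toward the den
        if s.at_den:
            return "outbound", "reproduce"
        if not s.food_visible:                # all cues lost: begin searching
            return "search", "start_search_pattern"
        return "homing", "advance_on_homeward_heading"
    if mode == "search":                      # fixed-pattern search for the den
        if s.at_den:
            return "outbound", "reproduce"
        if s.food_visible:                    # recalibrate against the field
            return "homing", "reorient_on_resources"
        return "search", "continue_search_pattern"
    raise ValueError(f"unknown mode: {mode}")
```

Note that each call depends only on the current mode and senses; no map or position estimate is stored.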


Above: the path of a single organism’s foraging route. The organism’s outbound path is shown by the heavy black line, while the inbound path (after feeding) is white. Cumulative travel paths for the semi-randomly moving food resources are shown as colored bands. In this example, we displaced the den after the organism was born. Due to the displacement, the organism makes multiple passes before discovering the den.

For the organism above, the navigation strategy can be broken into four spatial-context-dependent stages (outlined with black boxes): (a) Traveling southeast while resources are visible ahead and to its upper left. (b) Heading due north, feeding along the way, while resources are consistently visible ahead. (c) Alternating between stepping north, west, and south while resources are visible to the south. (d) Heading southwest toward the den provided that resources remain visible to the east. If the den isn’t reached by the time all food resources are completely out of sight, the organism recalibrates and reorients itself by returning to the resource field. While this behavior is functionally complex, all actions are governed by a novel way of integrating only two sensory values: the current heading and the visibility of resources.
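To make the two-input claim concrete, the four stages above can be written, in caricature, as a single lookup over the current heading and the compass directions in which resources are visible. The encoding below is a hedged illustration, assuming eight compass sectors and coarse visibility directions; it is not a transcript of the evolved instruction sequence.

```python
# Illustrative only: the stage behaviors described above, rewritten as rules
# over just two inputs — current heading and where resources are visible.
# Sector names and action labels are assumptions, not the evolved genome.

def choose_action(heading: str, visible: set[str]) -> str:
    """heading: one of N/NE/E/SE/S/SW/W/NW; visible: compass directions in
    which food resources can currently be seen (empty set if none)."""
    if not visible:
        # den not found and no resources in sight: recalibrate by turning
        # back toward the resource field
        return "reverse_heading"
    if heading == "SE" and "E" in visible:              # cf. stage (a)
        return "advance"
    if heading == "N" and "N" in visible:               # cf. stage (b)
        return "advance_and_eat"
    if heading in {"N", "W", "S"} and "S" in visible:   # cf. stage (c)
        return "step_then_rotate"
    if heading == "SW" and "E" in visible:              # cf. stage (d)
        return "advance"
    return "rotate_left"    # no rule matches: turn in place until one does
```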

Above: using only the evolved rules, a basic compass and an ultrasonic distance sensor, a robot reliably reaches the resource field and returns home (black box outlined on floor), even though transference to the physical world challenges the evolved algorithm to operate in a far noisier environment, at vastly different spatial scales, than the digital environment. The sizes of the arenas shown here were limited only by the size of the available rooms, not by the behavioral algorithms.

The algorithm underlying these navigation strategies is remarkably lightweight, both behaviorally and computationally, and so was readily transferable to autonomous robots, proving to be robust and immune to common ‘reality gap’ issues.
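Part of what makes the transfer cheap is that the rules only ever consume a heading and a ‘resources visible’ signal, so the robot-side glue can be a few lines. Below is a minimal sketch of such a control loop, assuming hypothetical driver functions (read_compass, read_ultrasonic_cm, drive), an arbitrary range threshold, and a forward-pointing sensor; the project’s actual hardware interface is not shown here.

```python
# Minimal robot-side loop sketch. Driver functions and the range threshold are
# hypothetical stand-ins, not the project's actual hardware API.

import time

RANGE_THRESHOLD_CM = 150.0   # assumed: nearer echoes count as "resource visible"

def read_compass() -> float:
    """Hypothetical driver: heading in degrees clockwise from magnetic north."""
    raise NotImplementedError("replace with the compass driver")

def read_ultrasonic_cm() -> float:
    """Hypothetical driver: range to the nearest echo, in centimeters."""
    raise NotImplementedError("replace with the ultrasonic driver")

def drive(action: str) -> None:
    """Hypothetical dispatcher from rule actions to motor commands."""
    raise NotImplementedError("replace with the motor driver")

def to_sector(degrees: float) -> str:
    """Quantize a compass reading into the eight sectors the rules use."""
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    return sectors[int(((degrees % 360) + 22.5) // 45) % 8]

def control_loop(rules) -> None:
    """Feed a two-input rule function nothing but heading and visibility."""
    while True:
        heading = to_sector(read_compass())
        # the sensor points forward, so an echo means resources toward the heading
        visible = {heading} if read_ultrasonic_cm() < RANGE_THRESHOLD_CM else set()
        drive(rules(heading, visible))
        time.sleep(0.1)   # assumed control period
```

Because a loop of this form is purely reactive, it stores nothing about arena size or robot position, which is consistent with the scale- and displacement-tolerance described below.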



Above: the evolved behavior is fully responsive to environmental context and robust to disturbance: when placed unexpectedly into the resource field, the robot recognizes the contextual change and responds with the behaviors appropriate for getting home from that region, not with the behaviors it would have been exhibiting had it not been moved.

While aspects of the pure evolved algorithm are a bit rough (e.g., frequent turns for orientation), the algorithm clearly illustrates how evolved ‘simple’ responsive rules create surprisingly robust and flexible behavioral strategies, including appropriate responses to unexpected changes in environmental context.



Above: just as the robot responds correctly to being displaced, it operates correctly in worlds of any scale, limited only by the range of its ultrasonic ‘eye’.

A next goal for this project is to introduce multiple, competing robots and incorporate some simple software and hardware tools that will allow ‘live’ replication and evolution of behavioral control algorithms within the robot population – just as occurs in Avida.

Finally, in another example of a targeted application, medical service robots that navigate hospital corridors currently require development of complete maps in advance. During operation, robots like these commonly utilize computationally heavy simultaneous localization and mapping (SLAM) approaches to navigate. Effective and efficient on-board robotic spatial learning would provide a flexible alternative that greatly reduces deployment costs. Accordingly, we have begun projects using mazes to evolve behavioral algorithms and ‘brains’ for responsive spatial learning.