Biorobotics Laboratory BioRob

Robotics Applications of Vision-Based Action Selection

The conception of an autonomous robot is a challenge involving the application of knowledge from various research fields.
Bio-inspired robotics is a relatively recent paradigm according to which biology and robotics can be mutually beneficial.

In the scope of bio-inspired robotics, the modular robots Amphibot II and Salamandra Robotica have recently been developed. Their appearance and locomotion systems are conceived to mimic those of a snake and a salamander respectively. Results show that complex behaviors requiring tight synchronization and adaptability to ever-changing environmental conditions can be easily obtained by basing the control system on the way an animal's brainstem and spinal cord generate the various locomotion patterns.

This project aims at the development of a reactive terrestrial navigation system for such robots.
In detail, we want our system to control the already implemented locomotion system such that the robot is able to perform several simple behaviors (obstacle avoidance, fleeing from a predator, prey hunting) and correctly manage the priorities between them.
Inputs are provided by two cameras supplying stereo information and by motor feedback.

Visual Field Mapping

Since the robot's movement is obtained through body oscillations, the area filmed by the camera does not necessarily correspond to the area in front of the robot. Thus a mapping from the camera field to the visual field is needed. To map camera positions correctly into the real visual field, we decompose the robot's movement into two components: the sinusoidal oscillation of the body and the angular speed caused by a turning movement.
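As a minimal sketch of this decomposition (function and parameter names are illustrative assumptions, not taken from the project code), an angle measured in the camera frame can be corrected by subtracting the two movement components:

```python
import math

def camera_to_visual_field(camera_angle, t, amplitude, frequency, turn_rate):
    """Map an angle measured in the camera frame to the robot's real
    visual field by removing the two movement components: the sinusoidal
    body oscillation and the heading offset due to turning.
    All parameters are hypothetical (radians, seconds, Hz, rad/s)."""
    # Head yaw induced by the sinusoidal body wave (assumed model).
    oscillation = amplitude * math.sin(2.0 * math.pi * frequency * t)
    # Heading offset accumulated by the turning movement during this step.
    heading_offset = turn_rate * t
    return camera_angle - oscillation - heading_offset
```

With zero turn rate and the oscillation at a zero crossing, the camera angle and visual-field angle coincide; at the oscillation peak the full amplitude is subtracted.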

Control Architecture

We adopted a subsumption-like architecture since it seems well suited to facing a dynamic and unpredictable environment in real time using only limited sensing capabilities.

Unlike the "classic" subsumption architecture, our architecture also takes into account that the state of a low-priority action may still influence the output of a higher-priority action. We consider this feature desirable since it may allow our robot to produce better-performing and more realistic behaviors. For instance, when an obstacle is detected while hunting a prey, the robot will try to avoid it in a direction favorable to its hunting (i.e. it will try, if possible, to select the direction chosen by the prey). In this use case the hunting performance is therefore increased.
The implementation of this feedback system enriches the robot's performance while preserving the independence of the different behaviors (in principle their addition and removal remains easy).

Several behavioral constants control the robot's behavior (i.e. the switching between different behaviors, the way behaviors influence each other and the way each behavior acts): reactivity, panic, haste, confidence, daring, fear, persistence.

The image on the left shows the complete control flow. First, the camera input is mapped onto the robot's real visual field (see the previous chapter), then a set of independent behaviors react to it. One of these behaviors is selected as the one controlling the robot during the current step. Selection is performed by polling the different behaviors according to a fixed priority. A behavior may request control of the robot under conditions determined by the visual input, the output of the other behaviors and a set of behavioral constants.
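The fixed-priority polling described above can be sketched as follows. The behavior names, priorities and class layout are illustrative assumptions; the project's actual implementation is not shown here.

```python
class Behavior:
    """Hypothetical minimal behavior: it proposes a motor command and
    flags whether it requests control during the current step."""
    def __init__(self, name, priority, active, command):
        self.name = name
        self.priority = priority
        self.active = active      # does it request control this step?
        self.command = command    # motor command it would output

def select_behavior(behaviors):
    """Poll behaviors in fixed priority order (highest first); the
    first one requesting control drives the robot this step."""
    for b in sorted(behaviors, key=lambda b: b.priority, reverse=True):
        if b.active:
            return b
    return None

# Illustrative priority ordering: avoidance > fleeing > hunting.
avoid = Behavior("obstacle_avoidance", 3, False, "turn_left")
flee  = Behavior("fleeing", 2, False, "turn_away")
hunt  = Behavior("hunting", 1, True, "approach")
winner = select_behavior([avoid, flee, hunt])  # hunting wins while others idle
```

In the full architecture each behavior would also receive the outputs of the lower-priority behaviors, implementing the feedback described earlier (e.g. avoidance biasing its turn toward the hunting direction).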

Obstacle Avoidance

Robots should be able to avoid static obstacles (e.g. walls), moving obstacles (e.g. feet) and escape from a dead end. The robot's speed is also modulated according to obstacle distance; if obstacles get too close, backward locomotion can be performed. The robot's direction is determined by the most obstacle-free zone. Its speed is determined by an obstacle repulsion function shaped as a sigmoid: the closer an obstacle, the lower the speed (thus allowing sharper turns). Finally, the switch from forward to backward locomotion is controlled by a simple state machine.
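A sigmoid-shaped repulsion function of this kind might look as follows (the midpoint, steepness and maximum-speed values are illustrative assumptions, not the calibrated constants used on the robot):

```python
import math

def speed_from_distance(d, d_mid=0.5, steepness=10.0, v_max=1.0):
    """Sigmoid obstacle repulsion: speed approaches v_max when the
    nearest obstacle is far away and drops toward zero as it gets
    close, allowing sharper turns near obstacles.
    d     : distance to the nearest obstacle (illustrative units)
    d_mid : distance at which speed is half of v_max (assumed)
    """
    return v_max / (1.0 + math.exp(-steepness * (d - d_mid)))
```

A separate state machine (not sketched here) would then trigger backward locomotion once the distance falls below some threshold.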

Prey and Predator Tracking

Prey and predator are represented as two spheres of different sizes. Tracking is performed using a circular Hough transform, with stereo information used to distinguish a prey from a predator according to their size.

The leftmost image shows the result of a circular Hough transform run on one of the two cameras; aliases are also detected. The central image shows that by comparing the circles found in the two cameras, it is possible to extract only the real circles (only aliases found in both cameras at the same time survive this check). The rightmost image shows that a specific circle can be detected by comparing the apparent radius and detected distance of each circle with previously recorded data about the target we want to track.
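The stereo filtering and size-based classification could be sketched as below. The matching tolerance, focal length, baseline and size threshold are assumptions for illustration; circles are taken to be (x, y, radius) tuples in pixels, as a circular Hough transform would produce.

```python
def match_circles(left, right, tol=5.0):
    """Keep only circles found in both cameras: an alias rarely appears
    at a consistent position in both views. Returns (x, y, radius,
    disparity) for each surviving circle."""
    matched = []
    for (xl, yl, rl) in left:
        for (xr, yr, rr) in right:
            if abs(yl - yr) < tol and abs(rl - rr) < tol:
                matched.append((xl, yl, rl, xl - xr))  # disparity in pixels
    return matched

def classify(radius, disparity, focal=300.0, baseline=0.1):
    """Distinguish prey from predator by physical size: depth from the
    stereo disparity, then real radius from the apparent radius.
    Camera parameters and the 0.05 m threshold are assumed values."""
    depth = focal * baseline / disparity          # meters
    real_radius = radius * depth / focal          # meters
    return "predator" if real_radius > 0.05 else "prey"
```

Comparing each surviving circle's apparent radius and distance against recorded target data, as the rightmost image illustrates, then reduces the set to the single tracked target.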


Using a simulation we show how the behavioral constants presented above influence the robot's behavior. The interested reader is referred to chapter 5 of the report.

Obstacle avoidance has been tested on Amphibot II. It turned out that the visual field mapping phase introduced above produces more realistic behavior than a direct reaction to camera inputs.

The prey and predator tracking system, though sensitive to lighting conditions, is able to correctly detect a given target once calibrated.

Movies and Report