Planning and Control of Humanoid Robots

Omnidirectional humanoid navigation in cluttered environments
based on optical flow information

The method presented here achieves safe navigation in an office-like environment, built as a maze of corridors. We defined a vision-based controller that maximizes the clearance from the images of the surrounding obstacles. In obstacle-free conditions, this is equivalent to walking at the center of the corridor. If the incoming passage is sufficiently wide, the robot moves as a unicycle, as usual. However, if the passage is too tight for safe unicycle motion, our controller commands the gaze direction and the torso orientation separately, allowing the robot to walk sideways.
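The corridor-centering behavior can be sketched as a simple proportional law: steer so that the midpoint between the left and right obstacle centroids (in image coordinates) stays on the vertical axis of the image. The function name, the gain, and the sign convention below are illustrative assumptions, not the paper's exact control law.

```python
# Hypothetical sketch of the clearance-maximizing idea: drive the midpoint
# of the left/right obstacle centroids toward the image center, which in a
# corridor corresponds to walking along its bisector.

IMAGE_WIDTH = 320  # pixels, matching the setup described below
K_OMEGA = 0.005    # illustrative proportional gain [rad/s per pixel]

def centering_command(left_centroid_x, right_centroid_x):
    """Angular-velocity command that pushes the centroid midpoint
    toward the image vertical axis (zero when already centered)."""
    middle_x = 0.5 * (left_centroid_x + right_centroid_x)
    error = middle_x - IMAGE_WIDTH / 2.0  # >0: midpoint is right of center
    return -K_OMEGA * error               # turn toward the free space
```

With the centroids symmetric about the image center the command is zero; otherwise the robot rotates toward the side with more clearance.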


Operational setup

For the validation of the proposed navigation algorithm, we used the small humanoid NAO. Images were acquired from a video stream at a 25 Hz frame rate with a resolution of 320×240 pixels. Image processing was performed on a 320×120 pixel ROI placed at the bottom of the image.
We implemented a RANSAC-based dominant plane estimation to detect the ground plane in the image. For image processing, we used the OpenCV library to compute the optical flow, find the image contours, and compute the corresponding centroids. In particular, we used the GPU-based version of the pyramidal Lucas-Kanade optical flow. The following figure shows the relevant features involved in the control strategy.
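Two of the steps above can be sketched compactly: cropping the bottom ROI of the frame, and computing an obstacle centroid via image moments. This is a minimal NumPy sketch with assumed names; the actual pipeline uses OpenCV calls such as `cv2.calcOpticalFlowPyrLK` for the optical flow and `cv2.findContours` / `cv2.moments` for contours and centroids.

```python
import numpy as np

def bottom_roi(frame, roi_height=120):
    """Crop the bottom ROI of a grayscale frame
    (e.g. a 320x120 strip out of a 320x240 image)."""
    return frame[frame.shape[0] - roi_height:, :]

def mask_centroid(mask):
    """Centroid (x, y) of a binary obstacle mask, computed from the
    zeroth/first image moments (the same quantity cv2.moments yields
    as m10/m00, m01/m00 for a contour)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no obstacle pixels detected in this frame
    return float(xs.mean()), float(ys.mean())
```

In the real pipeline the mask would come from the flow-based obstacle segmentation rather than being given directly.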

Image processing steps

Image processing at four particular phases of the four presented simulations. Green curves highlight the contours of the detected obstacles. Blue dots are the corresponding centroids, red dots are the filtered centroids, and the purple dot is the middle point. Gray dots highlight when the left and right centroids approach the image borders.


NAO Omnidirectional Navigation: Simulations

In all the presented simulations, the magnitude of the linear velocity is constant, but its x and y components may vary as functions of the head yaw angle, which is provided by the proposed visual control law together with the angular velocity.
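One natural way to realize this is to walk in the gaze direction: project the constant-magnitude velocity onto the torso frame through the head yaw angle. The mapping below is an illustrative sketch of this decomposition, not necessarily the paper's exact formulation.

```python
import math

V = 0.1  # constant linear speed magnitude [m/s], illustrative value

def velocity_components(head_yaw):
    """Torso-frame velocity components for walking along the gaze
    direction; the norm sqrt(vx^2 + vy^2) stays equal to V."""
    vx = V * math.cos(head_yaw)  # forward component
    vy = V * math.sin(head_yaw)  # lateral (sideways) component
    return vx, vy
```

At zero head yaw the robot walks straight ahead (pure vx); at ±90° the motion is purely sideways (pure vy), which is the tight-passage mode described above.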

Straight corridor

Simulation 1, straight corridor

First simulation: NAO starts off-center in the corridor, as shown in the first snapshot, and progressively recovers the center (second, third, and fourth snapshots).

Negotiating a turn

Simulation 2, negotiating a turn

Second simulation: NAO negotiates a turn. In the first snapshot, the robot is approaching a right turn. The second and third snapshots show the robot turning in the correct direction. In the fourth snapshot, the robot recovers the center of the corridor after exiting the turn.

Turning at a T-junction

Simulation 3, turning at a T-junction

Third simulation: NAO turns at a T-junction. The first snapshot shows the robot approaching the junction. In the second and third snapshots, the robot turns toward the direction showing the smaller portion of the facing wall (i.e., the turning direction is neither commanded nor determined through logic, but selected purely on the basis of visual information). In the fourth snapshot, the robot recovers the center of the corridor.

Avoiding an obstacle

Simulation 4, avoiding an obstacle

Fourth simulation: NAO navigates along a straight corridor and avoids an obstacle. The first snapshot shows the robot initially converging to the bisector of the corridor. In the second snapshot, the robot has detected the plant and is moving in the gaze direction, which is not aligned with the torso. In the third snapshot, the robot maximizes the clearance to the obstacle by walking sideways. The fourth snapshot shows the camera re-aligned with the torso and the robot recovering the center of the corridor.



NAO Omnidirectional Navigation: Experiments

Avoiding an obstacle

Experiment, avoiding an obstacle

Experiment: avoiding an obstacle. In the first snapshot, the camera detects a narrow passage and rotates to the desired orientation. Once the camera orientation reaches the desired value, the robot enters the passage, as shown in the second snapshot. The third and fourth snapshots show the robot re-aligning the camera with the torso as soon as there is more room to navigate.


Video Clip

The four simulations and the experiments with NAO are shown in full in the video below.


Documents

[1] M. Ferro, A. Paolillo, A. Cherubini, and M. Vendittelli, "Omnidirectional humanoid navigation in cluttered environments based on optical flow information," submitted to the 16th IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, November 15-17, 2016.

