In this period, the SPQR@Work team is actively working on improving the performance of its robot. In particular, the team is planning to completely reconfigure the structural layout of the hardware platform, ranging from the adoption of new, more powerful sensors to the development of completely new hardware components. The team is currently working on:
- adopting a new Hokuyo laser scanner at the rear of the robot in order to improve its navigation and obstacle avoidance capabilities
- mounting a new vision sensor that will replace the current Kinect RGB-D sensor
- reconfiguring the layout of the robot base, starting with the position of the arm, in order to enlarge the effective workspace of the manipulator
- designing and building a completely new gripper for the end-effector of the arm
A novel gripper design is under development; a preliminary draft is shown in the figures below. The new mechanism can grasp larger objects, with a maximum finger opening of 135 mm. Motion is actuated by a Robotis AX-12A servomotor, which offers higher torque than the stock motor and improved accuracy thanks to its embedded encoders and control. Most of the non-commercial parts of the novel design are optimized for production through 3D printing.
The SPQR@Work robot is a KUKA youBot (see image below) with the following sensor suite:
- a frontal Hokuyo laser scanner, used for navigation and obstacle avoidance tasks
- a Kinect RGB-D sensor whose field of view includes the working area of the arm, allowing object manipulation tasks to be performed without moving the robot
- an on-board laptop (in addition to the standard internal Intel Atom PC) running Ubuntu 14.04 Linux, used to execute perception and planning tasks (e.g., navigation and object recognition)
- a color USB camera on the 5th joint of the manipulator for accurate object localization
The SPQR@Work software infrastructure is based on the ROS Indigo middleware running on Ubuntu 14.04. Nevertheless, most of the developed software tools are standalone and middleware-agnostic: packages are integrated into ROS by means of suitable wrappers.
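The middleware-agnostic design described above can be sketched as follows (all class and field names are hypothetical, not the team's actual code): the core algorithm operates on plain data structures, and a thin wrapper adapts middleware messages to it, so the core can be tested and reused without ROS.

```python
class ObstacleFilter:
    """Standalone, middleware-agnostic component: operates on a plain
    list of range readings (metres), with no ROS dependency."""

    def __init__(self, max_range):
        self.max_range = max_range

    def process(self, ranges):
        # Clamp out-of-range readings so downstream consumers
        # always see finite, bounded values.
        return [min(r, self.max_range) for r in ranges]


class RosObstacleFilterWrapper:
    """ROS-style adapter (illustrative): unpacks a message-like dict,
    delegates to the standalone component, and re-wraps the result.
    In the real system this role is played by a ROS node subscribing
    to a laser topic."""

    def __init__(self, core):
        self.core = core

    def on_scan(self, msg):
        filtered = self.core.process(msg["ranges"])
        return {"ranges": filtered, "frame_id": msg["frame_id"]}
```

Because the wrapper owns all middleware-specific code, swapping ROS for another middleware only requires rewriting the adapter, not the algorithm.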
The navigation stack is based on particle-filter localization and mapping algorithms, provided by the ROS AMCL and GMapping packages; the latter was implemented by a member of the Ro.Co.Co. laboratory. The decision-making and planning stacks are based on a finite state machine implemented with the ROS Actionlib and MoveBase packages, while manipulation tasks are performed using the MoveIt! ROS package. 3D environment reconstruction is obtained using an effective and efficient registration method called PWN (Point With Normals). PWN exploits the 3D structure when determining the data association between two clouds by considering each point along with the statistics of the local surface and of the measurement noise. A novel formulation of the alignment computation has been proposed that takes advantage of both point positions and surface normals.
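To give a flavor of how surface normals enter the alignment, the sketch below computes a point-to-plane error term, where each residual is the displacement between associated points projected onto the destination surface normal. This is only a minimal illustration of the general idea; the actual PWN formulation also weights residuals by local surface and measurement-noise statistics, which are omitted here.

```python
def point_to_plane_error(src, dst, normals):
    """Sum of squared point-to-plane residuals for associated point pairs.

    src, dst : lists of 3D points (x, y, z), already associated pairwise
    normals  : unit surface normals at the destination points

    For each pair, the displacement (p - q) is projected onto the
    destination normal n, so only the out-of-plane error is penalized.
    """
    err = 0.0
    for p, q, n in zip(src, dst, normals):
        d = sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
        err += d * d
    return err
```

Minimizing this kind of error over a rigid transform lets the optimizer slide points along locally flat surfaces instead of forcing exact point-to-point matches, which is what makes normal-aware registration converge faster on structured scenes.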
Object recognition and localization are obtained using reliable algorithms that exploit both visual features and depth information. This module is organized as a pipeline of processing blocks for sub-sampling, clustering, noise reduction, and 2D and 3D feature detection and description. The model-matching step associates the extracted features with features pre-computed from a rigid template (e.g., the CAD model) of the object; a sample-consensus method is used to reject outliers. The result is a set of object candidates, which is refined using an iterative optimization strategy.
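The sample-consensus step can be illustrated with a simplified RANSAC-style sketch (the function and its parameters are hypothetical, and the model here is a 2D translation rather than the full 6-DoF pose used in practice): candidate transforms are hypothesized from random matches, and the one supported by the largest inlier set wins.

```python
import random

def ransac_translation(matches, threshold, iterations=100, seed=0):
    """Toy sample-consensus outlier rejection.

    matches : list of (model_pt, scene_pt) pairs, each point a 2D tuple
    Returns the translation supported by the largest inlier set,
    together with that inlier set.
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iterations):
        # Hypothesize a translation from a single random correspondence.
        (mx, my), (sx, sy) = rng.choice(matches)
        t = (sx - mx, sy - my)
        # Count the matches consistent with this hypothesis.
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - t[0]) < threshold
                   and abs(m[1][1] - m[0][1] - t[1]) < threshold]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = t, inliers
    return best_t, best_inliers
```

Mismatched features rarely agree on a common transform, so they fail the consistency test and are dropped before the candidate pose is handed to the iterative refinement stage.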
Both the mapping and the object recognition modules were implemented in-house by the team members. Moreover, the SPQR@Work software includes a semantic mapping system that provides a rich and meaningful representation of the environment.
Currently, the SPQR@Work robot is able to:
- localize itself and safely navigate toward a selected target area
- build detailed 3D maps of the surrounding environment
- detect QR code-like markers using vision (using the datamatrix finder ROS package, modified by the SPQR@Work team members)
- recognize and localize objects using RGB-D sensors
- perform simple visual servoing and manipulation tasks (e.g., pick-and-place tasks, the “cutting a string” task, … )
- capture and understand simple audio signals from the environment
Here are some videos of our past developments. Future improvements will be made to our software stack, and this section will be continuously updated: