This was part of my capstone and included some of the work I did for CS4610: Robotic Science and Systems.
The IBVS (Image-Based Visual Servoing) Controller combines the computer vision techniques of depth perception and object recognition with a servo controller so that a manipulator can recognize and pick up different objects. We use the Point Cloud Library (PCL) and the Robot Operating System (ROS) together with RGB-D sensors to explore the environment and estimate the depth of the objects in it. "YOLO: Real-Time Object Detection" recognizes the objects in the scene, and "Dialogflow" parses user commands to select which object to track. We also use the Visual Servoing Platform (ViSP) with ROS for end-effector control. SMACH, a ROS state machine library, ties all the pieces together.
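The servoing step that ViSP performs follows the classic IBVS control law: given current image features s, desired features s*, and feature depths, the camera twist is v = -λ L⁺ (s − s*), where L is the interaction matrix. Below is a minimal numpy sketch of that law for point features; the feature coordinates and depths are made-up illustrative numbers, not values from the project.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) for one point feature
    at normalized image coordinates (x, y) and depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classic IBVS law: v = -lambda * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical numbers: four tracked corner points of a detected object.
s      = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
s_star = [(0.2, 0.2), (-0.2, 0.2), (-0.2, -0.2), (0.2, -0.2)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v)  # 6-vector camera twist: (vx, vy, vz, wx, wy, wz)
```

The pseudoinverse handles redundant features (four points give eight equations for six velocity components), which is why tracking several points makes the servo loop more robust.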
IBVS is unique in that it allows a manipulator robot both to recognize and to pick up an object, and it can run on both commercial and hobby-grade sensors.
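The overall flow that SMACH coordinates can be sketched as a simple state machine: listen for a command, detect the requested object, servo toward it, then grasp. The sketch below is a plain-Python stand-in, not the project's actual SMACH graph; all state names, transitions, and stubbed values are illustrative.

```python
# Minimal stand-in for the SMACH state machine that glues the pipeline
# together. Each state returns the name of the next state.

def listen_for_command(ctx):
    # In the real system, Dialogflow parses the spoken user command.
    ctx["target"] = "cup"                       # hypothetical request
    return "detect"

def detect_object(ctx):
    # In the real system, YOLO finds the target and PCL gives its depth.
    ctx["bbox"], ctx["depth"] = (120, 80, 40, 40), 0.9  # fake detection
    return "servo"

def servo_to_object(ctx):
    # In the real system, ViSP drives the end effector with an IBVS law,
    # looping here until the feature error is small enough.
    ctx["at_target"] = True
    return "grasp" if ctx["at_target"] else "servo"

def grasp(ctx):
    ctx["grasped"] = True
    return "done"

STATES = {"listen": listen_for_command, "detect": detect_object,
          "servo": servo_to_object, "grasp": grasp}

def run(start="listen"):
    """Step through the states until the terminal 'done' outcome."""
    ctx, state = {}, start
    while state != "done":
        state = STATES[state](ctx)
    return ctx

print(run())
```

In the real system each of these states wraps ROS topics and services; SMACH's outcome strings play the role of the returned state names here.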
There are a few lessons to take away from this:
- Adapting the feedback frequency
- Getting an actual gripper (!)
- Estimating the object's pose for better servoing (an active research area)
- Adding collision detection