Created on 23 Oct 2018
A CV-oriented solution for autonomously controlling a buggy.
Computer vision has long been a strong utility for autonomous vehicles. CV is used to identify lane markers, cars, traffic lights, and many other objects on the road. By identifying these features, a vehicle can decide how to act in a given situation: should it turn left or right, speed up, or stop?
We wish to bring a small part of CV's utility into the ever-growing field of robotic buggies. To do this, we will implement a vision-based controller for our RD18 autonomous platform, BabyBuggy.
Our pipeline is fairly straightforward, though the complexity is certainly in the details. First, a convolutional network will be trained to perform image segmentation: the task of partitioning an image into regions that identify key areas in a scene. For our purposes, the network will distinguish between road and not-road.
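As a rough illustration, a binary road-segmentation network in PyTorch might look like the sketch below. The architecture, layer sizes, and the RoadSegNet name are assumptions for illustration only, not our final design.

    import torch
    import torch.nn as nn

    class RoadSegNet(nn.Module):
        """Tiny encoder-decoder that labels each pixel road / not-road."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),  # one logit per pixel
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))  # logits; sigmoid > 0.5 => road

    model = RoadSegNet()
    frame = torch.rand(1, 3, 240, 320)             # placeholder camera frame
    road_mask = torch.sigmoid(model(frame)) > 0.5  # boolean road / not-road mask

In practice we would train this (or a published segmentation architecture) on labeled frames of our course before trusting its output.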
Then, with this segmented image, we will write a controller that steers BabyBuggy based on its calculated position and offset from the sides of the road.
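A minimal sketch of that steering logic is below, assuming a binary road mask (True = road) from the segmentation step. The gain, band height, and function name are hypothetical, not BabyBuggy's actual control interface.

    import numpy as np

    def steering_angle(road_mask: np.ndarray, k_p: float = 0.8) -> float:
        """Proportional steering from the buggy's lateral offset in the road.

        road_mask: HxW boolean array; we look at a band near the bottom of
        the frame, where pixels correspond to the road just ahead.
        """
        band = road_mask[-40:, :]                 # rows closest to the buggy
        cols = np.nonzero(band.any(axis=0))[0]    # columns containing road
        if cols.size == 0:
            return 0.0                            # no road visible: hold straight
        road_center = (cols[0] + cols[-1]) / 2.0  # midpoint between road edges
        image_center = band.shape[1] / 2.0
        # Normalized offset in [-1, 1]; positive means road center is to the right.
        offset = (road_center - image_center) / image_center
        return k_p * offset                       # steer toward the road center

A simple proportional law like this is only a starting point; a real controller would likely need smoothing and damping to avoid oscillating between the road edges.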
Our implementation will use an Intel RealSense D435 camera, chosen primarily for its cheap access to a global shutter, which drastically reduces the distortion seen with cheaper rolling-shutter cameras. As a stretch goal, we also plan to use state-of-the-art CV algorithms like YOLO and Mask R-CNN to perform real-time object detection.
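For the stretch goal, an off-the-shelf Mask R-CNN such as the one shipped with torchvision could serve as a baseline detector; the sketch below shows the idea. The confidence threshold is an arbitrary choice, and real-time operation would likely require a GPU or a lighter model.

    import torch
    import torchvision

    # Load a Mask R-CNN pretrained on COCO and switch to inference mode.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    frame = torch.rand(3, 480, 640)  # placeholder RGB frame, values in [0, 1]
    with torch.no_grad():
        detections = model([frame])[0]

    keep = detections["scores"] > 0.7  # drop low-confidence detections
    for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
        print(label.item(), box.tolist())  # COCO class id and bounding box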