Real-Time Drone Tracking and Orientation Detection Using RGB-D Camera

C++ / openFrameworks / Kinect / Parrot AR.Drone 2.0

Team: Isabella Gonzalez, Oytun Olutan, Mohit Hingorani


Object detection and tracking is one of the most fundamental problems in computer vision. In recent years drones have become widely available and are used for purposes ranging from military applications and air delivery to art-making. This project develops a novel way of tracking a drone and its orientation with respect to the camera, and can essentially replace more expensive tracking systems such as OptiTrack. The algorithm uses a depth-based contour finder to detect clean contours. Because the Kinect depth data is noisy, the depth pixels are aggregated into strips for a more robust depth estimate. After this aggregation, we compute the difference between adjacent strips and look for sign changes; a sign change marks an edge of the drone. A similar mechanism was implemented in the color domain for more robust detection. Finally, using the drone's depth value and its pixel position, we calculate its position in real-world space.
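As a rough illustration of the strip-based edge detection, the C++ sketch below averages depth pixels into vertical strips and looks for sign changes between adjacent strip means. The function name, strip width, and raw float depth buffer are assumptions for illustration, not the project's actual code.

    #include <vector>

    // Hypothetical helper: average depth pixels column-wise into vertical
    // strips, then scan for sign changes in the difference between adjacent
    // strip means; a sign change marks a candidate edge of the drone.
    std::vector<int> findEdgeStrips(const std::vector<float>& depth,
                                    int width, int height, int stripWidth) {
        int numStrips = width / stripWidth;
        std::vector<float> stripMean(numStrips, 0.0f);

        // Aggregating into strips suppresses the Kinect's per-pixel noise.
        for (int s = 0; s < numStrips; ++s) {
            float sum = 0.0f;
            int count = 0;
            for (int x = s * stripWidth; x < (s + 1) * stripWidth; ++x) {
                for (int y = 0; y < height; ++y) {
                    float d = depth[y * width + x];
                    if (d > 0.0f) { sum += d; ++count; }  // 0 = invalid pixel
                }
            }
            if (count > 0) stripMean[s] = sum / count;
        }

        // Record strips where the sign of the adjacent difference flips.
        std::vector<int> edges;
        for (int s = 1; s + 1 < numStrips; ++s) {
            float dPrev = stripMean[s] - stripMean[s - 1];
            float dNext = stripMean[s + 1] - stripMean[s];
            if (dPrev * dNext < 0.0f) edges.push_back(s);
        }
        return edges;
    }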

Download Research Paper Here

The project had multiple goals related to tracking the drone's state:

1. Spatial drone tracking (in world coordinates): detect the drone's real-world position with respect to the camera.

2. Angle detection using depth (angle w.r.t. the camera): estimate the drone's angle with respect to the camera.

3. Orientation detection (NSEW) based on color: estimate which side of the drone faces the camera (necessary because the drone is geometrically symmetric).

4. Building a real-time system that combines the above with minimal constraints.

To evaluate the performance of our system, we flew the drone in a large room about 1 meter in front of the Kinect, which stood on a pedestal off the ground. The Kinect was connected to a computer on an adjacent pedestal.

The Kinect used in our experiments was model 1414, and the drone was the Parrot AR.Drone 2.0. The openFrameworks project ran on a MacBook Pro (2014) running OS X Yosemite.

1. The system could accurately detect the drone in 3D space and visualize its position in a 3D virtual space (a sketch of the pixel-to-world conversion follows this list). It was prone to some noise, which could possibly be reduced by using a better depth camera such as the ASUS Xtion sensor. Figure 7 shows the detected drone in the scene and its location in the 3D visualization.

2. Drone pose detection using only the depth information was implemented. As the drone is symmetric, it is impossible to tell from depth alone which side faces the camera; however, the angle can still be estimated (second sketch after this list). This function was also prone to noise, particularly because the drone is not a perfect rectangular shape. Moreover, the Kinect depth data has noise of up to 5 cm, while the drone's maximum possible detected depth difference was only 15 cm.

3. Drone pose detection using color was implemented with limited success (third sketch after this list). The detection worked only over a limited distance and for certain colors. Due to the change in illumination with depth, the colors would often fall out of range, resulting in incorrect detection. In addition, the color attachments on the drone (shown in Figure 4) reduced its flying ability.
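To make item 1 concrete, here is a hedged sketch of the pixel-plus-depth to camera-space conversion using a pinhole model. The intrinsics are commonly cited Kinect v1 depth-camera values, not constants calibrated for this project; ofxKinect's getWorldCoordinateAt() performs an equivalent lookup internally.

    #include "ofMain.h"

    // Back-project a depth pixel to camera-space coordinates. fx/fy/cx/cy
    // are assumed Kinect v1 intrinsics for a 640x480 depth image, not
    // values calibrated for this setup.
    ofVec3f depthPixelToWorld(float px, float py, float depthMeters) {
        const float fx = 594.21f, fy = 591.04f;  // focal lengths (pixels)
        const float cx = 339.5f,  cy = 242.7f;   // principal point (pixels)
        float x = (px - cx) * depthMeters / fx;
        float y = (py - cy) * depthMeters / fy;
        return ofVec3f(x, y, depthMeters);       // meters, camera frame
    }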
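For item 2, a minimal sketch of the angle geometry, assuming the mean depths at the drone's left and right edges are known. The 0.52 m width is roughly the AR.Drone 2.0 with its indoor hull and should be treated as an assumption; with ~5 cm depth noise against a maximum depth difference of ~15 cm, the result is necessarily coarse.

    #include <algorithm>
    #include <cmath>

    // If the drone's two ends differ in depth by dz across a known physical
    // width W, its yaw relative to the image plane satisfies sin(theta) = dz/W.
    float estimateYawDegrees(float leftDepthM, float rightDepthM,
                             float droneWidthM = 0.52f) {
        float ratio = (rightDepthM - leftDepthM) / droneWidthM;
        ratio = std::max(-1.0f, std::min(1.0f, ratio));  // clamp depth noise
        return std::asin(ratio) * 180.0f / 3.14159265f;
    }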
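And for item 3, a sketch of HSV-range color masking with OpenCV. The ranges are illustrative rather than the project's tuned values; note how close the blue and green hue bands sit on OpenCV's 0-179 hue scale, which is exactly the failure mode described in the figure below.

    #include <opencv2/opencv.hpp>

    // Threshold a BGR frame in HSV space; pixels inside [lo, hi] become 255.
    cv::Mat maskColor(const cv::Mat& bgr,
                      const cv::Scalar& lo, const cv::Scalar& hi) {
        cv::Mat hsv, mask;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, lo, hi, mask);
        return mask;
    }

    // Illustrative marker ranges (hue, saturation, value):
    // cv::Mat blue  = maskColor(frame, cv::Scalar(100, 80, 60), cv::Scalar(130, 255, 255));
    // cv::Mat green = maskColor(frame, cv::Scalar( 45, 80, 60), cv::Scalar( 85, 255, 255));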

The figure shows two experiments with the colored papers. In the first one (first row), the peak locations in both the depth space and the color space are estimated accurately; the peak location is indicated by a blue line. The second experiment (second row) shows a failure in the color space: the HSV values of the blue and the green are close to each other, and illumination has a significant effect on the detected color, so the algorithm failed to detect the peak location.