Lab #2: Simple Vision

Introduction

The main objective of this lab is to familiarize you with the vision capabilities of the ARDrone. Your goal is to locate an object of a certain color. Code you develop in this lab will be used in Lab 3, where you will fly the quadcopter toward a beacon of a certain color and then land at a set distance from the beacon. This lab can be completed in groups of 1-3 people, but I recommend groups of 2.

Objectives

Upon successful completion of this lab, you will be able to:

  1. Understand the tutorial code for the ARDrone from RoboHub.
  2. Process images from the ARDrone camera.
  3. Demonstrate basic image processing techniques.
Warning

You do not know enough to complete this project. I have a number of suggestions below, but I have not given you all the details. You will have to do some searching online and some independent thinking/hacking. Additionally, I encourage you to ask questions via Piazza. I may not give you a full answer, but hopefully I can keep you heading in the right direction!

Also, there are some optional steps. I encourage you to consider doing them if you think they sound fun. If done well, you will receive a bit of extra credit.

Assignment
  • If you haven't already, read the first tutorial at Robohub: http://robohub.org/up-and-flying-with-the-ar-drone-and-ros-getting-started/. Install this code if you don't already have it.
  • Connect to the drone with the keyboard:
    roslaunch ardrone_tutorials keyboard_controller.launch
    Check out the video feed. You can practice flying if you want, but have someone spot for you.
  • Modify drone_video_display.py (e.g., change the title of the video screen) and verify that the change shows up.
  • Now, your goal is to locate either the yellow box or the orange marks on the middle door of the lab. I suggest you install and use OpenCV, but you could also directly manipulate the image self.processedImage.
  • There are multiple ways to locate a particular color in an image. My suggestion is that you use HSV rather than RGB. Probably the easiest way to do this in OpenCV is to
    1. import CvBridge (from the cv_bridge package) into drone_video_display.py and create an instance, e.g., self.bridge = CvBridge()
    2. Inside of ReceiveImage(), create a new image in OpenCV by converting the ROS image:
      cv_image = self.bridge.imgmsg_to_cv(data, "rgb8")
      create a second OpenCV image that's the same size:
      cv_img_HSV = cv.CreateImage(cv.GetSize(cv_image), 8, 3)
      and then use this new image to store the image converted to HSV:
      cv.CvtColor(cv_image, cv_img_HSV, cv.CV_RGB2HSV)
      figure out the ranges of HSV values you want to focus on (the following would filter none of the image, so everything would show as white):
      lowerBound = cv.Scalar(0, 0, 0)
      upperBound = cv.Scalar(180, 255, 255)

      create a black and white image that will show only the pixels that aren't filtered out:
      cv_thresh = cv.CreateImage(cv.GetSize(cv_img_HSV), 8, 1)

      and, finally, create a thresholded image that excludes the pixels you don't want:
      cv.InRangeS(cv_img_HSV, lowerBound, upperBound, cv_thresh)
      Now you can display this new image (instead of, or in addition to, the unchanged image). Pixels in white are "active," i.e., are within lowerBound and upperBound, and those in black are not.
  • How should the thresholds be chosen? I suggest you start by going to http://colorizer.org/ and use the color picker to find (roughly) the HSV values that correspond to either the orange or yellow you are trying to identify. Note that this website uses HSV values that range from 0-360, 0-100, and 0-100. However, OpenCV uses ranges of 0-180, 0-255, and 0-255. You'll have to convert the values you find (e.g., an H value of 250 on the colorizer website corresponds to a value of 125 in OpenCV).
  • Once this all works and the HSV values are tuned well (so that the objects are correctly detected), figure out how to output the average x and y coordinates of the identified color.
  • Extra Credit: Draw a line around the region you detect, look at blurring/smoothing the image to make a smoother detected region, or consider what happens if you have multiple recognized colors in a single image and how you could identify them separately.
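If the legacy cv bindings used in the snippets above aren't available on your machine, the InRangeS thresholding step can be sketched with plain numpy on an image that has already been converted to HSV (the function and variable names here are my own, not from the tutorial code):

```python
import numpy as np

def threshold_hsv(hsv_img, lower, upper):
    """Return a binary mask: 255 where every HSV channel of a pixel lies
    within [lower, upper] (inclusive), 0 elsewhere -- the same behavior
    as cv.InRangeS / cv2.inRange."""
    lower = np.asarray(lower, dtype=hsv_img.dtype)
    upper = np.asarray(upper, dtype=hsv_img.dtype)
    in_range = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=-1)
    return (in_range * 255).astype(np.uint8)

# Tiny 1x2 HSV "image": first pixel orange-ish (H=10), second blue-ish (H=120)
hsv = np.array([[[10, 200, 200], [120, 200, 200]]], dtype=np.uint8)
mask = threshold_hsv(hsv, (0, 100, 100), (25, 255, 255))
# mask[0, 0] == 255 (kept), mask[0, 1] == 0 (filtered out)
```

Displaying mask in place of the normal video feed gives exactly the white/black "active pixel" view described above.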
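The colorizer-to-OpenCV range conversion described above is mechanical enough to capture in a small helper (a sketch; the function name is hypothetical):

```python
def colorizer_to_opencv(h, s, v):
    """Convert HSV values from colorizer.org's ranges (H 0-360, S 0-100,
    V 0-100) to OpenCV's 8-bit ranges (H 0-180, S 0-255, V 0-255)."""
    return (round(h / 2.0), round(s * 255 / 100.0), round(v * 255 / 100.0))

colorizer_to_opencv(250, 100, 100)  # -> (125, 255, 255)
```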
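For the average x and y coordinates of the identified color, one possibility (a numpy sketch, assuming the black-and-white thresholded image is available as an array) is to average the coordinates of the nonzero pixels:

```python
import numpy as np

def centroid_of_mask(mask):
    """Average (x, y) pixel coordinates of the active (nonzero) pixels
    in a binary mask; returns None when nothing was detected."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (xs.mean(), ys.mean())

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 1] = mask[1, 3] = 255
centroid_of_mask(mask)  # -> (2.0, 1.0)
```

Note the None case: when the target leaves the frame there are no active pixels, and an unguarded mean would divide by zero.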
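For the extra-credit question about multiple recognized regions in one image, one possible starting point (not the only or fastest way; OpenCV has built-in alternatives such as contour finding) is connected-component labelling of the thresholded mask. This is a plain-Python/numpy BFS sketch:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary mask via BFS flood
    fill. Returns (labels, count): labels[y, x] is 0 for background or
    the id (1..count) of the region containing that pixel."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue  # already assigned to an earlier region
        count += 1
        labels[y, x] = count
        queue = deque([(y, x)])
        while queue:
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    queue.append((ny, nx))
    return labels, count

# Two separated blobs -> two distinct labels
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=np.uint8)
labels, count = label_regions(mask)  # count -> 2
```

Once each region has its own label, you can compute a centroid per label rather than one centroid for the whole mask.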
Turn in
  1. Submit any python code you modified via Angel.
  2. In less than half of one page, in either plain text or .pdf, list:
    • Name of group member(s)
    • High-level approach to the project (as it differed from the suggestions above)
    • What you could have done differently to make things easier on yourselves
    • What extra credit assignments you did, if any
  3. Additionally, please upload a (short) video showing your program working. You can either use screen capture or an external camera (e.g., iPhone, the GoPro3, etc.), but I should be able to see a picture of active/inactive pixels correctly corresponding to the targeted object, as well as the average x and y pixel values of the targeted object. If you did any of the extra credit assignments, please highlight them in the video also.
  4. Extra credit: A 5-10 second video is sufficient to show that your code works. However, if you're able to make a 1-2 minute video that's informative and interesting (e.g., explaining what you did and how you did it to another computer science major), you will get some extra credit.