Lab #3: Vision and Control

Introduction

The main objective of this lab is to leverage code from the previous lab to control an ARDrone. This lab can be completed in groups of 1-3 people, but I recommend groups of 2.

Objectives

Upon successful completion of this lab, you will be able to:

  1. Successfully control the ARDrone using ROS.
  2. Estimate distances from the ARDrone based on landmarks.
  3. Demonstrate basic reactive control.
Warning

The drones can be fickle. I suggest you keep the code so that spacebar will do an emergency stop. You really don't want to hit something/someone. If a drone breaks, we'll be able to fix it, but I'd prefer to keep them intact! Please always use the foam bodies on the drones when flying them. This will make them less likely to break if you hit something, and easier to grab. Remember that you can always turn off the drone by grabbing it and turning it upside down --- just don't grab the rotor!
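One simple way to guarantee the spacebar always triggers an emergency stop is to check for it before any other key binding. The sketch below is hypothetical (the function and binding names are mine, not part of the lab code); in a real ROS node, the returned action would be turned into a publish on the corresponding topic (e.g., an `Empty` message to `/ardrone/reset` in ardrone_autonomy -- verify the topic names against your driver).

```python
def action_for_key(key):
    """Map a key press to a drone action.

    Spacebar is checked first, so the emergency stop always wins
    no matter what other bindings are added later.
    """
    if key == ' ':            # spacebar: cut the motors immediately
        return 'reset'
    bindings = {
        't': 'takeoff',
        'l': 'land',
    }
    return bindings.get(key, 'none')
```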

Assignment
  • By the end of this lab, your robot should:
    1. Take off at some point within 10 feet of a target (the yellow box, markers on the middle door, or another of your choosing).
    2. Rotate until you find your target.
    3. Move towards the target.
    4. Land three feet (horizontally) from the target location.
  • To get credit, please submit all modified code and a video via Angel.
  • To get full credit, your video should show your robot landing 1-5 feet from the target (you can show a tape measure in the video to verify this).
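The four steps above can be structured as a tiny state machine. Here is a hedged sketch in plain Python; it assumes your vision code (from lab 2 or the built-in tag detection) reports whether the target is visible plus a rough distance estimate, and all of the names and the command vocabulary are hypothetical. The returned command would be translated into actual velocity/land messages in your ROS node.

```python
SEARCH, APPROACH, LAND = 'search', 'approach', 'land'

def next_command(state, target_visible, distance_ft, stop_distance_ft=3.0):
    """Return (new_state, command) for one control step."""
    if state == SEARCH:
        if target_visible:
            return APPROACH, 'forward'
        return SEARCH, 'rotate'        # spin in place until the target appears
    if state == APPROACH:
        if not target_visible:
            return SEARCH, 'rotate'    # lost the target (e.g., after being "kidnapped")
        if distance_ft <= stop_distance_ft:
            return LAND, 'land'        # close enough: land
        return APPROACH, 'forward'
    return LAND, 'hover'               # already landing; do nothing
```

Note that dropping back to the SEARCH state whenever the target is lost gives you the "kidnap" extra credit almost for free.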
Turn in
  1. Submit any python code you modified via Angel.
  2. In less than half of one page, in either plain text or .pdf, list:
    • Name of group member(s)
    • High-level approach to the project
    • What you could have done differently to make things easier on yourselves
    • What extra credit assignments you did, if any
    • A video showing a successful run, including any extra credit parts.
  3. Possible extra credit:
    • Show that your robot can successfully find the target, and land in front of it, from multiple starting positions.
    • Show that you can "kidnap" your robot. Once the robot has located the target, grab it with your hand and move its position/angle so that it has to re-identify the target.
    • A 5-10 second video is sufficient to show that your code works. However, if you're able to make a 1-2 minute video that's informative and interesting (e.g., explaining what you did and how you did it to another computer science major), you will get some extra credit.
    • There will also be an in-class competition. The person who gets closest to the correct landing position wins!


Update (9/18/13)
  1. It looks like lab 3 was a bigger jump in difficulty than I intended. In particular, people are having more trouble with the control than I had anticipated. Because of this, I've **pushed back the deadline by one week**. If you are able to turn in Lab 3 by the original due date, you'll get a non-trivial number of bonus points. However, you'll be better off turning in a fully working project next week than a "sort of kind of" working project tomorrow.
  2. This lab focuses on control, not vision. If you're not happy with your lab 2 code, you may also consider using the drone's built-in tag detection. I hadn't noticed this before, but this later tutorial http://robohub.org/up-and-flying-with-the-ar-drone-and-ros-handling-feedback/ mentions using keyboard_controller_with_tags.launch, which you may find useful, as well as its discussion of controlling the drone.
  3. If you haven't played with rosbag (http://wiki.ros.org/rosbag), it, along with Faustino's recorded bag file (see Piazza), may be useful. In particular, playing back a recorded bag lets you make sure your vision is working without worrying about controlling the drone. If your vision doesn't work, it doesn't matter how fancy your control code is!
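For the control side mentioned in point 2, a simple proportional controller on the tag's horizontal offset is often enough to rotate toward the target. The sketch below assumes the tag's x position arrives in normalized image coordinates from 0 to 1000 with 500 at center (as ardrone_autonomy's navdata tag fields report -- verify against your driver); the gain and clamp values are guesses you will need to tune.

```python
def yaw_command(tag_xc, center=500.0, gain=0.002, max_rate=1.0):
    """Return an angular.z rate that turns the drone toward the tag.

    A tag left of center (tag_xc < center) yields a positive rate,
    i.e., a counterclockwise (leftward) turn in ROS conventions.
    The result is clamped so one noisy detection can't command a
    violent spin.
    """
    error = center - tag_xc
    rate = gain * error
    return max(-max_rate, min(max_rate, rate))
```

In your node, this value would be copied into the `angular.z` field of the `Twist` message you publish on `cmd_vel`; a second proportional term on tag size or distance can drive the forward speed the same way.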