The objective of this lab is to practice kinematic control of the Turtlebot and to better understand vision processing on the Turtlebot. You are welcome to copy code from the internet, but make sure to credit where it came from. You can work in teams if you'd like.
Upon successful completion of this lab, you will be able to:
- Have the simulated turtlebot trace a square under open-loop control
- Have the simulated turtlebot identify and respond to different objects in the environment
To help you get started, James has provided a few files:
In addition, note that there is no way to directly control the wheel velocities. I suggest you:
1. Do the math to figure out the desired velocities / time
2. Create a twist message based on this
3. Publish the twist message and stop after a certain amount of time.
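For step 1, the arithmetic is simple once you pick a linear and an angular speed. A minimal sketch of the time calculation (the speeds below are arbitrary example choices, not required values, and the wheel-level inverse kinematics is deliberately left for you to work out):

```python
import math

def drive_times(edge_len_m=2.0, lin_vel=0.2, ang_vel=0.5):
    """Return (edge_time, turn_time) in seconds for a square path.

    edge_len_m: side length of the square
    lin_vel:    chosen forward speed in m/s (example value; pick your own)
    ang_vel:    chosen turn rate in rad/s (example value; pick your own)
    """
    edge_time = edge_len_m / lin_vel      # time = distance / speed
    turn_time = (math.pi / 2) / ang_vel   # a 90-degree corner turn
    return edge_time, turn_time

print(drive_times())  # (10.0, 3.141592653589793)
```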
To show me that you understand the inverse kinematics of the turtlebot, I suggest you do something like the following:
def spinWheels(u1, u2, time):
    linear_vel = (???) * (u1 + u2)
    ang_vel = (???) * (u1 - u2)
    twist_msg = Twist()
    twist_msg.linear.x = linear_vel
    twist_msg.angular.z = ang_vel
    # while we haven't reached `time`:
    #     publish twist_msg
    # publish a new twist message to make the robot stop
Where the "???" means you need to enter *something* (I don't want to give everything away). It could be a formula or a variable, and they'll be different in these two lines.
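The publish-then-stop pattern in the comments above can be sketched without ROS by standing in a plain Python callable for the publisher. On the real robot you would use `rospy.Publisher`, `rospy.Rate`, and `rospy.Time.now()` instead of the `time` module; the `Twist` class and `publish_for` function here are illustrative stand-ins, not rospy APIs:

```python
import time

class Twist:
    """Stand-in for geometry_msgs.msg.Twist (linear.x / angular.z only)."""
    def __init__(self, linear_x=0.0, angular_z=0.0):
        self.linear_x = linear_x
        self.angular_z = angular_z

def publish_for(publish, twist, duration, rate_hz=10):
    """Publish `twist` at rate_hz for `duration` seconds, then a stop message."""
    deadline = time.monotonic() + duration
    period = 1.0 / rate_hz
    while time.monotonic() < deadline:
        publish(twist)
        time.sleep(period)
    publish(Twist())  # all-zero velocities = stop

# usage: collect messages in a list instead of sending them to a topic
sent = []
publish_for(sent.append, Twist(linear_x=0.2), duration=0.3)
```

The important part is the final stop message: if you never publish zero velocities, the simulated robot keeps moving after your function returns.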
The goal of this lab is for you to get a better understanding of controlling the turtlebot and of vision processing. If you're stuck on something, don't spend hours banging your head against it - post to Piazza! James has told me that people are asking him questions (which is good!) but I'm not seeing much activity on the message board.
- Your goal is to have the turtlebot drive in a square that is 2m per side. First, use inverse kinematics to figure out the wheel speeds and durations you would need for (a) a turn and (b) an edge. You may be able to look up the relevant measurements online, or you can fall back on using a tape measure.
- Have your turtlebot trace out the square as calculated above. After the turtlebot finishes, how close did it come to ending at the correct location?
- Place an object in the environment (I'd recommend a sphere). Use OpenCV to find the center of the sphere in the image. Write a reactive controller that can drive the robot towards the sphere.
- Enhance your code so that the turtlebot stops some known distance from the sphere (e.g., by counting the number of pixels wide the sphere is).
- By placing four objects in the environment, write a reactive controller so that the turtlebot traces out a square, 3m per side. How far away from the start point does the turtlebot end up? Is this better or worse than when you used inverse kinematics, and why?
- For extra credit, as the turtlebot drives towards an object, display the turtlebot's view of the object. Draw the midpoint of the object on this image and update it as the turtlebot moves. Also add some text saying 1) whether the robot is to the right, to the left, or centered on the object and 2) the estimated distance from the object.
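For the vision tasks above, the two quantities you need from each frame are the blob's centroid (steer based on its horizontal offset from the image center) and its pixel width (to estimate range). A NumPy-only sketch of the math; in the lab you would get the mask from something like `cv2.inRange` on a camera frame, and `KNOWN_WIDTH_M` / `FOCAL_PX` are made-up calibration values you must measure for your own setup:

```python
import numpy as np

KNOWN_WIDTH_M = 0.3   # assumed real sphere diameter -- measure yours
FOCAL_PX = 500.0      # assumed focal length in pixels -- calibrate yours

def blob_centroid_and_width(mask):
    """mask: 2-D boolean array, True where the object's color was detected.
    Returns ((row, col) centroid, pixel width), or None if nothing detected."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    centroid = (ys.mean(), xs.mean())
    width_px = xs.max() - xs.min() + 1
    return centroid, width_px

def estimate_distance(width_px):
    """Pinhole model: distance = focal_length * real_width / pixel_width."""
    return FOCAL_PX * KNOWN_WIDTH_M / width_px

# synthetic frame: a 20-px-wide square blob centered at (49.5, 59.5)
mask = np.zeros((100, 120), dtype=bool)
mask[40:60, 50:70] = True
(cy, cx), w = blob_centroid_and_width(mask)
print(cy, cx, w, estimate_distance(w))  # 49.5 59.5 20 7.5
```

Comparing `cx` against half the image width also tells you whether the robot is left of, right of, or centered on the object, which is exactly what the extra-credit overlay needs.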
In Blackboard, every member should submit a text file. If you work in a team, one member of the team should submit the following. (If you don't work in a team, you should still submit the following yourself.) In a text or PDF file:
- Your name
- The name(s) of any teammates, if any
- The name of the one teammember who is submitting the code and video (if working in a team)
- What you thought the hardest part of the assignment was
- Any on-line references / websites you found particularly useful. Include the addresses of pages you copied code from, if any.
- The answers regarding how close to the start state the turtlebot ended for both controllers, and your speculations about why one is better than the other.
- A link to the video or screen capture of your turtlebot tracing the square via inverse kinematics and via a reactive vision controller.
- The calculations you performed for the inverse kinematics.
- The world file you used.
- The code you wrote for the assignment and any instructions needed to execute it.
- Text file content: 10
- Inverse kinematics calculations: 10
- Code for kinematics-based driver: 20
- Code and world file for vision-based driver: 30
- Videos of both squares: 30
- Extra Credit: 20