The Tumbleweed robot

Introduction

This page is dedicated to my first robot, Tumbleweed. It is a wheeled robot based on a Stampede R/C car from Traxxas. The goal is similar in spirit to the DARPA Grand Challenge or the RoboMagellan competition: navigation along a pre-determined course or to a specified goal, recognizing and avoiding obstacles along the way. Of course the Grand Challenge is much more demanding on the hardware side and requires an off-road vehicle of some kind, but I wasn't particularly interested in that aspect of the problem. Read: I have neither the time nor the money, let alone the crew, to even consider entering the real Grand Challenge. So I've scaled the problem down to a level where many things are readily available without spending thousands of dollars.

I'm planning on using vision as the main sensory input. I might integrate a GPS receiver as well, but that won't work indoors, which happens to be one of the most convenient playgrounds for this hobby.

Current status

I'm trying to make the whole robot as flexible and modular as possible. I know I will learn a lot along the way, and I don't want to be restricted by unnecessary constraints of the platform. That means that many parts of the system can be replaced with more or less capable components should the requirements change.

R/C car body

The base of the robot is a modified R/C car from Traxxas. I chose this platform because it was relatively cheap (around $100) and has an easily accessible power-train gear for the slipper clutch:

The original plan was to mount an optical encoder on this gear and use it for speed feedback. I still might end up doing that, but for now I'm using back-EMF measurement for the same purpose.
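
To give an idea of how back-EMF sensing works, here is a minimal sketch of the math (the real measurement lives inside the H-bridge module's firmware, discussed below). During the PWM off-time the motor is momentarily undriven, so the voltage across its terminals approximates the back-EMF, which is proportional to shaft speed. Both constants below are made-up example values, not measurements:

    #include <cstdio>

    // Sketch of back-EMF speed estimation; the motor constant Ke and the
    // sampled voltage are assumed example numbers.
    double speedRpm(double bemfVolts, double keVoltsPerKrpm)
    {
        return bemfVolts / keVoltsPerKrpm * 1000.0;
    }

    int main()
    {
        const double ke   = 0.6;    // assumed motor constant [V per 1000 RPM]
        const double bemf = 4.2;    // assumed sampled terminal voltage [V]
        std::printf("estimated speed: %.0f RPM\n", speedRpm(bemf, ke));
        return 0;
    }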

Another positive thing about this car is that the gear ratio is quite flexible, which means that I could gear the car down further. One of the main problems with using R/C cars as robot platforms is that they are usually built for speed (sometimes up to 40mph (65km/h)!), while a robot will typically go one or two orders of magnitude slower. The flexible gear ratio helps somewhat there.
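
A quick back-of-the-envelope calculation shows why gearing matters so much. All three numbers below are illustrative guesses, not measurements from the actual car:

    #include <cstdio>

    // Ground speed from motor RPM, overall reduction and wheel diameter.
    // All values are assumed for illustration.
    int main()
    {
        const double pi        = 3.14159265358979;
        const double motorRpm  = 15000.0;   // assumed motor speed
        const double gearRatio = 20.0;      // assumed overall motor:wheel ratio
        const double wheelDiaM = 0.10;      // assumed wheel diameter [m]
        const double speedMps  = motorRpm / gearRatio / 60.0 * pi * wheelDiaM;
        std::printf("ground speed: %.1f m/s (%.1f km/h)\n",
                    speedMps, speedMps * 3.6);
        return 0;
    }

Doubling the reduction halves the top speed and roughly doubles the available wheel torque, which is usually the better trade for a robot.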

These cars usually use a standard 540-size motor, which is available in many speed and power combinations. I bought a couple from All Electronics to experiment with. I finally ended up using a 12V motor instead of the original 6V one.

Batteries

The original R/C car runs from a single 7.2V six-cell NiMH or NiCad battery pack. Those packs are currently available in capacities up to 3600mAh. However, the battery holder of the car can accommodate a second battery pack of the same size. This gives me the option of 7.2V or 14.4V power at 3600mAh. In other words, I have either about 26Wh or 52Wh of energy to burn (7.2V x 3.6Ah is roughly 26Wh).

Power supply

I didn't want to lock myself into either of the above-mentioned options, which meant that all the electronics on board should be able to run from either 7.2V or 14.4V. Since, at least for starters, I wanted to use a PC as the central brain of the robot, I needed a DC-DC ATX power supply that can work from either of these voltages. Most available power supplies require 12V on their input, and while they would most likely work off of 14.4V without any damage, 7.2V is way off spec. I finally found one power supply with a wide enough input range to be usable: the M1-ATX power supply from mini-box. Apart from powering the motherboard, this power supply powers the hard drive and all the other electronics on the robot (the servo controller and the logic portion of the H-bridge). The power portion of the H-bridge is of course powered directly from the batteries. Thanks to the wide input range of the power supply, the transients that the motor drive puts on the main power lines are kept away from the sensitive digital electronics. This makes a separate battery for the motors and the electronics unnecessary. Unfortunately, the steering servo currently works off of the same 5V supply as the rest of the electronics. That's not the best solution and I'm thinking of ways to change it, but so far it hasn't caused any problems.

Central brain

As I mentioned earlier, the central processing power currently comes from a PC. It is a 1GHz Pentium III industrial PC that I bought on eBay. It is the same size as a CD-ROM drive and fits well on top of my R/C car. The board seems to be rather popular and is available from multiple sources, one of which is iBT Technologies. The board has a single PCI slot that I will be using later on. It also has all the usual PC peripherals and some unusual ones, like interfaces for smart-card readers and flat panels. The board is based on the Intel i815e chipset and an it8712f peripheral controller. This chip will be important later on. Its special feature is that many of its pins can be switched over from their primary functions to an alternative GPIO mode. In that mode those pins can be used as general-purpose input or output pins. Since a robot computer generally needs some special interfaces to the outside world, I was very happy to discover this functionality and the fact that it can easily be used from an application.

Of course there are downsides to using a PC as a robotic brain as well. The most significant one is power consumption; next in line are size and weight. My experiments show that the board consumes 35-45W of power depending on load. That is a lot. Above I mentioned that I have about 52Wh of energy to burn if I use two NiMH battery packs. This means that the PC alone would run the batteries dry in about an hour. And of course we also have to budget for the motors, sensors and all sorts of other things that tap the battery. All in all, this limits the runtime of my robot to about 30-45 minutes. I can live with that for the first trials, but later on I will definitely want to think about less power-hungry solutions.

You can see the motherboard, the hard drive and the power supply mounted on the main deck of the robot here:

Data storage

That is rather simple once you have decided to stick a PC in the system. Right now I'm using a laptop hard drive to store all the data I need; later on I'll probably go for a CompactFlash solution. Those cards can emulate an IDE hard drive and can be attached directly to the IDE port of the motherboard. They have no moving parts, which is good for a mobile robot, and they consume significantly less power, but they have limited write endurance (on the order of 10-100 thousand writes per sector), which makes them impractical for development.

Communication

That's another easy task if you use a PC: for high-speed communication, when mobility is not a concern, I just use the Ethernet connector, and when the platform is moving, I stick in a USB wireless Ethernet dongle and connect to the computer that way. I currently have an 802.11b interface, which is not too fast but does the job. If bandwidth ever becomes a problem, the 802.11g standard would be an easy step up.

One feature of these wireless interfaces is their rather limited range. In this case, however, that comes in handy. If I make the robot stop when it loses its connection to the controlling station (a laptop, for example), it essentially prevents the thing from getting loose and doing damage.
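
The fail-safe logic itself is simple. Here is a minimal sketch of the idea; packetReceived() and stopMotors() are placeholders for the real wireless and motor-control code, and the 500ms timeout is an arbitrary example value:

    #include <chrono>
    #include <cstdio>
    #include <thread>

    static bool packetReceived() { return false; }        // placeholder
    static void stopMotors()     { std::puts("STOP"); }   // placeholder

    int main()
    {
        using clock = std::chrono::steady_clock;
        auto lastHeard = clock::now();
        for (;;) {
            if (packetReceived())
                lastHeard = clock::now();                 // link is alive
            if (clock::now() - lastHeard > std::chrono::milliseconds(500))
                stopMotors();                             // link lost: halt
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    }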

Operating system

Right now the OS is simply Windows XP installed on the hard drive. This solution has many benefits for rapid development, but of course it's not the best solution once the platform is more or less ready. It's large, and it can't really work off of a CompactFlash card (too many write transactions). On the other hand it makes development so much easier: I can have the source and the compiler on board, I can access the machine through Terminal Services, I have the best driver support, and so on. However, I also keep an eye on other solutions. I might give Linux, Windows CE, Windows XP Embedded or even QNX a try. You can spend countless hours just testing and evaluating the various OSes for your problem, but all that isn't what's interesting. The real challenge is to solve the navigation, localization, image recognition and other related robotics problems, not to select the 'perfect OS'. I guess that means that at least for a while I will stick with XP, since that's what I'm most familiar with.

Motion control

Starting from an R/C car base, the robot has some uncommon features: one main motor that propels the vehicle forward and one servo that controls steering. It turns like a car, not like a tank as many common robot designs do. In other words, it uses Ackermann steering. This is an interesting problem in its own right, because the robot cannot turn in place. It also means that the robot can get into a situation from which it has to back out, since no forward turning route exists.
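
The simplest way to quantify this is the bicycle-model approximation of Ackermann steering: the turning radius is R = L / tan(delta), which never reaches zero for any realistic steering angle. The wheelbase and maximum steering angle below are rough guesses for a 1/10-scale car, not measured values:

    #include <cmath>
    #include <cstdio>

    // Minimum turning radius of an Ackermann-steered vehicle, bicycle model.
    int main()
    {
        const double pi           = 3.14159265358979;
        const double wheelbaseM   = 0.27;    // assumed wheelbase [m]
        const double maxSteerDeg  = 30.0;    // assumed max steering angle
        const double r = wheelbaseM / std::tan(maxSteerDeg * pi / 180.0);
        std::printf("minimum turning radius: %.2f m\n", r);
        return 0;
    }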

The two motors need very different drive. The main motor, as discussed above, is a standard 540-size DC motor. Its no-load RPM is in the 10000-20000 range depending on the model, and its stall (short-circuit) current can easily be well over 20A. This beast requires a good H-bridge drive. I'm using my own design from the H-Storm project, the H-Bridge µModule. It is quite an intelligent H-bridge, with over-current protection, an integrated PID loop and back-EMF measurement. It even supports distance measurement and 'go-to-distance' functionality with trapezoid speed profiles. The controller interfaces through a TWI (a.k.a. I2C) bus and supports run-time address assignment through a plug-and-play discovery protocol.
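
The trapezoid speed profile deserves a quick illustration. The sketch below shows the core math only, not the module's actual firmware: the commanded speed at every point is the lowest of the cruise speed, the speed the acceleration ramp allows, and the speed from which the motor can still brake to zero at the target. All limits are made-up numbers:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Core of a trapezoid speed profile (illustration, not the µModule code).
    double profileSpeed(double traveled, double total, double vmax, double accel)
    {
        // speed allowed by the acceleration ramp from standstill
        double up   = std::sqrt(std::max(0.0, 2.0 * accel * traveled));
        // speed from which we can still brake to zero before the target
        double down = std::sqrt(std::max(0.0, 2.0 * accel * (total - traveled)));
        return std::min({vmax, up, down});
    }

    int main()
    {
        for (double x = 0.0; x <= 2.0; x += 0.25)   // 2 m move, made-up limits
            std::printf("x=%.2f m  v=%.2f m/s\n",
                        x, profileSpeed(x, 2.0, 1.0, 1.0));
        return 0;
    }

(In a real controller the profile needs a small minimum speed, otherwise it would never leave the starting point, where the ramp allows exactly zero.)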

The H-Bridge mounted on the lower deck of the robot, between the wheels, looks like this:
You can see the thick black and red power cables that carry the power to the switching transistors and then from them to the motor in the back. The MOSFETs themselves can be seen to the left of those wires. The white telephone wire in the front delivers power to the logic portion of the circuit and also connects the I2C interface to the host computer. The second phone-jack connector allows daisy-chaining multiple modules on a single bus.

The other motor, the one that controls the steering, is a standard R/C servo motor. It has all the drive electronics built in, and all it needs is a so-called pulse train to set its target position. If you don't know anything about these servos, this link (http://www.epanorama.net/documents/motor/rcservos.html) might help you understand the basics. To control the steering servo I again used one of my boards from the H-Storm project, the 32-channel Servo Controller µModule. Right now I control only a single servo, so it's way more than I need, but for one thing it's not any more difficult to build a 32-channel controller than a single-channel one, and for another I think I will need more servos later on for other functions. This module interfaces through the same I2C bus and supports the same plug-and-play protocol for discovery and address assignment. Both modules attach to the same bus.
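
The pulse train is easy to generate once you know the convention: typical servos expect a pulse of roughly 1.0-2.0ms repeated every 20ms, with 1.5ms meaning 'center'. Here is a small sketch of the mapping; the [-1..+1] command range is my own convention, not part of any servo standard:

    #include <cstdio>

    // Map a normalized steering command to an R/C servo pulse width.
    unsigned pulseWidthUs(double cmd)
    {
        if (cmd < -1.0) cmd = -1.0;    // clamp the command
        if (cmd >  1.0) cmd =  1.0;
        return static_cast<unsigned>(1500.0 + 500.0 * cmd);  // 1000..2000 us
    }

    int main()
    {
        std::printf("left: %u us, center: %u us, right: %u us\n",
                    pulseWidthUs(-1.0), pulseWidthUs(0.0), pulseWidthUs(1.0));
        return 0;
    }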

In this picture you can see the Servo Controller mounted on the lower deck of the robot, opposite the H-Bridge:
Here you can see two white telephone wires. That's how the two modules are daisy-chained together. You can also see the black and red wires that deliver power to the servo and the three-colored ribbon cable that connects the servo to the controller board. In the background you can see one of the battery packs placed sideways in the original battery bay of the R/C car.

Both modules are capable of running from either a 3.3V or a 5V power supply. Since the controlling unit is a PC at the moment, I chose the 5V supply. Later on, if a lower-power solution is used, they can be switched to 3.3V.

The control bus

The I2C control bus has a single master, which at the moment is the PC. I considered using the LPT port on the motherboard to connect to the I2C bus of the µModules through something like this, but then I had a better idea. The peripheral controller on the motherboard, the it8712f, has a number of pins that can be turned into GPIO lines. It also has a smart-card interface that is brought out to an 8-pin header on the board, with power and ground lines as well. I decided to use two of those pins to control my modules. The solution works well, and due to some magic in the it8712f chip, even the rising edges are surprisingly clean. One problem with the I2C bus is that it is actively driven to the low level but only pulled high by a resistor. While this arrangement provides many unique features of the bus, like multi-master mode and clock-stretching, it also limits the speed of the bus and makes rising edges rather slow (the rise time is basically determined by the aggregate capacitance of the bus line and the pull-up resistor). The it8712f seems to pull its pins high actively for a moment when it releases the bus, effectively making the transient very clean. This might cause some problems in multi-master mode, but so far I haven't seen any issues with it.
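
For the curious, here is a sketch of how a single-master I2C byte write is bit-banged over two GPIO lines, which is the same scheme used here over the it8712f pins. The four small functions at the top are placeholders for the chip-specific GPIO code: 'false' drives the open-drain line low, 'true' releases it to the pull-up resistor.

    #include <cstdio>

    static void setSDA(bool) {}                 // placeholder GPIO write
    static void setSCL(bool) {}                 // placeholder GPIO write
    static bool getSDA() { return false; }      // placeholder: pretend ACK
    static void i2cDelay() {}                   // placeholder half-bit delay

    static void i2cStart()                      // SDA falls while SCL is high
    {
        setSDA(true);  setSCL(true);  i2cDelay();
        setSDA(false); i2cDelay();
        setSCL(false);
    }

    static bool i2cWriteByte(unsigned char b)   // true if the slave ACKed
    {
        for (int i = 7; i >= 0; --i) {          // MSB first
            setSDA(((b >> i) & 1) != 0);
            setSCL(true);  i2cDelay();
            setSCL(false); i2cDelay();
        }
        setSDA(true);                           // release SDA for the ACK bit
        setSCL(true);  i2cDelay();
        bool ack = !getSDA();                   // slave pulls SDA low to ACK
        setSCL(false);
        return ack;
    }

    static void i2cStop()                       // SDA rises while SCL is high
    {
        setSDA(false); i2cDelay();
        setSCL(true);  i2cDelay();
        setSDA(true);
    }

    int main()
    {
        i2cStart();
        bool ack = i2cWriteByte(0x40);          // example address byte only
        i2cStop();
        std::printf("slave %s\n", ack ? "ACKed" : "did not answer");
        return 0;
    }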

This image illustrates how the control bus is connected to the motherboard.
The two red-circled areas are the two ends of the converter cable. One end connects to the smart-card reader connector on the motherboard, while the other end is a phone jack that is the starting point of the daisy-chained control bus. In the middle you can see a portion of the power supply. The small breadboard circuit on the left contains a small power-distribution network. It allows the motherboard to be powered from an external 12V power supply (that's how it's pictured here) or from the batteries. This is convenient for development work, when the robot doesn't need to move around. It also contains an ATX power button for added convenience; the motherboard can be configured to turn on automatically when power is applied, so this button is not absolutely necessary.

Future work

Now that the base is more or less done, I can concentrate on adding sensors and, later on, intelligence. As I said before, my plan is to rely heavily on stereo vision as the primary sensory input. For that I will use the H-Storm hcPCI PCI interface card with a special FPGA daughtercard to capture the signals of two image sensors simultaneously. This setup will allow me to keep the left and right images from the two cameras in tight sync, which is important when the camera platform is moving. The cameras themselves will be based on the MT9M001 sensor from Micron. Information on it is no longer available from the website, but it is very similar to the MT9M011 sensor, only somewhat faster. It is a 1.3-megapixel color CMOS sensor. I'm planning on using it at 256x256 resolution and in monochrome, which gives me a 2x digital zoom (not 4x, due to the way pixels are counted in these sensors: a pixel is either red, green or blue, not all three together as on displays). It also gives me digital panning, which I hope to use with a wide-angle lens. The data from these sensors will arrive at the FPGA, which will do some pre-processing on them (I have to figure out exactly what) and make the results available to the PC through the PCI interface. The bandwidth of the interface will be in the 60MB/s range. This will not be enough for high-speed uncompressed video at full resolution, but a high frame rate is not my initial goal.
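
To put the 60MB/s figure in perspective, here is a quick calculation. The 30fps frame rate is an assumption; the sensor's real timing depends on how it is configured:

    #include <cstdio>

    // Sanity check on the PCI bandwidth claim, at an assumed 30 fps.
    int main()
    {
        const double fps    = 30.0;
        const double full   = 1280.0 * 1024.0;  // full-res Bayer, 1 byte/pixel
        const double scaled = 256.0 * 256.0;    // 256x256 monochrome window
        std::printf("2 cameras, full res: %.1f MB/s\n", 2.0 * full   * fps / 1e6);
        std::printf("2 cameras, 256x256:  %.1f MB/s\n", 2.0 * scaled * fps / 1e6);
        return 0;
    }

Two full-resolution streams would need almost 80MB/s, while the planned 256x256 windows need only a few MB/s, so there is plenty of headroom.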

The stereo image pairs will be processed into a disparity map. This is a 2-D image where every pixel represents how far apart the left and right images are at that location. In other words, if I take the left image and shift every pixel in it left or right by the amount in the corresponding disparity-map pixel, I should get the right image. The information in the disparity map can easily be converted into an elevation map using the geometry of the cameras and their optics: the base distance, the viewing directions, the focal length and so on. This is the first step where the 3D information (distance from the cameras) becomes explicit. This map can then be converted into a top-down view of what the cameras see. This converted elevation map can then be stitched into the existing world model. At this step we can correct the world model with the new information we've learned and at the same time find the robot's location and orientation inside that world model. For that, clues can be used from the previous location plus the travelled distance, and from the optional GPS readings.
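
The core of the disparity-to-distance step is a single formula: for a rectified stereo pair with baseline B and focal length f (measured in pixels), a disparity of d pixels corresponds to a depth of Z = f * B / d. The camera parameters below are made-up examples, not the actual rig:

    #include <cstdio>

    // Depth from disparity for a rectified stereo pair (assumed parameters).
    int main()
    {
        const double baselineM = 0.12;    // assumed camera separation [m]
        const double focalPx   = 300.0;   // assumed focal length [pixels]
        for (int d = 1; d <= 16; d *= 2)
            std::printf("disparity %2d px -> depth %.2f m\n",
                        d, focalPx * baselineM / d);
        return 0;
    }

Note how the resolution degrades with distance: one pixel of disparity covers a huge depth range far away, but a small one up close.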

Once this updated world model is created and the robot is localized within it, the route to the target can be planned. During this planning, obstacles can be identified and avoided. Deciding what counts as an obstacle is actually not an easy problem, but in general anything that the robot cannot pass for some reason, for example because the terrain is too rough or the slope is too steep, should be marked as an obstacle.
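
A minimal sketch of the 'too steep' test on an elevation map might look like the following: a cell becomes an obstacle if the rise to a neighboring cell exceeds what the robot can climb. The grid size, cell size, slope limit and the bump are all invented numbers, and a full implementation would check all four neighbors rather than just the right/down edges tested here:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int    n        = 8;       // assumed 8x8 elevation grid
        const double cellM    = 0.10;    // assumed cell size [m]
        const double maxSlope = 0.35;    // assumed climbable rise/run

        std::vector<std::vector<double>> h(n, std::vector<double>(n, 0.0));
        h[3][4] = 0.2;                   // a 20 cm bump for illustration

        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                bool steep = false;
                if (x + 1 < n &&
                    std::fabs(h[y][x + 1] - h[y][x]) / cellM > maxSlope)
                    steep = true;        // edge to the right is too steep
                if (y + 1 < n &&
                    std::fabs(h[y + 1][x] - h[y][x]) / cellM > maxSlope)
                    steep = true;        // edge below is too steep
                if (steep)
                    std::printf("obstacle at (%d,%d)\n", x, y);
            }
        return 0;
    }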

After a plan is created for the move, the robot starts moving and the next iteration of image acquisition and processing starts. In theory the robot never follows the planned route for very long: an updated plan should be ready much sooner than the robot reaches the end of the current one.

Well, that's more or less the high-level plan for the future of Tumbleweed. Of course there are many details to this, both known and unknown, that I will discover along the way.

Additional images

Tumbleweed poses for another picture:

Front, back and side images of the robot:


© 2004 Andras Tantos