Wednesday, June 17, 2015

[HEX] Navigating like A Star

This is the fourth post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click here.

The previous posts have gone through how the hexapod was built, how the different processors communicate with each other, how it is able to walk smoothly, and how it can both keep track of obstacles and where it is relative to those obstacles. In this post, I'll talk about allowing the hexapod to walk around autonomously.

The first question we need to answer is: what do we mean by walking around autonomously? Do we tell the hexapod to walk forward and it does so without help? Do we tell the hexapod to walk to 'the kitchen' and it figures out how to do that? Does the hexapod wake up in the middle of the night to search for a midnight snack? Enabling the hexapod to act out the last two scenarios is well past my skill and interest level, so I'll settle for something a little simpler. My goal is to be able to give the hexapod a location in space (as 2D coordinates) and have it walk there without running into anything. To do this, it needs to use all of the algorithms developed in the previous posts to avoid obstacles, keep track of where it is, and step with smooth confidence.

The work from the previous posts allows for a fair amount of abstraction in solving this problem. From the work on inverse kinematics, we can fully control the motion of the hexapod through two parameters: speed and turning. From the SLAM algorithm, we have a discrete grid of points representing a map of the local environment. Autonomous navigation is then just a problem of deciding where on the map the hexapod should go, and then setting the speed and turning values to make it go that way.


Unlike the problem of Simultaneous Localization and Mapping from the previous post, pathfinding has plenty of easy-to-digest literature floating around. That doesn't excuse me from explaining it, though, so I won't simply point to some code I've written and call it a day. Further, many naive implementations of common pathfinding algorithms produce code that works in theory but doesn't run quickly enough for my relatively simple-minded hexapod.

A classic pathfinding algorithm is Dijkstra's algorithm. In this algorithm, we represent every possible location as a 'node' that is connected to other 'nodes' (in this case, neighboring locations). Each connection between nodes is assigned a distance value, so the algorithm can compute the distance between far-away nodes by summing up the connection distances along a path between the two. Given a start and end node, Dijkstra's algorithm finds the shortest path (series of connections) between them by traversing the network of connected nodes and assigning a distance-from-start value to each. If a node is reached multiple times through different paths, the shortest distance takes precedence. When the target node is reached, we simply trace the network in reverse to find which nodes gave the shortest path from start to end. In this way, Dijkstra's algorithm can find the shortest path from point A to point B (assuming you describe positions in space as a network of connected nodes).
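To make the node-and-connection picture concrete, here's a minimal sketch of Dijkstra's algorithm on a 4-connected grid, where each cell connects to its four neighbors. This is an illustration in Python rather than the hexapod's actual code, and the unit distance per connection is an assumption for simplicity:

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # every connection has unit distance
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # trace the network in reverse to recover the path
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Run on a small grid with a wall across the middle, it returns the detour around the wall's open end.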

Pathfinding on a grid of nodes from point A (green) to point B (red).

In our application, we also want the pathfinding algorithm to avoid sending the hexapod through walls or other obstacles. A simple way to achieve this is to modify the pathfinding algorithm to include a 'cost' of travelling between nodes. This cost is large for connections that land you inside a solid object. We are then not finding the shortest path between points A and B, but instead the path that minimizes distance + cost. Since we are free to pick the cost, we can pick how much the algorithm prefers short distances versus avoiding obstacles.

Pathfinding on a grid with obstacles.

One of the drawbacks of this method is the amount of time needed to find the optimal path. On a large grid, the algorithm spends a significant amount of time exploring the multitude of potential paths that may not provide an optimal result, but must be checked to make sure. A modification to Dijkstra's algorithm improves on this and leads us to the A* algorithm (pronounced A-star). In this version, we first explore the connections that appear to lead closest to the target (by some estimate, such as straight-line distance) before considering any other paths. This is like making a bet that the path between points A and B is the most direct one. If you win, the algorithm finds the path in a very short amount of time. If you lose, the algorithm has to go back and search all of the other, less direct paths. I've decided to go with the A* algorithm, since I know the environment in which the hexapod will navigate is not too maze-like.
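The same grid search can be turned into A* with the obstacle-cost idea folded in: a per-cell cost penalizes paths through or near obstacles, and a straight-line estimate of the remaining distance decides which node to explore next. Again, this is an illustrative Python sketch, not the hexapod's code, and the unit step distance is an assumed simplification:

```python
import heapq
import math

def astar(grid, cost, start, goal):
    """A* on a 4-connected grid.  grid marks hard obstacles (1); cost[r][c]
    is an extra penalty for entering a cell (e.g. high near walls), so the
    search minimizes distance + cost rather than distance alone."""
    rows, cols = len(grid), len(grid[0])
    def h(node):  # straight-line estimate of the remaining distance
        return math.hypot(node[0] - goal[0], node[1] - goal[1])
    g = {start: 0.0}
    prev = {}
    heap = [(h(start), start)]
    while heap:
        f, node = heapq.heappop(heap)
        if node == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[node] + 1.0 + cost[nr][nc]
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (ng + h((nr, nc)), (nr, nc)))
    return None
```

Because the straight-line estimate never overestimates the true remaining cost, the first path found to the target is still an optimal one; the estimate only changes the order in which nodes get explored.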


My implementation of the A* algorithm for the hexapod (which I call "Autonav") can be found here. As with the SLAM algorithm, the actual solver runs in a thread separate from the main thread that controls the hexapod servos. Given a copy of the SLAM map, the current location of the hexapod, and a target location, the solver searches through the map to find a sequence of positions that connects the two locations. Since the SLAM map is updated every few seconds and may contain new obstacles, the Autonav solver needs to be able to solve for a new path every few seconds as well. In my implementation (which is admittedly not very optimized), the solver can search through around 2000 grid points in the 1-2 seconds between SLAM updates. For typical grid setups, this unfortunately only gives a physical search range of a few meters. In order for the Autonav solver to quickly and reliably find paths through the environment, the target position can only be a few meters away. Far-away targets can be reached by splitting the distance up into a series of evenly-spaced waypoints.
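Splitting a far-away target into waypoints is simple enough to sketch directly. The 2 m spacing here is a made-up value standing in for "a few meters":

```python
import math

def make_waypoints(start, target, spacing=2.0):
    """Split the straight line from start to target into evenly-spaced
    waypoints no more than `spacing` meters apart, so each individual
    path solve stays within the solver's short search range."""
    dx, dy = target[0] - start[0], target[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, math.ceil(dist / spacing))
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(1, n + 1)]
```

The hexapod then only ever asks the solver for a path to the next waypoint, never the final target.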

To visualize the path the Autonav solver sends the hexapod along, I've written a short script that parses the logfile from a run and creates a reconstruction of the scene. The render shows the SLAM map (with distance to objects as a smooth gradient) in red, the hexapod location in blue, and the Autonav path in green:

Scale is 5 cm per pixel.

For a given SLAM map, the Autonav solver does a pretty good job finding a path around obstacles. To turn this path into commands for the hexapod to follow, I set the hexapod speed to a constant (0.3) and modify the turning value based on the path. The low resolution of the Autonav path can make the hexapod act a bit jumpy, since the direction of the path only changes in 45 degree increments. To smooth the motion, the hexapod picks a 'tracking point' at least 20 cm ahead of its current position along the Autonav path. It computes the heading to that point and sets the turning parameter proportional to the difference between its current heading and that target heading. In this way, the hexapod continually tracks the Autonav path.
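The tracking-point logic can be sketched as follows. The 20 cm lookahead matches what I described above; the proportional gain is a made-up value, since I'm not quoting the real constant here:

```python
import math

def turning_command(pos, heading, path, lookahead=0.2, gain=1.5):
    """Pick the first path point at least `lookahead` meters from the
    hexapod and set turning proportional to the heading error toward it."""
    # find the tracking point; fall back to the end of the path
    track = path[-1]
    for p in path:
        if math.hypot(p[0] - pos[0], p[1] - pos[1]) >= lookahead:
            track = p
            break
    desired = math.atan2(track[1] - pos[1], track[0] - pos[0])
    # wrap the heading error into [-pi, pi] so we always turn the short way
    error = math.atan2(math.sin(desired - heading),
                       math.cos(desired - heading))
    return gain * error
```

A path point dead ahead gives zero turning; points off to either side give a turning value whose sign steers the hexapod back toward the path.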

To test how well the whole system works, I set up different scenarios for the hexapod to navigate through. After each test, I compiled the reconstruction renders into a movie so I could play back what happened. The first test was for the hexapod to move to a target 1.5 meters in front of it in an empty room, then turn around and go back to the starting point.

From the video, we see that everything seems to be working well. The SLAM algorithm updates the map with LIDAR scans made from new perspectives, the Autonav solver continually updates the intended path, and the hexapod smoothly strolls from point A to point B. Through the eyes of a human observer, this kind of test looks pretty neat:

Once again, everything works pretty smoothly. Of course, not every test I did wound up being such a simple success. Here's an example of a test where the hexapod got turned around due to what I call "SLAM-slip", where the SLAM algorithm fails to properly match new LIDAR scans to the map.

Here's a more complicated test where I added a box along its return path to see if it could adapt to the new environment:

With a bit of tuning, the hexapod was able to successfully navigate around both permanent and temporary obstacles and reach the waypoints I gave it. The biggest issue I found was in timing. By the time the SLAM algorithm updated the map and the Autonav solver found a new path, the hexapod would have moved far enough to invalidate the new solution. Sometimes there appeared to be almost a 5 second delay between useful updates to the hexapod's turning. Considering all of the math the hexapod is trying to run on such low-powered hardware, a 5 second delay isn't too surprising. It's unfortunate that the hardware seems just barely underpowered for the task set before it. A simple solution is to let the hexapod walk slowly, so it gets updated directions before walking too far off course.

UPDATE: more testing, so here's a longer navigation test through my apartment!

Race Prep

At this point, the hexapod is a pretty fleshed-out robot. It can do some basic autonomous navigation through my apartment, given a set of waypoints to follow. While the end result is conceptually fairly simple, the process of getting there has been quite involved. It would be a shame to end the project here, but there isn't a whole lot more I can reasonably do to enhance the hexapod's capabilities at this point.

To give the hexapod a final task, I've entered it into the Sparkfun Autonomous Vehicle Competition (AVC) on 20 June 2015. I attended the previous two AVCs and had a lot of fun watching both the ground and aerial races. The sophistication of the entries in the ground race has been extremely varied, from LEGO cars relying on dead-reckoning to go-karts with RTK GPS. I figure that a LIDAR-equipped hexapod falls comfortably between these two ends.

Unfortunately, there are three main issues with my hexapod that make it woefully underprepared for such a competition. The first is speed. The fastest I've gotten the hexapod to move is around 1.2 feet per second, but the navigation has issues above 0.5 feet per second. Robots in the competition are expected to travel almost 1000 feet in less than 5 minutes, giving an average speed of 3.3 feet per second. So at best, the hexapod will only make it about 20% of the way around before time is called. The second issue is sunlight. The LIDAR unit I'm using is intended for indoor use and can't make reliable measurements in direct sunlight. This means the hexapod might not be able to see any of its surroundings to get a position reference. The third issue is that the course doesn't have many natural obstacles to use for position reference. In previous years, the outer boundary of the course was a chain-link fence, which is likely all but invisible to LIDAR. Even if the LIDAR could work in sunlight, there might not be any objects within range to sense. Even with these significant issues, I'm still going to race the hexapod. It won't win, and it might not even lose gracefully.

One of the requirements for entries into the AVC ground race is that the robot must not only be autonomous, but must start the race with a single button press. So far I've been ssh-ing into the main processor through WiFi to initiate any actions, so I need one last physical addition to the hexapod.

The LED+button board connects to the Due, but since the Due pins are shared with the main cpu, both can access them. The orange LED (#1) blinks to show that the PROG_HWMGR code is running on the Due, and the button below it does nothing. The rest of the LEDs and buttons are controlled by the main cpu when running in fully-autonomous mode. When the code is ready to enable the servos and spin up the LIDAR unit, the white LED (#2) flashes. Pressing the button below it allows the code to continue. The blue LED (#3) flashes when the LIDAR is spun up and the SLAM algorithm is ready to make an initial map. Pressing the button below it starts the integrating process. When the initial integration is done, the green LED (#4) flashes, indicating the hexapod is ready to race. Pressing the final button starts the race, and pressing it again ends the race early (in case of wayward robot).

So with that, the hexapod is built, programmed, and ready to compete. I'll post a write-up of whatever happens during the race sometime next week, along with some pictures of it racing!

Tuesday, June 16, 2015

[HEX] Come on and SLAM

This is the third post on my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click here. In this post, I'll talk about using laser ranging to let the hexapod keep track of where it is relative to its environment.

Robots are Dumb

It's not hard to find examples of robots failing to perform tasks that a human might deem trivial. Even the entrants to the DARPA Robotics Challenge, while amazingly sophisticated in their design, sometimes fail to turn handles, walk through doors, or even just stand upright for very long. If you are unfamiliar with the subtle challenges of robotics, these failures might make you wonder why it takes so much money and time to create such a flawed system. If a computer can calculate the N-millionth digit of pi in a few milliseconds, why can't it handle walking in a straight line? The general answer is that the fundamental differences between the human mind and a computer cpu (neurons vs transistors, programming vs evolution, etc) create vast differences in the intrinsic difficulty of many tasks.

One such task is spatial awareness and localization. Humans are capable of integrating various senses (sight, touch, balance) into a concept of movement and location relative to an environment. To make my hexapod robot capable of autonomous navigation, it also needs to have a sense of location so that it can decide where it needs to go and where it should avoid going.

Arguably the most common way of letting a robot know where it is in the world is GPS. Satellites in orbit around the Earth broadcast their position and the current time, and a GPS receiver picks up these broadcasts and computes its position from the differences in signal timing. A GPS-enabled robot can figure out its location on Earth to within a few feet or better (depending on how much money you spend on a receiver). The biggest issue with GPS is that the receiver needs a clear view of the sky to pick up the signals being beamed down from the satellites. GPS also doesn't give you any information about your surroundings, so it can't help you navigate around obstacles.

For the hexapod, I wanted to avoid using GPS altogether. I chose to use a LIDAR unit for indoor localization, mostly because it seemed like an interesting challenge. LIDAR uses pulses of laser light to measure the distance to objects, much like RADAR uses reflected radio waves. A LIDAR unit contains a laser emitter/detector pair that can be swept across a scene to make measurements at different angles. At each angle, the unit emits a pulse of laser light and looks for a reflected pulse with the detector. The delay between emission and detection (together with the speed of light) gives the distance to the reflecting object at that angle. High-quality LIDAR units (like those used in self-driving vehicles) can quickly give an accurate 3D scan of the surrounding environment.
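The delay-to-distance arithmetic is worth a quick sketch, if only to show the timescales involved. This describes time-of-flight ranging in general, not the specific unit I ended up using:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_delay(delay_s):
    """Distance to the reflecting object: the pulse travels out and back,
    so the one-way distance is half the total light travel distance."""
    return SPEED_OF_LIGHT * delay_s / 2.0
```

A 30 cm return corresponds to a delay of only about 2 nanoseconds, which gives a sense of how fast the timing electronics have to be.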

More Lasers

The LIDAR unit I picked is taken from the XV-11 robotic vacuum cleaner from Neato Robotics. You can find just the LIDAR unit by itself as a replacement item on various sites; I got mine off eBay for around $80. The XV-11 LIDAR unit has a bit of a following in the hacking community, as it offers 360-degree laser ranging at 5 Hz for much cheaper than anyone else. While the unit isn't open source, there are some nice resources online that provide enough documentation to get started. It only scans in a 2D plane, but what you lose in dimensionality you gain in monetary savings.

Laser module in the lower left, sensor and optics on the upper right.

The LIDAR unit has just 6 wires to control the whole thing. Two are connected directly to the motor that spins the unit; two provide power to the laser and the electronics that do the ranging; and the last two provide a serial connection to the LIDAR's onboard processor. Upon powering that processor, the following text barrels down the data line at 115200 baud:

Piccolo Laser Distance Scanner
Copyright (c) 2009-2011 Neato Robotics, Inc.
All Rights Reserved

Loader V2.5.14010
CPU F2802x/c000
Serial WTD38511AA-0056881
LastCal [5371726C]
Runtime V2.6.15295

It's hard not to trust any device that sends a smiley face as an opening line. The introduction message isn't too informative, but the fact that it is sent as plain ASCII is comforting. The unit doesn't send any laser ranging data until it is spinning at close to 5 Hz, and the processor has no way of controlling the spin motor. By applying an average of around 3 V to the motor (I did PWM from a 12 V line), the unit spins up and raw data starts flooding down the line. The resources link above provides documentation for how the data packets are formatted, but the key points are that they contain some number of laser ranging measurements, error codes, and the current spin rate of the unit. This measured spin rate can be fed into a feedback loop for the motor controller so that it stays spinning at a nice constant 5 Hz.
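The feedback loop can be as simple as a proportional-integral controller on the PWM duty cycle. This is a sketch of the idea only; the gains, the 0.25 starting duty (roughly 3 V from a 12 V line), and the class itself are illustrative values, not pulled from my actual Due code:

```python
class SpinController:
    """PI loop: nudge the motor PWM duty cycle so the spin rate reported
    in the LIDAR packets settles at the 5 Hz target."""
    def __init__(self, target_hz=5.0, kp=0.02, ki=0.005, base_duty=0.25):
        self.target = target_hz
        self.kp, self.ki = kp, ki
        self.base = base_duty
        self.integral = 0.0

    def update(self, measured_hz, dt):
        """Call once per spin-rate measurement; returns the new duty cycle."""
        error = self.target - measured_hz
        self.integral += error * dt  # absorbs any steady-state offset
        duty = self.base + self.kp * error + self.ki * self.integral
        return min(1.0, max(0.0, duty))  # clamp to the valid PWM range
```

The integral term is what lets the loop hold 5 Hz even as the supply voltage sags or the bearing friction changes; a pure proportional controller would settle slightly off-target.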

I decided to have the Arduino Due on the hexapod handle communication with the LIDAR unit and keeping the motor spinning at the correct rate. The Due already handles communication between the main cpu and the Arbotix-M, so what's one more device? I soldered up a simple board that included an N-channel MOSFET for PWM control of the motor, and a LM317 voltage regulator to provide the LIDAR processor with around 3.3 V.

Motor connects on the left, LIDAR controller on the right. Bottom connects to Due.

The hexapod kit came with a mounting board for adding accessory hardware, but the mounting holes on the LIDAR didn't match up. I 3D-printed a little bracket to both attach the unit to the hexapod body and provide a little space for the board I had just made.

Credit to my Single Pixel Camera project for making me buy a better printer.

Attached to the top mounting panel of the hexapod.

With the small interface board mounted below the LIDAR unit, I connected everything up to the Due. A PWM line from the Due controls the speed of the motor, and the Serial2 port receives data from the LIDAR processor. The 12 V needed to power the motor and processor comes from whatever source already powers the main UDOO board. In testing, this was either an AC adapter or a 3-cell lipo battery.

I have no idea what I'm testing, but damn does it look technical.

The stream of data from the LIDAR unit is unfortunately pretty non-stop, so I had to write code that parses little bits at a time to make sure none would get lost in the small input buffer of the Due. The Due bundles the data up into packets and sends them off to the main cpu for further processing. To start out, I just plotted the results in gnuplot:

Rounded object at (50,100) is my head.

Each point is a single laser ranging measurement, and they span the full 360-degrees around the unit. A map like this can be made five times a second, allowing for a pretty good update rate.

A Sense of Self

At this point, the hexapod has the ability to continually scan its surroundings with lasers and accurately determine its distance from obstacles in any direction. But we still haven't solved the problem of letting the hexapod know where it is. By looking at the plot above, we can clearly see that it was sitting near the corner of a rectangular room. If we moved the robot a few feet in any direction and looked at the new map, we would be able to see that the robot had moved, and by comparing the two plots in detail we could even measure how far it moved. As humans, we are able to do this by matching similarities between the before and after plots and spotting the differences. This is one of those tasks that is relatively easy for a human to do and very tricky for a computer to do.

Using only the LIDAR scans, we want the hexapod to be able to track its movement within its environment. By matching new scans to previous ones, we can both infer movement relative to the measured obstacles and integrate new information about obstacles measured from the new location. The process of doing so is called Simultaneous Localization and Mapping (SLAM). There are many ways of solving this problem using measurements like the LIDAR scans I have access to. Some methods involve big point clouds, some involve grids. Some are 3D, some are 2D. One of the most common traits of any SLAM algorithm that I've found is that it is complicated enough to scare away amateur robotics enthusiasts. So in keeping to my goal of writing most (if not all) of the software for my hexapod, I set out to write my own algorithm.

My algorithm is not great, but it kind of works. I decided to do a 2D grid-based SLAM algorithm because a) my LIDAR scans are only in 2D, and b) point clouds are hard to work with. As the name suggests, a SLAM algorithm involves solving two problems simultaneously: localization and mapping. My algorithm keeps a map of the surroundings in memory, and given a new LIDAR scan performs two steps: matching the new scan to the existing map and inferring where the scan was measured from; and then adding the new scan to the map to update it with any changes. As the Wikipedia article on the subject suggests, we have a bit of a chicken-and-egg problem, in that you can't localize without an existing map and you can't map without knowing the location. To solve this, I give the hexapod its initial coordinates and let it collect a few scans while standing still to create an initial map. From then on, it can step through the full SLAM algorithm with a map already in place.


For testing, I typically use a map with 128 pixels on a side, where each pixel represents 10 cm. After the initial setup, where the hexapod is allowed to create a map in a known location, we might end up with a map like this:

My 350 by 400 cm workroom. My head is still at (50,100).

The value of each pixel lies in [0,1] and roughly represents how confident we are that the pixel contains an obstacle. I'll go into how this map is made from LIDAR scans in the next section, so for now just assume it's representative of the environment. A new scan is made and needs to be matched onto this map with some shift in horizontal position and angle, representing the new position and heading of the hexapod. I've expressed this as an optimization problem: the algorithm tries to find the horizontal shift ($x$,$y$) and angle ($\theta$) that minimize the total distance between the projected scan points ($x_i'$,$y_i'$) and the occupied pixels of the map $M(x,y)$:

\[ \Psi = \sum_i^N D(\left \lfloor{\tilde{x}_i}\right \rfloor, \left \lfloor{\tilde{y}_i}\right \rfloor) + a (x-x_g)^2 + b (y-y_g)^2 + c (\theta-\theta_g)^2 \]
\[ \tilde{x}_i = x_i' \cos \theta - y_i' \sin \theta + x \]
\[ \tilde{y}_i = x_i' \sin \theta + y_i' \cos \theta + y \]
\[ D(x,y) = \sum_{x'} \sum_{y'} M(x',y') \sqrt{(x-x')^2 + (y-y')^2} \]

Here, we project each LIDAR measurement ($x_i'$,$y_i'$) onto the SLAM map, adjusting for the current best guess of ($x$,$y$,$\theta$). At each projected point, we sum up the distance from that point to every occupied pixel of the SLAM map. This gives us an estimate of how 'far away' the projected scan is from matching the existing map. The three extra terms in the $\Psi$ equation bias the solution towards the guess values ($x_g$,$y_g$,$\theta_g$).

In this way, we are finding the location of the hexapod so that the new LIDAR scan looks most like the existing map. The assumption being made here is that the new scan is similar enough to the existing map that it can be matched with some confidence. Solving the above equation is a problem of non-linear optimization, similar to the inverse kinematics solved in the previous post. The code to solve this problem is a little dense, so I won't try to explain all of the details here. The relevant code is here, and the relevant method is slam::step(...);.

In words, we compute the $\Psi$ equation above and how it changes if we modify each of the parameters ($x$,$y$,$\theta$) by a small amount. Using this information, we can nudge each parameter by an amount that should get us to a lower value of $\Psi$. Since the problem is non-linear, we aren't guaranteed that this gets us to the lowest possible value, or even a lower one than before. To help make sure we end up in the right place, we initialize the solver with a guess position based on how the hexapod legs have moved recently. Since we went through so much trouble in the previous post to plan how the feet move, we might as well use that knowledge to help the localization solver. From there we iterate the nudging step over and over again with a smaller nudge until we find there is no way of nudging it to a lower value of $\Psi$. This is when we stop and say we have found the optimal values of ($x$,$y$,$\theta$). With that, the localization step is done!
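To make the procedure concrete, here's a stripped-down Python sketch of the localization step. It simplifies $\Psi$ by taking each projected scan point's distance to only the nearest occupied map point (instead of the weighted sum over every pixel), drops the guess-bias terms, and nudges the parameters one at a time with shrinking steps rather than following the true gradient. The "nudge, check, shrink" structure is the same:

```python
import math

def psi(scan, occupied, x, y, theta):
    """Mismatch between a scan and the map: project each scan point by the
    pose guess (x, y, theta) and take its distance to the nearest occupied
    map point.  (Simplified from the full sum over all map pixels.)"""
    c, s = math.cos(theta), math.sin(theta)
    total = 0.0
    for xi, yi in scan:
        px, py = xi * c - yi * s + x, xi * s + yi * c + y
        total += min(math.hypot(px - ox, py - oy) for ox, oy in occupied)
    return total

def localize(scan, occupied, guess, step=0.5, iters=60):
    """Nudge (x, y, theta) toward lower psi; halve the nudge size whenever
    no direction improves, and stop after a fixed number of rounds."""
    pose = list(guess)
    best = psi(scan, occupied, *pose)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for delta in (step, -step):
                trial = list(pose)
                trial[i] += delta
                value = psi(scan, occupied, *trial)
                if value < best:
                    best, pose, improved = value, trial, True
        if not improved:
            step *= 0.5  # no downhill nudge found; try finer steps
    return tuple(pose)
```

Given a scan that is just the map seen from an unknown pose, the solver recovers that pose to within the final step size, provided the initial guess is reasonably close.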

For computational efficiency, I keep three versions of the SLAM map other than the normal one shown above. The first extra one is the original map convolved with a distance kernel, which at any position gives us an approximate distance to occupied pixels. The next two are the gradient of this distance map, one for the x-component and one for the y-component. These maps allow us to quickly evaluate both the $\Psi$ function and its derivatives with respect to ($x$,$y$,$\theta$). The distance map is computed in Fourier space using the convolution theorem, using threaded FFTW for computational speed. This method doesn't actually give us the correct distance measure for $\Psi$, but it's close enough for this basic algorithm.


The companion to localization is mapping. Once we have a solution to where the new scan was measured from, we need to add it to the existing SLAM map. While we have assumed the new scan is close enough to the existing map to be matched, it will have small differences due to the new measurement location that need to be incorporated so that the following scan is still similar enough to the map. In my SLAM code, the method that does the mapping step is slam::integrate(...);.

Each new laser ranging measurement from the new scan is projected on to the SLAM map given the estimated hexapod location from the localization step. The pixel below each point is set to 1.0, meaning we are fully confident that there is some object there. We then scan through every other pixel in the map and determine whether it is closer or farther away from the hexapod than the new scan measurements. If it is closer, we decrease the map value because the new scan measured something behind it, meaning it must be free of obstacles. If the pixel is farther, we leave it alone because we don't have any new information there; the new scan was blocked by something in front of it.
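Here's a sketch of that update, reorganized per measurement ray instead of per map pixel (same idea, easier to show briefly). The grid is a dict from cell to confidence, the decay factor is a made-up constant, and positive coordinates are assumed for brevity:

```python
import math

def integrate_scan(grid, res, pose, scan, decay=0.4):
    """Fold one scan into an occupancy grid (cell -> confidence in [0, 1]).
    The pixel under each measurement endpoint is set to 1.0.  Cells the
    beam passed through before the hit must be free, so their confidence
    is decayed toward 0.  Cells beyond the hit are left alone: the beam
    was blocked, so it tells us nothing about what is behind it."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    for xi, yi in scan:  # scan points in the robot's frame
        px, py = xi * c - yi * s + x, xi * s + yi * c + y
        hit = (int(px / res), int(py / res))
        steps = int(math.hypot(px - x, py - y) / res)
        for k in range(steps):  # march along the beam toward the hit
            fx = x + (px - x) * k / max(steps, 1)
            fy = y + (py - y) * k / max(steps, 1)
            cell = (int(fx / res), int(fy / res))
            if cell != hit and cell in grid:
                grid[cell] *= 1.0 - decay  # seen through: probably free
        grid[hit] = 1.0
    return grid
```

The decay (rather than setting free cells straight to zero) is what lets the map forget stale obstacles gradually instead of erasing them on a single noisy measurement.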

Once this mapping step is done, we have completed the two-part SLAM algorithm and are ready for another LIDAR scan. It's not the best or most accurate method, but it is easy to understand and can run on fairly low-level hardware in real-time. I've written the algorithm to run asynchronously from the main hexapod code, so new scans can be submitted and the hexapod can still walk around while the SLAM algorithm figures out where it is. On the UDOO's Cortex-A9, I can step a 1024x1024 map in around 2-3 seconds. With a 10 cm resolution, this gives over 100 meters of mapping. In practice, I've found that 10 cm is about the coarsest you can go in an indoor environment, but anything less than 3 cm is a waste of computing time.


To demonstrate graphically how my SLAM algorithm works, I've written a little JavaScript app to show how LIDAR scans relate to the SLAM map. It doesn't go through the whole algorithm from above, but it does show roughly what happens. The GLOBAL view shows the simulated robot moving through an environment, measuring distances to obstacles with a simulated LIDAR scanner. The ROBOT view shows what these LIDAR measurements look like from the mind's eye of the robot: it doesn't know that it is moving; instead, the world appears to move around it. The SLAM view shows a simulated SLAM map and the approximate location of the robot. Move the ERROR slider back and forth to simulate varying amounts of noise in the measurements; the noise reduces the accuracy of both the localization and mapping steps.


I've also tested this algorithm out in real life with the hexapod. The following SLAM maps were collected by driving the hexapod around my apartment with a remote control. I started the hexapod out in one room, and it was able to walk into a different room and keep track of its position. The maps are pretty messy, but acceptable considering the simplicity of the algorithm being used.

Noisy map of my apartment.

It's not the best SLAM algorithm in the world, but it's relatively easy to understand and compute in an embedded setting. It seems to do best when the hexapod is inside a closed room and can see at least two of the walls. It has some issues keeping track of position when transitioning between rooms, mostly due to the sharp changes in the LIDAR scans when passing through a doorway. Still, it does a reasonable job at keeping track of the hexapod location within my apartment.

In the next post, I'll sort out how to get the hexapod to navigate autonomously. With the algorithms presented so far in this hexapod series, it becomes a straightforward procedure of using the SLAM maps to find optimal paths to pre-determined waypoints.

Monday, June 15, 2015

[HEX] Inverse Kinematics

This is the second post on my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click here. In this post, I'll go over the steps (!) needed to get a hexapod robot walking.

At this point, I have a robot with six legs, three servos per leg, and the electronics and code to control each servo independently. But with no guidance for how to move the servos, the hexapod is useless. With their above-average leg count and sixfold symmetry, hexapods can move around in all kinds of unique ways. While dancing around is certainly a possibility for my hexapod, I'm really only interested in getting it to walk around. So to begin the process of getting it mobile, let's start with the basics of getting a robot to walk.

Inverse Kinematics

In order for the hexapod to walk, it needs to be able to independently move each foot up and down, forward and back. But how can we tell it to move its foot? All we have control over are the three servos for each leg, none of which independently determines where the foot ends up. The position (and angle) of any given foot is determined by the angles of all three leg servos together. We can describe the relationship between foot position and servo angles as such:
\[ r = l_c + l_f \cos(\theta_f) + l_t \cos(\theta_f+\theta_t) \]
\[ x_f = x_0 + r \cos(\theta_c+\theta_0) \]
\[ y_f = y_0 + r \sin(\theta_c+\theta_0) \]
\[ z_f = z_0 + l_f \sin(\theta_f) + l_t \sin(\theta_f+\theta_t) \]
Here, $\theta_c$, $\theta_f$, and $\theta_t$ are the servo angles for the coxa, femur, and tibia joints, respectively, and $l_c$, $l_f$, and $l_t$ are the distances between the joints. The position and angle at which the leg is connected to the body are represented by $x_0$, $y_0$, $z_0$, and $\theta_0$. This set of equations represents the forward kinematics of the leg. Each leg has an identical set of equations, but with different values for the initial position and angle.
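Written out in code, the forward kinematics are a direct transcription of these equations (angles in radians; Python here purely for illustration):

```python
import math

def forward_kinematics(tc, tf, tt, lc, lf, lt,
                       x0=0.0, y0=0.0, z0=0.0, t0=0.0):
    """Foot position from the coxa (tc), femur (tf), and tibia (tt)
    servo angles, following the equations above."""
    r = lc + lf * math.cos(tf) + lt * math.cos(tf + tt)
    xf = x0 + r * math.cos(tc + t0)
    yf = y0 + r * math.sin(tc + t0)
    zf = z0 + lf * math.sin(tf) + lt * math.sin(tf + tt)
    return xf, yf, zf
```

With all angles at zero the leg is fully stretched out along its mounting direction, which makes for a handy sanity check.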

These equations can tell us where the foot is, given the angles of the servos, but we need to do the opposite. Unfortunately, there isn't any way to rearrange the equations above so that we can plug in the foot position and solve for the servo angles (go ahead and try!). Fortunately, this doesn't mean that it's an impossible task! The process of inverting these equations is called inverse kinematics, and I've done a project on it before. My other post explains how to go about solving an inverse kinematics problem, so if you're interested in the details, check that out.

In short, the inverse kinematic solver takes a target foot position and outputs the servo angles that it thinks are appropriate. Starting with the servo angles as they are, the algorithm uses the forward kinematic equations to see which way each servo needs to turn so that the foot ends up slightly closer to the target. It takes many small steps like this until the forward kinematics equations say the new set of servo angles puts the foot in the right place. This kind of procedure has its flaws, though. Imagine telling it to find the servo angles that put the foot a mile away. The algorithm has no way to achieve this since the legs aren't nearly that long. In situations like this, it often goes haywire, giving you a nonsensical result for the servo angles. So careful attention to the iteration procedure is important.

The code I've written for the hexapod inverse kinematics solver is in my LIB_HEXAPOD library. The library also contains code used in the rest of this post along with some other bits. The procedure for determining foot position is as follows:
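As a rough sketch of this kind of iterative solver (not the actual LIB_HEXAPOD implementation; the link lengths, step size, and iteration cap here are arbitrary illustrative values):

```cpp
#include <cmath>

// Forward kinematics for one leg (same equations as in the text),
// with made-up link lengths in millimeters for illustration.
const double LC = 52.0, LF = 82.0, LT = 140.0;

void fk(const double a[3], double out[3]) {
    double r = LC + LF * std::cos(a[1]) + LT * std::cos(a[1] + a[2]);
    out[0] = r * std::cos(a[0]);
    out[1] = r * std::sin(a[0]);
    out[2] = LF * std::sin(a[1]) + LT * std::sin(a[1] + a[2]);
}

// Iterative IK in the spirit described above: repeatedly nudge the
// servo angles downhill on the squared distance between the foot and
// the target, using the forward kinematics to measure progress.
bool solveIK(const double target[3], double angles[3],
             int maxIter = 20000, double step = 5e-7, double tol = 0.5) {
    auto sqErr = [&](const double a[3]) {
        double p[3];
        fk(a, p);
        double e = 0;
        for (int i = 0; i < 3; ++i) {
            double d = p[i] - target[i];
            e += d * d;
        }
        return e;
    };
    for (int iter = 0; iter < maxIter; ++iter) {
        double e0 = sqErr(angles);
        if (std::sqrt(e0) < tol) return true;      // foot close enough
        double grad[3];
        const double h = 1e-6;
        for (int j = 0; j < 3; ++j) {              // finite-difference gradient
            double saved = angles[j];
            angles[j] = saved + h;
            grad[j] = (sqErr(angles) - e0) / h;
            angles[j] = saved;
        }
        for (int j = 0; j < 3; ++j)
            angles[j] -= step * grad[j];           // one small descent step
    }
    return false;  // never converged, e.g. the target is out of reach
}
```

If the target is unreachable, the loop simply runs out of iterations and reports failure instead of returning a nonsensical pose, which is one way of handling the "foot a mile away" problem.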

This bit of code gets called once for each leg, giving it a target position for that leg and getting back the optimal servo angles to use. Assuming I give it reasonable target positions, it can spit out servo angles quickly enough to solve each leg in just a few milliseconds (running on the Cortex-A9). The inverse kinematics procedure allows me to effectively ignore the fact that each foot is positioned by three servos, and instead concentrate on where I want the feet to actually go. A simple test is to just tell the feet to move up and down repeatedly:

Impressing Alan the cat with robot weightlifting.

Smooth Stepping

Six legs seem to be more stable than two or four (or an odd number?), but that doesn't make my hexapod impervious to falling over. An object will fall over (or at least be unstable) if it is not supported by at least three points of contact that surround the center of mass. If we assume the center of mass of the hexapod is roughly at the center of the body, this means we need at least three feet touching the ground at all times. Further, if we draw a triangle between the feet that are touching the ground, the center of mass needs to sit directly above a point inside this triangle. Since we have six legs in all, these rules lead us to one of the simplest gaits for a hexapod, the tripod gait:

The six legs are broken up into two groups which trade off being the support for the body. The legs within each group lower to the ground in unison, move towards the back of the body, then lift up and move back to the front. The two groups do this exactly out of phase with each other so that there are always exactly three feet on the ground at any one point. For my hexapod, I've modified this a bit so that the three legs within each group hit the ground at slightly different times. I've done this to reduce the repetitive jolting that occurs from moving each leg simultaneously.
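In code, this phasing can be captured by giving each leg its own offset into the gait cycle. A minimal sketch, where the half-cycle offset between groups gives the tripod gait and the 0.04 intra-group stagger is an arbitrary illustrative value, not the one my code uses:

```cpp
#include <cmath>

// Phase offsets for a staggered tripod gait.  Even-numbered legs form
// one tripod and odd-numbered legs the other; the small intra-group
// stagger spreads out the footfalls so the body isn't jolted by three
// simultaneous impacts.
double legPhase(int leg, double t) {
    double group = (leg % 2) * 0.5;      // two tripods, half a cycle apart
    double stagger = (leg / 2) * 0.04;   // slight offset within a group
    double phase = t + group + stagger;
    return phase - std::floor(phase);    // wrap into [0, 1)
}
```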

Even so, an issue I ran into while designing the gait was that whenever a foot hit the ground, the sudden transition from free-moving to weight-bearing caused the whole hexapod to jerk around. To specify the position of each foot as a function of time, I was using a cosine bell with a flat bottom:

Notice the sharp change in direction at the start and end of when the foot is in contact with the floor. The transition to weight-bearing doesn't happen instantaneously (imagine carpeted floors), so the sudden transition when the foot goes from moving down to moving back creates problems. To create a smoother path for the feet to follow, I turned to Bezier curves.

Bezier curves are smooth functions that are completely determined by a sequence of points that I will call anchors. These anchors specify the general shape of the curve, so tweaking the curve just involves moving the anchor points around. Going from a set of anchor points to the resulting Bezier curve involves a series of linear interpolations. Given some intended distance along the total path between 0 and 1, we start by linearly interpolating between each pair of adjacent anchor points. So if we want to know where the Bezier curve is halfway along the path, we start by linearly interpolating halfway between each pair of adjacent anchors. If we have $N$ anchors, this gives us $N-1$ interpolated points. We then linearly interpolate again halfway between these $N-1$ points to get $N-2$ doubly-interpolated points. We continue this procedure until we are left with a single interpolated point, and this is the position of the Bezier curve at the halfway point.

The procedure for generating Bezier curves is a little difficult to describe, so I've made a little interface to help explain it. Drag the grey anchor points around to see how the Bezier curve changes, then increase the Guide Level slider to see the various levels of linear interpolation.


To make sure the hexapod stays steady when walking, I've kept the straight part of the foot path where the foot touches the ground, but set the return path to be a Bezier curve. I wrote a simple Bezier curve class to handle the computations on the hexapod.
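A minimal version of such a class, using the repeated-interpolation procedure described above (the class and member names here are my own, not necessarily the library's):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Bezier curve evaluated by repeated linear interpolation: lerp between
// each pair of adjacent points, then between the results, until a
// single point remains.  t runs from 0 (first anchor) to 1 (last).
class Bezier {
public:
    explicit Bezier(std::vector<Pt> anchors) : anchors_(std::move(anchors)) {}

    Pt at(double t) const {
        std::vector<Pt> pts = anchors_;
        while (pts.size() > 1) {
            for (size_t i = 0; i + 1 < pts.size(); ++i) {
                pts[i].x = (1 - t) * pts[i].x + t * pts[i + 1].x;
                pts[i].y = (1 - t) * pts[i].y + t * pts[i + 1].y;
            }
            pts.pop_back();  // one fewer point after each interpolation pass
        }
        return pts[0];
    }

private:
    std::vector<Pt> anchors_;
};
```

Moving an anchor only bends the nearby part of the curve, which makes hand-tuning the return path of the foot pleasantly intuitive.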


Applying this Bezier curve stepping method to each leg in an alternating pattern gets the hexapod walking forwards, but it can't yet handle turning. To implement turning, my first instinct was to simply adjust how far each foot sweeps forward and back differently on each side of the body. This would cause one side to move forward more than the other, and the hexapod would turn. The problem with this method is that it isn't particularly physical. If you try it, you'll find that the hexapod has to drag its feet sideways to compensate for the fact that it is turning while the feet only move forward and back. In order to let the hexapod turn naturally, you need to go into a different frame of reference.

If you tell all of the feet to move up towards the sky, the hexapod moves closer to the ground. Tell the feet to move down, the hexapod moves up. It can get confusing to figure out how to move the feet to get the body to move a certain way. I've found it's best to think that the hexapod body stays still in space and the ground just moves around relative to it. Then all you need to do is make sure the feet are in contact with that moving floor and they don't slide on it. For straight walking, we can just see it as a treadmill-like floor that continually moves backwards, and the feet are just trying to match the treadmill speed. For turning, we can think about the hexapod sitting above a giant turntable. How close the hexapod body sits to the axis of rotation determines how sharp of a turn it makes, or what the turning radius is. In order to keep the feet from sliding around on the turntable, we need to make sure each foot travels along a curve of constant radius from the axis of rotation. If we set it so the axis of rotation is directly underneath the body, the hexapod will stay in one place and just turn around and around. If the axis of rotation is set very very far away, there will barely be any curvature to the foot-paths, and the hexapod will basically walk forward in a straight line.

To help explain this concept, I've made another little interface for seeing how the hexapod feet need to move. Move the bottom slider around to change the turning radius, and enable the helper lines to see how each foot follows a specific path relative to the axis of rotation.


To incorporate the Bezier curve method from above into this view of walking, I convert the foot positions into polar coordinates around the axis of rotation and use the Bezier curve to pick the $\theta$ and $z$ coordinates as a function of time. In the hexapod code, I've parameterized the turning by a single parameter that relates to the turning radius. Between the turning parameter and a single speed parameter, I have full control over the movement of the hexapod.
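A sketch of that polar-coordinate bookkeeping, with conventions of my own choosing (in particular, I place the axis of rotation at $(0, -R)$ relative to the body center, so a huge $R$ approaches straight walking and $R = 0$ turns in place):

```cpp
#include <cmath>

struct Foot { double x, y, z; };

// Sweep a foot through an angle dtheta along its own constant-radius
// arc around the axis of rotation, keeping it from sliding on the
// imaginary turntable.  The z coordinate comes straight from the
// stepping curve.
Foot sweepFoot(double restX, double restY, double R,
               double dtheta, double z) {
    double cx = 0.0, cy = -R;                       // axis of rotation
    double r = std::hypot(restX - cx, restY - cy);  // this foot's arc radius
    double a = std::atan2(restY - cy, restX - cx);  // this foot's base angle
    Foot f;
    f.x = cx + r * std::cos(a + dtheta);
    f.y = cy + r * std::sin(a + dtheta);
    f.z = z;
    return f;
}
```

With $R = 0$ the foot orbits the body center; with a very large $R$ the arc is nearly a straight line and the foot just translates forward or back.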

At every time step, the code considers the current values for speed and turning, and decides where along the Bezier curve each foot should be. It then computes the actual positions in space from the curves and feeds these positions into the inverse kinematic solver. The solver outputs servo angles for each joint of each leg, and the main code packages these up and sends them off to the lower-level processors. This whole procedure is fairly quick to compute, so I can update the servo positions at about 50Hz.

I wrote a test program that takes input from an Xbox 360 controller and turns it into speed and turning values. This let me treat the whole hexapod like a remote-control car. After driving it around and chasing my cats, I made a little video to show off the build:

At this point, the hexapod can walk around freely, but does not know where to go. In the next post, I'll go into giving the hexapod a sense of awareness through laser ranging.

Sunday, June 14, 2015

[HEX] Heterogeneous Hardware

This is the first post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, click here.

In this post, I'll cover building the hexapod body, the various levels of processing needed to keep the hexapod on its feet, and some of the code I wrote to tie various levels of processing together.

Building the Body

Instead of designing the hexapod from scratch, I decided to go with a plexiglass kit for the body. This kit (and others from the same company) seems aimed at researchers and high-level hobbyists: higher quality than kids' toys, but not expensive enough to scare absolutely everyone away. I actually stumbled across this kit while looking for high-quality servos, which would play a key part in the success of the robot. My servo experience from the Single Pixel Camera project left a bad taste in my mouth (I did buy the cheapest servos I could find...), so I opted for a set of AX-12As for this project. These servos use a serial communication protocol to not only set the target angle, but also to set angle limits, torque limits, failure modes, and request the current internal status (temperature, voltage, load, speed, etc.). They are significantly more expensive than your run-of-the-mill micro servos, but I've learned a few times now that the money you save on parts you pay later in headaches (with interest!).

The fancy boxes are a nice touch.

It's like LEGOs, but for people with more money and less imagination.

The body consists of two flat panels separated by an inch or so with various mounting holes. This leaves a good bit of space in the middle for wires and other electronics. Each of the six legs consists of a coxa (hip) joint, a femur joint, and a tibia joint. This allows each foot to move around in all three dimensions, although with some limitations in position and angle. The kit also comes with an upper deck with a grid of mounting holes, providing even more space for accessories. I've been a little worried about shattering the plastic components, but I have yet to have any problems. The range of motion of each leg is impressive, and the overall design is nicely menacing.

Sitting pretty on a wooden stand.

Controlling the Servos

The servos receive both power and control data through a three-wire cable system. They can be daisy-chained together, so in theory, all 18 of my servos can be powered and controlled with only three wires. The same site I got my kit and servos from offers an Arduino-based controller board for these kinds of servos called the Arbotix-M. It sports an ATMEGA644p processor running at 16MHz and 5V, and most importantly, has the basic accessory electronics needed to handle communication over the servo data line. The unfortunate number of data lines on the servos dedicated to bi-directional communication (one) means extra electronics, which I am happy to let someone else figure out.

Almost like the hexapod was designed to have one! (it was)

Note: throughout this project, I'm dealing with the servos as black boxes. Not that they are particularly mysterious -- the documentation is actually pretty good -- but more that I don't want to adjust their firmware at all or consider the possibility of improving them. So the Arbotix-M controller is the first and lowest level of hardware I need to deal with in terms of programming.

Eighteen servos are a lot to deal with, so I decided to use the Arbotix-M as a kind of aggregator for servo commands and information. Other hardware can send the Arbotix-M a single command to set all of the servos to various angles, and it handles splitting up the commands and sending them to the servos one by one. It also monitors the servos and reports any statistics or faults to other interested processors. The full code I wrote to run on the Arbotix-M can be found here. I set up a small test where I sent the Arbotix-M commands over FTDI and found I could update all 18 servos at around 100Hz, which was plenty fast for nice, smooth motion.

Higher Level Hardware

So far, the hexapod had nice servos and a controller that could keep the limbs moving, but nothing giving any it interesting commands. For the next two levels of processing above the Arbotix-M, I decided to go with a UDOO Quad board. The UDOO Quad is a small single-board computer with a quad-core ARM Cortex-A9 processor alongside an Arduino Due (ARM Cortex-M3), connected to a slew of peripherals including WiFi, HDMI, Ethernet, USB, Audio, and SATA. Instead of choosing this board after hours of deliberating between the numerous single-board computers out there, I went with this one because I had one sitting around in my graveyard of processors from failed projects. The Arduino Due would act as a hardware interface between the main cpu and the various hardware components (like the Arbotix-M), and the main cpu would do the heavy lifting required to get the hexapod moving around.

Along with the goal of only using my own software comes the goal of actually utilizing the multiple cores on the single-board computer I've chosen. Throughout the project, I use threading in C++ to achieve this.

UDOO board with optional touch screen attached.

Packets of Information

Because the quad-core cpu shares a board with the Arduino Due, the two have a dedicated serial connection (/dev/ttymxc3 in unix-land), so there's no need to worry about keeping them connected. I added a pair of wires to connect the Due to the Arbotix-M with another serial connection (Serial1 on the Due). These three processors needed a standard way of communicating with each other. Since I wanted to allow any processor to send data to any other, I needed a standardized method of packaging data and transmitting it so that it could be directed, parsed, and understood by each processor.

I created a packet class in C++ to handle this. A packet object could be created by any processor, filled with data, and sent along with a destination tagged in the header. The key properties of the packet class are as follows:
  • Arbitrary data size. (As long as it is an integer number of bytes).
  • Optional data padding. Some processors have difficulty with tiny packets.
  • Checksum. I only implemented a simple XOR checksum, but it's marginally better than nothing.
  • Optional pre-buffered data. When memory is scarce, allow new packets to take the place of old ones in memory.
  • Destination Tagging. Single bytes indicating the destination, which is useful for packets bouncing between multiple processors.
The C++ class I ended up creating to achieve this looked something like this:
The buffer holds all of the relevant information for the packet, including the size, destination, and checksum. If you don't care for code examples, you can view these packets of information as just a string of bytes, each with a specific purpose. The data it contains can be any length, but the header and footer are always the same:

Each block represents a single byte.

The three processors are connected in series and use serial communication. Each monitors the serial port that connects it with the other processor(s) and decides what actions to take when a valid packet is received. For the quad-core processor, I wrote a simple serial port monitor that runs in its own thread. It both monitors incoming traffic for packets, and accepts new packets from other threads to send. The interface presented to other threads allows for either blocking or non-blocking sends and receives. The threaded serial library can be seen here. The lower level processors can't use threading (or at least, I'm not going to try), so they just periodically monitor their serial ports and pause other functions to act on incoming packets.

The whole point of this serial communication path with standardized packets is so that you can do something like send information from the quad-core processor, have the Due see that it is meant for the Arbotix-M, send it along, and have the Arbotix-M see it and act on it. Here's an example of what this kind of action looks like in code:

First, the quad-core processor initiates communication with the Due, creates a packet with some information for the Arbotix-M, then sends it through the serial port.
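A sketch of that sending side, assuming POSIX serial I/O on the quad-core processor. The 0xFF/0xFE framing bytes and the ARBOTIX_TAG constant are placeholders of mine rather than the real library's values:

```cpp
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

const uint8_t ARBOTIX_TAG = 0x02;     // invented destination tag

// Open a serial device in raw mode at 115200 baud.
int openSerial(const char* device) {
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  // raw bytes, no line processing
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

// Frame a payload with a destination tag and write it out.
bool sendPacket(int fd, uint8_t destination,
                const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> frame;
    frame.push_back(0xFF);            // start marker
    frame.push_back(destination);     // routing tag read by the Due
    frame.insert(frame.end(), payload.begin(), payload.end());
    frame.push_back(0xFE);            // end marker
    ssize_t n = write(fd, frame.data(), frame.size());
    return n == static_cast<ssize_t>(frame.size());
}
```

On the hexapod, this would look something like `int fd = openSerial("/dev/ttymxc3");` followed by `sendPacket(fd, ARBOTIX_TAG, servoAngleBytes);`.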

The Due continually runs code like the following, which looks for valid packets and sends them on to the Arbotix-M if the destination tag is correct (this looks a lot more complicated, but that's because I haven't made as much of an effort to hide the complicated parts behind libraries and such):
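The forwarding logic itself can be sketched off-hardware as a small state machine. On the Due, the bytes would arrive from the serial link to the quad-core cpu and go back out over Serial1; the framing bytes and tags here are again stand-ins, and a real implementation would use the packet's length field so that payload bytes may legally contain the end marker:

```cpp
#include <cstdint>
#include <vector>

const uint8_t MY_TAG = 0x01;        // invented tag for packets addressed to the Due

struct Forwarder {
    std::vector<uint8_t> pending;   // bytes of the packet being assembled
    std::vector<uint8_t> forwarded; // bytes passed on toward the Arbotix-M
    std::vector<uint8_t> accepted;  // payloads the Due should act on itself

    void feed(uint8_t byte) {
        if (pending.empty() && byte != 0xFF) return;  // wait for start marker
        pending.push_back(byte);
        if (byte == 0xFE && pending.size() >= 3) {    // complete packet
            uint8_t dest = pending[1];
            if (dest == MY_TAG) {
                // keep the payload (bytes between tag and end marker)
                accepted.insert(accepted.end(),
                                pending.begin() + 2, pending.end() - 1);
            } else {
                // not for us: pass the whole packet along unchanged
                forwarded.insert(forwarded.end(),
                                 pending.begin(), pending.end());
            }
            pending.clear();
        }
    }
};
```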

Code snippets are nice and all, but at the end of the day we need to make sure the code is actually sending the correct information. When determining whether data is being sent between bits of hardware correctly, my favorite method is to just snoop on the data lines themselves and see what the ones and zeroes look like. I hooked up a digital logic analyzer to the data lines between the quad-core processor and the Due, and between the Due and the Arbotix-M to look at the data being passed around.

If you aren't familiar with this kind of view, we are looking at the voltage of each data line as a function of time. Within each horizontal band, the thin white line shows the voltage on the line. I'm using a digital analyzer, so it can only tell me whether the voltage is low or high, hence the rapid switching up and down. The horizontal axis is time, and the values above all of the lines show the timescale being plotted.

As the probe trace shows, the three processors are successfully passing information back and forth. The quad-core cpu sends a packet of ones and zeros to the Due, then a short time later the same packet of information gets sent from the Due to the Arbotix-M. Zooming in to one of the packets, we can even see the individual bytes being sent along with what they mean:

The three levels of processing can pass information back and forth, so the next step is to decide what that information should be. The quad-core processor can tell the servos where to go through this setup of bouncing packets, but the servos don't know what they should be doing. In the next post, I'll talk about using inverse kinematics and some other tricks to decide how each servo should move such that the hexapod can walk around smoothly.