Wednesday, February 5, 2014

LED Projection Mapping with Math


I have a love-hate relationship with RGB-LED strips. Before starting my Daft Punk Helmet project last year, I was getting used to using individual 4-pin RGB-LEDs for my colored-lighting needs. Once I realized I would need 96 of these LEDs for the gold helmet (384 leads, 288 channels to multiplex), I broke down and ordered a strip of WS2812s. These are surface-mount RGB-LEDs with built-in memory and drive circuitry. With three wires at one end of the strip providing power and data, you can easily control the color of each individual LED on the strip with an Arduino or similar controller. I was in love.



But then I started to see the dark side of easy-to-use LED strips like this. Like so many other LED strip projects floating around the internet, my ideas started to lack originality. Instead of exploring the visual capabilities of over 16 million unique colors possible per pixel, I was blinking back and forth between primary colors and letting rainbow gradients run wild. Maybe I was being overwhelmed with possibilities; maybe the low difficulty made me lazy. Either way, the quality of my projects was suffering, and I hated it.

Zero-effort xmas lights

It's entirely possible that I'm being overly dramatic, but this is probably not the case. Still, something has to be done to break the cycle of low-hanging fruit projects. To do this, I've decided to embark on a project to create an arbitrary-geometry LED display. I want to be able to bend, wrap, and coil an LED strip around any object and still be able to display an image with little distortion. I expect this project will lead me on a mathematical journey that will be covered in at least two (2!) posts.

To start this off, let's differentiate the goal of this project from the projection mapping commonly used in art and advertising these days. In standard projection mapping, one or more regular projectors are used to display a 2D image on an arbitrary 3D surface. Specialized software determines how to warp the projected images so that they appear normal on the projection surface. For my project, I have pixels that are placed arbitrarily on the arbitrary 3D surface. This added complication is like placing a glass cup in front of the projector in standard projection mapping and expecting the warped image to still yield a comprehensible picture. In a sense, two mappings have to be done: first, find the mapping of pixels to real space; second, find the mapping of real space to source image space.

While I'm using my laptop to do the heavy lifting of computing the appropriate pixel mapping, I need a microcontroller to interface with the LED strip. I'm using my prototyping Arduino Mega to interpret commands sent by the laptop via USB and send data to the LED strip. For my first arbitrary-geometry LED display, I wrapped about 4 meters of LEDs (243 total) around a foam roller.

Nothing says professional like duct tape and styrofoam.

The first step in the process is to find the physical coordinates of each LED in real space. Luckily, I chose a configuration that can be described mathematically. The strip wraps around a cylinder in a helix with some spacing between each successive loop. In my projection code, I slowly march up and around an imaginary cylinder placing LEDs every now and then.
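That march up the cylinder can be written down in a few lines. Here's a minimal sketch of the idea; the LED spacing, roller radius, and pitch below are illustrative stand-ins, not the actual measurements of my setup:

```python
import math

# Illustrative dimensions; the real strip spacing and roller size differ.
N_LEDS = 243
SPACING = 0.0165   # distance between LEDs along the strip (m)
RADIUS = 0.076     # cylinder radius (m)
PITCH = 0.02       # vertical rise per full turn (m)

def helix_led_positions(n=N_LEDS, spacing=SPACING, radius=RADIUS, pitch=PITCH):
    """March up and around an imaginary cylinder, placing an LED every
    `spacing` meters of arc length along the helix."""
    # angle step that advances `spacing` meters along the helix
    dtheta = spacing / math.sqrt(radius ** 2 + (pitch / (2 * math.pi)) ** 2)
    positions = []
    for i in range(n):
        theta = i * dtheta
        positions.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          pitch * theta / (2 * math.pi)))
    return positions
```

Because the angle step is chosen from the helix arc length, consecutive positions end up one LED-spacing apart regardless of the radius or pitch.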

We can plot the locations of every LED to confirm this works:



Next, we have to project the real-space location of each LED back on to the source image we want to display. The real-space locations are three-dimensional while the source image is two-dimensional, so we need a way of 'flattening' the LEDs down. The way in which we choose to do this depends on both the display geometry and how we want to view the display. For our cylinder example, we can start by unwrapping the LEDs:
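Unwrapping just means rolling the cylinder flat: the azimuthal angle maps to the horizontal pixel coordinate and the height maps to the vertical one. A sketch of that mapping (image dimensions are placeholders):

```python
import math

def unwrap(theta, z, height, img_w, img_h):
    """Map an LED's helix angle and height to source-image pixel coordinates
    by unrolling the cylinder: azimuth spans the image width, height spans
    the image height (image row 0 is at the top)."""
    u = int((theta % (2 * math.pi)) / (2 * math.pi) * (img_w - 1))
    v = int((1.0 - z / height) * (img_h - 1))
    return u, v
```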

Plotting the location of each LED on the source image gives:

Source, Image Coordinates, Result

Here, I've chosen the LED colors based on the nearest color found in the SMPTE color bars. Displayed on the LED strip, this kind of mapping shows the source image stretched around the cylinder.
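The nearest-color step is a simple minimum-distance search in RGB space. A sketch, using approximate values for the 75%-intensity SMPTE bars (the exact values in my code may differ):

```python
# Approximate RGB values for the seven 75%-intensity SMPTE bars.
SMPTE_BARS = [
    (191, 191, 191),  # gray/white
    (191, 191, 0),    # yellow
    (0, 191, 191),    # cyan
    (0, 191, 0),      # green
    (191, 0, 191),    # magenta
    (191, 0, 0),      # red
    (0, 0, 191),      # blue
]

def nearest_bar(rgb):
    """Return the bar color closest to `rgb` in squared RGB distance."""
    return min(SMPTE_BARS,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(rgb, c)))
```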



From any one direction, the source image is difficult to see. Another method of projecting the LED positions on to the source image is to flatten them along a single direction perpendicular to the cylinder axis.
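Flattening along one direction amounts to dropping the coordinate pointing at the viewer and only assigning pixels to LEDs on the viewer-facing half of the cylinder. A sketch of this projection (which side counts as "facing" is an arbitrary choice here):

```python
def flatten(positions, radius, height, img_w, img_h):
    """Project LED positions along +y onto the source image: only LEDs on
    the viewer-facing half (y < 0 here, by convention) get image pixels."""
    mapping = {}
    for i, (x, y, z) in enumerate(positions):
        if y >= 0:
            continue  # back half of the cylinder shows the mirrored image
        u = int((x + radius) / (2 * radius) * (img_w - 1))
        v = int((1.0 - z / height) * (img_h - 1))
        mapping[i] = (u, v)
    return mapping
```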

Source, Image Coordinates, Result

This allows the viewer to see the entire image from one direction.



Of course, the image is also projected on to the other side of the cylinder, but appears backwards. With this more useful projection, we can try to watch a short video:


In this example, I'm using OpenCV to grab frames from a source video, compute the projection on to display pixels, and send the pixel values out to the Arduino Mega. Without spending too much time on optimization, I've been able to get this method to run at around 15 frames per second, so the recording above has been sped up a bit to match the source video.
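Once the projection assigns a color to each LED, the frame has to be serialized for the trip over USB. The wire format below is entirely hypothetical, just to illustrate the kind of packing involved; my actual Arduino protocol differs:

```python
import struct

def frame_packet(colors):
    """Pack per-LED (r, g, b) values into one serial packet.

    Hypothetical format for illustration: a 2-byte sync header, a 2-byte
    little-endian LED count, then 3 bytes per LED in strip order.
    """
    packet = bytearray(b"\xFF\xAA")
    packet += struct.pack("<H", len(colors))
    for r, g, b in colors:
        packet += struct.pack("BBB", r, g, b)
    return bytes(packet)
```

On the Arduino side, the sync header lets the sketch resynchronize if a byte is dropped mid-frame.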

An important thing to note in that recording is the low amount of image distortion. Even though the LEDs are wrapped around a cylinder and directed normal to the surface, the projection prevents the source image from appearing distorted when viewed from the correct angle. However, if you look at the various color bar tests I've done, you can see that vertical lines in the source image do not always appear completely vertical when projected. This is not due to any inherent flaw in the projection math, but is instead caused by improper assumptions about the LED locations. I've assumed that I wrapped the LED strip perfectly around the cylinder so that my mathematical model of a helix matches the locations perfectly. In reality, the spacing and angle of each loop around the cylinder were slightly off, introducing slight variations between the real positions and the programmed positions that got worse towards the bottom. To increase the accuracy of this projection exercise, these variations need to be dealt with.

There are three ways I see of correcting for the issue of imperfect LED placement:

  1. Place them perfectly. Use a caliper and a lot of patience to get sub-millimeter accuracy on the placement of each LED.
  2. Place them arbitrarily, but measure the positions perfectly. Use accurate measurements of each LED position instead of a mathematical model.
  3. Place them arbitrarily and let the computer figure out where they are.
Since options 1 and 2 would most certainly drive me insane, I'm going with option 3. It's like I always say: why spend 5 hours doing manual labor when you can spend 10 hours making a computer do it?

At this point, I've already made a decent amount of progress on letting the projection code figure out the location of each LED. I'm using a few stripped down webcams as a stereoscopic vision system to watch for blinking LEDs and determine their positions in 3D space. I'll save the details for later, but here's a quick shot of one of the cameras mounted in a custom printed frame:


The code for this project (including the stereoscopic vision part) can be found on my github.

Thursday, January 16, 2014

Autonomous Vehicles: Basic Algorithms


Last June, I stopped by the Boulder Reservoir to watch the 2013 Sparkfun Autonomous Vehicle Competition (AVC). There were two races going on, an air race and a ground race. The basic idea is that teams design some kind of vehicle to compete in either race that is capable of navigating the course autonomously. At the start of the race, the teams are allowed to do something simple like pressing a "go" button on their vehicle, but the rest of the race must be completed without human intervention. Scores are based on time to complete the course, as well as additional points for special actions such as avoiding obstacles or hitting small targets.

Chasing your robot, laptop in hand. A common sight that day.

The air race was actually not as exciting as I had anticipated. There were a couple custom flying rigs that gave a good show, but due to the recent popularity of GPS-enabled quadcopters, there were a few cookie-cutter entries. I'm not saying it's easy to get a quadcopter to fly in a specific pattern (I spent months tuning my own hexacopter before getting tired of it), but the availability of advanced pre-made flight controllers for quadcopters reduced some of the excitement of the event. It's a relatively new event, so I'm hoping next year will see some changes.

Ready to fly over the water.

The ground race, however, was hilarious. I've never seen so many home-made robots try so hard to go around a parking lot at top speed while avoiding fences, barrels, and each other. The sheer variety of vehicles running the course offered some interesting scenes. The amount of experience and money needed to make each vehicle was fairly apparent, from a tiny LEGO Mindstorms car with pre-programmed directions to a sponsored RC car equipped with RTK GPS (super-accurate GPS). In the middle were groups of students trying to do their best with limited budgets, hacking apart RC cars and adding Arduino controllers for brains and ultrasonic sensors to avoid obstacles. Most groups seemed capable of building a sturdy ground vehicle to run the course, but that was only the first step.

Needs more LEGO people.

How do you get a robot to win a race autonomously? Easy! You have to write up some code that the robot brain runs during the race that allows it to continually make progress despite obstacles. But how does the robot know what "progress" is? And how does it know what an "obstacle" is? What if it runs into a wall? What if it gets turned around? All of these questions must be considered when writing an algorithm for a race like this.

I'm interested in building a robot to compete in the next AVC, but before I commit any real resources, I figured I should see how hard it is to write an algorithm for an autonomous robot. Without a physical vehicle to test the algorithm on, I need a simulation that will run my code and give a decent guess as to how well it performs. This way I can start with a simple algorithm and work my way towards more advanced solutions while seeing the effect each addition has on the race outcome.

To make the implementation of the algorithm as realistic as possible, I went for a simulation environment that allows the use of pseudo-Arduino code. Each algorithm is written as if it were running on a generic Arduino controller with a setup() and loop() structure:



The simulation initializes a robot at the starting position and runs the setup() method. At every timestep, the simulation moves the robot around the course based on the values for the motor speeds. It checks for collisions with the course boundaries and obstacles to make sure the robot stays on track. Throughout the course of my simulations, I'll add various physical effects that may change the behaviour of the robot. The loop() method is run as often as possible, and when the robot calls the delay() method, control is given back to the simulation for some number of time steps.
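To give a flavor of what the harness does, here's a stripped-down stand-in written in Python rather than the pseudo-Arduino code: a toy differential-drive model plus a driver that alternates the user's loop() with a physics step. The wheelbase and timestep are made-up numbers:

```python
import math

class SimRobot:
    """Toy differential-drive model in the spirit of the simulation
    (illustrative, not the actual harness)."""
    def __init__(self):
        self.x = self.y = self.heading = 0.0
        self.left = self.right = 0.0   # wheel speeds, m/s

    def step(self, dt=0.1, wheelbase=0.2):
        v = 0.5 * (self.left + self.right)              # forward speed
        self.heading += (self.right - self.left) / wheelbase * dt
        self.x += v * math.cos(self.heading) * dt
        self.y += v * math.sin(self.heading) * dt

def simulate(setup, loop, sim_time=300.0, dt=0.1):
    """Run setup() once, then alternate loop() with a physics step,
    mimicking the setup()/loop() structure of an Arduino sketch."""
    robot = SimRobot()
    setup(robot)
    for _ in range(int(round(sim_time / dt))):
        loop(robot)
        robot.step(dt)
    return robot
```

With equal wheel speeds the heading never changes, so a "drive straight" program traces a straight line, which is exactly the baseline the deflection experiments below perturb.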

Algorithm 1: Pre-Programmed Route
The first method of autonomous racing I wanted to try was where you tell the robot how far to go in each direction and just let it run. Assuming the robot starts facing the correct way, all you need to do is move forward and turn left a few times. How hard could that be?




I've drawn black lines showing the inner boundary of the course. It's not exactly the Sparkfun AVC course, but it's close enough for now. The outer boundary is at 0 and 75 in each direction. I say this, but it's not like any of the boundaries matter for this algorithm. It's perfect. You tell it where to drive and it goes there. The only way this method can fail is if you give it the wrong directions, right?

In a perfect world, this would be true. But a few days ago I stepped in a puddle of ice water thinking it was frozen solid, so I know for a fact that this is not a perfect world. In the real world, you tell a robot to drive straight and it will drive mostly straight. There will be small deviations from the intended path due to motor imperfections, wheel imperfections, ground imperfections, etc. Imagine the robot driving over a rough patch of concrete: one wheel might lose traction a bit, turning the robot by a few degrees. To model these issues in my simulation, I made it so that every now and then, the heading of the robot is altered slightly in a random direction. Then, instead of testing the robot just once, I ran thousands of simulations to see how often the robot would still succeed.
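The deflection model itself is tiny. Something along these lines, where the kick probability and size are guesses at what I used:

```python
import math, random

def deflect(heading, prob=0.05, max_kick_deg=5.0):
    """With probability `prob` per timestep, kick the heading by a random
    angle up to +/- max_kick_deg (both magnitudes here are guesses)."""
    if random.random() < prob:
        heading += math.radians(random.uniform(-max_kick_deg, max_kick_deg))
    return heading
```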

Success=100%, Avg Time=149s

Here, I'm showing a logarithmic heatmap of the paths taken by 10,000 simulated robots. The warmer the color, the more often the robots went there. With a small amount of continuous deflection added, there is a bit of spread in where the robots end up, but 100% of them still make it to the finish line. What if our robot is not so robust and the deflection is larger?

Success=45%, Avg Time=147s

Much better. Less than half of our pre-programmed robots made it to the end. Of course, the real AVC course has a few obstacles, so what happens when we place some barrels on the first stretch?

Success=29%, Avg Time=147s

Even though the barrels are placed away from the pre-determined path, a significant fraction of the robots were deflected into them. The barrels create neat shadows in the data where no robots ended up travelling. Another real-world effect that had a profound impact on some of the robots I saw in the 2013 AVC was collisions with other robots. I've modelled this as an event that happens randomly, but more likely near the start of the race. When it happens, the robot is spun around in a random direction and moved by a few centimeters. This is somewhat optimistic, since a few robots in the AVC broke apart on impact with a competitor.

Success=13%, Avg Time=147s

I could keep adding more real-world effects that might lower the success rate of this first algorithm, but I think you get the point. Anything could go wrong and completely throw off the pre-determined path. It's time for a better algorithm.

Algorithm 2: Random Walk
The first algorithm was too strict. It didn't allow for any freedom, and subsequently did poorly. It's time to give our robot a mind of its own and let it soar. In this algorithm, the robot drives forward for a few seconds, then proceeds to change its speed and direction randomly every now and then. It's perfectly adaptable to the course in that it doesn't care about anything. Even winning.



Success=0%, Avg Time=N/A

Well that didn't go well. None of the 10,000 robots I sent out ever made it to the finish line. They all either got stuck on walls or ran over their 5 minute time limit. On the plus side, they didn't seem to be concerned with the barrels. Based on these results, I estimate that around one in a billion might make it. So there's a small chance that a robot set to move completely randomly could complete a course like this. As fond as I am of this algorithm, it's probably time to try something a little more intelligent.

Algorithm 3: Bumper Cars
Let's try a classic algorithm. Let the robot drive in a straight line until it hits something. The hit can be detected a few ways in real life, either with a bumper sensor or even an ultrasonic proximity sensor. When the robot detects an obstacle, it turns itself around to point in a random new direction and continues from there. It repeats this process until it either wins or dies.
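One control step of this strategy looks something like the following sketch, operating on a bare-bones state dictionary (the backup distance and speeds are placeholders, not the simulator's actual values):

```python
import math, random

def bumper_step(state, bumped, speed=1.0, dt=0.1):
    """One step of the bumper-cars strategy on a state dict with keys
    'x', 'y', 'heading': drive straight until a collision is reported,
    then back up slightly and pick a random new heading."""
    if bumped:
        # back up a little along the old heading, then spin randomly
        state['x'] -= 0.1 * math.cos(state['heading'])
        state['y'] -= 0.1 * math.sin(state['heading'])
        state['heading'] = random.uniform(0.0, 2.0 * math.pi)
    else:
        state['x'] += speed * math.cos(state['heading']) * dt
        state['y'] += speed * math.sin(state['heading']) * dt
    return state
```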



Success=11%, Avg Time=445s

This time, we get some robots making it to the finish line. Still, this algorithm acts pretty oblivious when it comes to taking corners the right way. How can we inform our robot as to which way is the correct way to go?

Algorithm 4: GPS Positioning
So far the algorithms presented have been pretty dumb. They don't know anything about the course and they just guess which way is the right way to go. Of course, we can't expect much else since we haven't enabled the robots with any kind of sensors except one that sends an alert when an obstacle is hit. It's time to allow the robot to sense the environment a little.

GPS is a common method of giving robots the ability to know where they are outside. If we know the position of the course we want to race around, then we can make the robot know how far it is from various parts of the course. For this algorithm, the robot has a set of way-points (one near each corner of the map) that it will head towards. When it gets close enough to the first one, it targets the next one. If the robot hits an obstacle, it backs up and tries again.
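A sketch of the waypoint-chasing logic, with made-up corner coordinates and a made-up turn-rate limit:

```python
import math

WAYPOINTS = [(70.0, 5.0), (70.0, 70.0), (5.0, 70.0), (5.0, 5.0)]  # guessed corners

def gps_step(x, y, heading, wp, capture_radius=3.0, max_turn=0.3):
    """Steer toward waypoint `wp`; switch to the next corner once within
    `capture_radius` meters. Returns (new_heading, new_wp)."""
    tx, ty = WAYPOINTS[wp]
    if math.hypot(tx - x, ty - y) < capture_radius:
        wp = (wp + 1) % len(WAYPOINTS)
        tx, ty = WAYPOINTS[wp]
    bearing = math.atan2(ty - y, tx - x)
    # wrap the heading error into [-pi, pi], then turn-rate-limit it
    err = (bearing - heading + math.pi) % (2.0 * math.pi) - math.pi
    return heading + max(-max_turn, min(max_turn, err)), wp
```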



Success=100%, Avg Time=154s

And now we see why all those entries from 2013 had GPS on them. With the ability to know exactly where it is at any point in time, the robot is perfectly capable of finding its way around the course. But is this realistic? The GPS system is neither exact nor instantaneous, so what we have so far is a gross overestimation of the capabilities of this algorithm. I've seen claims of <1 meter accuracy with a standard GPS unit, but I'll put a random noise of 2 meters on the GPS position to be safe. The GPS heading isn't truly measured, but instead just estimated from comparing the current position to the previous one. Finally, I'll set the update rate to 1 Hz, a typical value for cheap GPS units.
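The degraded GPS can be modelled with two small helpers along these lines (sigma and the heading estimate follow the assumptions above):

```python
import math, random

def noisy_fix(x, y, sigma=2.0):
    """A GPS fix with ~2 m Gaussian noise on each axis."""
    return x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma)

def heading_from_fixes(prev, cur):
    """Estimate heading by comparing consecutive 1 Hz fixes; this gets
    noisy when the robot moves slowly, since the position noise swamps
    the actual position change."""
    return math.atan2(cur[1] - prev[1], cur[0] - prev[0])
```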

Success=99%, Avg Time=275s

With more realistic values for the GPS unit, the results are more believable. The success rate is still very high, but the amount of time the robots typically take has almost doubled. With less accurate measurements of the current position and heading, the robots tend to bounce around the course a little more and spend more time going around.


Other Methods
So far I've only discussed a few simple algorithms for autonomously navigating a known course. While I've tried to add in a few realistic effects like obstacles, continuous deflections, and collisions with competing robots, there is an endless list of physical effects I could try out to see if they are important. The model I've created for the robots is also very simple with two motors, a method of detecting collisions (but no information on direction), and a GPS module.

At some point I'd like to continue this simulation project to include more physical effects of driving a robot around a course and more ways for the robots to sense their surroundings. Some of the sensors I'd like to model are an optical flow sensor, an IMU, and possibly a stereoscopic vision sensor. Eventually, I'd also like to actually build a simple wheeled vehicle and try my hand at implementing one of my algorithms on it.

Friday, December 20, 2013

Robot Arm: Reaching for the Stars



It's the holiday season, and this is a blog that has a history of talking about RGB LEDs, Arduino controllers, and simple algorithms. As such, I will now spend three hours writing up how I spent 20 minutes hooking up my randomized Christmas lights:


Just kidding, Santa's bringing you coal this year. You get a post on the math behind Inverse Kinematics and Regularized Least Squares along with some code examples applying these concepts to an Arduino-controlled robot arm. Pour yourself a glass of eggnog with an extra shot of bourbon, it's going to be a long night.


Months ago I received a robot arm as a gift. I immediately set out to wire up an Arduino Pro Mini and some H-bridges to control the motors, along with a couple potentiometers to provide live feedback on the motor positions. I did a short write up here. At the time I wrote some basic code to control the arm. All it would do is take a set of target angles and move the motors until the measured angles were close to the targets. You could tell the arm to do things like "point straight up" and it would do it with ease. Since the arm is not capable of rotating very far around any joint, I implemented a check that would prevent the arm from ever moving out of bounds. If it did find itself in an especially contorted state, it would slowly restore itself to a more proper state and then resume whatever it was attempting to do before.

This method works just fine if all you want to do is tell the arm what angles to have. It's not super useful if you want the gripper to move itself to a specific position in space. Since all the arm knows so far is the angle each joint is at, it can't work backwards to determine how to get the gripper to a specific spot. Fortunately, there is a solution to this problem: Inverse Kinematics.

To understand how inverse kinematics works, first we must look at forward kinematics. We know the current angles of each joint based on our live measurements, and we know the shape and connectivity of the arm. Each segment of the arm can be seen as a simple rod of length $l_i$ connected to the previous joint and rotated by some angle $\theta_i$:


From these quantities we can work out the position of the gripper ($\vec{p_{\mathrm{g}}}$) in physical space. For now I will restrict the math to two dimensions ($x$, $z$) since motion in the $y$ direction is determined by the base rotation motor, and is trivial to work out. However, I will soon generalize to vector positions and states so that it can be extended to any number of dimensions and angles.
\[ \vec{p_{\mathrm{g}}} = \vec{f}(\vec{\theta}) = \begin{pmatrix} l_1 \cos(\theta_1) + l_2 \cos(\theta_1 + \theta_2) + l_3 \cos(\theta_1 + \theta_2 + \theta_3) \\ l_1 \sin(\theta_1) + l_2 \sin(\theta_1 + \theta_2) + l_3 \sin(\theta_1 + \theta_2 + \theta_3) \end{pmatrix}. \]
Solving for the position of the gripper based on knowing the angles of the joints is the forward problem. We want the arm to solve the opposite, or inverse problem. Given a target position for the gripper, determine the angles that each joint must be at.

If the function $f$ were simpler, we might have a chance of finding the inverse function and applying it to any target position $\vec{p_{\mathrm{t}}}$ to find the target state $\vec{\theta_{\mathrm{t}}}$. Unfortunately, we can't do this for our case, so we must resort to other methods of solving inverse kinematics. Suppose our target position is not very far from our current position, so that we may approximate the target as a small step in angle away from the current position:
\[ \vec{p_{\mathrm{t}}} = \vec{p_{\mathrm{g}}} + J \vec{\delta \theta} + \epsilon \]
where the $\epsilon$ indicates additional terms that will be neglected. This is a multi-dimensional Taylor expansion of the forward kinematics equation. The quantity $J$ is the Jacobian matrix containing the partial derivatives of each component of the function $f$ with respect to each component of $\theta$:
\[ J_{ij} = \frac{\partial f_i}{\partial \theta_j}. \]
We know the target position, the current position, and the derivatives of $f$ at the current location, so we are able to rearrange our approximation to get an equation for the step in angle required.
\[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Since this is just an approximation, we need to apply this formula to our current position many times in order to settle on the target position. The first iteration will hopefully provide a $\delta \theta$ that moves the gripper towards the target position, but will not be exact. Successive applications of a new $\delta \theta$ computed at each new position should cause the gripper position to converge on the target position. This is sometimes called the Jacobian Inverse method.

The robot arm I'm dealing with has 5 motors. I haven't settled on a good way of providing feedback on the gripper motor, so I'm ignoring that one for now. The base rotation motor determines the direction the arm is pointing in the $x$-$y$ plane, but reformulating the math in cylindrical coordinates ($r$, $\phi$, $z$) makes the target angle of this motor trivial ($\theta_{base} = \phi$). This leaves two non-trivial dimensions for the positions ($r$, $z$) and three joint angles that need to be determined ($\theta_1, \theta_2, \theta_3$). For various math and non-math reasons, I added one further constraint on the position vector. To specify the angle that the gripper ends up at, I added a third 'position' $\psi$ defined as
\[ \psi = \theta_1 + \theta_2 + \theta_3 \]
From this, we can formulate our forward kinematic function $f$ as:
\[ \vec{f}(\vec{\theta}) = \begin{pmatrix} l_1 \cos(\theta_1) + l_2 \cos(\theta_1 + \theta_2) + l_3 \cos(\theta_1 + \theta_2 + \theta_3) \\ l_1 \sin(\theta_1) + l_2 \sin(\theta_1 + \theta_2) + l_3 \sin(\theta_1 + \theta_2 + \theta_3) \\ \theta_1 + \theta_2 + \theta_3 \end{pmatrix}. \]
The partial derivatives of each of these three components with respect to each joint angle gives the analytic form of the Jacobian matrix $J$.
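The whole scheme fits in a short Python sketch: forward kinematics, the analytic Jacobian, a small linear solve, and the Newton-style iteration. The segment lengths and starting angles below are made up for illustration, not the real arm's:

```python
import math

L1, L2, L3 = 1.0, 1.0, 0.5   # illustrative segment lengths

def fk(th):
    """Forward kinematics: joint angles -> (r, z, psi)."""
    a1, a2, a3 = th[0], th[0] + th[1], th[0] + th[1] + th[2]
    r = L1 * math.cos(a1) + L2 * math.cos(a2) + L3 * math.cos(a3)
    z = L1 * math.sin(a1) + L2 * math.sin(a2) + L3 * math.sin(a3)
    return [r, z, a3]

def jacobian(th):
    """Analytic partial derivatives of fk with respect to each joint angle.
    Joint j moves every segment from j onward, hence the partial sums."""
    a = [th[0], th[0] + th[1], th[0] + th[1] + th[2]]
    seg = [L1, L2, L3]
    return [
        [-sum(seg[k] * math.sin(a[k]) for k in range(j, 3)) for j in range(3)],
        [ sum(seg[k] * math.cos(a[k]) for k in range(j, 3)) for j in range(3)],
        [1.0, 1.0, 1.0],
    ]

def solve3(A, b):
    """Solve a 3x3 system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def ik(target, th=(0.4, 0.35, 0.25), iters=50):
    """Iterate dtheta = J^-1 (p_t - p_g) until the gripper converges."""
    th = list(th)
    for _ in range(iters):
        p = fk(th)
        dth = solve3(jacobian(th), [t - c for t, c in zip(target, p)])
        th = [a + d for a, d in zip(th, dth)]
    return th
```

For a reachable target near the starting pose this converges in a handful of iterations; the instability described below shows up precisely when the target is unreachable and the iteration has nothing to converge to.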

I coded this up in C as an IKSolve() function which reads and writes global variables for the current and target positions and angles. The function performs a single iteration of updating the target angles and allows the rest of the code to work on moving the arm to those angles in-between iterations.


The 3-by-3 matrix inversion is performed using a function adapted from a post found on Stack Exchange.

I added this to my robot arm code and uploaded it to the Arduino Pro Mini. The target position has to be hard-coded, so each new target required uploading the code again. When given a target position that is within its reach, the arm typically does a good job moving to it. Here's a gif of the arm moving a pen down to touch a piece of paper:


The algorithm performs fairly well. The arm may not have taken the most optimal path to get to the target position, but we have not put any constraints on the path. The arm can also be told to move to a position that is outside its reach. Here's a gif of the arm trying to place the pen directly above the base, slightly out of its reach:


As you can see, the algorithm isn't perfect. The arm seems to struggle to rotate itself to the correct angle, and instead of converging on a final resting place it seems to oscillate around the target. Finally, it goes unstable and wanders off. The Jacobian algorithm used in this example is attempting to converge on the exact answer provided, and when it physically cannot, it starts doing strange things. It will get the arm as close as it can get, but will keep iterating, thinking that it will just need a few more steps to get to the answer. While testing the arm I found this instability to show up much more often than I would have liked. If the arm could not move itself to exactly the correct position, it would start to shake and eventually drift away from the target completely. I needed a new approach.

To fix this instability, I resorted to a technique I'm familiar with from my grad school research. I do helioseismology, which involves measuring oscillations on the surface of the sun to infer what is happening on the inside. Specifically, I'm interested in the flow velocities deep in the solar interior. Every flow measurement we make is related to the actual flow inside the sun by some reasonably convoluted math. The process of turning a set of measurements into a flow map of the solar interior involves solving an inverse problem. This is similar to what I have outlined for the robot arm, except the math for the forward problem is very different. We have a few methods of solving the inverse problem, one of which is using Regularized Least Squares.

Least squares is often used in data analysis to 'fit' an analytical model to measured data. The model relies on a user-defined set of numerical parameters that are optimized during the fitting to produce a model that appropriately matches the data. The measure of how appropriate the match is depends on the sum of the squared differences between the data and the model. This way, the optimized parameters are the ones that produce the least deviation between the model and the data. In the case of solving inverse problems, we want to find the angles (the parameter set) which will position the arm (the model) closest to our target position (the data). Additional terms can be added to the model to bias the optimization towards a solution that we prefer (regularization).

The wikipedia page for non-linear least squares provides a nice derivation of how to turn the problem into matrix algebra which can be iterated to find an optimal solution. In the end, you can express the iteration technique in a similar way to the Jacobian Inverse method outlined above:

Jacobian Inverse: \[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Least Squares: \[ \vec{\delta \theta} = (J^T J)^{-1} J^T (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Regularized Least Squares with Weighting: \[ \vec{\delta \theta} = (J^T W J + R)^{-1} J^T W (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]

Without regularization or weighting, the least squares method looks strikingly similar to the Jacobian method. Once the fancy extras are added back in things look a little messier, but the essence is still there. Since I already had code set up to solve the Jacobian method, it was easy enough to modify it to handle a few more matrix operations:


The weighting vector simply weights one or more components of the target position more or less than the others. If you want to make sure the gripper is in the right location but you don't care so much for what angle it ends up at, you can de-weight the third component. I set the regularization to penalize solutions which caused excess bending at any of the joints. This way, if there happened to be multiple ways of the arm positioning itself correctly, this would bias the algorithm to pick the solution that is 'smoothest'.
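In Python, the regularized, weighted update can be sketched with a few small matrix helpers (my actual implementation is the C code described above; the matrices here are generic):

```python
def matT(A):
    return [list(c) for c in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rls_step(J, err, W, R):
    """One regularized least squares step:
    dtheta = (J^T W J + R)^-1 J^T W (p_t - p_g)."""
    JtW = matmul(matT(J), W)
    A = matmul(JtW, J)
    for i in range(len(A)):
        for j in range(len(A)):
            A[i][j] += R[i][j]
    return solve(A, matvec(JtW, err))
```

With W set to the identity and R set to zero, this reduces exactly to the plain least squares step, and for a square invertible J, to the Jacobian Inverse step.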

With this more advanced approach, the robot arm was behaving much more sanely. I could still confuse it with particularly troublesome targets, but overall it was much more stable than before. If a target position was out of reach or required moving out of bounds for any joint, the arm would settle on a position that was fairly close. With high hopes, I stuck a pen in the gripper and told it to draw circles on a piece of paper.


The code will need a little more tuning before I send the arm off to art school. For now, the dunce hat remains. I've been able to confirm that the arm generally works, but I think the next step is to see how well I can fine-tune both the code and the sensors to achieve a modest level of accuracy in the positioning (<1 cm).

I don't always like to look up the solution to my problems right off the bat. Often I will see if I can solve them myself using what I know, then sometime near the end check with the rest of the world to see how I did. For this project, I was happy to find that different types of modified least squares methods are commonly used to do inverse kinematics. I certainly didn't expect to create a novel method on my own, but even more, I wasn't expecting to land on a solution that the rest of the world already uses.

Saturday, November 30, 2013

12 Hour Project: Arduino Mechanomyography


It's the weekend after Thanksgiving. Not only are my grad student responsibilities minimal, but the servers I get most of my data from are down over the holiday. I've tried sitting around doing nothing, but that was too hard. Instead I've decided to challenge myself to a 12-hour electronics/programming project.

Seeing as it's just past Thanksgiving, leftovers are the highlight of the week. I've gathered quite a few miscellaneous electronic parts over the last year, mostly from projects that never got off the ground. I've got leftover sensors, microcontrollers, passive components, various other integrated circuits, and a couple protoboards. To keep myself busy and to get rid of a couple of these spare parts, I've decided to do a short 12-hour project. I set out a few rules for myself:
1) I could only use parts I had sitting around.
2) I had 12 hours to complete it.
3) I had to build something useful (OK, maybe just interesting).

I did allow myself to think about the project before I started working. One topic I've been interested in for a long time but haven't yet pursued is biosensing. This is basically the act of measuring some kind of biological signal (heart rate, blood pressure, brain waves, skin conductivity, etc.). Real medical equipment designed to measure these things is often very accurate, but is fairly expensive. As the cost of the sensor goes down, the accuracy usually goes down too. The goal is then to find a cheap method of measuring these biological signals well enough. While a hospital may need to know your O2 sats to two or three significant digits in order to save your life, an outdoor enthusiast might just be generally curious how their sats change as they hike up a 14,000 foot mountain.

As an avid boulderer, I've been particularly interested in muscle use while climbing. A big thing that new climbers notice is how much forearm strength is required to get off the ground. I wanted to see if I could measure forearm muscle activation using whatever I had lying around. The first method of measuring muscle activity that came to mind was electromyography (EMG). This method measures the electrical potential generated by muscle cells. I figured this would be difficult to do without special probes, so I went on to mechanomyography (MMG). This method measures vibrations created by activated muscles. The vibrations can be picked up using many different sensors, including microphones and accelerometers. Luckily, I had a few of the latter sitting around from a previous project. The night before I planned on starting any work on this, I set up a camera to take a time-lapse of my work area for the next day.



I started working bright and early at 10 am. I found an old MPU-6050 on a breakout board and soldered wires to the power and I2C lines so I could begin collecting data for analysis using an Arduino Mega. I wrote a quick code for the Mega that would grab raw accelerometer readings from the MPU-6050 and print them directly to the serial port every few milliseconds. I simply copied and pasted the data over to a text file that I could read with my own analysis codes.
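Turning that pasted serial dump back into per-axis samples is a one-function job. Here's a minimal Python sketch, assuming a hypothetical log format of one `ax,ay,az` reading per line (the post doesn't show the actual format, so treat this as illustrative):

```python
def parse_accel_log(text):
    """Parse a pasted serial dump into per-axis sample lists.
    Assumes each line is 'ax,ay,az' as raw integer counts; lines that
    were cut off mid-paste are silently skipped."""
    ax, ay, az = [], [], []
    for line in text.splitlines():
        parts = line.strip().split(",")
        if len(parts) != 3:
            continue  # partial or garbled line from the copy/paste
        try:
            x, y, z = (int(p) for p in parts)
        except ValueError:
            continue
        ax.append(x)
        ay.append(y)
        az.append(z)
    return ax, ay, az
```

Skipping malformed lines matters here because copy/pasting from a serial monitor almost always truncates the first and last readings.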


For this project, I relied heavily on wavelet transforms to help interpret the sensor readings. Since the expected muscle signal would be a transient oscillation with unknown frequency and unknown amplitude, I needed a method that would break down any wave-like signal in both time and frequency. I was lucky enough to find an excuse to learn about wavelets for my grad school research (it ended up not being the best method, go figure) so I started this project with a few analysis codes to work from. Before I get into the results, I can try to give a quick run-down of why wavelets are useful here.

Interlude on Wavelet Transforms
The wavelet transform is an integral transform in that it maps our signal from one domain into another. In many analysis problems, a signal can be difficult to interpret in one domain (time) but easier in another (frequency). An example would be a signal comprised of two sine waves of different frequency added together. The addition creates a strange looking signal that is difficult to interpret, but when mapped onto the frequency domain, the two component sine waves appear distinct and separated. Why is this useful for our problem? Well, I should start by talking about a simpler transform, the Fourier Transform.
Interlude on Fourier Transforms (FTs)
Just kidding, if I follow this I'll fall down a rabbit hole of math. All you need to know is the following: any signal can be thought of as the combination of many sine waves (oscillations) that are at many different frequencies and different amplitudes. An FT is a way of displaying which particular frequencies are prominent in the signal. The issue is that it analyzes the entire signal at once, so if the composition of the signal changes over time, this change gets mixed up with the rest of the signal and you get just a single measure of the components.
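You can see this "sum of sines" idea in a dozen lines of Python with a naive discrete Fourier transform (this is just an illustration, not the analysis code I actually used):

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive O(N^2) discrete Fourier transform; returns |X_k| per bin."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# Two sines (completing 5 and 20 cycles over a 200-sample record) look
# messy added together in time, but separate cleanly in frequency.
n = 200
signal = [math.sin(2 * math.pi * 5 * t / n) +
          math.sin(2 * math.pi * 20 * t / n) for t in range(n)]
mags = dft_magnitudes(signal)
```

Running this, `mags` shows two sharp peaks at bins 5 and 20 and essentially nothing elsewhere, which is exactly the "single measure of the components" an FT gives you for the whole record at once.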
Back to Wavelets
Suppose we have a signal we want to analyze in terms of simple oscillations (like a muscle vibrating), but the oscillating component we are interested in changes with time. Maybe the frequency changes, maybe the amplitude changes. A wavelet transform allows us to see the individual frequency components of the signal as a function of time. For an example, I'll use a microphone recording I did for the Music Box project of a few notes played on a violin.
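To make the idea concrete, here is a crude continuous wavelet transform in Python using a complex Morlet wavelet evaluated by direct convolution. This is a sketch for intuition only, not the IDL code I used for the plots below (the choice of omega0 = 6 and the +/- 4 scale support are standard but assumed here):

```python
import cmath
import math

def morlet_cwt_power(x, dt, freqs):
    """Crude CWT power via direct convolution with a complex Morlet
    wavelet (omega0 = 6). Returns power[f][t] for each requested
    frequency; using the complex wavelet makes power phase-independent."""
    omega0 = 6.0
    power = []
    for f in freqs:
        scale = omega0 / (2 * math.pi * f)  # scale whose center freq is f
        half = int(4 * scale / dt)          # truncate support at +/- 4 scales
        offsets = range(-half, half + 1)
        kernel = [math.exp(-0.5 * (k * dt / scale) ** 2)
                  * cmath.exp(1j * omega0 * k * dt / scale)
                  for k in offsets]
        row = []
        for t in range(len(x)):
            acc = 0j
            for k, w in zip(offsets, kernel):
                if 0 <= t + k < len(x):
                    acc += x[t + k] * w.conjugate()
            row.append(abs(acc) ** 2)  # squared coefficient = power
        power.append(row)
    return power
```

Feeding it a signal that switches from a 5 Hz sine to a 20 Hz sine halfway through, the power concentrates at 5 Hz in the first half and 20 Hz in the second half, which is precisely what a plain FT cannot show.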


I pulled the data into a little IDL code I adapted from examples here and plotted the results.


The warm/white colors show parts of the signal that have a large amount of power, or are particularly strong. The vertical axis gives the frequency of the component, and the horizontal axis gives the time during the original signal that it occurs. As time progresses, you can see that the microphone picked up a series of notes played at successively higher frequencies. For any of these notes played, you can see multiple prominent components of the signal showing up at distinct frequencies. These are the harmonics of the base note that show up while playing any instrument. If you are used to seeing plots like this and are wondering why it looks so awful, that's because I 'recorded' this by sampling a microphone at about 5kHz using an Arduino, printing the analog value to the serial port, then copying and pasting the results into a text file. I also may have gotten the sampling rate wrong, so the entire plot might be shifted a little in frequency.

Now that we've got the math out of the way, how does this relate to the project at hand? Muscles vibrate at particular frequencies when moving. The frequency at which they vibrate depends slightly on which muscle it is and how much force the muscle is exerting. For more information on these oscillations, check out this paper. It's an interesting read and has quite a few useful plots. My hope is to identify both the frequency and amplitude of the signal I'm interested in by looking at the wavelet transform of the accelerometer output.

Going all the way back to my simple MPU-6050 + Arduino Mega setup, I set the sensor down on my workbench next to me and recorded data for a few seconds. Near the end of the recording, I dropped a pen a few inches away from the sensor to see what it would pick up.


I've broken the sensor analysis into the three distinct directional components from the accelerometer (x,y,z). The colors once again show relative amplitude of the signal, and anything near the dark cross-hatching is not to be believed.

For the first 15 seconds, we see the noise signature of the sensor. There appears to be a slight frequency dependence, in that there is a little more power near 25 Hz and 150 Hz than in other places. I'm not sure where this is coming from, but it's low enough amplitude that I'm not concerned. At 15.5 seconds, the pen hits the desk and we see a spike in power at high frequency, along with a few of the bounces the pen made in the second after first impact.

Next I duct taped the sensor to my forearm with the x-axis pointing down the length of my arm, the z-axis pointing out from my skin, and the y-axis obviously perpendicular to both. I started by recording data with my arm as limp as possible.


Similar to the previous plot, but with a constant signal between 2 and 10 Hz. I don't really know where this is coming from either, but it's roughly consistent with my heart rate when I have a sharp metal object taped tight to my arm.

Next I recorded data while squeezing a hand exerciser twice to see if there was a significant difference between that and my relaxed arm.


Luckily, it was obvious when I was flexing and when I wasn't. There was a significant increase in the power around 13 Hz whenever I engaged my forearm muscle, and a spike in power right when I relaxed it. After a few more tries, I found this signal to be fairly robust. Even when I was moving my whole arm around to try to confuse the accelerometer, I could see a distinct signal around 13 Hz whenever I activated the muscles underneath the sensor. I decided that the x-component of the accelerometer data was giving the highest signal-to-noise ratio, so for the rest of my analysis I only used that.



Since the power at 13 Hz wasn't the only thing present in the accelerometer data, I had to filter out everything around it. I used a 4th-order Butterworth band-pass filter centered on the frequency I wanted. I found this nice site that calculates the proper coefficients for such filters and even gives filtering code examples in C. After implementing the filter, I compared the new signal to the unfiltered one.
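My actual filter was a 4th-order Butterworth with coefficients from that site's calculator, implemented in C on the Arduino. As a simpler stand-in to show the idea, here is a single second-order band-pass section in Python using the well-known Audio EQ Cookbook (RBJ) formulas; the center frequency and Q below are illustrative, not the exact values I used:

```python
import math

def bandpass_biquad(samples, fs, f0, q):
    """Second-order IIR band-pass centered on f0 Hz (RBJ cookbook,
    constant 0 dB peak gain). A stand-in sketch for the post's
    4th-order Butterworth, not the same filter."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    # Coefficients normalized by a0
    b0, b1, b2 = alpha / a0, 0.0, -alpha / a0
    a1, a2 = -2 * math.cos(w0) / a0, (1 - alpha) / a0
    out = []
    x1 = x2 = y1 = y2 = 0.0  # delay-line state (Direct Form I)
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

A 13 Hz test tone passes through at nearly full amplitude while a 100 Hz tone is knocked down by more than an order of magnitude, which is the behavior I wanted from the real filter.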


With the signal I wanted extracted from the rest of the accelerometer data, I implemented a quick algorithm for determining how much power was in the signal at any given time. Over a 0.1 second window, the Arduino would record the maximum amplitude of the filtered signal and feed that in to a running average of every previous maximum from past windows. This created a somewhat smooth estimate of the amplitude of the 13 Hz oscillations. I modified the Arduino code to only print this amplitude estimation to the serial port and recorded the result while flexing a few times.
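The windowed-peak-plus-running-average idea above can be sketched in Python. I'm using an exponential running average here for compactness; the exact window length and smoothing of my Arduino version differed, so treat the parameters as assumptions:

```python
def envelope(samples, fs, window_s=0.1, smooth=0.9):
    """Estimate oscillation amplitude: take the peak |x| in each window,
    then feed it into an exponential running average. window_s and
    smooth are illustrative, not the values from the Arduino code."""
    n = max(1, int(window_s * fs))
    est, avg = [], 0.0
    for i in range(0, len(samples), n):
        peak = max(abs(x) for x in samples[i:i + n])
        avg = smooth * avg + (1 - smooth) * peak
        est.append(avg)
    return est
```

The smoothing trades responsiveness for stability; on the Arduino, a jittery estimate would have made the LED colors flicker.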


I finally had a simple quantity that gave an estimate of muscle activation. The next step was to add some kind of visual response. I had an extra RGB LED strip sitting around from my Daft Punk Helmet project, so I went with that. The strip of 7 LEDs I planned on using only needed three connections: Vcc, Gnd, and Data. I had plenty of experience getting these strips to work with the helmet project, so that was a breeze.


Next was to transfer the code over to a little 16 MHz Arduino Pro Mini and wire everything to it. To power the setup I added a little Molex connector so it would be compatible with the 5V battery packs I made for my helmets (a 3-cell LiPo dropped to 5V with a switching regulator).


I added a simple conversion between signal amplitude and LED color to the code and tried it out. Unfortunately, I couldn't find any plastic or cloth straps that would work for securing the sensor and display to my arm, so I duct taped them on. Over the course of the day I had ripped around 5 pieces of tape off my arm so I was getting used to the pain.
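The post doesn't show the exact amplitude-to-color conversion I used, but a plausible minimal version maps normalized activation onto a green-to-red gradient (hypothetical reconstruction, in Python rather than the Arduino C):

```python
def amplitude_to_rgb(a, a_max):
    """Map an activation amplitude onto a green (rest) -> red (max)
    gradient. Hypothetical mapping; the real conversion isn't shown
    in the post."""
    t = max(0.0, min(1.0, a / a_max))  # clamp to [0, 1]
    return (int(255 * t), int(255 * (1 - t)), 0)
```

With WS2812-style strips, the same (r, g, b) tuple would just be written to all 7 LEDs each update.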


At this point I declared the project a success. Not only had I built a contraption to measure muscle activity using components I had sitting around on my desk (and found a use for wavelet transforms), but I had done it in my 12-hour limit (actual time ~10.5 hrs). I've placed the final code on github for those interested.

I've got close to 10,000 still frames from the time-lapse I recorded while working, but it will take me a day or two to get that compiled and edited. I'll update once it's live!

Sunday, November 17, 2013

Daft Punk Helmets: Controllers and Code


I'm hoping this will be my last post about the Daft Punk helmets that I made for Halloween. While they were an incredibly fun project to design, build, and program, it has been a fairly draining process. I'm ready to move on to a new project, but first I'd like to wrap this one up.

Before I get into how I controlled the numerous lights embedded in each helmet, I should take a moment to discuss the power requirements. This is one area that I was uncertain about coming from a background of knowing absolutely nothing about electronics. The first problem was what the power source would be. I listed my options and their pros/cons:


After much deliberation I settled on using some LiPo batteries I had sitting around from another project. I had two 3-cell (11.1V nominal) 4500mAh rechargeable batteries and two helmets to power. A perfect match. Unfortunately, all of the LED drivers needed 5V to operate. Since connecting them to the batteries as-is would most definitely end painfully, I needed a way of dropping the voltage from 11.1V to 5V. The beginner electrical engineer in me immediately thought of using the ubiquitous LM7805, a linear voltage regulator set up to drop an input voltage to a steady 5V on output. A quick check with the datasheet for the pinout and I was ready to go.

Uh oh.

What does this mean? The datasheet for any electrical component typically provides a large amount of information on how to operate the component, what the standard operating characteristics are, and what the limits of operation are. Above, I've highlighted in appropriately fire-orange the thermal resistance of the component. The units for this are degrees Celsius per Watt, and the quantity describes how much the device will heat up when dissipating some amount of power. Why is this of any concern to me? Well, linear voltage regulators drop the input voltage down to the output voltage while maintaining some amount of current by dissipating the excess power as heat. If you don't understand that last sentence, either I'm explaining it poorly or you should read up on basic electronics (V=IR, P=IV). I prefer to believe the latter.

So how much power will I need to dissipate? I know the voltage drop (6.1V), but what is the current draw? Let's consider the pseudo-worst case where every LED in each helmet is on full brightness, but the LED drivers and other components don't require any power. For the red LED panels, that makes 320 LEDs using 20mA of current each, or 6.4 Amps total. For the RGB LEDs, there are 96 LEDs with three sub-components each drawing 20mA resulting in 5.8 Amps. So in dropping 6.1V while powering every LED, we need to dissipate up to 40 Watts of power (~45% efficiency). Our handy datasheet tells us that the regulator will set up a temperature differential relative to the ambient air temperature of 65 degrees Celsius for every Watt we dissipate. This leaves us with... 2600 degrees. That's hot enough to melt iron. It's also hot enough to completely destroy the regulator, so using a linear voltage regulator is not an option. There is the option to use a heatsink to help dissipate the waste heat more efficiently, and the datasheet helpfully tells us that in that case, we only heat up by 5 degrees C per Watt. This gets us to a toasty 200 degrees C, still too hot to handle. We need another option.
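The back-of-the-envelope numbers above fit in a few lines of Python, which makes it easy to see where the 2600 degrees comes from:

```python
# Worst-case linear-regulator thermals for the helmets, per the text.
v_batt, v_out = 11.1, 5.0
i_red = 320 * 0.020        # red panels: 320 LEDs at 20 mA each
i_rgb = 96 * 3 * 0.020     # RGB: 96 LEDs x 3 channels at 20 mA each

drop = v_batt - v_out              # 6.1 V dropped across the regulator
p_waste = drop * max(i_red, i_rgb) # worst-case dissipation (~39 W)

temp_rise_bare = 65 * p_waste  # 65 C/W junction-to-ambient, no heatsink
temp_rise_sink = 5 * p_waste   # ~5 C/W with a good heatsink
```

That works out to roughly a 2540 C rise bare and about 195 C even with a heatsink, both far past the regulator's limits, hence the switch-mode regulator below.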

Enter the DC-DC switched-mode regulator. I'm not enough of an expert on these circuits to give an intuitive explanation for how they work, but the end result is a drop in voltage with high efficiency (75-95%). The higher efficiency means we won't be dissipating nearly as much energy as heat compared to the linear regulator. I grabbed a couple cheap step-down converters (a switched-mode regulator that can only step voltages down) from eBay and made two battery packs. Each converter had a small dial for picking the output voltage, so I tuned that to output 5V and glued it in place.

Top: finished battery pack. Bottom: Just the switching regulator.

The rest of the circuitry for each helmet was fairly simple. I used an Arduino Pro Mini 5V in each helmet to control the lights and hooked up a tiny push button to each to provide some basic user input. The LED drivers for the red LED panels needed three data lines connected to the Arduino, and the RGB strips just needed one per strip. With all of this hooked up, the helmets were ready to be programmed.



Before I get into the details of the code, the whole thing can be viewed on my github.

There are a couple steps to thinking about the code for each helmet. I think it's easiest to work out the code from the lowest level to the highest:
Step 1: Figure out how to talk to the LED drivers. Luckily I had fairly complete drivers in each helmet that handled power, PWM, and had an internal data buffer. The first bit of code I wrote just interfaced with the different drivers.
Step 2: Figure out how to keep an internal data/pixel buffer in each Arduino so that you don't have to compute which LEDs to turn on while attempting to talk to the LED drivers. Doing both at once causes significant lag, and for the RGB LED drivers, is impossible (more on that later).
Step 3: Decide how to fill the internal pixel buffers. This is where I get to decide what patterns will show on each helmet, how quickly they update, etc. The code will basically 'draw' an image in the pixel buffer and expect it to be sent out to the LED drivers by whatever code was written for step two.
Step 4: Write the overall flow of the code. This is what handles which drawing mode is running, how the push button is dealt with, and how often the pixel buffers and LED drivers should be updated.

While the overall theme of both codes roughly follow these steps, there were differences in implementation for each helmet. I'll go through a few of the key points for each step.

The Silver Helmet (Thomas)
Step 1: The drivers were essentially acting as shift registers with 16 bits of space in each. The first 8 bits were an address and the second 8 bits were the data payload. The addresses 0x01 to 0x08 pointed to the 8 different columns of pixels attached to each driver. Sending a data payload of 0x24 (binary 00100100) with an address of 0x02 would set the third and sixth pixels of the second column on.
Step 2: Since each pixel is either on or off, only a single bit is needed for each pixel. The display was conveniently 8 pixels high, so a single byte for each of the 40 columns was used as the internal buffer.
Step 3: There ended up being 6 possible modes for this helmet. Some displayed text by copying individual characters from memory onto the pixel buffer, while others manipulated each pixel to create a unique effect. Below I have a gif of each mode except the 'blank' mode, which you may guess is not very interesting.
Step 4: The button was coded up as an internal interrupt in the Arduino, so at any point, pressing the button would increment the mode counter. Every few milliseconds, the main loop would allow the code pertaining to the current mode to take a 'step' in modifying the pixel buffer, then the main loop would go ahead and push the pixel buffer out to the LED drivers.
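The 16-bit frame format from Step 1 is easy to sketch. Here it is in Python with hypothetical helper names (the real code was Arduino C talking to the driver's shift register):

```python
def make_frame(column, data):
    """Pack one driver frame: high byte = column address (0x01-0x08),
    low byte = pixel bits for that column."""
    return (column << 8) | data

def lit_rows(data):
    """Which rows (1-indexed, LSB = first pixel) a data byte turns on."""
    return [r + 1 for r in range(8) if (data >> r) & 1]
```

Using the example from the text: sending data 0x24 (binary 00100100) to address 0x02 lights the third and sixth pixels of the second column, since bits 2 and 5 of the payload are set.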


Robot / Human


Random Bytes


Around the World


Starfield


Pong

The red pixel near the bottom of the display is not a stuck pixel. It's actually the on-board LED for the Arduino shining through from the backside of the helmet. A few notes on the modes. The Robot/Human mode would actually choose which of those two words to display at random after the previous one had finished scrolling by. The Random Bytes mode was done through bit-shifting, byte-swapping, and random bit-flipping. The Starfield was boring so I never left it on. The game of Pong actually kept score with itself on the edges and the ball would get faster as the game progressed.

The Gold Helmet (Guy)

Step 1: The LED drivers found on each RGB LED require precise timing in order for them to accept data. Luckily, there is a wonderful library put out by the lovely folks at Adafruit that handles this timing. Their library uses hand-tuned assembly code to ensure the timing is accurate and that the Arduino plays nice with the drivers.
Step 2: The NeoPixel library sets up a pixel buffer on its own, but I decided to also keep one outside the library. Each pixel requires 8 bits of data per color channel and I was using 8 strips of 6 LEDs on each side of the helmet. This is 96 LEDs (288 individual color channels), but I was only interested in having a single color for each strip of 6 LEDs. This limits the number of color values that need to be held in memory to 48. I kept a single array for each color and side (right-red, right-green, ..., left-red, ...), each 8 bytes long.
Step 3: There were only four modes (including a blank one) that I came up with. The gifs are below.
Step 4: Again, the push button acted as an interrupt for the Arduino to switch modes. The main loop stepped forward the appropriate mode and copied my own pixel buffers to the NeoPixel buffers so the data could be sent out.
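The compact buffer from Step 2 can be sketched as follows: one color entry per strip, expanded to per-LED colors just before copying into the NeoPixel buffer. This is a Python illustration of the layout described above, with the array arrangement assumed:

```python
LEDS_PER_STRIP = 6   # each strip shows a single color
STRIPS_PER_SIDE = 8  # 8 strips per side, two sides = 96 LEDs total

def expand_side(red, green, blue):
    """Expand one side's strip colors (three 8-byte arrays, one entry
    per strip, layout assumed) into 48 per-LED (r, g, b) tuples."""
    pixels = []
    for s in range(STRIPS_PER_SIDE):
        pixels.extend([(red[s], green[s], blue[s])] * LEDS_PER_STRIP)
    return pixels
```

Keeping only 48 color values in memory instead of 288 channels is what made the pixel buffer cheap enough to maintain alongside the NeoPixel library's own buffer.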


Classic Blocks


Cycling Colors


Slow Beat

The Classic Blocks mode was created based on some videos I had seen of other similar helmets in action. I felt it had a nice 90s feel to it. The Cycling Colors mode was copied directly from my Music Box project, but sped up by a few times. The Slow Beat mode used the colors from the Classic mode, but kept them mostly steady with a slow pulsation of the brightness.

With that, the code to drive both helmets had been finished. The LED drivers had been wired to the LEDs; the LEDs had been soldered together; the visors had been tinted and melted into place; the helmets had been painted and glossed, sanded and molded, reinforced with fiberglass and resin; and some pieces of paper had been folded into the shape of two helmets. It had been a hectic but incredibly fun project for the eight weeks it took to get to completion. Not everything went as planned and some of the edges were rougher than intended, but I personally think the end result was well worth the effort. I hope these few posts covering the build progress have been interesting or even useful to you. To wrap this up, here are a few snapshots of the helmets being worn for Halloween 2013: