Friday, December 20, 2013

Robot Arm: Reaching for the Stars



It's the holiday season, and this is a blog that has a history of talking about RGB LEDs, Arduino controllers, and simple algorithms. As such, I will now spend three hours writing up how I spent 20 minutes hooking up my randomized Christmas lights:


Just kidding, Santa's bringing you coal this year. You get a post on the math behind Inverse Kinematics and Regularized Least Squares along with some code examples applying these concepts to an Arduino-controlled robot arm. Pour yourself a glass of eggnog with an extra shot of bourbon, it's going to be a long night.


Months ago I received a robot arm as a gift. I immediately set out to wire up an Arduino Pro Mini and some H-bridges to control the motors, along with a couple potentiometers to provide live feedback on the motor positions. I did a short write up here. At the time I wrote some basic code to control the arm. All it would do is take a set of target angles and move the motors until the measured angles were close to the targets. You could tell the arm to do things like "point straight up" and it would do so with ease. Since the arm is not capable of rotating very far around any joint, I implemented a check that would prevent the arm from ever moving out of bounds. If it did find itself in an especially contorted state, it would slowly restore itself to a more proper state and then resume whatever it was attempting to do before.

This method works just fine if all you want to do is tell the arm what angles to have. It's not super useful if you want the gripper to move itself to a specific position in space. Since all the arm knows so far is the angle each joint is at, it can't work backwards to determine how to get the gripper to a specific spot. Fortunately, there is a solution to this problem: Inverse Kinematics.

To understand how inverse kinematics works, first we must look at forward kinematics. We know the current angles of each joint based on our live measurements, and we know the shape and connectivity of the arm. Each segment of the arm can be seen as a simple rod of length $l_i$ connected to the previous joint and rotated by some angle $\theta_i$:


From these quantities we can work out the position of the gripper ($\vec{p_{\mathrm{g}}}$) in physical space. For now I will restrict the math to two dimensions ($x$, $z$) since motion in the $y$ direction is determined by the base rotation motor, and is trivial to work out. However, I will soon generalize to vector positions and states so that it can be extended to any number of dimensions and angles.
\[ \vec{p_{\mathrm{g}}} = \vec{f}(\vec{\theta}) = \begin{pmatrix} l_3 \cos(\theta_1 + \theta_2 + \theta_3) + l_2 \cos(\theta_1 + \theta_2) + l_1 \cos(\theta_1) \\ l_3 \sin(\theta_1 + \theta_2 + \theta_3) + l_2 \sin(\theta_1 + \theta_2) + l_1 \sin(\theta_1) \end{pmatrix}. \]
Solving for the position of the gripper based on knowing the angles of the joints is the forward problem. We want the arm to solve the opposite, or inverse problem. Given a target position for the gripper, determine the angles that each joint must be at.

If the function $f$ were simpler, we might have a chance of finding the inverse function and applying it to any target position $\vec{p_t}$ to find the target state $\vec{\theta_t}$. Unfortunately, we can't do this for our case so we must resort to other methods of solving inverse kinematics. Suppose our target position is not very far from our current position so that we may approximate the target as a small step in angle away from the current position:
\[ \vec{p_{\mathrm{t}}} = \vec{p_{\mathrm{g}}} + J \vec{\delta \theta} + \epsilon \]
where the $\epsilon$ indicates additional terms that will be neglected. This is a multi-dimensional Taylor expansion of the forward kinematics equation. The quantity $J$ is the Jacobian matrix containing the partial derivatives of each component of the function $f$ with respect to each component of $\theta$:
\[ J_{ij} = \frac{\partial f_i}{\partial \theta_j}. \]
We know the target position, the current position, and the derivatives of $f$ at the current location, so we are able to rearrange our approximation to get an equation for the step in angle required.
\[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Since this is just an approximation, we need to apply this formula to our current position many times in order to settle on the target position. The first iteration will hopefully provide a $\delta \theta$ that moves the gripper towards the target position, but will not be exact. Successive applications of a new $\delta \theta$ computed at each new position should cause the gripper position to converge on the target position. This is sometimes called the Jacobian Inverse method.

The robot arm I'm dealing with has 5 motors. I haven't settled on a good way of providing feedback on the gripper motor, so I'm ignoring that one for now. The base rotation motor determines the direction the arm is pointing in the $x$-$y$ plane, but reformulating the math in cylindrical coordinates ($r$, $\phi$, $z$) makes the target angle of this motor trivial ($\theta_{base} = \phi$). This leaves two non-trivial dimensions for the positions ($r$, $z$) and three joint angles that need to be determined ($\theta_1, \theta_2, \theta_3$). For various math and non-math reasons, I added one further constraint on the position vector. To specify the angle that the gripper ends up at, I added a third 'position' $\psi$ defined as
\[ \psi = \theta_1 + \theta_2 + \theta_3 \]
From this, we can formulate our forward kinematic function $f$ as:
\[ \vec{f}(\vec{\theta}) = \begin{pmatrix} l_1 \cos(\theta_1) + l_2 \cos(\theta_1 + \theta_2) + l_3 \cos(\theta_1 + \theta_2 + \theta_3) \\ l_1 \sin(\theta_1) + l_2 \sin(\theta_1 + \theta_2) + l_3 \sin(\theta_1 + \theta_2 + \theta_3) \\ \theta_1 + \theta_2 + \theta_3 \end{pmatrix}. \]
The partial derivatives of each of these three components with respect to each joint angle give the analytic form of the Jacobian matrix $J$.
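In code, evaluating $\vec{f}$ and its Jacobian is just a bit of trigonometry. Here's a minimal sketch in C (the segment lengths, array layout, and function names are placeholders for illustration):

#include <math.h>

#define N_JOINTS 3

// Segment lengths (cm); the values here are placeholders.
const float l1 = 10.0, l2 = 10.0, l3 = 10.0;

// Forward kinematics: joint angles (rad) -> (r, z, psi)
void forwardKinematics(const float theta[N_JOINTS], float p[3]) {
  float t1   = theta[0];
  float t12  = theta[0] + theta[1];
  float t123 = theta[0] + theta[1] + theta[2];
  p[0] = l1 * cos(t1) + l2 * cos(t12) + l3 * cos(t123);  // r
  p[1] = l1 * sin(t1) + l2 * sin(t12) + l3 * sin(t123);  // z
  p[2] = t123;                                           // psi
}

// Analytic Jacobian: J[i][j] = d f_i / d theta_j
void computeJacobian(const float theta[N_JOINTS], float J[3][N_JOINTS]) {
  float t1   = theta[0];
  float t12  = theta[0] + theta[1];
  float t123 = theta[0] + theta[1] + theta[2];
  J[0][0] = -l1 * sin(t1) - l2 * sin(t12) - l3 * sin(t123);
  J[0][1] = -l2 * sin(t12) - l3 * sin(t123);
  J[0][2] = -l3 * sin(t123);
  J[1][0] =  l1 * cos(t1) + l2 * cos(t12) + l3 * cos(t123);
  J[1][1] =  l2 * cos(t12) + l3 * cos(t123);
  J[1][2] =  l3 * cos(t123);
  J[2][0] = 1.0; J[2][1] = 1.0; J[2][2] = 1.0;   // psi depends equally on each joint
}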

I coded this up in C as an IKSolve() function which reads and writes global variables for the current and target positions and angles. The function performs a single iteration of updating the target angles and lets the rest of the code work on moving the arm to those angles in between iterations.
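I won't paste the whole thing here, but one iteration of the Jacobian Inverse step looks roughly like this, building on the helpers sketched above (the global names are illustrative, and invert3x3() stands in for the inversion routine mentioned below):

// Globals (illustrative): current measured angles and the target position.
float theta[N_JOINTS];        // current joint angles (rad), from the pots
float targetTheta[N_JOINTS];  // angles the motor-control loop drives toward
float pTarget[3];             // desired (r, z, psi)

extern int invert3x3(const float A[3][3], float Ainv[3][3]);  // returns 0 if singular

// One iteration of the Jacobian Inverse method: delta = J^-1 (p_t - p_g)
void IKSolve(void) {
  float p[3], J[3][3], Jinv[3][3], err[3];

  forwardKinematics(theta, p);
  computeJacobian(theta, J);
  if (!invert3x3(J, Jinv)) return;     // near-singular: skip this iteration

  for (int i = 0; i < 3; i++) err[i] = pTarget[i] - p[i];

  for (int i = 0; i < 3; i++) {
    float d = 0;
    for (int j = 0; j < 3; j++) d += Jinv[i][j] * err[j];
    targetTheta[i] = theta[i] + d;     // the rest of the code moves the motors toward this
  }
}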


The 3-by-3 matrix inversion is performed using a function adapted from a post found on Stack Exchange.

I added this to my robot arm code and uploaded it to the Arduino Pro Mini. The target position has to be hard-coded, so each new target required uploading the code again. When given a target position that is within its reach, the arm typically does a good job moving to it. Here's a gif of the arm moving a pen down to touch a piece of paper:


The algorithm performs fairly well. The arm may not have taken the optimal path to get to the target position, but we have not put any constraints on the path. The arm can also be told to move to a position that is outside its reach. Here's a gif of the arm trying to place the pen directly above the base, slightly out of its reach:


As you can see, the algorithm isn't perfect. The arm seems to struggle to rotate itself to the correct angle, and instead of converging on a final resting place it seems to oscillate around the target. Finally, it goes unstable and wanders off. The Jacobian algorithm used in this example is attempting to converge on the exact answer provided, and when it physically cannot, it starts doing strange things. It will get the arm as close as it can get, but will keep iterating, thinking that it will just need a few more steps to get to the answer. While testing the arm I found this instability to show up much more often than I would have liked. If the arm could not move itself to exactly the correct position, it would start to shake and eventually drift away from the target completely. I needed a new approach.

To fix this instability, I resorted to a technique I am familiar with from my grad school research. I do helioseismology, which involves measuring oscillations on the surface of the sun to infer what is happening on the inside. Specifically, I'm interested in the flow velocities deep in the solar interior. Every flow measurement we make is related to the actual flow inside the sun by some reasonably convoluted math. The process of turning a set of measurements into a flow map of the solar interior involves solving an inverse problem. This is similar to what I have outlined for the robot arm, except the math for the forward problem is very different. We have a few methods of solving the inverse problem, one of which is Regularized Least Squares.

Least squares is often used in data analysis to 'fit' an analytical model to measured data. The model relies on a user-defined set of numerical parameters that are optimized during the fitting to produce a model that appropriately matches the data. The measure of how good the match is comes from the sum of the squared differences between the data and the model. This way, the optimized parameters are the ones that produce the smallest deviation between the model and the data. In the case of solving inverse problems, we want to find the angles (the parameter set) which will position the arm (the model) closest to our target position (the data). Additional terms can be added to the model to bias the optimization toward a solution that we prefer (regularization).

The Wikipedia page for non-linear least squares provides a nice derivation of how to turn the problem into matrix algebra which can be iterated to find an optimal solution. In the end, you can express the iteration in a similar way to the Jacobian Inverse method outlined above:

Jacobian Inverse: \[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Least Squares: \[ \vec{\delta \theta} = (J^T J)^{-1} J^T (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]
Regularized Least Squares with Weighting: \[ \vec{\delta \theta} = (J^T W J + R)^{-1} J^T W (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]

Without regularization or weighting, the least squares method looks strikingly similar to the Jacobian method. Once the fancy extras are added back in, things look a little messier, but the essence is still there. Since I already had code set up to solve the Jacobian method, it was easy enough to modify it to handle a few more matrix operations:
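Something along these lines, with $W$ and $R$ kept diagonal for simplicity (the values shown are only examples; the bend-penalizing regularization described below changes what goes into $R$ and the right-hand side, but the matrix algebra is the same):

// Regularized least squares step:
// delta = (J^T W J + R)^-1 J^T W (p_t - p_g)
void IKSolveRLS(void) {
  float p[3], J[3][3], err[3];
  float W[3] = {1.0, 1.0, 0.2};      // example: de-weight the gripper-angle component
  float R[3] = {0.01, 0.05, 0.05};   // example: penalty on each joint's step
  float A[3][3], Ainv[3][3], b[3], delta[3];

  forwardKinematics(theta, p);
  computeJacobian(theta, J);
  for (int i = 0; i < 3; i++) err[i] = pTarget[i] - p[i];

  // A = J^T W J + R,  b = J^T W (p_t - p_g)
  for (int i = 0; i < 3; i++) {
    b[i] = 0;
    for (int j = 0; j < 3; j++) {
      A[i][j] = (i == j) ? R[i] : 0.0;
      for (int k = 0; k < 3; k++) A[i][j] += J[k][i] * W[k] * J[k][j];
    }
    for (int k = 0; k < 3; k++) b[i] += J[k][i] * W[k] * err[k];
  }

  if (!invert3x3(A, Ainv)) return;
  for (int i = 0; i < 3; i++) {
    delta[i] = 0;
    for (int j = 0; j < 3; j++) delta[i] += Ainv[i][j] * b[j];
    targetTheta[i] = theta[i] + delta[i];
  }
}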


The weighting vector simply weights one or more components of the target position more or less than the others. If you want to make sure the gripper is in the right location but you don't care so much for what angle it ends up at, you can de-weight the third component. I set the regularization to penalize solutions which caused excess bending at any of the joints. This way, if there happened to be multiple ways of the arm positioning itself correctly, this would bias the algorithm to pick the solution that is 'smoothest'.

With this more advanced approach, the robot arm behaved much more sanely. I could still confuse it with particularly troublesome targets, but overall it was much more stable than before. If a target position was out of reach or required moving out of bounds for any joint, the arm would settle on a position that was fairly close. With high hopes, I stuck a pen in the gripper and told it to draw circles on a piece of paper.


The code will need a little more tuning before I send the arm off to art school. For now, the dunce hat remains. I've been able to confirm that the arm generally works, but I think the next step is to see how well I can fine-tune both the code and the sensors to achieve a modest level of accuracy in the positioning (<1 cm).

I don't always like to look up the solution to my problems right off the bat. Often I will see if I can solve them myself using what I know, then sometime near the end check with the rest of the world to see how I did. For this project, I was happy to find that different types of modified least squares methods are commonly used to do inverse kinematics. I certainly didn't expect to create a novel method on my own, but even more, I wasn't expecting to land on a solution that the rest of the world already uses.

Saturday, November 30, 2013

12 Hour Project: Arduino Mechanomyography


It's the weekend after Thanksgiving. Not only are my grad student responsibilities minimal, but the servers I get most of my data from are down over the holiday. I've tried sitting around doing nothing, but that was too hard. Instead I've decided to challenge myself to a 12-hour electronics/programming project.

Seeing as it's just past Thanksgiving, leftovers are the highlight of the week. I've gathered quite a few miscellaneous electronic parts over the last year, mostly from projects that never got off the ground. I've got leftover sensors, microcontrollers, passive components, various other integrated circuits, and a couple protoboards. To keep myself busy and to get rid of a couple of these spare parts, I've decided to do a short 12-hour project. I set out a few rules for myself:
1) I could only use parts I had sitting around.
2) I had 12 hours to complete it.
3) I had to build something useful (OK, maybe just interesting).

I did allow myself to think about the project before I started working. One topic I've been interested in for a long time but haven't yet pursued is bio sensing. This is basically the act of measuring some kind of biological signal (heart rate, blood pressure, brain waves, skin conductivity, etc.). Real medical equipment designed to measure these things is often very accurate, but is fairly expensive. As the cost of the sensor goes down, the accuracy usually goes down too. The goal is then to find a cheap method of measuring these biological signals well enough. While a hospital may need to know your O2 sats to two or three significant digits in order to save your life, an outdoor enthusiast might just be generally curious how their sats change as they hike up a 14,000 foot mountain.

As an avid boulderer, I've been particularly interested in muscle use while climbing. A big thing that new climbers notice is how much forearm strength is required to get off the ground. I wanted to see if I could measure forearm muscle activation using whatever I had lying around. The first method of measuring muscle activity that came to mind was electromyography (EMG). This method measures the electrical potential generated by muscle cells. I figured this would be difficult to do without special probes, so I went on to mechanomyography (MMG). This method measures vibrations created by activated muscles. The vibrations can be picked up using many different sensors, including microphones and accelerometers. Luckily, I had a few of the latter sitting around from a previous project. The night before I planned on starting any work on this, I set up a camera to take a time-lapse of my work area for the next day.



I started working bright and early at 10 am. I found an old MPU-6050 on a breakout board and soldered wires to the power and I2C lines so I could begin collecting data for analysis using an Arduino Mega. I wrote a quick code for the Mega that would grab raw accelerometer readings from the MPU-6050 and print them directly to the serial port every few milliseconds. I simply copied and pasted the data over to a text file that I could read with my own analysis codes.
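The logging sketch was about as bare-bones as it gets; something like this (register addresses are from the MPU-6050 datasheet, and the sample spacing and output format here are just representative):

#include <Wire.h>

const int MPU_ADDR = 0x68;   // MPU-6050 I2C address with AD0 tied low

void setup() {
  Wire.begin();
  Serial.begin(115200);
  // Wake the sensor (it powers up in sleep mode)
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);          // PWR_MGMT_1 register
  Wire.write(0);
  Wire.endTransmission();
}

void loop() {
  // Read the six accelerometer bytes starting at ACCEL_XOUT_H (0x3B)
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6, true);

  int16_t ax = (Wire.read() << 8) | Wire.read();
  int16_t ay = (Wire.read() << 8) | Wire.read();
  int16_t az = (Wire.read() << 8) | Wire.read();

  Serial.print(millis()); Serial.print(' ');
  Serial.print(ax); Serial.print(' ');
  Serial.print(ay); Serial.print(' ');
  Serial.println(az);

  delay(2);                  // a few milliseconds between samples
}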


For this project, I relied heavily on wavelet transforms to help interpret the sensor readings. Since the expected muscle signal would be a transient oscillation with unknown frequency and unknown amplitude, I needed a method that would break down any wave-like signal in both time and frequency. I was lucky enough to find an excuse to learn about wavelets for my grad school research (it ended up not being the best method, go figure) so I started this project with a few analysis codes to work from. Before I get into the results, I can try to give a quick run-down of why wavelets are useful here.

Interlude on Wavelet Transforms
The wavelet transform is an integral transform in that it maps our signal from one domain into another. In many analysis problems, a signal can be difficult to interpret in one domain (time) but easier in another (frequency). An example would be a signal comprised of two sine waves of different frequency added together. The addition creates a strange looking signal that is difficult to interpret, but when mapped on to the frequency domain, the two component sine waves appear distinct and separated. Why is this useful for our problem? Well, I should start by talking about a simpler transform, the Fourier Transform.
Interlude on Fourier Transforms (FTs)
Just kidding, if I follow this I'll fall down a rabbit hole of math. All you need to know is the following: any signal can be thought of as the combination of many sine waves (oscillations) that are at many different frequencies and different amplitudes. An FT is a way of displaying which particular frequencies are prominent in the signal. The issue is that it analyzes the entire signal at once, so if the composition of the signal changes over time, this change gets mixed up with the rest of the signal and you get just a single measure of the components.
Back to Wavelets
Suppose we have a signal we want to analyze in terms of simple oscillations (like a muscle vibrating), but the oscillating component we are interested in changes with time. Maybe the frequency changes, maybe the amplitude changes. A wavelet transform allows us to see the individual frequency components of the signal as a function of time. For an example, I'll use a microphone recording I did for the Music Box project of a few notes played on a violin.


I pulled the data into a little IDL code I adapted from examples here and plotted the results.


The warm/white colors show parts of the signal that have a large amount of power, or are particularly strong. The vertical axis gives the frequency of the component, and the horizontal axis gives the time during the original signal that it occurs. As time progresses, you can see that the microphone picked up a series of notes played at successively higher frequencies. For any of these notes played, you can see multiple prominent components of the signal showing up at distinct frequencies. These are the harmonics of the base note that show up while playing any instrument. If you are used to seeing plots like this and are wondering why it looks so awful, that's because I 'recorded' this by sampling a microphone at about 5kHz using an Arduino, printing the analog value to the serial port, then copying and pasting the results into a text file. I also may have gotten the sampling rate wrong, so the entire plot might be shifted a little in frequency.

Now that we've got the math out of the way, how does this relate to the project at hand? Muscles vibrate at particular frequencies when moving. The frequency at which they vibrate depends slightly on which muscle it is and how much force the muscle is exerting. For more information on these oscillations, check out this paper. It's an interesting read and has quite a few useful plots. My hope is to identify both the frequency and amplitude of the signal I'm interested in by looking at the wavelet transform of the accelerometer output.

Going all the way back to my simple MPU-6050 + Arduino Mega setup, I set the sensor down on my workbench next to me and recorded data for a few seconds. Near the end of the recording, I dropped a pen a few inches away from the sensor to see what it would pick up.


I've broken the sensor analysis into the three distinct directional components from the accelerometer (x,y,z). The colors once again show relative amplitude of the signal, and anything near the dark cross-hatching is not to be believed.

For the first 15 seconds, we see the noise signature of the sensor. There appears to be a slight frequency dependence, in that there is a little more power near 25 Hz and 150 Hz than in other places. I'm not sure where this is coming from, but it's low enough amplitude that I'm not concerned. At 15.5 seconds, the pen hits the desk and we see a spike in power at high frequency, along with a few of the bounces the pen made in the second after first impact.

Next I duct taped the sensor to my forearm with the x-axis pointing down the length of my arm, the z-axis pointing out from my skin, and the y-axis obviously perpendicular to both. I started by recording data with my arm as limp as possible.


Similar to the previous plot, but with a constant signal between 2 and 10 Hz. I don't really know where this is coming from either, but it's roughly consistent with my heart rate when I have a sharp metal object glued tight to my arm.

Next I recorded data while squeezing a hand exerciser twice to see if there was a significant difference between that and my relaxed arm.


Luckily, it was obvious when I was flexing and when I wasn't. There was a significant increase in the power around 13 Hz whenever I engaged my forearm muscle, and a spike in power right when I relaxed it. After a few more tries, I found this signal to be fairly robust. Even when I was moving my whole arm around to try to confuse the accelerometer, I could see a distinct signal around 13 Hz whenever I activated the muscles underneath the sensor. I decided that the x-component of the accelerometer data was giving the highest signal-to-noise ratio, so for the rest of my analysis I only used that.



Since the power at 13 Hz wasn't the only thing present in the accelerometer data, I had to filter out everything around it. I used a 4th-order Butterworth band-pass filter centered on the frequency I wanted. I found this nice site that calculates the proper coefficients for such filters and even gives filtering code examples in C. After implementing the filter, I compared the new signal to the unfiltered one.
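Once you have the coefficients, the filter itself is just a short recursive difference equation. Here's a sketch of the implementation (the b[] and a[] arrays below are placeholders; the real values come out of the design site for your actual sample rate and pass band):

// 4th-order IIR band-pass, direct form I.
// Coefficients are PLACEHOLDERS; generate real ones for your
// sample rate and the ~13 Hz band of interest.
#define ORDER 4

float b[ORDER + 1] = {1.0, 0.0, 0.0, 0.0, 0.0};  // feed-forward coefficients
float a[ORDER + 1] = {1.0, 0.0, 0.0, 0.0, 0.0};  // feedback coefficients (a[0] = 1)

float x_hist[ORDER + 1];   // recent inputs,  x_hist[0] = newest
float y_hist[ORDER + 1];   // recent outputs, y_hist[0] = newest

float bandpass(float x) {
  // shift the histories back by one sample
  for (int i = ORDER; i > 0; i--) {
    x_hist[i] = x_hist[i - 1];
    y_hist[i] = y_hist[i - 1];
  }
  x_hist[0] = x;

  // y[n] = sum(b[i]*x[n-i]) - sum(a[i]*y[n-i])
  float y = 0;
  for (int i = 0; i <= ORDER; i++) y += b[i] * x_hist[i];
  for (int i = 1; i <= ORDER; i++) y -= a[i] * y_hist[i];
  y_hist[0] = y;
  return y;
}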


With the signal I wanted extracted from the rest of the accelerometer data, I implemented a quick algorithm for determining how much power was in the signal at any given time. Over a 0.1 second window, the Arduino would record the maximum amplitude of the filtered signal and feed that into a running average of every previous maximum from past windows. This created a somewhat smooth estimate of the amplitude of the 13 Hz oscillations. I modified the Arduino code to only print this amplitude estimate to the serial port and recorded the result while flexing a few times.
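In code, the amplitude estimator amounted to something like this (the window length matches the 0.1 second description above; the smoothing factor is just a representative value):

// Running estimate of the 13 Hz signal amplitude:
// track the peak of the filtered signal over 0.1 s windows,
// then low-pass those peaks with a running average.
unsigned long windowStart = 0;
float windowMax = 0;
float amplitude = 0;            // smoothed activation estimate

void updateAmplitude(float filtered) {
  float mag = fabs(filtered);   // fabs() from math.h, built in on Arduino
  if (mag > windowMax) windowMax = mag;

  if (millis() - windowStart >= 100) {              // 0.1 second window
    amplitude = 0.9 * amplitude + 0.1 * windowMax;  // running average of window peaks
    windowMax = 0;
    windowStart = millis();
  }
}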


I finally had a simple quantity that gave an estimate of muscle activation. The next step was to add some kind of visual response. I had an extra RGB LED strip sitting around from my Daft Punk Helmet project, so I went with that. The strip of 7 LEDs I planned on using only needed three connections: Vcc, Gnd, and Data. I had plenty of experience getting these strips to work with the helmet project, so that was a breeze.


Next I transferred the code over to a little 16 MHz Arduino Pro Mini and wired everything to it. To power the setup I added a little Molex connector so it would be compatible with the 5V battery packs I made for my helmets (a 3-cell LiPo dropped to 5V with a switching regulator).


I added a simple conversion between signal amplitude and LED color to the code and tried it out. Unfortunately, I couldn't find any plastic or cloth straps that would work for securing the sensor and display to my arm, so I duct taped them on. Over the course of the day I had ripped around 5 pieces of tape off my arm so I was getting used to the pain.


At this point I declared the project a success. Not only had I built a contraption to measure muscle activity using components I had sitting around on my desk (and found a use for wavelet transforms), but I had done it in my 12-hour limit (actual time ~10.5 hrs). I've placed the final code on github for those interested.

I've got close to 10,000 still frames from the time-lapse I recorded while working, but it will take me a day or two to get that compiled and edited. I'll update once it's live!

Sunday, November 17, 2013

Daft Punk Helmets: Controllers and Code


I'm hoping this will be my last post about the Daft Punk helmets that I made for Halloween. While they were an incredibly fun project to design, build, and program, it has been a fairly draining process. I'm ready to move on to a new project, but first I'd like to wrap this one up.

Before I get into how I controlled the numerous lights embedded in each helmet, I should take a moment to discuss the power requirements. This is one area I was uncertain about, coming from a background of knowing absolutely nothing about electronics. The first problem was what the power source would be. I listed my options and their pros/cons:


After much deliberation I settled on using some LiPo batteries I had sitting around from another project. I had two 3-cell (11.1V nominal) 4500mAh rechargeable batteries and two helmets to power. A perfect match. Unfortunately, all of the LED drivers needed 5V to operate. Since connecting them to the batteries as-is would most definitely end painfully, I needed a way of dropping the voltage from 11.1V to 5V. The beginner electrical engineer in me immediately thought of using the ubiquitous LM7805, a linear voltage regulator set up to drop an input voltage to a steady 5V on output. A quick check with the datasheet for the pinout and I was ready to go.

Uh oh.

What does this mean? The datasheet for any electrical component typically provides a large amount of information on how to operate the component, what the standard operating characteristics are, and what the limits of operation are. Above, I've highlighted in appropriately fire-orange the thermal resistance of the component. The units for this are degrees Celsius per Watt, and the quantity describes how much the device will heat up when dissipating some amount of power. Why is this of any concern to me? Well, linear voltage regulators drop the input voltage down to the output voltage while maintaining some amount of current by dissipating the excess power as heat. If you don't understand that last sentence, either I'm explaining it poorly or you should read up on basic electronics (V=IR, P=IV). I prefer to believe the latter.

So how much power will I need to dissipate? I know the voltage drop (6.1V), but what is the current draw? Let's consider the pseudo-worst case where every LED in each helmet is on full brightness, but the LED drivers and other components don't require any power. For the red LED panels, that makes 320 LEDs using 20mA of current each, or 6.4 Amps total. For the RGB LEDs, there are 96 LEDs with three sub-components each drawing 20mA resulting in 5.8 Amps. So in dropping 6.1V while powering every LED, we need to dissipate up to 40 Watts of power (~45% efficiency). Our handy datasheet tells us that the regulator will set up a temperature differential relative to the ambient air temperature of 65 degrees Celsius for every Watt we dissipate. This leaves us with... 2600 degrees. That's hot enough to melt iron. It's also hot enough to completely destroy the regulator, so using a linear voltage regulator is not an option. There is the option to use a heatsink to help dissipate the waste heat more efficiently, and the datasheet helpfully tells us that in that case, we only heat up by 5 degrees C per Watt. This gets us to a toasty 200 degrees C, still too hot to handle. We need another option.
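Spelled out, the worst-case arithmetic for one helmet looks like this:
\[ P = \Delta V \times I \approx 6.1\,\mathrm{V} \times 6.4\,\mathrm{A} \approx 40\,\mathrm{W}, \qquad \Delta T \approx 65\,^{\circ}\mathrm{C/W} \times 40\,\mathrm{W} \approx 2600\,^{\circ}\mathrm{C}. \]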

Enter the DC-DC switched-mode regulator. I'm not enough of an expert on these circuits to give an intuitive explanation for how they work, but the end result is a drop in voltage with high efficiency (75-95%). The higher efficiency means we won't be dissipating nearly as much energy as heat compared to the linear regulator. I grabbed a couple cheap step-down converters (a switched-mode regulator that can only step voltages down) from eBay and made two battery packs. Each converter had a small dial for picking the output voltage, so I tuned that to output 5V and glued it in place.

Top: finished battery pack. Bottom: Just the switching regulator.

The rest of the circuitry for each helmet was fairly simple. I used an Arduino Pro Mini 5V in each helmet to control the lights and hooked up a tiny push button to each to provide some basic user input. The LED drivers for the red LED panels needed three data lines connected to the Arduino, and the RGB strips just needed one per strip. With all of this hooked up, the helmets were ready to be programmed.



Before I get into the details of the code, the whole thing can be viewed on my github.

There are a few steps to thinking about the code for each helmet. I think it's easiest to work out the code from the lowest level to the highest. The first step is figuring out how to talk to the LED drivers. Luckily I had fairly complete drivers in each helmet that handled power, PWM, and had an internal data buffer. The first bit of code I wrote just interfaced with the different drivers. The second step is figuring out how to do an internal data/pixel buffer in each Arduino so that you don't have to compute which LEDs to turn on while attempting to talk to the LED drivers. Doing that computation on the fly causes significant lag, and for the RGB LED drivers, is impossible (more on that later). The third step is deciding how to fill the internal pixel buffers. This is where I get to decide what patterns will show on each helmet, how quickly they update, etc. The code will basically 'draw' an image in the pixel buffer and expect it to be sent out to the LED drivers by whatever code was written for step two. The fourth and final step is writing the overall flow of the code. This is what handles which drawing mode is running, how the push button is dealt with, and how often the pixel buffers and LED drivers should be updated.

While the overall theme of both codes roughly follow these steps, there were differences in implementation for each helmet. I'll go through a few of the key points for each step.

The Silver Helmet (Thomas)
Step 1: The drivers were essentially acting as shift registers with 16 bits of space in each. The first 8 bits were an address and the second 8 bits were the data payload. The addresses 0x01 to 0x08 pointed to the 8 different columns of pixels attached to each driver. Sending a data payload of 0x24 (binary 00100100) with an address of 0x02 would set the third and sixth pixels of the second column on (see the sketch just after this list).
Step 2: Since each pixel is either on or off, only a single bit is needed for each pixel. The display was conveniently 8 pixels high, so a single byte for each of the 40 columns was used as the internal buffer.
Step 3: There ended up being 6 possible modes for this helmet. Some displayed text by copying individual characters from memory onto the pixel buffer, while others manipulated each pixel to create a unique effect. Below I have a gif of each mode except the 'blank' mode, which you may guess is not very interesting.
Step 4: The button was coded up as an internal interrupt in the Arduino, so at any point, pressing the button would increment the mode counter. Every few milliseconds, the main loop would allow the code pertaining to the current mode to take a 'step' in modifying the pixel buffer, then the main loop would go ahead and push the pixel buffer out to the LED drivers.
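For reference, a single driver transaction from Step 1 might look like the following sketch (plain shiftOut() bit-banging; the pin assignments are arbitrary):

const int PIN_DATA  = 2;   // pin choices are arbitrary for this sketch
const int PIN_CLK   = 3;
const int PIN_LATCH = 4;

void setup() {
  pinMode(PIN_DATA, OUTPUT);
  pinMode(PIN_CLK, OUTPUT);
  pinMode(PIN_LATCH, OUTPUT);
  digitalWrite(PIN_LATCH, HIGH);
}

// Send one 16-bit packet (8-bit address, then 8-bit data) to the driver.
void sendPacket(byte address, byte data) {
  digitalWrite(PIN_LATCH, LOW);
  shiftOut(PIN_DATA, PIN_CLK, MSBFIRST, address);
  shiftOut(PIN_DATA, PIN_CLK, MSBFIRST, data);
  digitalWrite(PIN_LATCH, HIGH);   // latch the new column data to the LEDs
}

void loop() {
  // Example from above: set the third and sixth pixels of column 2.
  sendPacket(0x02, 0x24);
  delay(1000);
}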


Robot / Human


Random Bytes


Around the World


Starfield


Pong

The red pixel near the bottom of the display is not a stuck pixel. It's actually the on-board LED for the Arduino shining through from the backside of the helmet. A few notes on the modes: the Robot/Human mode would actually choose which of those two words to display at random after the previous one had finished scrolling by. The Random Bytes mode was done through bit-shifting, byte-swapping, and random bit-flipping. The Starfield was boring so I never left it on. The game of Pong actually kept score with itself on the edges, and the ball would get faster as the game progressed.

The Gold Helmet (Guy)

Step 1: The LED drivers found on each RGB LED require precise timing in order for them to accept data. Luckily, there is a wonderful library put out by the lovely folks at Adafruit that handles this timing. Their library uses hand-tuned assembly code to ensure the timing is accurate and that the Arduino plays nice with the drivers.
Step 2: The NeoPixel library sets up a pixel buffer on its own, but I decided to also keep one outside the library. Each pixel requires 8 bits of data per color channel and I was using 8 strips of 6 LEDs on each side of the helmet. This is 96 LEDs (288 individual color channels), but I was only interested in having a single color for each strip of 6 LEDs. This limits the number of color values that need to be held in memory to 48. I kept a single array for each color and side (right-red, right-green, ..., left-red, ...), each 8 bytes long.
Step 3: There were only four modes (including a blank one) that I came up with. The gifs are below.
Step 4: Again, the push button acted as an interrupt for the Arduino to switch modes. The main loop stepped forward the appropriate mode and copied my own pixel buffers to the NeoPixel buffers so the data could be sent out (a sketch of that copy follows this list).
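The buffer copy from Steps 2 and 4 boils down to something like this sketch using the Adafruit NeoPixel library (the pin number, strip ordering, and array names here are illustrative):

#include <Adafruit_NeoPixel.h>

#define LEDS_PER_STRIP   6
#define STRIPS_PER_SIDE  8

// One chain of 48 LEDs for the right side of the helmet (data pin 6 is arbitrary)
Adafruit_NeoPixel rightSide(STRIPS_PER_SIDE * LEDS_PER_STRIP, 6, NEO_GRB + NEO_KHZ800);

// My own per-strip buffers: a single color per strip of 6 LEDs
byte rightRed[STRIPS_PER_SIDE], rightGreen[STRIPS_PER_SIDE], rightBlue[STRIPS_PER_SIDE];

void setup() {
  rightSide.begin();
  rightSide.show();   // start with everything off
}

// Copy the per-strip colors into the NeoPixel buffer and push them out.
void pushRightSide() {
  for (int s = 0; s < STRIPS_PER_SIDE; s++) {
    for (int i = 0; i < LEDS_PER_STRIP; i++) {
      rightSide.setPixelColor(s * LEDS_PER_STRIP + i,
                              rightRed[s], rightGreen[s], rightBlue[s]);
    }
  }
  rightSide.show();   // the library handles the timing-critical transfer
}

void loop() {
  // ...mode code fills rightRed/rightGreen/rightBlue here...
  pushRightSide();
  delay(20);
}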


Classic Blocks


Cycling Colors


Slow Beat

The Classic Blocks mode was created based on some videos I had seen of other similar helmets in action. I felt it had a nice 90s feel to it. The Cycling Colors mode was copied directly from my Music Box project, but sped up by a few times. The Slow Beat mode used the colors from the Classic mode, but kept them mostly steady with a slow pulsation of the brightness.

With that, the code to drive both helmets had been finished. The LED drivers had been wired to the LEDs; the LEDs had been soldered together; the visors had been tinted and melted into place; the helmets had been painted and glossed, sanded and molded, reinforced with fiberglass and resin; and some pieces of paper had been folded into the shape of two helmets. It had been a hectic but incredibly fun project for the eight weeks it took to get to completion. Not everything went as planned and some of the edges were rougher than intended, but I personally think the end result was well worth the effort. I hope these few posts covering the build progress have been interesting or even useful to you. To wrap this up, here are a few snapshots of the helmets being worn for Halloween 2013:




Monday, November 11, 2013

Robobear and a Note on Failure

My next post will be something interesting, probably more about my Daft Punk helmets. But for now, something a little less flashy.



I would like to talk about a recent project that ended up a failure. This is not the first failure I've had with projects related to this blog, but typically I just move on to a new project that has a better chance of succeeding. There's a lot that can be learned from failure, but often each lesson is small and not worth an entire written discussion. Collected together, these small lessons from failure provide an interesting look at personal development over time. Here are some of my own recent discoveries:

- Don't connect the terminals of a battery together to see if the battery is dead.
- When soldering, wait for the iron to reach the right temperature before you begin.
- Don't let a linear voltage regulator try to dissipate 10 Watts of power, they are not light bulbs.
- Know the structural limitations of your materials (duct tape, hot glue, epoxy, solder, nuts and bolts, wood, metal, acrylic, carbon fiber, fiberglass, etc.). You don't want them breaking at just the wrong time.
- Don't waste a microcontroller on something that can be done with a cheap IC (LED driver, motor driver, etc).
- Before you buy a bunch of electronic components, check eBay. Someone might be offering them for 10% of the cost you would have paid elsewhere.
- If you need to make analog measurements (like a microphone), make sure the power source is stable and you've taken care to reduce electromagnetic interference.
- Prototype, collect data, and analyze your idea before committing time and money. This is like the advanced version of "measure twice, cut once".

Failure in the context of a hobby is a very productive kind of failure, since the risk is low. All you can waste is your own time and money. There isn't a risk of disappointing someone else (usually), and there isn't a risk of getting fired. The reason behind a failed project can be discovered and the lesson digested on your own time. Working on my own projects in my free time has allowed me the opportunity to stop and smell the roses and then truly understand why soaking roses in chloroform is a bad idea.

WARNING: The following is a long-winded story about the Robobear project and how it failed. If you are not interested in the details of how I attempted to motorize a wooden cart that would carry a bass player and a taxidermy bear, skip it. Alternatively, here is a tl;dr summary: my job was to motorize a big wooden cart so it could move silently across a stage; the project failed despite solving many problems along the way.

A few weeks ago I was approached by my good friend Julie to come up with some ideas on how to get a wooden cart to move across a stage with minimal human interaction. The cart was 9 feet long, 4 feet wide, and only about 3 inches high. The purpose of the cart was to ferry a string bass player and a taxidermy bear on to and off of a stage for an artistic performance. I've heard of strange requirements from clients, but this was certainly unique.

I saw a few ways of moving the cart:
1 - Have someone push/pull the cart
2 - Have two ropes: one tied to the back for pulling, and a second attached to a wind-up mechanism underneath the cart so that pulling it makes the cart move away on its own.
3 - Motorize some of the wheels and use wired, wireless, or fully automated controls.
After discussing the options, she chose to move forward with the motorized option and keep the other two as backup plans. And so, the idea of Robobear was born. I started planning, prototyping, and ordering parts. The budget was tight, so I had to plan carefully and keep cost in mind when designing the system.

The primary concern throughout this whole project was torque. The motors have to be capable of exerting enough torque to push the cart. If we pretend the cart is in a frictionless world (except the friction that allows the tires to push against the ground) and the ground is completely flat, then any amount of torque will do. Having more torque available would just increase the possible acceleration of the cart. Since we unfortunately don't live in such a world, we need the motors to be able to overcome friction and bumps in the ground before they can even start to move the cart. The first step in this project was estimating the necessary torque to get the cart rolling. By pulling the cart by hand with various amounts of weight on it (very scientific), I was able to estimate that it needed about 30 pounds of horizontal force.

To convert between force and torque, we need to know the lever arm distance (torque = force * distance). In this case that is the radius of the wheels that will be used. There wasn't a whole lot of space underneath the cart, so I had few options. Assuming 35mm radius wheels, the combined torque of the motors needed to be at least 4.7 Nm, or 660 oz-in. This is a fair amount of torque, I think. I decided to stick with DC gearmotors, and surfed around some websites I knew of that sold high powered ones. On Pololu, the most powerful motors they offer have around 200 oz-in of torque. Four of these would be necessary to get the cart moving. The speed matched up well enough, since a 70mm wheel rotating at 150rpm gives a max speed of 0.5m/s (1.8 fps). I went with plastic wheels found on the same site, and modified some mounting hubs to connect the large motors to such small wheels. Of course, I could have tried for heftier motors, like a golf cart motor or electric scooter motors. Unfortunately the size limitation was fairly strict, so I had to stick with motors that could fit underneath the cart.
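For the record, the conversion from the pull test to the motor spec went roughly like this:
\[ F \approx 30\,\mathrm{lbf} \approx 133\,\mathrm{N}, \qquad \tau = F\,r \approx 133\,\mathrm{N} \times 0.035\,\mathrm{m} \approx 4.7\,\mathrm{N\,m} \approx 660\,\mathrm{oz\text{-}in}. \]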

For the motor driver, I went with a pre-built controller that could supply 24 Amps of current total, just enough to allow each of the four motors to stall (6A stall current) if they got stuck. Making a motor driver myself would have probably been cheaper, but most likely could not have supplied as much current without serious heat issues. Sometimes it's best to pay for peace of mind.


Two 3S, 2200mAh LiPo batteries were taken from an old hexacopter project to supply power. I connected them in series, providing a whopping 22V with a possible 40A of current. If you have put those figures together in your head and decided this was a bad idea, then I congratulate you both on your ability to calculate electric power and your good sense. The motors would end up drawing only 10 Amps at peak for only a few seconds, so 2200mAh was enough capacity for the job. The brains of the operation was an Arduino Pro Mini running a program that would interpret serial commands and run the motor driver. With all of the parts in place, I started mounting things to the cart.




After only a few days of work, I was ready to test the system on the floor of Julie's studio. It didn't budge. The floor of the studio was a slick, polished concrete surface that was impossible to get any traction on. It was also difficult to dance on, but that's another story. When I lowered the motors a little to press harder on the floor, the motor brackets just bent back under the weight of the cart. The motors seemed capable of exerting the planned amount of torque, but that torque was unable to make it to the ground to move the cart. Things were not looking good. To increase the traction, I added an extra wheel to each motor, and reinforced the motor brackets with stainless steel wire.



With 8 wheels making contact with the slick ground, the cart was finally able to move itself. However, it couldn't handle moving all of the weight of a human and taxidermy bear yet. The wheels were still slipping, but I accepted that the stage that the cart would need to perform on would not be so slick. The next step was to test the cart on the right floor and hope for the best. Unfortunately, the stage required special access and reservations, so by the time we were able to run the test, there were only 2 weeks left before the final rehearsals. We had to accept that if the cart could not move the required amount of weight by the end of the stage test, the motorization plan would have to be abandoned. We just couldn't keep hoping for a fix right up until the end and risk having an inoperable cart with no backup plan acted on.



We finally got the cart on to the stage and I began testing. At first, it didn't work. I tightened some screws that had gotten loose during transportation and it worked a little better. I tuned the motor response in software to prevent slipping (like traction control on a car) and it worked a little better again. I adjusted the height of each motor to allow more pressure to be applied evenly to each motor and it worked a little better. I could add around 200 pounds to the cart and it could move a few feet in each direction. Finally, the cart seemed to work.

I loaded up the cart with the human half of the intended payload and set it to move about 15 feet in one direction. It started up just fine, then stopped 5 feet short. I told it to reverse direction, thinking it was caught on a wire or bump in the floor, but all I got was a repetitive clicking sound from some of the motors. The motors had enough torque to move the cart, and the wheels had enough contact with the ground to exert enough horizontal force, but the rubber tires were getting ripped off the plastic wheels after a few seconds of use.

I quickly came up with a few ideas that could fix this problem. One would be to fuse the tires to the wheels using heat or epoxy. Another way would be to create my own tires out of silicone rubber and mold them directly to the wheels. Unfortunately, every solution I came up with required another week or two of work and testing to confirm that it would fix the problem. We just didn't have the time to try something new. The project failed due to unforeseen complications and lack of time.



So the big question is: what did I gain from this failure? Some leftover motors and half broken wheels? An intimate hatred for slick floors? No, despite my reluctance I probably learned something about getting a job done on someone else's time-frame. Most of my projects are done at my own leisure, so unexpected setbacks can last as long as I like. It was a very different experience to have a looming deadline for something I consider to be a hobby. Not only was the idea of a deadline new, but so was the idea of smaller deadlines along the way to make sure the project as a whole was worth continuing. The biggest one here was making sure the wheels could get enough traction on the theater floor far enough in advance.

In all I came away with two big lessons. They are so big, they deserve a big font.
1) Given a large goal, set many smaller goals that lead up to it. This not only helps keep things on track and on time, but forces you to consider how realistic the main goal is.
2) Don't make your hobby into a job. If you do something for fun, keep it fun. Don't set deadlines or have other people depending on you for their job. I say this not because I regret doing this project, but because I've learned that failing at something is easier when no one is watching.

Wednesday, October 30, 2013

Daft Punk Helmets: Lots of Lights



Previously, I covered the construction of both of the Daft Punk helmets in a series of posts. Starting with cardstock, they were assembled with superglue, reinforced with fiberglass, molded with Bondo, painted with metallic paint, and visors were made from heated plastic. They were nice and shiny and ready to be worn.

While the black-on-metal look matches the current era of Daft Punk helmets, they haven't always been so simple. Over the years they have gone through a series of visual changes, mostly involving the embedded electronics. You can find examples of when both helmets had an unreasonable number of LEDs hidden behind the visors doing all kinds of flashy things. The silver helmet can have a red LED panel stretching across the inside of the visor with other colored lights at the ends, while the gold helmet can have rainbow blocks of color running down the sides of the visor with a large yellow LED display panel covering the center. Mimicking these elaborate setups would take far too much of my own time and money, so I decided to boil down the lights to what I thought would represent the 'core' of the display.

For the silver helmet I decided to make a 40x8 red LED panel that would hide behind the visor and display words and patterns. For the gold helmet I went with RGB strips lined up down the sides of the visor. Instead of trying to diffuse the colored lights to make a solid block of color, I went with gluing the lights right to the back of the visor to create a slightly harsher and more modern look.

I'll start with the work that went into the 40x8 red LED display, since that one was much more difficult to assemble. The first step was to decide on how to control all of the LEDs. I went with a few MAX7219 controllers, each capable of controlling a single 8x8 LED panel. To prototype the controller, I set up a spare Arduino and pre-made LED panel:
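A minimal prototype test of a single MAX7219-driven 8x8 panel can be done with the common LedControl library; something along these lines (pin choices are arbitrary):

#include <LedControl.h>

// DIN = 12, CLK = 11, CS = 10, one MAX7219 in the chain (pins arbitrary)
LedControl lc(12, 11, 10, 1);

void setup() {
  lc.shutdown(0, false);   // the MAX7219 powers up in shutdown mode
  lc.setIntensity(0, 4);   // modest brightness
  lc.clearDisplay(0);
}

void loop() {
  // walk a single lit pixel across the 8x8 panel
  for (int row = 0; row < 8; row++) {
    for (int col = 0; col < 8; col++) {
      lc.setLed(0, row, col, true);
      delay(50);
      lc.setLed(0, row, col, false);
    }
  }
}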


Thanks to numerous examples found online of how to interface with the controller, I set off assembling the red LED panels. Since the display would sit right in front of my eyes inside the helmet, I needed the panel to have enough empty space between LEDs in order to allow me to still see out. To do this I set up a grid of holes in some cardboard and used it as a frame to solder the LEDs together. I made 5 8x8 panels where each column had common cathodes and every row had common anodes.




After quite a few hours of soldering, I had all of the LEDs soldered into panels. I wanted the LED drivers to sit on their own small protoboards near the panels.



With every 8x8 panel connected to a driver, I was able to hot glue all of the panels together side-by-side to make the single large display. I put little wooden sticks at the joints of each panel to allow the panel to be glued into the helmet without having to secure the LEDs themselves to the helmet.



Since the drivers could all be daisy-chained together, there were only 5 wires needed to fully control the panel. Two for power, one as a clock, one as a latch, and one as a data line. These would be hooked up to an Arduino controller, but more on that in a later post.

The lights for the gold helmet were somewhat easier once I decided to buy an addressable RGB LED strip from eBay. The one I picked had 144 LEDs crammed into a single meter with a built-in driver on each LED. As long as every LED was daisy-chained, only one data line was needed to set every LED to a unique color. I cut the strip into segments that were 6 LEDs long and re-soldered them back into a chain with wires to space them out.



This way, I could orient the strips to run parallel by bending each wire segment by 180 degrees, yet still only use a single data wire to control them all.
Since the LED drivers were all on-board, I was able to test the LEDs at many stages of the assembly.

The first thing I realized was how bright the strip could get. With every RGB LED running at full brightness, the helmet would pull over 5A of current just to light it up. I was looking for a colorfully lit helmet, not a head-mounted search beacon. I decided the best way to solve this was in software, so I just kept the brightness in mind while testing the hardware. I marked off on the visor where the lights needed to be glued and went to work.



With all of the strips attached, I hooked up the power and data lines to a testing Arduino and lit it up.



At this point, both helmets had lights installed and ready to be driven. The end of the project was in sight, with only the control electronics, power source, and coding to sort out. Once again, more on that later.