<span style="font-size: x-large;"><b>[HEX] Navigating like A Star</b></span><br />
<br />
This is the fourth post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, <a href="http://www.gperco.com/2015/06/hex-project-intro.html">click here</a>.<br />
<br />
The previous posts have gone through how the hexapod was built, how the different processors communicate with each other, how it is able to walk smoothly, and how it can both keep track of obstacles and where it is relative to those obstacles. In this post, I'll talk about allowing the hexapod to walk around autonomously.<br />
<br />
The first question we need to answer is: what do we mean by walking around autonomously? Do we tell the hexapod to walk forward and it does so without help? Do we tell the hexapod to walk to 'the kitchen' and it figures out how to do that? Does the hexapod wake up in the middle of the night to search for a midnight snack? Enabling the hexapod to act out the last two scenarios is well past my skill and interest level, so I'll settle for something a little simpler. My goal is to be able to give the hexapod a location in space (as 2D coordinates) and have it walk there without running into anything. To do this, it needs to use all of the algorithms developed in the previous posts to avoid obstacles, keep track of where it is, and step with smooth confidence.<br />
<br />
The work from the previous posts allows for a fair amount of abstraction in solving this problem. From the work on inverse kinematics, we can fully control the motion of the hexapod through two parameters: speed and turning. From the SLAM algorithm, we have a discrete grid of points representing a map of the local environment. Autonomous navigation is then just a problem of deciding where on the map the hexapod should go, and then setting the speed and turning values to make it go that way.<br />
<br />
<span style="font-size: large;"><b>CS101</b></span><br />
<br />
Unlike the problem of Simultaneous Localization and Mapping from the previous post, <a href="https://en.wikipedia.org/wiki/Pathfinding">pathfinding</a> has plenty of easy-to-digest literature floating around. That doesn't mean I can simply point to some code I've written and call it a day, though. Many naive implementations of common pathfinding algorithms produce code that works in theory but runs far too slowly for my relatively simple-minded hexapod.<br />
<br />
A classic pathfinding algorithm is <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm">Dijkstra's algorithm</a>. In this algorithm, we represent every possible location as a 'node' that is connected to other 'nodes' (in this case, neighboring locations). Each connection between nodes is assigned a distance value, so the algorithm can compute the distance between far-away nodes by summing the connection distances along a path between them. Given a start and end node, Dijkstra's algorithm finds the shortest path (series of connections) between the two by traversing the network of connected nodes and assigning a distance-from-start value to each. If a node is reached multiple times through different paths, the shortest distance takes precedence. When the target node is reached, we simply trace the network in reverse to find which nodes gave the shortest path from start to end. In this way, Dijkstra's algorithm can find the shortest path from point A to point B (assuming you describe positions in space as a network of connected nodes).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-geQ-KTWHHkc/VYCUIJItjXI/AAAAAAAAB70/SL8inDiW2D0/s1600/Screenshot-1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://4.bp.blogspot.com/-geQ-KTWHHkc/VYCUIJItjXI/AAAAAAAAB70/SL8inDiW2D0/s320/Screenshot-1.png" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Pathfinding on a grid of nodes from point A (green) to point B (red).</span></div>
<br />
In our application, we also want the pathfinding algorithm to avoid sending the hexapod through walls or other obstacles. A simple way to achieve this is to modify the pathfinding algorithm to include a 'cost' of travelling between nodes. This cost is large for connections that land you inside a solid object. We are then not finding the shortest path between points A and B, but instead the path that minimizes distance + cost. Since we are free to pick the cost, we can pick how much the algorithm prefers short distances versus avoiding obstacles.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-sj3xnbBjZnk/VYCUOzBgxCI/AAAAAAAAB78/gpRn4VHMFW8/s1600/Screenshot-2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-sj3xnbBjZnk/VYCUOzBgxCI/AAAAAAAAB78/gpRn4VHMFW8/s320/Screenshot-2.png" width="317" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Pathfinding on a grid with obstacles.</span></div>
<br />
One of the drawbacks of this method is the amount of time needed to find the optimal path. On a large grid, the algorithm will spend a significant amount of time exploring the multitude of potential paths that may not provide an optimal result, but must be checked to make sure. A modification to Dijkstra's algorithm can further improve the results and leads us to the <a href="https://en.wikipedia.org/wiki/A*_search_algorithm">A* algorithm</a> (pronounced A-star). In this version, we preferentially explore connections that appear to lead closer to the target (judged by some simple measure, like straight-line distance) before considering other paths. This is like making a bet that the path between points A and B is the most direct one. If you win, the algorithm finds the path in a very short amount of time. If you lose, the algorithm has to go back and search all of the other less direct paths. I've decided to go with the A* algorithm, since I know the environment in which the hexapod will navigate is not too maze-like.<br />
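<br />
To make this concrete, here's a minimal sketch of A* on an occupancy grid, in the same spirit as (though much simpler than) what the hexapod runs. The 8-connected grid, the obstacle cost constant, and the straight-line heuristic are illustrative choices, not values from my code:<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;
#include &lt;functional&gt;
#include &lt;queue&gt;
#include &lt;vector&gt;

// Minimal A* over an occupancy grid. map[y*w+x] near 1 means "obstacle",
// which adds a large cost rather than being strictly forbidden.
std::vector&lt;int&gt; astar(const std::vector&lt;float&gt; &amp;map, int w, int h,
                       int start, int goal)
{
    const float OBSTACLE_COST = 100.0f; // trades path length vs. avoidance
    std::vector&lt;float&gt; dist(w*h, 1e9f);
    std::vector&lt;int&gt; parent(w*h, -1);
    typedef std::pair&lt;float, int&gt; Entry; // (cost + heuristic, node)
    std::priority_queue&lt;Entry, std::vector&lt;Entry&gt;, std::greater&lt;Entry&gt; &gt; open;
    int gx = goal % w, gy = goal / w;
    dist[start] = 0.0f;
    open.push(Entry(0.0f, start));
    while (!open.empty())
    {
        int cur = open.top().second;
        open.pop();
        if (cur == goal) break; // the bet paid off: target reached
        int cx = cur % w, cy = cur / w;
        for (int dy = -1; dy &lt;= 1; dy++)
        for (int dx = -1; dx &lt;= 1; dx++)
        {
            if (dx == 0 &amp;&amp; dy == 0) continue;
            int x = cx + dx, y = cy + dy;
            if (x &lt; 0 || x &gt;= w || y &lt; 0 || y &gt;= h) continue;
            int next = y*w + x;
            float step = std::sqrt((float)(dx*dx + dy*dy));
            float cost = dist[cur] + step + map[next]*OBSTACLE_COST;
            if (cost &lt; dist[next])
            {
                dist[next] = cost;
                parent[next] = cur;
                // heuristic: straight-line distance to the goal
                float hx = (float)(x - gx), hy = (float)(y - gy);
                open.push(Entry(cost + std::sqrt(hx*hx + hy*hy), next));
            }
        }
    }
    std::vector&lt;int&gt; path; // trace parent links from goal back to start
    for (int n = goal; n != -1; n = parent[n])
        path.push_back(n);
    return path; // goal-to-start order; just {goal} if no path was found
}
</pre>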
<br />
<span style="font-size: large;"><b>Autonavigation</b></span><br />
<br />
My implementation of the A* algorithm for the hexapod (which I call "Autonav") can be found <a href="https://github.com/bgreer/HEX/tree/master/LIB_AUTONAV">here</a>. As with the SLAM algorithm, the actual solver runs in a thread separate from the main thread that controls the hexapod servos. Given a copy of the SLAM map, the current location of the hexapod, and a target location, the solver searches through the map to find a sequence of positions that connects the two locations. Since the SLAM map is updated every few seconds and may contain new obstacles, the Autonav solver needs to be able to solve for a new path every few seconds as well. In my implementation (which is admittedly not very optimized), the solver can search through around 2000 grid points in the 1-2 seconds between SLAM updates. For typical grid setups, this unfortunately only gives a physical search range of a few meters. In order for the Autonav solver to quickly and reliably find paths through the environment, the target position can only be a few meters away. Far-away targets can be reached by splitting the distance up into a series of evenly-spaced waypoints.<br />
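<br />
The waypoint splitting itself is simple enough to show in full. Here's a sketch under the assumption of evenly spaced sub-goals along a straight line (the spacing would be a tunable parameter; this is not verbatim from my code):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;
#include &lt;vector&gt;

struct Point { double x, y; };

// Split the straight line from start to goal into waypoints no more than
// maxdist apart, so each Autonav solve stays within its reliable range.
std::vector&lt;Point&gt; makeWaypoints(Point start, Point goal, double maxdist)
{
    std::vector&lt;Point&gt; wp;
    double dx = goal.x - start.x, dy = goal.y - start.y;
    int n = (int)std::ceil(std::sqrt(dx*dx + dy*dy) / maxdist);
    if (n &lt; 1) n = 1;
    for (int i = 1; i &lt;= n; i++)
    {
        Point p = { start.x + dx*i/n, start.y + dy*i/n };
        wp.push_back(p);
    }
    return wp; // evenly spaced, ending exactly on the goal
}
</pre>
Feeding the solver something like <span style="font-family: Courier New, Courier, monospace;">makeWaypoints(here, target, 2.0)</span> keeps every sub-goal within the couple of meters it can reliably search.<br />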
<br />
To visualize the path along which Autonav sends the hexapod, I've written a <a href="https://github.com/bgreer/HEX/blob/master/TESTS/reconstruct/main.cpp">short program</a> that parses the <a href="https://github.com/bgreer/HEX/tree/master/LIB_LOGGER">logfile</a> written during a run and creates a reconstruction of the scene. The render shows the SLAM map (with distance to objects as a smooth gradient) in red, the hexapod location in blue, and the Autonav path in green:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-Pr5onF3_oDk/VYDBfWnBsYI/AAAAAAAAB8M/BplZ46tnsLQ/s1600/image_474.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-Pr5onF3_oDk/VYDBfWnBsYI/AAAAAAAAB8M/BplZ46tnsLQ/s400/image_474.png" width="400" /></a></div>
<br />
<div style="text-align: center;">
<span style="font-size: x-small;">Scale is 5 cm per pixel.</span></div>
<br />
For a given SLAM map, the Autonav solver does a pretty good job of finding a path around obstacles. To turn this path into commands for the hexapod to follow, I set the speed to a constant (0.3) and modulate the turning value based on the path. The low resolution of the Autonav path can make the hexapod act a bit jumpy, as the direction of the path only changes in 45-degree increments. To smooth things out, the hexapod picks a 'tracking point' along the Autonav path at least 20 cm away from its current position. It computes the heading to that point and sets the turning parameter proportional to the difference between that heading and its current one. In this way, the hexapod continually tracks the Autonav path.<br />
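<br />
The tracking logic boils down to a few lines per control cycle. Here's a sketch, with the lookahead distance and proportional gain as the knobs to tune (the names and structure are illustrative, not lifted from my code):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;
#include &lt;vector&gt;

struct Point { double x, y; };

// Pick the first path point at least 'lookahead' meters away, then set
// the turning parameter proportional to the heading error toward it.
double computeTurning(const std::vector&lt;Point&gt; &amp;path,
                      double x, double y, double heading,
                      double lookahead, double gain)
{
    size_t target = path.size() - 1; // fall back to the end of the path
    for (size_t i = 0; i &lt; path.size(); i++)
    {
        double dx = path[i].x - x, dy = path[i].y - y;
        if (std::sqrt(dx*dx + dy*dy) &gt;= lookahead) { target = i; break; }
    }
    double desired = std::atan2(path[target].y - y, path[target].x - x);
    double err = desired - heading;
    // wrap the error into [-pi, pi] so we always turn the short way
    while (err &gt; M_PI) err -= 2.0*M_PI;
    while (err &lt; -M_PI) err += 2.0*M_PI;
    return gain * err; // feeds the same 'turning' value the gait code uses
}
</pre>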
<br />
To test how well the whole system works, I set up different scenarios for the hexapod to navigate through. After each test, I compiled the reconstruction renders into a movie so I could play back what happened. The first test was for the hexapod to move to a target 1.5 meters in front of it in an empty room, then turn around and go back to the starting point.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/YjpZDXWPMuc/0.jpg" frameborder="0" height="400" src="https://www.youtube.com/embed/YjpZDXWPMuc?feature=player_embedded" width="480"></iframe></div>
<br />
From the video, we see that everything seems to be working well. The SLAM algorithm updates the map with LIDAR scans made from new perspectives, the Autonav solver continually updates the intended path, and the hexapod smoothly strolls from point A to point B. Through the eyes of a human observer, this kind of test looks pretty neat:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/Oo9sNKFZIFY/0.jpg" frameborder="0" height="400" src="https://www.youtube.com/embed/Oo9sNKFZIFY?feature=player_embedded" width="480"></iframe></div>
<br />
<br />
Once again, everything works pretty smoothly. Of course, not every test I did wound up being such a simple success. Here's an example of a test where the hexapod got turned around due to what I call "SLAM-slip", where the SLAM algorithm fails to properly match new LIDAR scans to the map.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/GEU8LuLZSHE/0.jpg" frameborder="0" height="400" src="https://www.youtube.com/embed/GEU8LuLZSHE?feature=player_embedded" width="480"></iframe></div>
<br />
<br />
Here's a more complicated test where I added a box along its return path to see if it could adapt to the new environment:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/e5xhxeDKIhA/0.jpg" frameborder="0" height="400" src="https://www.youtube.com/embed/e5xhxeDKIhA?feature=player_embedded" width="480"></iframe></div>
<br />
With a bit of tuning, the hexapod was able to successfully navigate around both permanent and temporary obstacles and reach the waypoints I gave it. The biggest issue I found was timing. By the time the SLAM algorithm had updated the map and the Autonav solver had found a new path, the hexapod would have moved far enough to invalidate the new solution. Sometimes there was nearly a 5-second delay between useful updates to the hexapod's turning. Considering all of the math the hexapod is trying to run on such low-powered hardware, a 5-second delay isn't too surprising, but it's unfortunate that the hardware seems just barely underpowered for the task set before it. A simple fix is to let the hexapod walk slowly enough that it receives course updates before straying too far off the path.<br />
<br />
<b>UPDATE: more testing, so here's a longer navigation test through my apartment!</b><br />
<div class="separator" style="clear: both; text-align: center;">
<iframe width="480" height="400" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/rFxSaccU5a8/0.jpg" src="https://www.youtube.com/embed/rFxSaccU5a8?feature=player_embedded" frameborder="0" allowfullscreen></iframe></div>
<br />
<br />
<span style="font-size: large;"><b>Race Prep</b></span><br />
<br />
At this point, the hexapod is a pretty fleshed-out robot. It can do some basic autonomous navigation through my apartment, given a set of waypoints to follow. While the end result is conceptually fairly simple, the process of getting there has been considerably involved. It would be a shame to end the process here, but there isn't a whole lot more I can reasonably do to enhance the hexapod's capabilities at this point.<br />
<br />
To give the hexapod a final task, I've entered it into the Sparkfun Autonomous Vehicle Competition (AVC) on 20 June 2015. I attended the previous two AVCs and had a lot of fun watching both the ground and aerial races. The sophistication of the entries in the ground race has been extremely varied, from LEGO cars relying on dead-reckoning to go-karts with <a href="https://en.wikipedia.org/wiki/Real_Time_Kinematic">RTK GPS</a>. I figure that a LIDAR-equipped hexapod falls comfortably between these two ends.<br />
<br />
Unfortunately, there are three main issues with my hexapod that make it woefully underprepared for such a competition. The first is the speed. The fastest I've gotten the hexapod to move is around 1.2 feet per second, but the navigation has issues above 0.5 feet per second. Robots in the competition are expected to travel almost 1000 feet in less than 5 minutes, giving an average speed of 3.3 feet per second. So at best, the hexapod will only make it about 20% of the way around before time is called. The second issue is sunlight. The LIDAR unit I'm using is intended for indoor use, and can't make reliable measurements in direct sunlight. This means that the hexapod might not be able to see any of its surroundings to get a position reference. The third issue is that the course doesn't really have many natural obstacles to use for position reference. In previous years, the outer boundary of the course was a chain-link fence, which is likely all but invisible to LIDAR. Even if the LIDAR could work in sunlight, there might not be any objects within range to sense. With these significant issues, I'm still going to race the hexapod. It won't win, and it might not even lose gracefully.<br />
<br />
One of the requirements for entries into the AVC ground race is that the robot must not only be autonomous, but must start the race with a single button press. So far I've been ssh-ing into the main processor through WiFi to initiate any actions, so I need one last physical addition to the hexapod.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-7K1cEDHF-SE/VXkPIbr70sI/AAAAAAAAB58/L0RiTzSJPG4/s1600/IMG_20150607_114926608.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="332" src="http://1.bp.blogspot.com/-7K1cEDHF-SE/VXkPIbr70sI/AAAAAAAAB58/L0RiTzSJPG4/s600/IMG_20150607_114926608.jpg" width="590" /></a></div>
<br />
<br />
The LED+button board connects to the Due, but since the Due pins are shared with the main cpu, both can access them. The orange LED (#1) blinks to show that the PROG_HWMGR code is running on the Due, and the button below it does nothing. The rest of the LEDs and buttons are controlled by the main cpu when running in fully-autonomous mode. When the code is ready to enable the servos and spin up the LIDAR unit, the white LED (#2) flashes. Pressing the button below it allows the code to continue. The blue LED (#3) flashes when the LIDAR is spun up and the SLAM algorithm is ready to make an initial map. Pressing the button below it starts the integrating process. When the initial integration is done, the green LED (#4) flashes, indicating the hexapod is ready to race. Pressing the final button starts the race, and pressing it again ends the race early (in case of <a href="http://www.clickhole.com/article/science-ftw-ibm-just-announced-robotics-competitio-2540">wayward robot</a>).<br />
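<br />
In code, the whole thing is just a chain of flash-and-wait states. A rough sketch of the flow (every name here is a stand-in, not the real API):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
// Stand-in declarations; the real code talks to the LEDs and buttons
// through the Due's shared pins.
void flashLED(int led);
void waitForButton(int button);
void enableServos();
void spinUpLidar();
void buildInitialMap();
void runRace();
enum { LED_WHITE, LED_BLUE, LED_GREEN, BTN_2, BTN_3, BTN_4 };

// Race-day startup: each stage flashes an LED and blocks until its
// button is pressed.
void raceStartup()
{
    flashLED(LED_WHITE);   // #2: ready to power things up
    waitForButton(BTN_2);
    enableServos();
    spinUpLidar();

    flashLED(LED_BLUE);    // #3: LIDAR at speed, SLAM ready
    waitForButton(BTN_3);
    buildInitialMap();     // integrate scans while standing still

    flashLED(LED_GREEN);   // #4: ready to race
    waitForButton(BTN_4);
    runRace();             // pressing #4 again aborts the run
}
</pre>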
<br />
So with that, the hexapod is built, programmed, and ready to compete. I'll post a write-up of whatever happens during the race sometime next week, along with some pictures of it racing!<br />
<br />
<span style="font-size: x-large;"><b>[HEX] Come on and SLAM</b></span><br />
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
This is the third post on my Hexapod Project series. The point of
this project is to build a robot that allows me to try out a few
robotics concepts. For a
listing of each post in this series, <a href="http://www.gperco.com/2015/06/hex-project-intro.html">click here</a>. In this post, I'll talk about using laser ranging to let the hexapod keep track of where it is relative to its environment.<br />
<br />
<span style="font-size: large;"><b>Robots are Dumb</b></span><br />
<br />
It's not hard to find examples of robots failing to perform tasks that a human might deem trivial. Even the entrants to the DARPA Robotics Challenge--while amazingly sophisticated in their design--sometimes fail to turn handles, <a href="https://www.youtube.com/watch?v=UUOo8N9_iH0">walk through doors</a>, or even just stand upright for very long. If you are unfamiliar with the subtle challenges of robotics, these failures might make you wonder why it takes so much money and time to create such a flawed system. If a computer can calculate the N-millionth digit of pi in a few milliseconds, why can't it handle walking in a straight line? The general answer is that the fundamental differences between the human mind and a computer cpu (neurons vs transistors, programming vs evolution, etc) create vast differences in the intrinsic difficulty of many tasks.<br />
<br />
One such task is spatial awareness and localization. Humans are capable of integrating various senses (sight, touch, balance) into a concept of movement and location relative to an environment. To make my hexapod robot capable of autonomous navigation, it also needs to have a sense of location so that it can decide where it needs to go and where it should avoid going.<br />
<br />
Arguably the most common way of letting a robot know where it is in the world is GPS. Satellites in orbit around the earth broadcast their position and the current time, and a GPS receiver listens to these broadcasts and computes its position from the signal travel times. A GPS-enabled robot can figure out its location on Earth to within a few feet or better (depending on how much money you spend on a receiver). The biggest issue with GPS is that the robot needs a clear view of the sky so that it can receive the signals being beamed down from the GPS satellites. GPS also doesn't give you any information about your surroundings, so it can't help with navigating around obstacles.<br />
<br />
For the hexapod, I wanted to avoid using GPS altogether. I chose to use a <a href="https://en.wikipedia.org/wiki/Lidar">LIDAR</a> unit for indoor localization, mostly because it seemed like an interesting challenge. LIDAR uses laser pulses to measure the distance to objects, just like RADAR uses radio waves bouncing off objects. A LIDAR unit contains a laser emitter/detector pair that can be swept across a scene to make measurements at different angles. At each angle, the unit emits a pulse of laser light and looks for a reflected pulse with the detector. The delay between emission and detection (and the speed of light) gives the distance to the reflecting object at that angle. High-quality LIDAR units (like those used in self-driving vehicles) can quickly give an accurate 3D scan of the surrounding environment.<br />
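<br />
To put numbers on that timing: the round trip to an object at distance $d$ takes $\Delta t = 2d/c$, so a target 1 m away returns the pulse after only about 6.7 nanoseconds. Resolving distances to a few centimeters by timing alone takes sub-nanosecond electronics, which is part of why proper time-of-flight units are expensive; the cheap scanner below (as far as I can tell) gets away with measuring the geometry of the returned light instead.<br />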
<br />
<span style="font-size: large;"><b>More Lasers</b></span><br />
<br />
The LIDAR unit I picked is taken from the <a href="http://www.neatorobotics.com/robot-vacuum/xv/xv-11/">XV-11 robotic vacuum cleaner</a> from Neato Robotics. You can find just the LIDAR unit by itself as a replacement item on various sites; I got mine off eBay for around $80. The XV-11 LIDAR unit has a bit of a following in the <a href="https://www.sparkfun.com/news/490">hacking community</a>, as it offers 360-degree laser ranging at 5 Hz for much cheaper than anyone else. While the unit isn't open source, there are some nice <a href="https://xv11hacking.wikispaces.com/LIDAR+Sensor">resources online</a> that provide enough documentation to get started. It only scans in a 2D plane, but what you lose in dimensionality you gain in monetary savings.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-rqX2cmSSOJ0/VXxWNMo_jjI/AAAAAAAAB6o/TK6rdpFCni4/s1600/IMG_20150225_195804393.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="297" src="http://2.bp.blogspot.com/-rqX2cmSSOJ0/VXxWNMo_jjI/AAAAAAAAB6o/TK6rdpFCni4/s600/IMG_20150225_195804393.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Laser module in the lower left, sensor and optics on the upper right.</span></div>
<br />
The LIDAR unit has just 6 wires to control the whole thing. Two are connected directly to the motor that spins the unit; two provide power to the laser and other electronics that do the ranging; and the last two provide a serial connection to the processor. Upon powering the main processor, the following text barrels down the data line at 115200 baud:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: x-small;">
:)<br />
Piccolo Laser Distance Scanner<br />
Copyright (c) 2009-2011 Neato Robotics, Inc.<br />
All Rights Reserved<br />
<br />
Loader V2.5.14010<br />
CPU F2802x/c000<br />
Serial WTD38511AA-0056881<br />
LastCal [5371726C]<br />
Runtime V2.6.15295<br />
#</span><br />
<br />
It's hard not to trust any device that sends a smiley face as an opening line. The introduction message isn't too informative, but the fact that it is sent as plain ASCII is comforting. The unit doesn't send any laser ranging data until it is spinning at close to 5 Hz, and the processor has no way of controlling the spin motor. By applying an average of around 3 V to the motor (I did PWM from a 12 V line), the unit spins up and raw data starts flooding down the line. The resources link above provides documentation for how the data packets are formatted, but the key points are that they contain some number of laser ranging measurements, error codes, and the current spin rate of the unit. This measured spin rate can be fed into a feedback loop for the motor controller so that it stays spinning at a nice constant 5 Hz.<br />
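<br />
For a concrete picture of the parsing: each data packet is 22 bytes, made up of a 0xFA start byte, an index byte identifying which 4 of the 360 degrees it covers, a 2-byte motor speed, four 4-byte readings, and a 2-byte checksum. Here's a sketch based on my reading of that documentation (checksum verification omitted, and treat the bit layout as hedged rather than gospel):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;stdint.h&gt;

// Parse one 22-byte packet. dist[] receives distances in mm for the four
// consecutive angles covered by this packet; returns the spin rate in RPM.
float parsePacket(const uint8_t *p, uint16_t *dist, int *baseAngle)
{
    *baseAngle = (p[1] - 0xA0) * 4;           // index byte -&gt; first angle
    float rpm = (p[2] | (p[3] &lt;&lt; 8)) / 64.0f; // speed is in 1/64 rpm
    for (int i = 0; i &lt; 4; i++)
    {
        const uint8_t *d = p + 4 + 4*i;
        if (d[1] &amp; 0x80) dist[i] = 0;  // "invalid data" flag set
        else dist[i] = d[0] | ((d[1] &amp; 0x3F) &lt;&lt; 8); // 14-bit range, mm
    }
    return rpm;
}

// The measured rpm closes the loop on the motor: a crude proportional
// controller on the PWM duty cycle, targeting 5 Hz (300 rpm).
// The gain and clamping are invented for illustration.
int updateMotorPWM(int pwm, float rpm)
{
    pwm += (int)(0.5f * (300.0f - rpm));
    if (pwm &lt; 0) pwm = 0;
    if (pwm &gt; 255) pwm = 255;
    return pwm;
}
</pre>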
<br />
I decided to have the Arduino Due on the hexapod handle communication with the LIDAR unit and keeping the motor spinning at the correct rate. The Due already handles communication between the main cpu and the Arbotix-M, so what's one more device? I soldered up a simple board that included an N-channel MOSFET for PWM control of the motor, and a LM317 voltage regulator to provide the LIDAR processor with around 3.3 V.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-B5P_8kQ5E_s/VXkPDfea97I/AAAAAAAAB50/dny2KuOoqAw/s1600/IMG_20150523_161318644.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="296" src="http://3.bp.blogspot.com/-B5P_8kQ5E_s/VXkPDfea97I/AAAAAAAAB50/dny2KuOoqAw/s600/IMG_20150523_161318644.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Motor connects on the left, LIDAR controller on the right. Bottom connects to Due.</span></div>
<br />
The hexapod kit came with a mounting board for adding accessory hardware, but the mounting holes on the LIDAR didn't match up. I 3D-printed a little bracket to both attach the unit to the hexapod body and provide a little space for the board I had just made.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-owJZYg8dZKQ/VXkMslvjWoI/AAAAAAAAB5c/AE4tf9Nnpug/s1600/IMG_2920.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="351" src="http://3.bp.blogspot.com/-owJZYg8dZKQ/VXkMslvjWoI/AAAAAAAAB5c/AE4tf9Nnpug/s600/IMG_2920.JPG" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Credit to my Single Pixel Camera project for making me buy a better printer.</span></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-_EKJjsOAf_k/VXkN50qKXxI/AAAAAAAAB5k/B7f0LFHk5DY/s1600/IMG_20150512_155339278.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="http://4.bp.blogspot.com/-_EKJjsOAf_k/VXkN50qKXxI/AAAAAAAAB5k/B7f0LFHk5DY/s600/IMG_20150512_155339278.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Attached to the top mounting panel of the hexapod.</span></div>
<br />
With the small interface board mounted below the LIDAR unit, I connected everything up to the Due. A PWM line from the Due is used to control the speed of the motor, and the Serial2 port is used to receive data from the LIDAR processor. The 12 V needed to power the motor and processor comes from whatever source already powers the main UDOO board. In testing, this was either an AC adapter or a 3-cell LiPo battery.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-dGOpzsTpFqg/VXkN9KFOvTI/AAAAAAAAB5s/u5xr-Zn2gko/s1600/IMG_20150521_222359378.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="332" src="http://3.bp.blogspot.com/-dGOpzsTpFqg/VXkN9KFOvTI/AAAAAAAAB5s/u5xr-Zn2gko/s600/IMG_20150521_222359378.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">I have no idea what I'm testing, but damn does it look technical.</span></div>
<br />
The stream of data from the LIDAR unit is unfortunately pretty non-stop, so I had to write code that parses it a little at a time to make sure none of it gets lost in the Due's small input buffer. The Due bundles the data into packets and sends them off to the main cpu for further processing. To start out, I just plotted the results in gnuplot:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-_X_Xs0M775s/VXkQ0uz4WEI/AAAAAAAAB6I/8R1bL-_abe8/s1600/Untitled.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="381" src="http://4.bp.blogspot.com/-_X_Xs0M775s/VXkQ0uz4WEI/AAAAAAAAB6I/8R1bL-_abe8/s600/Untitled.jpg" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Rounded object at (50,100) is my head.</span></div>
<br />
Each point is a single laser ranging measurement, and together they span the full 360 degrees around the unit. A map like this can be made five times a second, allowing for a pretty good update rate.<br />
<br />
<span style="font-size: large;"><b>A Sense of Self</b></span><br />
<br />
At this point, the hexapod has the ability to continually scan its surroundings with lasers and accurately determine its distance from obstacles in any direction. But we still haven't solved the problem of letting the hexapod know where it is. By looking at the plot above, we can clearly see that it was sitting near the corner of a rectangular room. If we moved the robot a few feet in any direction and looked at the new map, we would be able to see that the robot had moved, and by comparing the two plots in detail we could even measure how far it moved. As humans, we are able to do this by matching similarities between the before and after plots and spotting the differences. This is one of those tasks that is relatively easy for a human to do and very tricky for a computer to do.<br />
<br />
Using only the LIDAR scans, we want the hexapod to be able to track its movement within its environment. By matching new scans to previous ones, we can both infer movement relative to the measured obstacles and integrate new information about obstacles measured from the new location. The process of doing so is called <a href="https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping">Simultaneous Localization and Mapping</a> (SLAM). There are many ways of solving this problem using measurements like the LIDAR scans I have access to. Some methods involve big point clouds, some involve grids. Some are 3D, some are 2D. One of the most common traits of any SLAM algorithm that I've found is that it is complicated enough to scare away amateur robotics enthusiasts. So in keeping to my goal of writing most (if not all) of the software for my hexapod, I set out to write my own algorithm.<br />
<br />
My algorithm is not great, but it kind of works. I decided to do a 2D grid-based SLAM algorithm because a) my LIDAR scans are only in 2D, and b) point clouds are hard to work with. As the name suggests, a SLAM algorithm involves solving two problems simultaneously: localization and mapping. My algorithm keeps a map of the surroundings in memory and, given a new LIDAR scan, performs two steps: matching the new scan to the existing map and inferring where the scan was measured from; then adding the new scan to the map to update it with any changes. As the Wikipedia article on the subject suggests, we have a bit of a chicken-and-egg problem, in that you can't localize without an existing map and you can't map without knowing the location. To get around this, I let the hexapod know its initial coordinates and let it collect a few scans while standing still to create an initial map. From then on, it steps through the full SLAM algorithm with a map already set up.<br />
<br />
<b><span style="font-size: large;">Localization</span></b><br />
<br />
For testing, I typically use a map with 128 pixels on a side, where each pixel represents 10 cm. After the initial setup, where the hexapod is allowed to create a map in a known location, we might end up with a map like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-7HreEluxOFg/VXy3tCI1fvI/AAAAAAAAB7U/nKyDyZ308_I/s1600/Untitled3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="362" src="http://3.bp.blogspot.com/-7HreEluxOFg/VXy3tCI1fvI/AAAAAAAAB7U/nKyDyZ308_I/s600/Untitled3.png" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">My 350 by 400 cm workroom. My head is still at (50,100).</span></div>
<br />
The value in each pixel can vary between [0,1], and roughly represents how confident we are that the pixel contains an obstacle. I'll go into how this map is made using LIDAR scans in the next section, so for now just assume it's representative of the environment. A new scan is made and needs to be matched onto this map with some shift in horizontal position and angle representing the new position and heading of the hexapod. I've expressed this as an optimization problem: the algorithm tries to find the horizontal shift ($x$,$y$) and angle ($\theta$) that minimize the distance between each new scan point ($x_i'$,$y_i'$) and the occupied pixels of the map $M(x,y)$:<br />
<br />
\[ \Psi = \sum_i^N D(\left \lfloor{\tilde{x}_i}\right \rfloor, \left \lfloor{\tilde{y}_i}\right \rfloor) + a (x-x_g)^2 + b (y-y_g)^2 + c (\theta-\theta_g)^2 \]<br />
\[ \tilde{x}_i = x_i' \cos \theta - y_i' \sin \theta + x \]<br />
\[ \tilde{y}_i = x_i' \sin \theta + y_i' \cos \theta + y \]<br />
\[ D(x,y) = \sum_{x'} \sum_{y'} M(x',y') \sqrt{(x-x')^2 + (y-y')^2} \]<br />
<br />
Here, we project each LIDAR measurement ($x_i'$,$y_i'$) onto the SLAM map, adjusting for the current best guess of ($x$,$y$,$\theta$). At each projected point, we sum up the distance from that point to every occupied pixel of the SLAM map. This gives us an estimate for how 'far away' the projected scan is from matching the existing map. The three extra terms on the $\Psi$ equation are to bias the solution towards guess values for ($x$,$y$,$\theta$).<br />
<br />
In this way, we are finding the location of the hexapod such that the new LIDAR scan looks most like the existing map. The assumption being made here is that the new scan is similar enough to the existing map that it can be matched with some confidence. Solving the above equation is a problem of non-linear optimization, similar to the inverse kinematics solved in the previous post. The code to solve this problem is a little dense, so I won't try to explain all of the details here. The relevant code is <a href="https://github.com/bgreer/HEX/tree/master/LIB_SLAM">here</a>, and the relevant method is <span style="font-family: Courier New, Courier, monospace;">slam::step(...);</span>.<br />
<br />
In words, we compute the $\Psi$ equation above and how it changes if we modify each of the parameters ($x$,$y$,$\theta$) by a small amount. Using this information, we can nudge each parameter by an amount that should get us to a lower value of $\Psi$. Since the problem is non-linear, we aren't guaranteed that this gets us to the lowest possible value, or even a lower one than before. To help make sure we end up in the right place, we initialize the solver with a guess position based on how the hexapod legs have moved recently. Since we went through so much trouble in the <a href="http://www.gperco.com/2015/06/hex-inverse-kinematics.html">previous post</a> to plan how the feet move, we might as well use that knowledge to help the localization solver. From there we iterate the nudging step over and over again with a smaller nudge until we find there is no way of nudging it to a lower value of $\Psi$. This is when we stop and say we have found the optimal values of ($x$,$y$,$\theta$). With that, the localization step is done!<br />
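<br />
Stripped of all the map machinery, the nudging loop is roughly the following caricature of <span style="font-family: Courier New, Courier, monospace;">slam::step(...)</span>. The step sizes here are invented, and where this sketch uses finite differences, the real code reads derivatives from precomputed gradient maps (see the next paragraph):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;

// Stand-in: evaluates the Psi objective against the current map.
double psi(double x, double y, double theta);

// Numerically estimate the gradient of psi and step downhill, shrinking
// the step whenever no direction helps.
void solvePose(double &amp;x, double &amp;y, double &amp;theta)
{
    double step = 0.1; // initial nudge size
    while (step &gt; 1e-4)
    {
        double base = psi(x, y, theta);
        double eps = 1e-3;
        // finite-difference derivative with respect to each parameter
        double gx = (psi(x + eps, y, theta) - base) / eps;
        double gy = (psi(x, y + eps, theta) - base) / eps;
        double gt = (psi(x, y, theta + eps) - base) / eps;
        double nx = x - step*gx, ny = y - step*gy, nt = theta - step*gt;
        if (psi(nx, ny, nt) &lt; base) { x = nx; y = ny; theta = nt; }
        else step *= 0.5; // no improvement: try smaller nudges
    }
}
</pre>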
<br />
For computational efficiency, I keep three versions of the SLAM map other than the normal one shown above. The first extra one is the original map convolved with a distance kernel, which at any position gives us an approximate distance to occupied pixels. The next two are the gradient of this distance map, one for the x-component and one for the y-component. These maps allow us to quickly evaluate both the $\Psi$ function and its derivatives with respect to ($x$,$y$,$\theta$). The distance map is computed in Fourier space using the convolution theorem, using threaded FFTW for computational speed. This method doesn't <i>actually</i> give us the correct distance measure for $\Psi$, but it's close enough for this basic algorithm.<br />
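<br />
As a bare-bones sketch of the convolution trick: transform the map and a distance kernel, multiply pointwise, and transform back. This single-shot version leaves out the plan caching and threading of the real code:<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;fftw3.h&gt;
#include &lt;cmath&gt;

// Distance map D = M (*) K via the convolution theorem, where
// K(dx,dy) = sqrt(dx^2 + dy^2).
void distanceMap(const double *map, double *dist, int n)
{
    int nc = n * (n/2 + 1); // size of the r2c complex output
    double *kern = fftw_alloc_real(n*n);
    fftw_complex *fm = fftw_alloc_complex(nc);
    fftw_complex *fk = fftw_alloc_complex(nc);

    // kernel centered on (0,0) with periodic wraparound
    for (int i = 0; i &lt; n; i++)
    for (int j = 0; j &lt; n; j++)
    {
        int dx = (i &lt;= n/2) ? i : i - n;
        int dy = (j &lt;= n/2) ? j : j - n;
        kern[i*n + j] = std::sqrt((double)(dx*dx + dy*dy));
    }

    fftw_plan p1 = fftw_plan_dft_r2c_2d(n, n, (double*)map, fm, FFTW_ESTIMATE);
    fftw_plan p2 = fftw_plan_dft_r2c_2d(n, n, kern, fk, FFTW_ESTIMATE);
    fftw_execute(p1);
    fftw_execute(p2);

    for (int i = 0; i &lt; nc; i++) // pointwise complex multiply
    {
        double re = fm[i][0]*fk[i][0] - fm[i][1]*fk[i][1];
        double im = fm[i][0]*fk[i][1] + fm[i][1]*fk[i][0];
        fm[i][0] = re / (n*n); // fold in FFTW's unnormalized scaling
        fm[i][1] = im / (n*n);
    }

    fftw_plan p3 = fftw_plan_dft_c2r_2d(n, n, fm, dist, FFTW_ESTIMATE);
    fftw_execute(p3);

    fftw_destroy_plan(p1); fftw_destroy_plan(p2); fftw_destroy_plan(p3);
    fftw_free(kern); fftw_free(fm); fftw_free(fk);
}
</pre>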
<br />
<span style="font-size: large;"><b>Mapping</b></span><br />
<br />
The companion to localization is mapping. Once we have a solution to where the new scan was measured from, we need to add it to the existing SLAM map. While we have assumed the new scan is close enough to the existing map to be matched, it will have small differences due to the new measurement location that need to be incorporated so that the following scan is still similar enough to the map. In my SLAM code, the method that does the mapping step is <span style="font-family: Courier New, Courier, monospace;">slam::integrate(...);</span>.<br />
<br />
Each new laser ranging measurement from the new scan is projected on to the SLAM map given the estimated hexapod location from the localization step. The pixel below each point is set to 1.0, meaning we are fully confident that there is some object there. We then scan through every other pixel in the map and determine whether it is closer or farther away from the hexapod than the new scan measurements. If it is closer, we decrease the map value because the new scan measured something behind it, meaning it must be free of obstacles. If the pixel is farther, we leave it alone because we don't have any new information there; the new scan was blocked by something in front of it.<br />
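<br />
Condensed to its core, the mapping step looks something like this (working in map-pixel units, with an invented decay factor; <span style="font-family: Courier New, Courier, monospace;">slam::integrate(...)</span> differs in its details):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;

// Fold one scan into the map. scan[i] is the measured range (in map
// pixels) at angle 2*pi*i/nscan, taken from pose (px, py, ptheta).
void integrateScan(float *map, int nx, int ny,
                   const float *scan, int nscan,
                   float px, float py, float ptheta)
{
    // mark the pixel under each scan endpoint as definitely occupied
    for (int i = 0; i &lt; nscan; i++)
    {
        float a = 2.0f*M_PI*i/nscan + ptheta;
        int x = (int)(px + scan[i]*std::cos(a));
        int y = (int)(py + scan[i]*std::sin(a));
        if (x &gt;= 0 &amp;&amp; x &lt; nx &amp;&amp; y &gt;= 0 &amp;&amp; y &lt; ny)
            map[x*ny + y] = 1.0f;
    }
    // decay every pixel the laser must have passed through; pixels
    // beyond the measured range carry no new information, so skip them
    for (int x = 0; x &lt; nx; x++)
    for (int y = 0; y &lt; ny; y++)
    {
        float dx = x - px, dy = y - py;
        float r = std::sqrt(dx*dx + dy*dy);
        int i = (int)((std::atan2(dy, dx) - ptheta)
                      / (2.0*M_PI) * nscan + 0.5);
        i = ((i % nscan) + nscan) % nscan; // wrap to [0, nscan)
        if (r &lt; scan[i])
            map[x*ny + y] *= 0.9f; // closer than the hit: likely free
    }
}
</pre>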
<br />
Once this mapping step is done, we have completed the two-part SLAM algorithm and are ready for another LIDAR scan. It's not the best or most accurate method, but it is easy to understand and can run on fairly low-level hardware in real-time. I've written the algorithm to run asynchronously from the main hexapod code, so new scans can be submitted and the hexapod can still walk around while the SLAM algorithm figures out where it is. On the UDOO's Cortex-A9, I can step a 1024x1024 map in around 2-3 seconds. With a 10 cm resolution, this gives over 100 meters of mapping. In practice, I've found that 10 cm is about the coarsest you can go in an indoor environment, but anything less than 3 cm is a waste of computing time.<br />
<br />
<span style="font-size: large;"><b>Examples</b></span><br />
<br />
To demonstrate graphically how my SLAM algorithm works, I've written a little JavaScript app to show how LIDAR scans relate to the SLAM map. It doesn't actually run the whole algorithm from above, but it shows roughly what happens. The GLOBAL view shows the simulated robot moving through an environment, measuring distances to obstacles with a simulated LIDAR scanner. The ROBOT view shows what these LIDAR measurements look like from the mind's eye of the robot: it doesn't know that it is moving; instead, the world appears to move around it. The SLAM view shows a simulated SLAM map and the approximate location of the robot. Move the ERROR slider back and forth to simulate varying amounts of noise in the measurements; this noise reduces the accuracy of both the localization and mapping steps.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<canvas height="450" id="slam_sim" width="600">Canvas not working!</canvas></div>
<br />
I've also tested this algorithm out in real life with the hexapod. The following SLAM maps were collected by driving the hexapod around my apartment with a remote control. I started the hexapod out in one room, and it was able to walk into a different room and keep track of its position. The maps are pretty messy, but acceptable considering the simplicity of the algorithm being used.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-WWGHa1yF5OY/VXyz5EDfQtI/AAAAAAAAB7I/N58mtNIWJ4M/s1600/Untitled2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="534" src="http://3.bp.blogspot.com/-WWGHa1yF5OY/VXyz5EDfQtI/AAAAAAAAB7I/N58mtNIWJ4M/s600/Untitled2.png" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Noisy map of my apartment.</span></div>
<br />
It's not the best <a href="http://www.reddit.com/r/comeonandslam">SLAM</a> algorithm in the world, but it's relatively easy to understand and compute in an embedded setting. It seems to do best when the hexapod is inside a closed room and can see at least two of the walls. It has some issues keeping track of position when transitioning between rooms, mostly due to the sharp changes in the LIDAR scans when passing through a doorway. Still, it does a reasonable job at keeping track of the hexapod location within my apartment.<br />
<br />
In the next post, I'll sort out how to get the hexapod to navigate autonomously. With the algorithms presented so far in this hexapod series, it becomes a straightforward procedure of using the SLAM maps to find optimal paths to pre-determined waypoints.
<script>
/* WARNING: I haven't seriously used javascript since I was
* maybe 17 years old, so in a sense I've never used it
* seriously. This code is a mess and I couldn't care less.
*/
// everything is a global variable? sure.
var scanv, sctx;
var stimeout;
var sdt = 2.0;
var sstep = 0;
var ssimrun = false; // allows sim to be paused
var sdragging = false;
var sslider;
var smode = 0;
var sxpos, sypos, sang;
var snumangles = 72;
var sscanang;
var sscandist = Array(snumangles);
// collision rectangles
var snumrects = 0;
var scolx1 = [];
var scolx2 = [];
var scoly1 = [];
var scoly2 = [];
// slam stuff
var nx = 60;
var ny = 60;
var map = Array(nx*ny);
var sxguess, syguess, sangguess;
var sxerr, syerr, saerr;
// tuning
var lidar_error = 0.02;
var slam_enable = true;
function getOffset( el ) {
var _x = 0;
var _y = 0;
while( el && !isNaN( el.offsetLeft ) && !isNaN( el.offsetTop ) ) {
_x += el.offsetLeft - el.scrollLeft;
_y += el.offsetTop - el.scrollTop;
el = el.offsetParent;
}
return { top: _y, left: _x };
}
function slam_init()
{
scanv = document.getElementById('slam_sim');
sctx = scanv.getContext('2d');
sctx.font="Bold 12px Arial";
sxpos = 0.0;
sypos = -60.0;
sang = 0.0;
sxguess = 0.0;
syguess = 0.0;
sangguess = 0.0;
sxerr = 0.0;
syerr = 0.0;
saerr = 0.0;
sscanang = 0;
// create environment
slam_addrect(-500,-200,-500,500);
slam_addrect(200,500,-500,500);
slam_addrect(-500,500,-500,-150);
slam_addrect(-500,500,150,500);
slam_addrect(-30,30,-10,40);
slam_addrect(-210,-70,-160,-100);
slam_addrect(70,210,-160,-100);
for (var ix=0; ix<nx; ix++)
{
for (var iy=0; iy<ny; iy++)
{
map[ix*ny+iy] = 0.5;
}
}
// generic listeners from some other code I wrote
scanv.addEventListener('click', slam_handleclick);
scanv.addEventListener('mousedown', slam_handlemousedown);
scanv.addEventListener('mousemove', slam_handlemousemove);
scanv.addEventListener('mouseup', slam_handlemouseup);
ssimrun = true;
slam_draw();
slam_run();
}
function slam_disttorect (ind, x0, y0, tht)
{
var x, y, r, ret;
ret = 1e6;
// wall 1
r = (scolx1[ind]-x0)/Math.cos(tht);
y = y0+r*Math.sin(tht);
if (y > scoly1[ind] && y < scoly2[ind] && r > 0.0) ret = Math.min(ret, r);
// wall 2
r = (scolx2[ind]-x0)/Math.cos(tht);
y = y0+r*Math.sin(tht);
if (y > scoly1[ind] && y < scoly2[ind] && r > 0.0) ret = Math.min(ret, r);
// wall 3
r = (scoly1[ind]-y0)/Math.sin(tht);
x = x0+r*Math.cos(tht);
if (x > scolx1[ind] && x < scolx2[ind] && r > 0.0) ret = Math.min(ret, r);
// wall 4
r = (scoly2[ind]-y0)/Math.sin(tht);
x = x0+r*Math.cos(tht);
if (x > scolx1[ind] && x < scolx2[ind] && r > 0.0) ret = Math.min(ret, r);
return ret;
}
function slam_addrect (x1, x2, y1, y2)
{
scolx1.push(x1);
scolx2.push(x2);
scoly1.push(y1);
scoly2.push(y2);
snumrects += 1;
}
function scan ()
{
for (var ii=0; ii<snumangles; ii++)
{
tht = 2*Math.PI*ii/snumangles + (Math.random()-0.5)*lidar_error;
mindist = 1e6;
for (var ij=0; ij<snumrects; ij++)
{
thisdist = slam_disttorect(ij, sxpos, sypos, tht+sang);
if (thisdist < mindist) mindist = thisdist;
}
sscandist[ii] = mindist*(1.0 + (Math.random()-0.5)*2*lidar_error);
}
}
function slam_handlemousedown(event)
{
var x = event.pageX - getOffset(scanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(scanv).top-document.body.scrollTop;
sdragging = true;
sslider = -1;
if (x >= 430 && x <= 580 && y >= 415 && y <= 435)
{
sslider = 0;
slam_moveSlider(x-430);
}
}
function slam_handlemouseup(event)
{sdragging = false;}
function slam_handlemousemove(event)
{
var x = event.pageX - getOffset(scanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(scanv).top-document.body.scrollTop;
if (sdragging && sslider >= 0)
slam_moveSlider(x - 430);
}
function slam_moveSlider(x)
{
var value, logval;
value = (x/150.)*0.1;
lidar_error = value;
if (lidar_error < 0.0) lidar_error = 0.0;
if (lidar_error > 0.1) lidar_error = 0.1;
if (ssimrun==false) slam_draw();
}
function slam_handleclick(event)
{
var x = event.pageX - getOffset(scanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(scanv).top-document.body.scrollTop;
if (Math.sqrt(Math.pow(x-85,2) + Math.pow(y-425,2)) <= 10)
smode = 0;
if (Math.sqrt(Math.pow(x-175,2) + Math.pow(y-425,2)) <= 10)
smode = 1;
if (Math.sqrt(Math.pow(x-265,2) + Math.pow(y-425,2)) <= 10)
smode = 2;
if (ssimrun==false) slam_draw();
}
// contains the primary calculations of the sim
function slam_run()
{
// call run again in a moment
if (ssimrun==true) stimeout = setTimeout('slam_run()', 60); // queue the next step in ~60 ms
sang += 0.006*sdt;
if (sang < -Math.PI) sang += 2*Math.PI;
if (sang >= Math.PI) sang -= 2*Math.PI;
sxpos += 0.5*sdt*Math.cos(sang);
sypos += 0.5*sdt*Math.sin(sang);
sscanang += 7 + Math.round(Math.random()*5);
if (sscanang > snumangles) sscanang = 0;
// check for collisions
// get lidar scan
scan();
sxerr = 0.95*sxerr + (Math.random()-0.5)*lidar_error*10.;
syerr = 0.95*syerr + (Math.random()-0.5)*lidar_error*10.;
saerr = 0.95*saerr + (Math.random()-0.5)*lidar_error;
sxerr *= 0.99;
syerr *= 0.99;
saerr *= 0.95;
// perform slam steps
if (slam_enable)
{
sxguess = sxpos + sxerr;
syguess = sypos + syerr;
sangguess = sang + saerr;
}
slam_integrate();
slam_draw();
}
function slam_integrate()
{
// using current guess of robot position and angle,
// integrate current scan into map
for (var ii=0; ii<snumangles; ii++)
{
tht = 2*Math.PI*ii/snumangles;
thisx = sxguess + sscandist[ii]*Math.cos(tht+sangguess);
thisy = syguess + sscandist[ii]*Math.sin(tht+sangguess);
thisx = Math.round(thisx*0.1 + nx/2);
thisy = Math.round(thisy*0.1 + ny/2);
if (thisx >= 0 && thisx < nx && thisy >= 0 && thisy < ny)
map[thisx*ny+thisy] = 1.0;
}
for (var ix=0; ix<nx; ix++)
{
for (var iy=0; iy<ny; iy++)
{
xval = (ix-nx/2)/0.1;
yval = (iy-ny/2)/0.1;
thistht = Math.atan2(yval-syguess, xval-sxguess) - sangguess;
if (thistht < 0.0) thistht += 2*Math.PI;
thisdist = Math.sqrt(Math.pow(sxguess-xval,2)+Math.pow(syguess-yval,2));
ind = Math.round(thistht*snumangles/(2*Math.PI)) % snumangles; // wrap so a full turn maps back to index 0
if (thisdist < sscandist[ind])
map[ix*ny+iy] *= 0.90 + 0.1*thisdist/sscandist[ind];
}
}
}
function slam_draw()
{
sctx.fillStyle = 'white';
sctx.fillRect(0, 0, 600, 450);
if (smode == 0) // world-view
{
// draw laser scans
sctx.strokeStyle = '#ffccdd';
sctx.lineWidth = 1;
for (var ii=0; ii<snumangles; ii++)
{
if (ii == sscanang) sctx.strokeStyle = '#ff7777';
else sctx.strokeStyle = '#ffaaaa';
sctx.beginPath();
sctx.moveTo(300+sxpos,200+sypos);
tht = 2.*Math.PI*ii/snumangles + sang;
sctx.lineTo(300+sxpos+sscandist[ii]*Math.cos(tht),
200+sypos+sscandist[ii]*Math.sin(tht));
sctx.stroke();
}
// draw body
sctx.fillStyle = '#333333';
sctx.beginPath();
sctx.arc(300+sxpos, 200+sypos, 5, 0, 2.*Math.PI, false);
sctx.fill();
sctx.strokeStyle = '#333333';
sctx.lineWidth = 2;
sctx.beginPath();
sctx.moveTo(300+sxpos, 200+sypos);
sctx.lineTo(300+sxpos+15*Math.cos(sang), 200+sypos+15*Math.sin(sang));
sctx.stroke();
// draw walls
sctx.fillStyle='#dddddd';
for (var ij=0; ij<snumrects; ij++)
{
sctx.fillRect(300+scolx1[ij],200+scoly1[ij],
scolx2[ij]-scolx1[ij],scoly2[ij]-scoly1[ij]);
}
} else if (smode == 1) { // scan-view
// draw laser scans
sctx.strokeStyle = '#ffccdd';
sctx.fillStyle = '#ffaaaa';
sctx.lineWidth = 1;
for (var ii=0; ii<snumangles; ii++)
{
if (ii == sscanang) sctx.strokeStyle = '#ff7777';
else sctx.strokeStyle = '#ffaaaa';
sctx.beginPath();
sctx.moveTo(300,200);
tht = 2.*Math.PI*ii/snumangles;
sctx.lineTo(300+sscandist[ii]*Math.cos(tht),
200+sscandist[ii]*Math.sin(tht));
sctx.stroke();
sctx.beginPath();
sctx.arc(300+sscandist[ii]*Math.cos(tht),
200+sscandist[ii]*Math.sin(tht), 2, 0, 2*Math.PI, false);
sctx.fill();
}
// draw body
sctx.fillStyle = '#333333';
sctx.beginPath();
sctx.arc(300, 200, 5, 0, 2.*Math.PI, false);
sctx.fill();
sctx.strokeStyle = '#333333';
sctx.lineWidth = 2;
sctx.beginPath();
sctx.moveTo(300, 200);
sctx.lineTo(300+15, 200);
sctx.stroke();
} else { // slam-view
s = 1.0
// draw slam map
sctx.strokeStyle='#ffffff';
sctx.fillStyle='#333333';
dx = 600./nx;
dy = 600./ny;
for (var ix=0; ix<nx; ix++)
{
for (var iy=0; iy<ny; iy++)
{
rval = Math.round(255 - map[ix*ny+iy]*255);
gval = Math.round(255*(1 - 0.7*map[ix*ny+iy]));
bval = Math.round(255*(1 - 0.5*map[ix*ny+iy]));
sctx.fillStyle = "rgb("+rval+","+gval+","+bval+")";
sctx.fillRect((ix-0.5)*dx,-100+(iy-0.5)*dy,dx,dy);
}
}
// draw laser scans
sctx.strokeStyle = '#ffaaaa';
sctx.lineWidth = 1;
for (var ii=0; ii<snumangles; ii++)
{
if (ii == sscanang) sctx.strokeStyle = '#ff7777';
else sctx.strokeStyle = '#ffaaaa';
sctx.beginPath();
sctx.moveTo(300+sxguess*s,200+syguess*s);
tht = 2.*Math.PI*ii/snumangles + sangguess;
sctx.lineTo(300+(sxguess+sscandist[ii]*Math.cos(tht))*s,
200+(syguess+sscandist[ii]*Math.sin(tht))*s);
sctx.stroke();
}
// draw body
sctx.fillStyle = '#333333';
sctx.beginPath();
sctx.arc(300+sxguess*s, 200+syguess*s, 5, 0, 2.*Math.PI, false);
sctx.fill();
sctx.strokeStyle = '#333333';
sctx.lineWidth = 2;
sctx.beginPath();
sctx.moveTo(300+sxguess*s, 200+syguess*s);
sctx.lineTo(300+sxguess*s+15*Math.cos(sangguess),
200+syguess*s+15*Math.sin(sangguess));
sctx.stroke();
}
// draw borders
sctx.fillStyle = '#666666';
sctx.fillRect(0, 0, 600, 3);
sctx.fillRect(0, 0, 3, 450);
sctx.fillRect(0, 447, 600, 3);
sctx.fillRect(597, 0, 3, 600);
sctx.fillRect(0, 400, 600, 3);
sctx.fillStyle = '#ffffff';
sctx.fillRect(3, 403, 594, 44);
sctx.fillStyle = '#666666';
sctx.fillRect(360,403,3,44);
// buttons
sctx.fillStyle = "#999999";
sctx.fillText("VIEWS:", 20, 430);
sctx.beginPath();
sctx.arc(85,425, 10, 0, 2 * Math.PI, false);
sctx.stroke();
sctx.fillStyle = "#e1aa9d";
if (smode == 0) sctx.fillStyle = "#83e171";
sctx.fill();
sctx.fillStyle = "#999999";
sctx.fillText("GLOBAL",100,430);
sctx.beginPath();
sctx.arc(175,425, 10, 0, 2 * Math.PI, false);
sctx.stroke();
sctx.fillStyle = "#e1aa9d";
if (smode == 1) sctx.fillStyle = "#83e171";
sctx.fill();
sctx.fillStyle = "#999999";
sctx.fillText("ROBOT",190,430);
sctx.beginPath();
sctx.arc(265,425, 10, 0, 2 * Math.PI, false);
sctx.stroke();
sctx.fillStyle = "#e1aa9d";
if (smode == 2) sctx.fillStyle = "#83e171";
sctx.fill();
sctx.fillStyle = "#999999";
sctx.fillText("SLAM",280,430);
// reset button
// slider
sctx.fillText("ERROR:",370,430);
sctx.strokeStyle = '#bbbbbb';
sctx.lineWidth = 5;
sctx.beginPath();
sctx.moveTo(430,425);
sctx.lineTo(580,425);
sctx.stroke();
sctx.lineWidth = 1;
sctx.strokeStyle = '#dddddd';
for (j=0; j<3; j++)
{
sctx.moveTo(430+j*150.0/2.0,415);
sctx.lineTo(430+j*150.0/2.0,435);
sctx.stroke();
}
sctx.fillStyle = '#555555';
sctx.fillRect(430-3+(lidar_error)*150/0.1+1,417,5,16);
}
// this gets called on page load?
slam_init();
</script>
<br />
<span style="font-size: x-large;"><b>[HEX] Inverse Kinematics</b></span><br />
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script>
$(document).ready(function()
{
$("#h2_hide0").hide();
$("#h2_code0").hide();
$("#h2_hide0").click(function()
{
$("#h2_code0").hide();
$("#h2_hide0").hide();
$("#h2_show0").show();
});
$("#h2_show0").click(function()
{
$("#h2_code0").show();
$("#h2_hide0").show();
$("#h2_show0").hide();
});
});
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-bgKTQzUtMzY/VUlyaDFtanI/AAAAAAAAB1k/v1b4kUX4mFU/s1600/IMG_2867.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="169" src="http://4.bp.blogspot.com/-bgKTQzUtMzY/VUlyaDFtanI/AAAAAAAAB1k/v1b4kUX4mFU/s320/IMG_2867.jpg&container=blogger&gadget=a&rewriteMime=image" width="320" /></a></div>
<br />
This is the second post on my Hexapod Project series. The point of
this project is to build a robot that allows me to try out a few
robotics concepts. For a
listing of each post in this series, <a href="http://www.gperco.com/2015/06/hex-project-intro.html">click here</a>. In this post, I'll go over the steps (!) needed to get a hexapod robot walking.<br />
<br />
At this point, I have a robot with six legs, three servos per leg, and the electronics and code to control each servo independently. But with no guidance for how to move the servos, the hexapod is useless. With their above-average leg count and sixfold symmetry, hexapods can move around in <a href="https://www.youtube.com/watch?v=pXMnbNoccgA">all kinds of unique ways</a>. While dancing around is certainly a possibility for my hexapod, I'm really only interested in getting it to walk around. So to begin the process of getting it mobile, let's start with the basics of getting a robot to walk.<br />
<span style="font-size: large;"><b><br /></b></span>
<span style="font-size: large;"><b>Inverse Kinematics</b></span><br />
<br />
In order for the hexapod to walk, it needs to be able to independently move each foot up and down, forward and back. But how can we tell it to move its foot? All we have control over are the three servos in each leg, none of which independently determines where the foot ends up. The position (and angle) of any given foot is determined by the angles of all three of its leg servos together. We can describe the relationship between foot position and servo angles as such:<br />
\[ r = l_c + l_f \cos(\theta_f) + l_t \cos(\theta_f+\theta_t) \] \[ x_f = x_0 + r \cos(\theta_c+\theta_0) \] \[ y_f = y_0 + r \sin(\theta_c+\theta_0) \] \[ z_f = z_0 + l_f \sin(\theta_f) + l_t \sin(\theta_f+\theta_t) \]<br />
Here, $\theta_c$, $\theta_f$, and $\theta_t$ are the servo angles for the coxa, femur, and tibia joints, respectively, and $l_c$, $l_f$, and $l_t$ are the distances between the joints. The position and angle at which the leg is connected to the body are represented by $x_0$, $y_0$, $z_0$, and $\theta_0$. This set of equations represents the <a href="http://en.wikipedia.org/wiki/Forward_kinematics">forward kinematics</a> of the leg. Each leg has an identical set of equations, but with different values for the initial position and angle.<br />
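<br />
Translated directly into code, the forward kinematics is only a few lines (same symbols as the equations above):<br />
<pre style="font-family: Courier New, Courier, monospace; font-size: x-small;">
#include &lt;cmath&gt;

// Forward kinematics for one leg: servo angles in, foot position out.
// lc, lf, lt are the coxa/femur/tibia lengths; x0, y0, z0, theta0
// describe where and at what angle the leg attaches to the body.
void forwardKinematics(double thetac, double thetaf, double thetat,
                       double lc, double lf, double lt,
                       double x0, double y0, double z0, double theta0,
                       double &amp;xf, double &amp;yf, double &amp;zf)
{
    double r = lc + lf*std::cos(thetaf) + lt*std::cos(thetaf + thetat);
    xf = x0 + r*std::cos(thetac + theta0);
    yf = y0 + r*std::sin(thetac + theta0);
    zf = z0 + lf*std::sin(thetaf) + lt*std::sin(thetaf + thetat);
}
</pre>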
<br />
These equations can tell us where the foot is, given the angles of the servos, but we need to do the opposite. Unfortunately, there isn't any way to rearrange the equations above so that we can plug in the foot position and solve for the servo angles (go ahead and try!). Fortunately, this doesn't mean that it's an impossible task! The process of inverting these equations is called <a href="http://en.wikipedia.org/wiki/Inverse_kinematics">inverse kinematics</a>, and I've done a project on it before. My <a href="http://www.gperco.com/2013/12/robot-arm-reaching-for-stars.html">other post</a> explains how to go about solving an inverse kinematics problem, so if you're interested in the details, check that out.<br />
<br />
In short, the inverse kinematics solver takes a target foot position and outputs the servo angles that it thinks are appropriate. Starting with the servo angles as they are, the algorithm uses the forward kinematic equations to see which way each servo needs to turn so that the foot ends up slightly closer to the target. It takes many small steps like this until the forward kinematic equations say the new set of servo angles puts the foot in the right place. This kind of procedure has its flaws, though. Imagine telling it to find the servo angles that put the foot a mile away. The legs aren't nearly that long, so the algorithm has no way to achieve this, and in situations like this it often goes haywire, giving a nonsensical result for the servo angles. So careful attention to the iteration procedure is important.<br />
<br />
The code I've written for the hexapod inverse kinematics solver is in my <a href="https://github.com/bgreer/HEX/tree/master/LIB_HEXAPOD">LIB_HEXAPOD</a> library. The library also contains code used in the rest of this post along with some other bits. The procedure for determining foot position is as follows:<br />
<button id="h2_hide0">Hide Code</button>
<button id="h2_show0">Show Code</button>
<span id="h2_code0"><script src="https://gist.github.com/bgreer/c514a4db8ad17666b5e2.js"></script></span>
<br />
<br />
This bit of code gets called once for each leg, giving it a target position for that leg and getting back the optimal servo angles to use. Assuming I give it reasonable target positions, it can spit out the servo angles for each leg in just a few milliseconds (running on the Cortex-A9). The inverse kinematics procedure allows me to effectively ignore the fact that each foot is positioned by three servos, and instead concentrate on where I want the feet to actually go. A simple test is to just tell the feet to move up and down repeatedly:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-5K2eopRpneE/VUl2l3c2EGI/AAAAAAAAB2E/v3gWZJaaMs8/s1600/Sequence-01_7-compressor.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="224" src="http://3.bp.blogspot.com/-5K2eopRpneE/VUl2l3c2EGI/AAAAAAAAB2E/v3gWZJaaMs8/s400/Sequence-01_7-compressor.gif" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Impressing Alan the cat with robot weightlifting.</span></div>
<br />
<span style="font-size: large;"><b>Smooth Stepping</b></span><br />
<br />
Six legs seem to be more stable than two or four (or an odd number?), but that doesn't make my hexapod impervious to falling over. An object will fall over (or at least be unstable) if it is not supported by at least three points of contact that surround its center of mass. If we assume the center of mass of the hexapod is roughly at the center of the body, this means we need at least three feet touching the ground at all times. Further, if we draw a triangle between the feet touching the ground, the center of mass needs to be directly above a point within this triangle. Since we have six legs in all, these rules lead us to one of the simplest <a href="http://en.wikipedia.org/wiki/Gait">gaits</a> for a hexapod, the tripod gait:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.cals.ncsu.edu/course/ent425/images/tutorials/external/locomotion/Ant2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.cals.ncsu.edu/course/ent425/images/tutorials/external/locomotion/Ant2.gif" /></a></div>
<br />
The six legs are broken up into two groups which trade off being the support for the body. The legs within each group lower to the ground in unison, move towards the back of the body, then lift up and move back to the front. The two groups do this exactly out of phase with each other so that there are always exactly three feet on the ground at any one point. For my hexapod, I've modified this a bit so that the three legs within each group hit the ground at slightly different times. I've done this to reduce the repetitive jolting that occurs from moving each leg simultaneously.<br />
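<br />
As an aside, the support-triangle rule is simple enough to write down in code. Here's a hypothetical check (not something the hexapod actually runs) that tests whether the center of mass, projected onto the ground plane, sits inside the triangle formed by the three grounded feet:<br />
<pre>
struct Point2D { double x, y; };

// z-component of the cross product (b-a) x (c-a); its sign tells us
// which side of the line through a and b the point c falls on
double cross2(Point2D a, Point2D b, Point2D c)
{
    return (b.x - a.x)*(c.y - a.y) - (b.y - a.y)*(c.x - a.x);
}

// Stable if the projected center of mass is on the same side of all
// three triangle edges, i.e., inside the triangle of grounded feet.
bool staticallyStable(Point2D footA, Point2D footB, Point2D footC, Point2D com)
{
    double d1 = cross2(footA, footB, com);
    double d2 = cross2(footB, footC, com);
    double d3 = cross2(footC, footA, com);
    bool hasNeg = (d1 &lt; 0) || (d2 &lt; 0) || (d3 &lt; 0);
    bool hasPos = (d1 &gt; 0) || (d2 &gt; 0) || (d3 &gt; 0);
    return !(hasNeg &amp;&amp; hasPos);
}
</pre>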
<br />
Even so, an issue I ran into while designing the gait was that whenever a foot hit the ground, the sudden transition from free-moving to weight-bearing caused the whole hexapod to jerk around. To specify the position of each foot as a function of time, I was using a cosine bell with a flat bottom:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-8ix0Xp2cFDE/VXitlBVygyI/AAAAAAAAB4U/5-9PPOTIGhU/s1600/cosine.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="194" src="http://2.bp.blogspot.com/-8ix0Xp2cFDE/VXitlBVygyI/AAAAAAAAB4U/5-9PPOTIGhU/s320/cosine.png" width="320" /></a></div>
<br />
Notice the sharp change in direction at the start and end of the foot's contact with the floor. The transition to weight-bearing doesn't happen instantaneously (imagine carpeted floors), so the sudden change when the foot goes from moving down to moving back creates problems. To create a smoother path for the feet to follow, I turned to <a href="http://en.wikipedia.org/wiki/B%C3%A9zier_curve">Bézier curves</a>.<br />
<br />
Bézier curves are smooth functions that are completely determined by a sequence of points that I will call anchors. These anchors specify the general shape of the curve, so tweaking the shape of the curve just involves moving around the anchor points. Going from a set of anchor points to the resulting Bézier curve involves a series of linear interpolations. Given some intended distance along the total path between 0 and 1, we start by linearly interpolating between each pair of adjacent anchor points. So if we want to know where the Bézier curve is halfway along the path, we start by linearly interpolating halfway between each pair of adjacent anchors. If we have $N$ anchors, this gives us $N-1$ interpolated points. We then linearly interpolate again halfway between these $N-1$ points to get $N-2$ doubly-interpolated points. We continue this procedure until we are left with a single interpolated point, and this is the position of the Bézier curve at the halfway point.<br />
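<br />
This repeated-interpolation procedure is known as De Casteljau's algorithm, and it turns out to be only a few lines of code. Here's a compact sketch (my actual implementation is the Bézier curve class linked later in this post):<br />
<pre>
#include &lt;vector&gt;

struct Anchor { double x, y; };

// Collapse the anchors down to the single curve point at path fraction t.
// Each pass replaces adjacent pairs with their interpolation, shrinking
// the list by one, until only one point remains. The anchors are taken
// by value on purpose, since the passes overwrite them.
Anchor bezierPoint(std::vector&lt;Anchor&gt; pts, double t)
{
    for (size_t n = pts.size(); n &gt; 1; n--)
        for (size_t i = 0; i + 1 &lt; n; i++)
        {
            pts[i].x = (1.0 - t)*pts[i].x + t*pts[i + 1].x;
            pts[i].y = (1.0 - t)*pts[i].y + t*pts[i + 1].y;
        }
    return pts[0];
}
</pre>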
<br />
The procedure for generating Bézier curves is a little difficult to describe, so I've made a little interface to help explain it. Drag the grey anchor points around to see how the Bézier curve changes, then increase the Guide Level slider to see the various levels of linear interpolation.<br />
<br />
<div style="text-align: center;">
<canvas height="350" id="bez_sim" width="500">Canvas not working!</canvas></div><br />
<br />
To make sure the hexapod stays steady when walking, I've kept the straight part of the foot path where the foot touches the ground, but set the return path to be a Bézier curve. I wrote a simple <a href="https://github.com/bgreer/HEX/blob/master/LIB_HEXAPOD/bezier2d.h">Bézier curve class</a> to handle the computations on the hexapod.<br />
<br />
<b><span style="font-size: large;">Turntables</span></b><br />
<br />
Applying this Bézier curve stepping method to each leg in an alternating pattern gets the hexapod to walk forwards, but it can't yet handle turning. To implement turning, my first instinct was to simply have the feet sweep forward and back by a different amount on each side of the body. This would cause one side to move forward more than the other, and the hexapod would turn. The problem with this method is that it isn't particularly physical. If you try it, you'll find that the hexapod has to drag its feet sideways to compensate for the fact that it is turning sideways while the feet only move forward and back. In order to let the hexapod turn naturally, you need to go into a different frame of reference.<br />
<br />
If you tell all of the feet to move up towards the sky, the hexapod moves closer to the ground. Tell the feet to move down, the hexapod moves up. It can get confusing to figure out how to move the feet to get the body to move a certain way. I've found it's best to think that the hexapod body stays still in space and the ground just moves around relative to it. Then all you need to do is make sure the feet are in contact with that moving floor and they don't slide on it. For straight walking, we can just see it as a treadmill-like floor that continually moves backwards, and the feet are just trying to match the treadmill speed. For turning, we can think about the hexapod sitting above a giant turntable. How close the hexapod body sits to the axis of rotation determines how sharp of a turn it makes, or what the turning radius is. In order to keep the feet from sliding around on the turntable, we need to make sure each foot travels along a curve of constant radius from the axis of rotation. If we set it so the axis of rotation is directly underneath the body, the hexapod will stay in one place and just turn around and around. If the axis of rotation is set very very far away, there will barely be any curvature to the foot-paths, and the hexapod will basically walk forward in a straight line.<br />
<br />
To help explain this concept, I've made another little interface for seeing how the hexapod feet need to move. Move the bottom slider around to change the turning radius, and enable the helper lines to see how each foot follows a specific path relative to the axis of rotation.<br />
<br />
<div style="text-align: center;">
<canvas height="200" id="walk_sim01" width="600">Canvas not working!</canvas></div><br />
<br />
To incorporate the Bézier curve method from above into this view of walking, I convert the foot positions into polar coordinates around the axis of rotation and use the Bézier curve to pick the $\theta$ and $z$ coordinates as a function of time. In the hexapod code, I've parameterized turning by a single value that relates to the turning radius. Between the turning parameter and a single speed parameter, I have full control over the movement of the hexapod.<br />
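<br />
Here's a sketch of that turntable math, with names and sign conventions invented for illustration: given the turning-axis offset and a sweep angle pulled from the gait curve, each foot stays on its own constant-radius arc around the axis of rotation.<br />
<pre>
#include &lt;cmath&gt;

struct FootTarget { double x, y, z; };

// Keep a foot on a constant-radius arc about the turning axis. The axis
// sits at (cx, 0) in the body frame; sweep and height come from the
// Bezier gait curve as functions of time.
FootTarget footOnTurntable(double cx, double homeX, double homeY,
                           double sweep, double height)
{
    double r    = std::hypot(homeX - cx, homeY); // arc radius for this foot
    double tht0 = std::atan2(homeY, homeX - cx); // angle of the home position
    FootTarget p;
    p.x = cx + r*std::cos(tht0 + sweep);
    p.y = r*std::sin(tht0 + sweep);
    p.z = height;
    return p;
}
</pre>
Setting cx to zero spins the hexapod in place, while a huge cx makes the arcs so shallow that the feet effectively move in straight lines.<br />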
<br />
At every time step, the code considers the current values for speed and turning, and decides where along the Bezier curve each foot should be. It then computes the actual positions in space from the curves and feeds these positions into the inverse kinematic solver. The solver outputs servo angles for each joint of each leg, and the main code packages these up and sends them off to the lower-level processors. This whole procedure is fairly quick to compute, so I can update the servo positions at about 50Hz.<br />
<br />
I wrote a <a href="https://github.com/bgreer/HEX/tree/master/TESTS/remotecontrol">test program</a> that takes input from an Xbox 360 controller and turns it into speed and turning values. This allowed me to treat the whole hexapod like a remote-control car. After driving it around and chasing my cats, I made a little video to show off the build:<br />
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="349" src="https://www.youtube.com/embed/nRqtquGCFX0" width="620"></iframe>
</div>
<br />
At this point, the hexapod can walk around freely, but does not know where to go. In the next post, I'll go into giving the hexapod a sense of awareness through laser ranging.
<script>
// everything is a global variable? sure.
var bcanv, bctx;
var btimeout;
var bdt = 0.005;
var btime = 0.0;
var bstep = 0;
var bmode = 1;
var blevel = 0; // [0,5] inclusive
var bsimrun = false; // allows sim to be paused
var bdragging = false;
var bslider;
var numanc = 7;
var bxanc = [-50,-80,-100,0,100,80,50];
var byanc = [0,0,50,200,50,0,0];
var bxpos, bypos;
var fdf = 0.5;
// this gets called on page load?
function bez_init()
{
bcanv = document.getElementById('bez_sim');
bctx = bcanv.getContext('2d');
bctx.font="Bold 12px Arial";
bxpos = bxanc[0];
bypos = byanc[0];
var t = bez_getLength();
fdf = t/(t+100.);
// generic listeners from some other code I wrote
bcanv.addEventListener('click', bez_handleclick);
bcanv.addEventListener('mousedown', bez_handlemousedown);
bcanv.addEventListener('mousemove', bez_handlemousemove);
bcanv.addEventListener('mouseup', bez_handlemouseup);
bsimrun = true;
bez_draw();
bez_run();
}
function getOffset( el ) {
var _x = 0;
var _y = 0;
while( el && !isNaN( el.offsetLeft ) && !isNaN( el.offsetTop ) ) {
_x += el.offsetLeft - el.scrollLeft;
_y += el.offsetTop - el.scrollTop;
el = el.offsetParent;
}
return { top: _y, left: _x };
}
function bez_handlemousedown(event)
{
var x = event.pageX - getOffset(bcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(bcanv).top-document.body.scrollTop;
bdragging = true;
bslider = -1;
console.log(x);
for (var i=1; i<numanc-1; i++)
{
if (Math.sqrt(Math.pow(x-250-bxanc[i],2) + Math.pow(y-250+byanc[i],2)) <= 8)
bslider = i;
}
if (x >= 320 && x <= 470 && y >= 318 && y <= 338)
{
bslider = 100;
bez_moveSlider(x-320);
}
}
function bez_moveSlider(x)
{
var value, logval;
value = (x/150.)*5.;
blevel = Math.round(value);
if (blevel < 0) blevel = 0;
if (blevel > 5) blevel = 5;
}
function bez_handlemouseup(event)
{
bdragging = false;
bslider=-1;
var t = bez_getLength();
fdf = t/(t+100.);
}
function bez_handlemousemove(event)
{
var x = event.pageX - getOffset(bcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(bcanv).top-document.body.scrollTop;
if (bdragging && bslider > 0 && bslider < 50)
bez_moveAnchor(x, y);
if (bdragging && bslider == 100)
bez_moveSlider(x-320);
}
function bez_moveAnchor(x, y)
{
var value, logval;
if (bslider > 0)
{
bxanc[bslider] = x-250;
byanc[bslider] = -y+250;
}
if (bsimrun==false) bez_draw();
}
function bez_handleclick(event)
{
var x = event.pageX - getOffset(bcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(bcanv).top-document.body.scrollTop;
if (Math.sqrt(Math.pow(x-35,2) + Math.pow(y-328,2)) <= 10)
bmode = 0;
if (Math.sqrt(Math.pow(x-125,2) + Math.pow(y-328,2)) <= 10)
bmode = 1;
if (bsimrun==false) bez_draw();
}
// contains the primary calculations of the sim
function bez_run()
{
// call run again in a moment
if (bsimrun==true) btimeout = setTimeout('bez_run()', 30); // 30ms delay?
btime += bdt;
if (btime > 1.0) btime -= 1.0;
if (btime < fdf)
{
bxpos = bez_getX(btime/fdf);
bypos = bez_getY(btime/fdf);
if (bmode == 0)
{
bxpos = -50. + 100.*(btime)/fdf;
bypos = 100.0*Math.sin(btime*3.14159/fdf);
}
} else {
bxpos = 50. - 100.*(btime-fdf)/(1.-fdf);
bypos = 0.0;
}
bez_draw();
}
function bez_getLength()
{
var l, i, bx, by, bxo, byo;
l = 0.0;
bxo = bez_getX(0.0);
byo = bez_getY(0.0);
for (i=1; i<=100; i++)
{
bx = bez_getX(i*0.01);
by = bez_getY(i*0.01);
if (bmode == 0)
{
bxpos = -50. + 100.*i*0.01;
bypos = 100.0*Math.sin(i*0.01*3.14159);
}
l += Math.sqrt((bx-bxo)*(bx-bxo) + (by-byo)*(by-byo));
bxo = bx;
byo = by;
}
return l;
}
function bez_getX(t)
{
var temp = Array(numanc);
var i, j;
for (i=0; i<numanc; i++)
temp[i] = bxanc[i];
for (i=0; i<numanc-1; i++)
{
for (j=0; j<numanc-i-1; j++)
temp[j] = (1.0-t)*temp[j] + t*temp[j+1];
}
return temp[0];
}
function bez_getY(t)
{
var temp = Array(numanc);
var i, j;
for (i=0; i<numanc; i++)
temp[i] = byanc[i];
for (i=0; i<numanc-1; i++)
{
for (j=0; j<numanc-i-1; j++)
temp[j] = (1.0-t)*temp[j] + t*temp[j+1];
}
return temp[0];
}
function bez_draw()
{
var i, j, t;
var tempx = Array(numanc);
var tempy = Array(numanc);
var px, py;
bctx.fillStyle = 'white';
bctx.fillRect(0, 0, 500, 350);
// grid
bctx.lineWidth = 1;
bctx.strokeStyle = '#dddddd';
for (var i=0; i<10; i++)
{
bctx.beginPath();
bctx.moveTo(i*50,0);
bctx.lineTo(i*50,350);
bctx.stroke();
bctx.beginPath();
bctx.moveTo(0,i*50);
bctx.lineTo(500,i*50);
bctx.stroke();
}
// draw borders
bctx.fillStyle = '#666666';
bctx.fillRect(0, 0, 500, 3);
bctx.fillRect(0, 0, 3, 350);
bctx.fillRect(0, 347, 500, 3);
bctx.fillRect(497, 0, 3, 350);
// hard bottom
bctx.lineWidth = 4;
bctx.strokeStyle = '#666666';
bctx.beginPath();
bctx.moveTo(250-50,250);
bctx.lineTo(250+50,250);
bctx.stroke();
// helper lines
if (bmode == 1)
{
bctx.lineWidth = 2;
bctx.strokeStyle = '#dddddd';
bctx.beginPath();
bctx.moveTo(250+bxanc[0],250-byanc[0]);
for (i=1; i<numanc; i++)
bctx.lineTo(250+bxanc[i],250-byanc[i]);
bctx.stroke();
if (btime < fdf)
{
t = btime/fdf;
// load data
for (i=0; i<numanc; i++)
{
tempx[i] = bxanc[i];
tempy[i] = byanc[i];
}
// iterate
for (i=0; i<numanc-1; i++)
{
for (j=0; j<numanc-i-1; j++)
{
tempx[j] = (1.0-t)*tempx[j] + t*tempx[j+1];
tempy[j] = (1.0-t)*tempy[j] + t*tempy[j+1];
}
if (blevel > i)
{
bctx.beginPath();
bctx.moveTo(250+tempx[0],250-tempy[0]);
for (j=1; j<numanc-i-1; j++)
{
bctx.lineTo(250+tempx[j],250-tempy[j]);
}
bctx.stroke();
brval = Math.round(200.*(1-i/(numanc-1.)));
bbval = Math.round(255.*i/(numanc-1));
bctx.fillStyle = "rgb("+brval+",200,"+bbval+")";
for (j=0; j<numanc-i-1; j++)
{
bctx.beginPath();
bctx.arc(250+tempx[j],250-tempy[j],4,0,2*Math.PI,false);
bctx.fill();
}
}
}
}
// anchor points
bctx.lineWidth = 2;
bctx.strokeStyle = '#aaaaaa';
bctx.fillStyle = '#dddddd';
for (var i=1; i<numanc-1; i++)
{
bctx.beginPath();
bctx.arc(250+bxanc[i], 250-byanc[i], 8,0,2 * Math.PI, false);
bctx.fill();
bctx.stroke();
}
}
// bezier curve
bctx.lineWidth = 2;
bctx.strokeStyle = '#999999';
bctx.beginPath();
bctx.moveTo(250+bxanc[0],250-byanc[0]);
px = bxanc[0]; py = byanc[0];
var coll = false;
for (var i=0; i<=100; i++)
{
bx = bez_getX(i*0.01);
by = bez_getY(i*0.01);
if (bmode==0)
{
bx = -50. + i;
by = 100.0*Math.sin(i*0.01*3.14159);
}
bctx.lineTo(250+bx, 250-by);
px=bx; py=by;
}
bctx.stroke();
// current position
bctx.fillStyle = '#7996de';
bctx.beginPath();
bctx.arc(250+bxpos, 250-bypos, 5,0,2 * Math.PI, false);
bctx.fill();
// bottom region
bctx.fillStyle = '#ffffff';
bctx.fillRect(3,310,494,37);
bctx.fillStyle = '#666666';
bctx.fillRect(0,307,500,3);
// buttons
bctx.fillStyle = "#999999";
bctx.beginPath();
bctx.arc(35,328, 10, 0, 2 * Math.PI, false);
bctx.stroke();
bctx.fillStyle = "#e1aa9d";
if (bmode == 0) bctx.fillStyle = '#83e171';
bctx.fill();
bctx.fillStyle = "#999999";
bctx.fillText("COSINE",50,333);
bctx.beginPath();
bctx.arc(125,328, 10, 0, 2 * Math.PI, false);
bctx.stroke();
bctx.fillStyle = "#e1aa9d";
if (bmode == 1) bctx.fillStyle = '#83e171';
bctx.fill();
bctx.fillStyle = "#999999";
bctx.fillText("BEZIER",140,333);
// slider
if (bmode == 1)
{
bctx.fillText("GUIDE LEVEL:",220,333);
bctx.strokeStyle = '#bbbbbb';
bctx.lineWidth = 5;
bctx.beginPath();
bctx.moveTo(320,328);
bctx.lineTo(470,328);
bctx.stroke();
bctx.lineWidth = 1;
bctx.strokeStyle = '#dddddd';
for (var j=0; j<6; j++)
{
bctx.moveTo(320+j*150.0/5.0,318);
bctx.lineTo(320+j*150.0/5.0,338);
bctx.stroke();
}
bctx.fillStyle = '#555555';
bctx.fillRect(320-3+(blevel)*150/5+1,320,5,16);
}
}
bez_init();
</script>
<script>
/* WARNING: I haven't seriously used javascript since I was
* maybe 17 years old, so in a sense I've never used it
* seriously. This code is a mess and I couldn't care less.
*/
// everything is a global variable? sure.
var wtime = 0.0;
var wdt;
var wcanv, wctx;
var wtimeout;
var step = 0;
var wsimrun = false; // allows sim to be paused
var wdragging = false;
var wslider = -1;
var sidedown;
var curvature;
var cx;
var legx0 = Array(6);
var legy0 = Array(6);
var legx = Array(6);
var legy = Array(6);
var legtht = Array(6);
var sweepwangle;
var wxpos, wypos, wang;
// display options
var show_ground = true;
var show_turn_center = false;
function winit()
{
wcanv = document.getElementById('walk_sim01');
wctx = wcanv.getContext('2d');
wctx.font="Bold 12px Arial";
wdt = 0.01;
wtime = 0.0;
curvature = 0.0;
cx = 1e8;
sidedown = 0;
wxpos = 0.0;
wypos = 0.0;
wang = 0.0;
legtht[0] = 315.;
legtht[1] = 0.0;
legtht[2] = 45.;
legtht[3] = 135.0;
legtht[4] = 180.0;
legtht[5] = 225.0;
// compute initial positions of legs
for (var ii=0; ii<6; ii++)
{
legtht[ii] = (legtht[ii])*Math.PI/180.;
tht = legtht[ii];
legx0[ii] = 50.*Math.cos(tht);
legy0[ii] = 50.*Math.sin(tht);
legx[ii] = legx0[ii];
legy[ii] = legy0[ii];
}
// generic listeners from some other code I wrote
wcanv.addEventListener('click', w_handleclick);
wcanv.addEventListener('mousedown', w_handlemousedown);
wcanv.addEventListener('mousemove', w_handlemousemove);
wcanv.addEventListener('mouseup', w_handlemouseup);
wsimrun = true;
wdraw();
wrun();
}
function w_handlemousedown(event)
{
var x = event.pageX - getOffset(wcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(wcanv).top-document.body.scrollTop;
wdragging = true;
wslider = -1;
// bounds of the main wslider
if (x >= 100 && x <= 500 && y >= 174 && y <= 186)
{
wslider = 0;
movewslider(x-100);
}
}
function w_handlemouseup(event)
{wdragging = false;}
function w_handlemousemove(event)
{
var x = event.pageX - getOffset(wcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(wcanv).top-document.body.scrollTop;
if (wdragging && wslider >= 0)
movewslider(x - 100);
}
function movewslider(x)
{
var value, logval;
value = (x/100.) - 2.0;
curvature = value;
if (curvature < -1.999) curvature = -1.999;
if (curvature > 1.999) curvature = 1.999;
if (wsimrun==false) wdraw();
}
function w_handleclick(event)
{
var x = event.pageX - getOffset(wcanv).left-document.body.scrollLeft;
var y = event.pageY - getOffset(wcanv).top-document.body.scrollTop;
if (Math.sqrt(Math.pow(x-20,2) + Math.pow(y-20,2)) <= 10)
show_ground = !show_ground;
if (Math.sqrt(Math.pow(x-490,2) + Math.pow(y-20,2)) <= 10)
show_turn_center = !show_turn_center;
if (wsimrun==false) wdraw();
}
// contains the primary calculations of the sim
function wrun()
{
// call run again in a moment
if (wsimrun==true) wtimeout = setTimeout('wrun()', 30); // 30ms delay?
// increment and loop wtime variable
wtime += wdt;
if (wtime > 1.0) wtime -= 1.0;
// which leg grouping is down?
sidedown = 0;
if (wtime >= 0.5) sidedown = 1;
// mywtime is a bad name
// gives a [0,1] value for each half of the leg movement
mywtime = wtime*2.0;
if (wtime > 0.5) mywtime = (wtime-0.5)*2.0;
// compute radius of curvature based on "curvature" from wslider
if (Math.abs(curvature) < 0.0001) curvature = 0.0001;
cx = Math.tan((2.0-curvature)*3.1415/4.0)*50.; // same mapping for either sign
if (curvature < 0.0) sidedown = ((sidedown+1)%2);
// length = radius * wangle
// everyone needs same wangle, capped by leg with farthest length to step
maxdist = 0.0;
sweepwangle = 0.0;
for (var ii=0; ii<6; ii++)
{
dist = Math.sqrt((cx-legx0[ii])*(cx-legx0[ii]) + legy0[ii]*legy0[ii]);
if (dist > maxdist)
maxdist = dist;
}
sweepwangle = Math.sign(cx)*25.0/maxdist;
// center of body rotates and moves
wang -= sweepwangle*wdt*2; // rotation is easy
// shift body in direction of wangle
wxpos -= sweepwangle*cx*wdt*2*Math.sin(wang);
wypos -= sweepwangle*cx*wdt*2*Math.cos(wang);
// compute where each leg is
for (var ii=0; ii<6; ii++)
{
dist = Math.sqrt((cx-legx0[ii])*(cx-legx0[ii]) + legy0[ii]*legy0[ii]);
tht0 = Math.atan2(legy0[ii], -(cx-legx0[ii]));
wdtht = sweepwangle*Math.sign(cx);
if (ii % 2 == sidedown)
{
// leg is down
pos = -wdtht*(mywtime-0.5) + tht0;
legx[ii] = cx+dist*Math.cos(pos);
legy[ii] = dist*Math.sin(pos);
} else {
// leg is up
pos = wdtht*(mywtime-0.5) + tht0;
legx[ii] = cx+dist*Math.cos(pos);
legy[ii] = dist*Math.sin(pos);
}
}
wdraw();
}
function wdraw()
{
wctx.fillStyle = 'white';
wctx.fillRect(0, 0, 600, 200);
// save position and wangle
wctx.save();
if (show_ground)
{
// draw background
y0 = Math.round((wypos-200.0)/30.)*30. - wypos;
x0 = Math.round((wxpos-200.0)/30.)*30. - wxpos;
wctx.fillStyle = '#eeeeee';
for (var ix=-10; ix<20; ix++)
{
for (var iy=-10; iy<20; iy++)
{
wctx.beginPath();
x1 = ix*30 + x0;
y1 = iy*30 + y0
x = x1*Math.cos(wang) - y1*Math.sin(wang);
y = x1*Math.sin(wang) + y1*Math.cos(wang);
wctx.arc(x + 300, y + 80, 5, 0, 2.*Math.PI, false);
wctx.fill();
}
}
}
if (show_turn_center)
{
// long leg path
wctx.strokeStyle = '#eeeeee';
wctx.lineWidth=2;
for (var ii=0; ii<6; ii++)
{
dist = Math.sqrt((cx-legx0[ii])*(cx-legx0[ii]) + legy0[ii]*legy0[ii]);
wctx.beginPath();
wctx.arc(300+cx,80,dist, 0., 2.*Math.PI, false);
wctx.stroke();
}
// small leg path
wctx.strokeStyle = '#ccccff';
wctx.lineWidth=4;
for (var ii=0; ii<6; ii++)
{
dist = Math.sqrt((cx-legx0[ii])*(cx-legx0[ii]) + legy0[ii]*legy0[ii]);
tht0 = Math.atan2(-legy0[ii], -(cx-legx0[ii]));
wdtht = sweepwangle*0.5*Math.sign(cx);
wctx.beginPath();
wctx.arc(300+cx,80,dist, tht0-wdtht, tht0+wdtht, false);
wctx.stroke();
}
}
// draw border on top
wctx.fillStyle = '#666666';
wctx.fillRect(0, 0, 600, 3);
wctx.fillRect(0, 0, 3, 200);
wctx.fillRect(0, 197, 600, 3);
wctx.fillRect(597, 0, 3, 200);
wctx.fillRect(0, 160, 600, 3);
wctx.fillStyle = '#ffffff';
wctx.fillRect(3, 163, 594, 34);
// draw helper circles
wctx.lineWidth = 2;
wctx.strokeStyle = '#fff8f8';
// draw hex body as circle
wctx.beginPath();
wctx.arc(300,80,20, 0, 2 * Math.PI, false);
wctx.lineWidth = 2;
wctx.strokeStyle = '#999999';
wctx.fillStyle = '#999999';
wctx.stroke();
wctx.fill();
// draw each leg
for (var ii=0; ii<6; ii++)
{
tht = legtht[ii];
x0 = 300 + 20*Math.cos(tht);
y0 = 80 + 20*Math.sin(tht);
x1 = 300 + legx[ii];
y1 = 80 + legy[ii];
// foot-down dot
if ((ii%2==sidedown&&cx>0) || (ii%2!=sidedown&&cx<0))
{
wctx.fillStyle = '#444444';
wctx.beginPath();
wctx.arc(x1,y1, 3, 0, 2 * Math.PI, false);
wctx.fill();
}
wctx.strokeStyle = '#999999';
wctx.lineWidth = 8;
wctx.beginPath();
wctx.moveTo(x0,y0);
wctx.lineTo(x1,y1);
wctx.stroke();
}
if (show_turn_center)
{
// draw circle center
wctx.lineWidth = 1;
wctx.fillStyle = '#ce8080';
wctx.beginPath();
wctx.arc(cx+300, 80, 5, 0, 2 * Math.PI, false);
wctx.fill();
}
// wsliders
wctx.strokeStyle = '#bbbbbb';
wctx.lineWidth = 5;
wctx.beginPath();
wctx.moveTo(100,180);
wctx.lineTo(500,180);
wctx.stroke();
wctx.lineWidth = 1;
wctx.strokeStyle = '#dddddd';
for (j=0; j<5; j++)
{
wctx.moveTo(100+j*400.0/4.0,170);
wctx.lineTo(100+j*400.0/4.0,190);
wctx.stroke();
}
// the slidey parts
wctx.fillStyle = '#555555';
wctx.fillRect(297+(curvature)*400/4+1,172,5,16);
// display options (green = #83e171, red = #e1aa9d)
wctx.strokeStyle = "#999999";
wctx.lineWidth = 3;
wctx.beginPath();
wctx.arc(20,20, 10, 0, 2 * Math.PI, false);
wctx.stroke();
wctx.fillStyle = "#e1aa9d";
if (show_ground) wctx.fillStyle = "#83e171";
wctx.fill();
wctx.fillStyle = "#999999";
wctx.fillText("BACKGROUND",35,25);
wctx.beginPath();
wctx.arc(490,20, 10, 0, 2 * Math.PI, false);
wctx.stroke();
wctx.fillStyle = "#e1aa9d";
if (show_turn_center) wctx.fillStyle = "#83e171";
wctx.fill();
wctx.fillStyle = "#999999";
wctx.fillText("TURN GUIDES",505,25);
wctx.fillStyle = "#777777";
wctx.fillText("L",85,184);
wctx.fillText("R",510,184);
}
winit();
</script>gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com2tag:blogger.com,1999:blog-5124010431457427473.post-8001599927708825592015-06-14T10:54:00.000-06:002015-06-14T10:54:21.878-06:00[HEX] Heterogeneous Hardware<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script>
$(document).ready(function()
{
$("#h1_hide0").hide();
$("#h1_code0").hide();
$("#h1_hide1").hide();
$("#h1_code1").hide();
$("#h1_hide0").click(function()
{
$("#h1_code0").hide();
$("#h1_hide0").hide();
$("#h1_show0").show();
});
$("#h1_show0").click(function()
{
$("#h1_code0").show();
$("#h1_hide0").show();
$("#h1_show0").hide();
});
$("#h1_hide1").click(function()
{
$("#h1_code1").hide();
$("#h1_hide1").hide();
$("#h1_show1").show();
});
$("#h1_show1").click(function()
{
$("#h1_code1").show();
$("#h1_hide1").show();
$("#h1_show1").hide();
});
});
</script>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-XRuF6lG1sAg/VXjRxNdp3eI/AAAAAAAAB4s/WL85vqMYajo/s1600/IMG_2824.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="169" src="http://4.bp.blogspot.com/-XRuF6lG1sAg/VXjRxNdp3eI/AAAAAAAAB4s/WL85vqMYajo/s480/IMG_2824.JPG" width="320" /></a></div>
<br />
This is the first post in my Hexapod Project series. The point of this project is to build a robot that allows me to try out a few robotics concepts. For a listing of each post in this series, <a href="http://www.gperco.com/2015/06/hex-project-intro.html">click here</a>.<br />
<br />
In this post, I'll cover building the hexapod body, the various levels of processing needed to keep the hexapod on its feet, and some of the code I wrote to tie various levels of processing together.<br />
<b><br />Building the Body</b><br />
<br />
Instead of designing the hexapod from scratch, I decided to go with a <a href="http://www.trossenrobotics.com/phantomx-ax-hexapod.aspx">plexiglass kit</a> for the body. This kit (and others from the same company) seems aimed at researchers and high-level hobbyists: higher quality than kids' toys, but not expensive enough to scare absolutely everyone away. I actually stumbled across this kit while looking for high-quality servos, which would play a key part in the success of the robot. My servo experience from the <a href="http://www.gperco.com/2014/10/single-pixel-camera.html">Single Pixel Camera</a> project left a bad taste in my mouth (I <i>did</i> buy the cheapest servos I could find...), so I opted for a set of <a href="http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx">AX-12A</a>s for this project. These servos use a serial communication protocol not only to set the target angle, but also to set angle limits, torque limits, and failure modes, and to request the current internal status (temperature, voltage, load, speed, etc.). They are significantly more expensive than your run-of-the-mill micro servos, but I've learned a few times now that the money you save on parts you pay later in headaches (with interest!).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-Cl0zekaWzuo/VUg_ujnt6VI/AAAAAAAAB00/F8MkphE3EG8/s1600/IMG_20150318_163836538.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="252" src="https://3.bp.blogspot.com/-Cl0zekaWzuo/VUg_ujnt6VI/AAAAAAAAB00/F8MkphE3EG8/s600/IMG_20150318_163836538.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">The fancy boxes are a nice touch.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-6oo4SKvpWGs/VUg_vr8MOHI/AAAAAAAAB1A/MpgNq6AZGJ0/s1600/IMG_20150320_173839654.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="302" src="https://1.bp.blogspot.com/-6oo4SKvpWGs/VUg_vr8MOHI/AAAAAAAAB1A/MpgNq6AZGJ0/s600/IMG_20150320_173839654.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">It's like LEGOs, but for people with more money and less imagination.</span><br />
<div style="text-align: left;">
<span style="font-size: x-small;"><br /></span></div>
<div style="text-align: left;">
The body consists of two flat panels separated by an inch or so with various mounting holes. This leaves a good bit of space in the middle for wires and other electronics. Each of the six legs consists of a coxa (hip) joint, a femur joint, and a tibia joint. This allows each foot to move around in all three dimensions, although with some limitations in position and angle. The kit also comes with an upper deck with a grid of mounting holes, providing even more space for accessories. I've been a little worried about shattering the plastic components, but have yet to have any problems. The range of motion of each leg is impressive, and the overall design is nicely menacing.</div>
</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-47s0hKNc7lE/VUlyZ_a9ckI/AAAAAAAAB1g/AM-fSZ1XBXo/s1600/IMG_2756.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="363" src="https://3.bp.blogspot.com/-47s0hKNc7lE/VUlyZ_a9ckI/AAAAAAAAB1g/AM-fSZ1XBXo/s1600/IMG_2756.jpg" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Sitting pretty on a wooden stand.</span></div>
<br />
<b>Controlling the Servos</b><br />
<br />
The servos receive both power and control data through a three-wire cable system. They can be daisy-chained together, so in theory, all 18 of my servos can be powered and controlled with only three wires. The same site I got my kit and servos from offers an Arduino-based controller board for these kinds of servos called the <a href="http://www.trossenrobotics.com/p/arbotix-robot-controller.aspx">Arbotix-M</a>. It sports an ATMEGA644p processor running at 16MHz and 5V, and most importantly, has the basic accessory electronics needed to handle communication over the servo data line. The unfortunate number of data lines on the servos dedicated to bi-directional communication (one) means extra electronics, which I am happy to let someone else figure out.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-Yjd97hxV2ks/VXjQZAsWR7I/AAAAAAAAB4k/MZnbT_v7c_M/s1600/IMG_2910.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="327" src="http://3.bp.blogspot.com/-Yjd97hxV2ks/VXjQZAsWR7I/AAAAAAAAB4k/MZnbT_v7c_M/s640/IMG_2910.JPG" width="590" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Almost like the hexapod was designed to have one! (it was)</span></div>
<br />
Note: throughout this project, I'm treating the servos as black boxes. Not that they are particularly mysterious -- the documentation is actually pretty good -- but more that I don't want to adjust their firmware at all or consider the possibility of improving them. So the Arbotix-M controller is the first and lowest level of hardware I need to deal with in terms of programming.<br />
<br />
Eighteen servos are a lot to deal with, so I decided to use the Arbotix-M as a kind of aggregator for servo commands and information. Other hardware can send the Arbotix-M a single command to set all of the servos to various angles, and it handles splitting up the commands and sending them to the servos one by one. It also monitors the servos and reports any statistics or faults to other interested processors. The full code I wrote to run on the Arbotix-M can be found <a href="https://github.com/bgreer/HEX/tree/master/PROG_SCONTROL">here</a>. I set up a small test where I sent the Arbotix-M commands over FTDI and found I could update all 18 servos at around 100Hz, which was plenty fast for nice, smooth motion.<br />
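<br />
To give a flavor of the aggregator idea, here's a stripped-down Arduino-style sketch. The real program linked above uses my packet format and also monitors servo status; this stand-in just reads two raw position bytes per servo and assumes the stock ax12 library's SetPosition() helper (the baud rate is a placeholder, too):<br />
<pre>
#include &lt;ax12.h&gt;

const int NUM_SERVOS = 18;

void setup()
{
    Serial.begin(38400); // link to the host; baud rate is a placeholder
}

void loop()
{
    // wait until one full command (two position bytes per servo) arrives
    if (Serial.available() &gt;= NUM_SERVOS*2)
    {
        for (int id = 1; id &lt;= NUM_SERVOS; id++)
        {
            int lo = Serial.read();        // low byte of target position
            int hi = Serial.read();        // high byte
            SetPosition(id, lo | (hi &lt;&lt; 8)); // one bus transaction per servo
        }
    }
}
</pre>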
<br />
<b>Higher Level Hardware</b><br />
<br />
So far, the hexapod had nice servos and a controller that could keep the limbs moving, but nothing giving it any interesting commands. For the next two levels of processing above the Arbotix-M, I decided to go with a <a href="http://shop.udoo.org/usa/product/udoo-quad.html">UDOO Quad</a> board. The UDOO Quad is a small single-board computer with a quad-core ARM Cortex-A9 processor alongside an Arduino Due (ARM Cortex-M3), connected to a slew of peripherals including WiFi, HDMI, Ethernet, USB, Audio, and SATA. Instead of choosing this board after hours of deliberating between the numerous single-board computers out there, I went with this one because I had one sitting around in my graveyard of processors from failed projects. The Arduino Due would act as a hardware interface between the main cpu and the various hardware components (like the Arbotix-M), and the main cpu would do the heavy lifting required to get the hexapod moving around.<br />
<br />
Along with the goal of only using my own software comes the goal of actually utilizing the multiple cores in the single-board computer I chose. Throughout the project, I've used threading in C++ to achieve this.<br />
<br />
<div style="text-align: center;">
<a href="http://2.bp.blogspot.com/-mJtoZjc6er4/VUg_vHqSYVI/AAAAAAAAB04/AVVeN-leyxo/s1600/IMG_20150420_174425864.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="274" src="https://2.bp.blogspot.com/-mJtoZjc6er4/VUg_vHqSYVI/AAAAAAAAB04/AVVeN-leyxo/s600/IMG_20150420_174425864.jpg" width="590" /></a><span style="font-size: x-small;">UDOO board with optional touch screen attached.</span></div>
<br />
<b>Packets of Information</b><br />
<br />
Because the quad-core cpu shares a board with the Arduino Due, the two have a dedicated serial connection (<span style="font-family: Courier New, Courier, monospace;">/dev/ttymxc3</span> in unix-land), so there's no need to worry about keeping them connected. I added a pair of wires to connect the Due to the Arbotix-M with another serial connection (Serial1 on the Due). Since I wanted to allow any processor to send data to any other processor, I needed a standardized method of packaging data and transmitting it so that it could be directed, parsed, and understood by any of the three.<br />
<br />
I created a packet class in C++ to handle this. A packet object could be created by any processor, filled with data, and sent along with a destination tagged in the header. The key properties of the packet class are as follows:<br />
<ul>
<li>Arbitrary data size. (As long as it is an integer number of bytes).</li>
<li>Optional data padding. Some processors have difficulty with tiny packets.</li>
<li>Checksum. I only programmed for a single-byte XOR checksum, but it's marginally better than nothing.</li>
<li>Optional pre-buffered data. When memory is scarce, allow new packets to take the place of old ones in memory.</li>
<li>Destination Tagging. Single bytes indicating the destination, which is useful for packets bouncing between multiple processors.</li>
</ul>
The C++ class I ended up creating to achieve this looked something like this:<br />
<script src="//gist.github.com/bgreer/09714079319013d1be0e.js"></script>
The buffer holds all of the relevant information for the packet, including the size, destination, and checksum. If you don't care for code examples, you can view these packets of information as just a string of bytes, each with a specific purpose. The data it contains can be any length, but the header and footer are always the same:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-uOOsAED-y2M/VUu5bAKCuuI/AAAAAAAAB2c/f6ddlgIztAU/s1600/diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="234" src="https://3.bp.blogspot.com/-uOOsAED-y2M/VUu5bAKCuuI/AAAAAAAAB2c/f6ddlgIztAU/s640/diagram.png" width="640" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Each block represents a single byte.</span></div>
<br />
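As a sketch of how a buffer like that might be assembled (the byte values and field order here are illustrative placeholders; the real layout is in the packet class above):<br />
<pre>
#include &lt;cstdint&gt;
#include &lt;vector&gt;

// Assemble a byte buffer: header, destination tag, payload size, the
// payload itself, then a checksum computed by XOR-ing the payload bytes.
std::vector&lt;uint8_t&gt; buildPacket(uint8_t dest, const uint8_t *data, uint8_t len)
{
    std::vector&lt;uint8_t&gt; buf;
    buf.push_back(0xFF);      // header byte 1 (placeholder value)
    buf.push_back(0xFF);      // header byte 2
    buf.push_back(dest);      // destination tag
    buf.push_back(len);       // payload size in bytes
    uint8_t checksum = 0;
    for (uint8_t i = 0; i &lt; len; i++)
    {
        buf.push_back(data[i]);
        checksum ^= data[i];  // running XOR over the payload
    }
    buf.push_back(checksum);  // footer: the receiver recomputes and compares
    return buf;
}
</pre>
<br />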
The three processors are connected in series and use serial communication. Each monitors the serial port that connects it with the other processor(s) and decides what actions to take when a valid packet is received. For the quad-core processor, I wrote a simple serial port monitor that runs in its own thread. It both monitors incoming traffic for packets, and accepts new packets from other threads to send. The interface presented to other threads allows for either blocking or non-blocking sends and receives. The threaded serial library can be seen <a href="https://github.com/bgreer/HEX/tree/master/LIB_SERIAL">here</a>. The lower level processors can't use threading (or at least, I'm not going to try), so they just periodically monitor their serial ports and pause other functions to act on incoming packets.<br />
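<br />
Here's a bare-bones sketch of the sending side of that pattern: a background thread drains a mutex-protected queue to the port while other threads just push packets and move on. The real library also handles receives, blocking variants, and error cases; writePort() here is a stand-in for the actual serial write, and the monitor would be started with something like std::thread(&amp;SerialMonitor::run, &amp;monitor).<br />
<pre>
#include &lt;chrono&gt;
#include &lt;cstdint&gt;
#include &lt;mutex&gt;
#include &lt;queue&gt;
#include &lt;thread&gt;
#include &lt;vector&gt;

typedef std::vector&lt;uint8_t&gt; Packet;

class SerialMonitor
{
public:
    // non-blocking send: queue the packet and return immediately
    void send(Packet p)
    {
        std::lock_guard&lt;std::mutex&gt; lock(mtx);
        outbox.push(std::move(p));
    }
    // runs forever in its own thread, draining the queue to the port
    void run()
    {
        while (true)
        {
            Packet p;
            bool havePacket = false;
            {
                std::lock_guard&lt;std::mutex&gt; lock(mtx);
                if (!outbox.empty())
                {
                    p = std::move(outbox.front());
                    outbox.pop();
                    havePacket = true;
                }
            }
            if (havePacket)
                writePort(p);
            else // nothing to send; a real version would poll reads here
                std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
private:
    void writePort(const Packet &p) { /* write bytes to /dev/ttymxc3 */ }
    std::mutex mtx;
    std::queue&lt;Packet&gt; outbox;
};
</pre>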
<br />
The whole point of this serial communication path with standardized packets is so that you can do something like send information from the quad-core processor, have the Due see that it is meant for the Arbotix-M, send it along, and have the Arbotix-M see it and act on it. Here's an example of what this kind of action looks like in code:<br />
<br />
First, the quad-core processor initiates communication with the Due, creates a packet with some information for the Arbotix-M, then sends it through the serial port.<br />
<button id="h1_hide0">Hide Code</button>
<button id="h1_show0">Show Code</button>
<span id="h1_code0"><script src="//gist.github.com/bgreer/e43f3197f2ccc603bde0.js"></script></span>
<br />
<br />
The Due continually runs code like the following, which looks for valid packets and sends them on to the Arbotix-M if the destination tag is correct (this looks a lot more complicated than the previous snippet, but only because I haven't made as much of an effort to hide the complicated parts behind libraries and such):<br />
<button id="h1_hide1">Hide Code</button>
<button id="h1_show1">Show Code</button>
<span id="h1_code1"><script src="//gist.github.com/bgreer/9854cf0492d80242eea6.js"></script></span><br />
<br />
Code snippets are nice and all, but at the end of the day we need to make sure the code is actually sending the correct information. When determining whether data is being sent between bits of hardware correctly, my favorite method is to just snoop on the data lines themselves and see what the ones and zeroes look like. I hooked up a digital logic analyzer to the data lines between the quad-core processor and the Due, and between the Due and the Arbotix-M to look at the data being passed around.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-OJva6BFNUys/VXjfEJypQkI/AAAAAAAAB48/alOtbPfyWYM/s1600/scan01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="110" src="http://3.bp.blogspot.com/-OJva6BFNUys/VXjfEJypQkI/AAAAAAAAB48/alOtbPfyWYM/s600/scan01.png" width="590" /></a></div>
<br />
<br />
If you aren't familiar with this kind of view, we are looking at the voltage of each data line as a function of time. Within each horizontal band, the thin white line shows the voltage on the line. I'm using a digital analyzer, so it can only tell me if the voltage is low or high, hence the rapid switching up and down. The horizontal axis is time, and the values above the lines show the timescale being plotted.<br />
<br />
As the probe trace shows, the three processors are successfully passing information back and forth. The quad-core cpu sends a packet of ones and zeros to the Due, then a short time later the same packet of information gets sent from the Due to the Arbotix-M. Zooming in to one of the packets, we can even see the individual bytes being sent along with what they mean:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-HgD8v68w9mo/VXjfXVjPv7I/AAAAAAAAB5E/JebwxsH1LVU/s1600/scan02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="110" src="http://2.bp.blogspot.com/-HgD8v68w9mo/VXjfXVjPv7I/AAAAAAAAB5E/JebwxsH1LVU/s600/scan02.png" width="590" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-xuhJtVOe8Kw/VXjfaDFcF_I/AAAAAAAAB5M/5L5oo3aI4hE/s1600/scan03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="110" src="http://2.bp.blogspot.com/-xuhJtVOe8Kw/VXjfaDFcF_I/AAAAAAAAB5M/5L5oo3aI4hE/s600/scan03.png" width="590" /></a></div>
<br />
<br />
The three levels of processing can pass information back and forth, so the next step is to decide what that information should be. The quad-core processor can tell the servos where to go through this setup of bouncing packets, but the servos don't know what they should be doing. In the next post, I'll talk about using inverse kinematics and some other tricks to decide how each servo should move such that the hexapod can walk around smoothly.<br />
<br />
gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-39749528482192407992015-06-13T14:25:00.000-06:002015-06-17T08:22:03.796-06:00[HEX] Project IntroEight months ago, when I published a write-up of my <a href="http://www.gperco.com/2014/10/single-pixel-camera.html">Single Pixel Camera</a>, I wrote that I was going to stop doing projects and blog posts so that I could concentrate on finishing up my PhD in astrophysics. As it turns out, spending 100% of my mental energy on Real Research drives me insane and lowers the quality of my work. Most of <a href="http://en.wikipedia.org/wiki/Helioseismology">what I do</a> might be considered advanced data analysis. While I enjoy finding and learning new algorithms and bits of math, trying them out on research data can often lead to overly-complicated and poorly-understood results. Instead, I've found that having a pet project at home allows me to keep on learning new things without sacrificing the quality of the work that matters most.<br />
<br />
So here we are, at another project post. This newest project ended up being the biggest one I've done, so I've had to split the write-up into a series of posts. Fortunately, there's no danger of never finishing the series due to lack of time, because both the project and the write-ups were finished weeks ago! This first post will serve as a general overview / table of contents for the rest of the posts.<br />
<b><br /></b>
<b>Project Description</b><br />
I built a 6-legged robot with custom electronics and software as a test platform for a variety of concepts including inverse kinematics, heterogeneous processing, LIDAR, and path finding. The robot's name is <a href="http://www.merriam-webster.com/dictionary/myrmidon">Myrmidon</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ldTsCWMRjc4/VXyQOdwZ-KI/AAAAAAAAB64/XbDzk2oN5nA/s1600/IMG_2964.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="260" src="http://2.bp.blogspot.com/-ldTsCWMRjc4/VXyQOdwZ-KI/AAAAAAAAB64/XbDzk2oN5nA/s600/IMG_2964.JPG" width="590" /></a></div>
<br />
<div style="text-align: center;">
<br /></div>
<div style="text-align: center;">
<b><span style="font-size: large;">Table of Contents</span></b></div>
<br />
<b>Post 0: <a href="http://www.gperco.com/2015/06/hex-project-intro.html">Introduction</a></b><br />
You are here. <br />
<br />
<b>Post 1: <a href="http://www.gperco.com/2015/06/hex-heterogeneous-hardware.html">Heterogeneous Hardware</a></b><br />
I dive into the hardware and software architecture needed to create a general-purpose hexapod platform. Keeping to open source hardware as much as possible, I set up a series of connected processors to handle the various robotics tasks in parallel.<br />
<br />
<b>Post 2: <a href="http://www.gperco.com/2015/06/hex-inverse-kinematics.html">Inverse Kinematics</a></b><br />
The hexapod is built and ready to move, but has nothing to do. I step through the process of building a six-legged gait from basic concepts to inverse kinematics.<br />
<br />
<b>Post 3: <a href="http://www.gperco.com/2015/06/hex-come-on-and-slam.html">Come on and SLAM</a></b> <br />
I use a LIDAR unit from a robot vacuum cleaner to allow the hexapod to 'see' its surroundings. I introduce a method for letting the hexapod know its position and trajectory using information about its surroundings.<br />
<br />
<b>Post 4: <a href="http://www.gperco.com/2015/06/hex-navigating-like-star.html">Navigating like A Star</a></b><br />
The hexapod can walk and avoid obstacles, now it's time to let it decide where to go. I go over setting goals for the hexapod and how it can figure out the best way to achieve these goals.<br />
<br />
<b>Post 5: Race Day</b> (will be posted sometime after June 20)<br />
With a hexapod capable of fully-autonomous locomotion, I enter into the <a href="https://avc.sparkfun.com/">Sparkfun Autonomous Vehicle Competition</a>.<br />
<br />gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com4tag:blogger.com,1999:blog-5124010431457427473.post-89293263189984579102014-10-19T16:25:00.000-06:002014-10-20T09:29:33.457-06:00Single Pixel CameraPlease forgive me for the lack of new content over the last five or so months. After my most recent post about <a href="http://www.gperco.com/2014/05/quadcopter-stability-and-neural-networks.html">neural networks and quadcopters</a>, I decided to work on a couple smaller projects. First I replaced the bulb in a standing lamp with RGB LEDs, added an Arduino Pro Mini and an <a href="http://en.wikipedia.org/wiki/Real-time_clock">RTC</a>, and programmed the lamp to change color temperature based on the time of day. This wasn't really worth a whole blog post by itself, so I moved on to the next project. I then taught myself to sew and reupholstered my couch with white canvas. No electronics, no programming, just lots of sewing. Again, not really worth a blog post. In all, I had spent most of the summer on small projects that couldn't really top the last few posts I've done.<br />
<br />
But I would be remiss if I did not provide something to show for the last five months. So here is a picture of our cats, Fitzcarraldo (left) and Alan "Bonesaw" Turing (right). Alan was born June 2014, so I guess he happened too.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-TWPimPPh0Yg/VC-LFVN93dI/AAAAAAAABgc/xVsWza_9gJ4/s1600/20140803_224050_LLS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-TWPimPPh0Yg/VC-LFVN93dI/AAAAAAAABgc/xVsWza_9gJ4/s1600/20140803_224050_LLS.jpg" height="248" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">An offering of cats to please the internet gods.</span></div>
<br />
But to the topic at hand: I've been interested in cameras and image processing for many, many years. I've written all kinds of code to filter, sort, encrypt, convolve, and de-convolve images, and I've built gadgets that were made to be photographed. But one thing I hadn't done until this project was build a camera.<br />
<br />
What's the easiest way to build a camera? My guess would be to build a simple pinhole camera with real film. All you would need is something like a shoebox and some film. Assuming that you could figure out how to handle the film in a dark enough room so it didn't get exposed to light before you wanted it to, and that you had a place you could take the film to get processed, this might be the simplest way of building a camera.<br />
<br />
But I'm picky, and wanted a digital color camera. Most modern digital cameras work on the same principle: replacing the sheet of film from our simple pinhole with a dense array of light sensors, and replacing the pinhole with a glass lens (or a set of glass lenses). The lenses are used to gather more light than the pinhole could allow and focus the light on to the sensor array. The array contains millions of sensors (pixels), each with either a red, green, or blue filter in front. You point the camera at the object you wish to take a picture of, focus the lenses, and tell the pixel array to gather light for a little while. When you read out the values each pixel has recorded, and use a bit of software to arrange the colors and pixels on a screen, you see the image you want. Simple enough!<br />
<br />
But let's say I didn't want to cough up the money for a pixel array or even lenses. In fact, all I could afford was a single color light sensor, some basic electronic components, and whatever I could print on a 3D printer. How do we make the MacGyver of digital cameras? First we need a hand-wavy theory of how to build an image with only one sensor.<br />
<br />
<span style="font-size: large;"><b>Hand-Wavy Theory</b></span><br />
<br />
Imagine you are standing outside in a field on a clear day with the sun shining in front of you. Why are you in this field? I don't know, maybe you are having a picnic. When you stare forward, your eyes help your brain form an image of the scene in front of you. But your eye has a lens and a dense array of sensors (your retina), and we don't want that luxury. So you close your eyes. Your entire retina is reduced to picking up just the light that hits your eyelids, giving you just one measure of the scene in front of you: the total integrated light hitting your face. In this single-pixel scenario, you measure the light from every object in the scene all at once, but you don't know <i>where</i> each object is or how much of the total light came from it. But you can start to build a mental image of where things sit by using your hand. Still standing in a field with your eyes closed, you hold your hand out to block light and wave your arm around (all good hand-wavy theories involve actual hand-waving). As your hand blocks light from different angles, the total light hitting your face is reduced somewhat. By knowing where you've placed your hand in front of you (assuming decent spatial awareness), you can get an estimate of how much light was coming from that direction. For example, you can figure out what direction the sun is by waving your arm around until you notice most of the light has disappeared. Then you know that the sun is in the direction connecting your face to your hand, and extending off into the sky.<br />
<br />
Using this principle, we can build a digital camera with a single pixel as long as we can also build some kind of 'arm' that can be moved around in front of the sensor. The sensor continuously records the amount of light hitting it while the arm moves around in front. A computer reads the sensor measurement, considers the arm placement, and builds up an image in memory. The next question is: how well can this method actually produce an image?<br />
<br />
<span style="font-size: large;"><b>Testing the Theory</b></span><br />
<br />
A good first step to any potentially crazy idea is to simulate it on a computer. If it doesn't work there, either your simulation is crap or the idea is bad. If it does work, either your simulation is crap or the idea <i>might</i> work in the real world. While these options don't make the simulation seem very worthwhile, they are often much simpler than trying things in real life, and can save a lot of money and tears.<br />
<br />
Below, I've provided a couple little programs that demonstrate the theory of producing an image by blocking light in a systematic way. The first simulates a light-blocking arm sweeping across the field of view of a sensor, along with a plot of the total light collected as a function of arm position. The panel near the top right shows the color corresponding to the given combination of red, green, and blue at any given time.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<canvas height="200" id="canv_sim01" width="500"></canvas><br /></div>
<br />
This gives us an idea of the data we can record by moving a blocking arm back and forth across the field of view. Next, we not only vary the position of the blocking arm, but also the angle at which it moves. By sweeping the arm across at many different angles, we can build up a two-dimensional plot of the total integrated light hitting the sensor as a function of position and angle. This is shown in the next program as a color map.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<canvas height="200" id="canv_sim02" width="500"></canvas><br /></div>
<br />
The resulting image is fascinating, but doesn't look like the original scene at all. At each arm angle (plotted along the x-axis), the same object in the scene will be 'recorded' at a different arm position (plotted along the y-axis). Each object in the scene is therefore smeared into a stretched, curved streak in the recorded image. Luckily, there is some clever math that can get us from the stretched-out recording to a nice reconstruction of the original scene.<br />
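<br />
To make the recording process concrete, here is a rough single-channel sketch of what the simulation computes: for each arm angle and arm offset, it sums the scene brightness along the line the arm covers, which is exactly the light that disappears from the sensor reading. The function name, array sizes, and layout are just for illustration; the demo's actual source (in JavaScript, at the bottom of this post) does the same thing per color channel and with a finite arm width.<br />
<pre>
#include <math.h>

#define N_SCENE 150   // scene is N_SCENE x N_SCENE pixels
#define N_ANGLE 128   // number of arm angles recorded
#define N_POS   150   // number of arm positions per sweep

// Simulated recording: for each arm angle and arm offset, sum the scene
// brightness along the line the arm covers. This blocked light is what
// the single-pixel measurement loses at that arm placement.
void record(double scene[N_SCENE][N_SCENE], double rec[N_ANGLE][N_POS])
{
    for (int a = 0; a < N_ANGLE; a++)
    {
        double tht = M_PI * a / N_ANGLE;
        for (int p = 0; p < N_POS; p++)
        {
            double s = p - N_POS / 2.0; // arm offset from scene center
            rec[a][p] = 0.0;
            // Walk along the arm's line x*cos(tht) + y*sin(tht) = s.
            for (double t = -N_SCENE / 2.0; t < N_SCENE / 2.0; t += 1.0)
            {
                int x = (int)(N_SCENE / 2.0 + s * cos(tht) - t * sin(tht));
                int y = (int)(N_SCENE / 2.0 + s * sin(tht) + t * cos(tht));
                if (x >= 0 && x < N_SCENE && y >= 0 && y < N_SCENE)
                    rec[a][p] += scene[y][x];
            }
        }
    }
}
</pre>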
<br />
<span style="font-size: large;"><b>Radon Transforms</b></span><br />
<br />
When I first looked at one of the simulated recordings from this simulation, I was ecstatic. I immediately recognized it as the <a href="http://en.wikipedia.org/wiki/Radon_transform">Radon Transform</a> of the original scene. Admittedly, this is not a normal thing to immediately recognize. A significant portion of my PhD work is spent thinking about similar transforms, so I have things like this on my mind.<br />
<br />
Why care about transforms like this? An integral transform is essentially a method for taking a signal and mushing it up in a particular way that makes the result more understandable. The most common example is the <a href="http://en.wikipedia.org/wiki/Fourier_transform">Fourier Transform</a>, which is useful for breaking down a signal into waves. For example, a Fourier Transform is useful when analyzing an audio source: it separates the audio into its component frequencies and tells you how much 'power' is in each. The Radon Transform is a little more obscure, but one example of its usefulness is <a href="http://www-rohan.sdsu.edu/doc/matlab/toolbox/images/transf10.html">detecting hard edges</a> in an image. An important feature of many integral transforms is that they can be inverted to give you back the original signal. The inverse of the Radon Transform (also called a filtered back projection) is most commonly used in <a href="http://en.wikipedia.org/wiki/X-ray_computed_tomography">CT scans</a>, where a sensor array measures the projected image of someone's insides and the final image is reconstructed from many projections.<br />
<br />
Our simulated sensor-and-arm camera has given us the negative Radon Transform of the scene it tried to image. All we need to do is apply the inverse transform to the recorded image and we should get an image of the original scene. One issue with this procedure is resolution. When making the camera, we need to pick the width of the blocking arm, how finely we can move it across the scene, and how many angles we sweep it through. All three of these choices determine the resolution of both the recorded transform and the final reconstructed image. After a bit of playing around with some test images, I settled on a resolution that would keep the total scan time for the camera reasonable.<br />
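<br />
The reconstruction itself happens in post-processing, but the core idea is easy to sketch. An unfiltered back projection smears each recorded projection back across the image along the direction it was recorded at and sums the results; a proper inverse Radon transform runs each projection through a ramp filter first to undo the blurring, which I'll gloss over here. Same illustrative constants as the sketch above:<br />
<pre>
// Unfiltered back projection: for every output pixel, add up the
// recorded values at the arm placements that would have covered it.
// Without the ramp filter this gives a blurry version of the scene,
// but it shows where the reconstruction comes from.
void backProject(double rec[N_ANGLE][N_POS], double img[N_SCENE][N_SCENE])
{
    for (int y = 0; y < N_SCENE; y++)
        for (int x = 0; x < N_SCENE; x++)
        {
            img[y][x] = 0.0;
            for (int a = 0; a < N_ANGLE; a++)
            {
                double tht = M_PI * a / N_ANGLE;
                // Which arm offset would have covered this pixel?
                double s = (x - N_SCENE / 2.0) * cos(tht)
                         + (y - N_SCENE / 2.0) * sin(tht);
                int p = (int)(s + N_POS / 2.0);
                if (p >= 0 && p < N_POS)
                    img[y][x] += rec[a][p] / N_ANGLE;
            }
        }
}
</pre>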
<br />
With a solid theory in place, backed by math fancy enough to look impressive, I could now go ahead and build the single-pixel Radon Transform camera in real life.<br />
<br />
<span style="font-size: large;"><b>Constructing the Camera</b></span><br />
<br />
The main components I used to build this camera were:<br />
- <a href="https://www.sparkfun.com/products/11113">Arduino Pro Mini</a><br />
- <a href="http://www.adafruit.com/product/1334">Color Light Sensor</a><br />
- <a href="http://www.adafruit.com/products/254">MicroSD Card Board</a><br />
- <a href="http://www.adafruit.com/product/1450">2 Servos</a><br />
- <a href="http://www.adafruit.com/product/830">Battery Pack</a><br />
<br />
The SD card board and battery pack added a bit to the cost, but they make the camera portable, with the hope that I could take it outdoors to photograph mountains and fields and things. They were not necessary for indoor, tethered use of the camera.<br />
<br />
The first big hurdle was designing the parts to be printed. I'm not an engineer, I'm a scientist. I don't even enjoy saying that physicists are good at doing an engineer's job. Making things that don't immediately self-destruct is non-trivial. This was probably the most complicated thing I've had to design, and the number of revisions I had to go through shows that. About half-way through, my 3D printer died an ungraceful death (after a year of use and a few pounds of filament), so I upgraded to a better printer and enjoyed a marked increase in print quality by the end.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-DsEWvp1exo4/VDTT7_EMnuI/AAAAAAAABhE/XnC_4oTtKlA/s1600/camera_render_all.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-DsEWvp1exo4/VDTT7_EMnuI/AAAAAAAABhE/XnC_4oTtKlA/s1600/camera_render_all.png" height="183" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Printed in 8 parts, in case you were wondering.</span></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-7xZgddCVvdk/VEQct2_hCWI/AAAAAAAABiQ/nh5a9PFKtwY/s1600/20141002_191338.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-7xZgddCVvdk/VEQct2_hCWI/AAAAAAAABiQ/nh5a9PFKtwY/s1600/20141002_191338.jpg" height="338" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Left: old dead printer. Right: new not-dead printer.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-u3X_mOguEXw/VEQcw6uQynI/AAAAAAAABiY/fPa05LuaGCg/s1600/20141007_220643.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-u3X_mOguEXw/VEQcw6uQynI/AAAAAAAABiY/fPa05LuaGCg/s1600/20141007_220643.jpg" height="338" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Various attempts at being an engineer.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-JvPmvFAgNrc/VEQb1XfAbKI/AAAAAAAABh4/cXwZSQg868E/s1600/IMG_2674.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-JvPmvFAgNrc/VEQb1XfAbKI/AAAAAAAABh4/cXwZSQg868E/s1600/IMG_2674.JPG" height="387" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Glam shot with all electronics magically completed.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ySnUWikgtfA/VEQb5sL_00I/AAAAAAAABiA/gW3TVzEp-vs/s1600/IMG_2677.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ySnUWikgtfA/VEQb5sL_00I/AAAAAAAABiA/gW3TVzEp-vs/s1600/IMG_2677.JPG" height="395" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">MicroSD card reader on the side.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-eV9PgXP8vrM/VEQb-AOiykI/AAAAAAAABiI/jVEezp1R9pQ/s1600/IMG_2670.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-eV9PgXP8vrM/VEQb-AOiykI/AAAAAAAABiI/jVEezp1R9pQ/s1600/IMG_2670.JPG" height="375" width="600" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Attached to my tripod, ready for testing.</span></div>
<br />
<br />
The circuit to control the camera was as simple as I could make it. I soldered the Arduino board to a protoboard and added connections for the servos, sensor, battery pack, and SD card board. Once I confirmed that everything could be controlled by the Arduino, I moved on to the code.<br />
<br />
The main problem I encountered while building the camera was the fact that the servos would not rotate the full 180 degrees I expected to see. In fact, using the default Servo library for Arduino and the standard servo.write() command, I only saw about 100 degrees of rotation. As it turns out, different servos have different standards for how to make them turn to various positions. The Servo library assumes that a pulse-width of 1000 us corresponds to 0 degrees and a pulse-width of 2000 us corresponds to 180 degrees. In the servos I bought, these pulse-widths corresponded to about 70 degrees and 170 degrees, respectively. By manually sending a pulse-width of 2100 us, I could get the servos to rotate close enough to 180 degrees, but the low-angle end was trickier. At a pulse-width of 450 us, I was getting down to around 30 degrees, but anything lower caused the servo to swing to zero and bounce around. My guess is that the internal servo electronics can't handle pulse-widths shorter than about 450 us.<br />
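<br />
In code, the workaround looks something like the following Arduino snippet. The attach(pin, min, max) and writeMicroseconds() calls are the standard Servo library API; the pin number and the 30-to-180-degree calibration are particular to my hardware:<br />
<pre>
#include <Servo.h>

Servo armServo;
const int ARM_PIN = 9; // whichever pin carries the servo signal

void setup()
{
    // attach(pin, min, max) widens the pulse-width range the library
    // will use beyond its default 1000-2000 us assumption.
    armServo.attach(ARM_PIN, 450, 2100);
}

// Map a desired angle onto the pulse widths this servo actually
// responds to: roughly 30 degrees at 450 us, 180 degrees at 2100 us.
void setArmAngle(float deg)
{
    float us = 450.0 + (deg - 30.0) * (2100.0 - 450.0) / (180.0 - 30.0);
    armServo.writeMicroseconds((int)constrain(us, 450, 2100));
}

void loop()
{
    // Sweep back and forth across the usable range as a quick test.
    for (int d = 30; d <= 180; d += 5) { setArmAngle(d); delay(50); }
    for (int d = 180; d >= 30; d -= 5) { setArmAngle(d); delay(50); }
}
</pre>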
<br />
So in the end, I could only get around 150 degrees of rotation out of the servos. This wasn't too much of an issue for the servo that moved the blocking arm in front of the sensor, since the sensor probably couldn't tell if light was hitting it from those extreme angles anyway. But the limited range was a significant problem for the other servo that rotated the whole arm contraption around the axis of the sensor. This limitation is like chopping off the right end of the simulated Radon Transform images in the above animations. Without information from some band of angles, the image reconstruction is unconstrained and will show significant banding. I thought about this problem for a few days before running out of ideas and searching the internet for a solution. The good news is that this problem is <a href="https://www.google.com/search?q=limited+angle+tomography">not uncommon</a> and there are research groups around the world thinking about it. The bad news is that there isn't some magical way to infer the information lost at those angles, only methods to mitigate the artifacts introduced in the inverse transform. The best solution is to record as many angles as possible and be aware of the limitations of what you've recorded.<br />
<br />
<span style="font-size: large;"><b>Programming the Processor</b></span><br />
<br />
I decided to save the inverse Radon Transform computation for post-processing on my workstation, so that all the camera had to do was record the sensor values and store them in a sensible way. This led to a fairly simple code flow for the on-board controller:<br />
<br />
<script src="https://gist.github.com/bgreer/49b12a04879eecafb958.js"></script><br />
<br />
I've excluded some bits of code that do things like set the servo position and write data out, mostly to simplify the code above. These extra bits aren't too important and can be done in a variety of ways.<br />
<br />
Here is the flow of how the camera works: once the camera initializes, it determines the correct sensor gain and begins scanning the scene. Each measurement is saved to a file on the SD card. I collect this data and move it to a file on my laptop for post-processing. Before the raw data can be transformed from Radon-space into real-space, it needs to be adjusted.<br />
<br />
One issue with the camera is the long exposure time. During the ten or so minutes it takes to record a single image, the lighting in the scene can change dramatically. The biggest cause of lighting changes I ran into was clouds. This kind of change in lighting would cause the recorded data to vary in brightness over time, which would imprint into the final image as a variation with position. To ameliorate this, I computed a weighted mean of the total brightness at each arm angle and used it to compute the negative image. This way, the light could change slowly over time and the moving average would account for it.<br />
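<br />
In sketch form, the adjustment looks something like this for one color channel (same illustrative array sizes as the earlier sketches). The exact weighting isn't important; the point is that each angle's measurements are compared against a baseline estimated mostly from sweeps recorded around the same time:<br />
<pre>
// Turn a raw scan (total light at each arm angle and position) into a
// negative scan (light blocked), using a smoothed per-angle baseline so
// slow lighting changes like passing clouds don't imprint on the image.
// The inverse-square weighting here is just one reasonable choice.
void makeNegative(double raw[N_ANGLE][N_POS], double neg[N_ANGLE][N_POS])
{
    for (int a = 0; a < N_ANGLE; a++)
    {
        // Weighted mean brightness near this angle (angles are swept
        // in order, so nearby angles were recorded at nearby times).
        double sum = 0.0, wsum = 0.0;
        for (int b = 0; b < N_ANGLE; b++)
        {
            double w = 1.0 / (1.0 + (a - b) * (a - b));
            for (int p = 0; p < N_POS; p++)
                sum += w * raw[b][p];
            wsum += w * N_POS;
        }
        double baseline = sum / wsum;

        // Blocked light = baseline brightness minus measured brightness.
        for (int p = 0; p < N_POS; p++)
            neg[a][p] = baseline - raw[a][p];
    }
}
</pre>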
<br />
<span style="font-size: large;"><b>Results</b></span><br />
<br />
The first test scene was of a lamp and a red flashlight. This provided a few simple diagnostics, like checking to see if a bright point source would get smeared at all in the final reconstruction.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-6Fz0bhfQ_MY/VEQS9P-KdoI/AAAAAAAABho/bzy7By2M27M/s1600/lights_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-6Fz0bhfQ_MY/VEQS9P-KdoI/AAAAAAAABho/bzy7By2M27M/s1600/lights_final.jpg" /></a></div>
<br />
The first panel shows the raw values from the camera mapped as a function of arm angle and position and scaled between 0 and 255 in each color channel. The second panel shows the negative scan, where I've gone from measuring the total light at every arm position to measuring the light blocked at every arm position. The third panel shows the inverse transformed scan, which is ideally the reconstruction of the actual scene. The final panel is a comparison image taken with a normal camera.<br />
<br />
The raw scan and negative show exactly what we would expect from the Radon transform. The two main objects show up as smooth curves as a function of arm angle. The resulting image isn't quite as clear. You can tell that it picked up on the bright light and a bit of the red light, but there is a huge amount of aberration present, particularly around the bright light. This butterfly-looking pattern shows up when the full 180 degrees of arm rotation aren't recorded. The angled lines radiating from the light mark the limiting angles the arm could reach. With this limitation in mind, I moved on to recording more complex scenes.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-T7WNHs85-Gw/VEQdAPfJa_I/AAAAAAAABig/nbKZU-8aJAQ/s1600/desk_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-T7WNHs85-Gw/VEQdAPfJa_I/AAAAAAAABig/nbKZU-8aJAQ/s1600/desk_final.jpg" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Close enough?</span></div>
<br />
Next up were a few objects that were sitting around on my desk. I wanted to know how well the camera could record different colors, so I placed some yellow sticky notes and red wire next to the blue painter's tape I had on my printer. The raw scan and negative look pretty sensible, and the resulting image actually shows some resemblance to the real scene. You can pick out the blue region, the bright yellow spot on the right, the red blur on the bottom left, and even the red printer nozzle above the blue region. The whole image looks like it was taken underwater or something, but then again, it's an image made by a single-pixel camera. I'm surprised it works at all.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-GC1D4wnWDN4/VEQdDR6tN4I/AAAAAAAABio/YVKjYPCofXc/s1600/0009_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-GC1D4wnWDN4/VEQdDR6tN4I/AAAAAAAABio/YVKjYPCofXc/s1600/0009_final.jpg" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">View from my balcony.</span></div>
<br />
Next up, I took the camera out to my balcony and took a picture of my morning view. No simple point sources to make things easy, but the result looks pretty good. The raw scan and negative look nice and smooth, and the result has most of the main features present in the real scene. You can see the horizon, the tree right in the center, and even a little of the parking lot in the bottom right. Of course, I spent time and money making the camera portable, so next up I needed to go on a little photo trip.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-f81tngDWFLE/VEQdG1kMfeI/AAAAAAAABiw/rbCHPP8zBd0/s1600/0013_final.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-f81tngDWFLE/VEQdG1kMfeI/AAAAAAAABiw/rbCHPP8zBd0/s1600/0013_final.jpg" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Living in Colorado has its benefits.</span></div>
<br />
One of the nice things about living in Boulder, CO is that you are never far away from beautiful scenery. I took the camera up the Front Range and let it record a view of the Rocky Mountains in the distance. The raw scan and negative look pretty good, but have a lot of sharp jumps as a function of arm angle. I'm really not sure where these came from, but I suspect either my battery pack was dying or my servos were giving out. Even with these problems, the resulting image looks great.<br />
<br />
So there you have it, a single pixel camera that takes color images. It may not produce the highest quality photographs, but it's certainly enough of a proof-of-concept to be worth a post. Given some more time and money, a better version of this camera could be made to take better pictures. But I'll leave that as an exercise for the reader.<br />
<br />
<br />
<br />
This project took much longer for me to complete than it would have a year ago. As I inch closer to finishing my PhD work, I can devote less and less time to projects outside of work. I think it's time to say that I won't be updating this blog any more, or at least not until I'm done with school. Of course, I say this, but who knows what kind of small projects I'll be able to do <a href="http://www.gperco.com/2013/11/12-hour-project-arduino-mechanomyography.html">in a day</a> later on. Fortunately, I think this blog has had a good run, from <a href="http://www.gperco.com/2013/11/daft-punk-helmets-controllers-and-code.html">Daft Punk helmets</a> to <a href="http://www.gperco.com/2014/03/led-planet-software.html">LED Planets</a>, from <a href="http://www.gperco.com/2014/05/quadcopter-stability-and-neural-networks.html">Neural Networks</a> to <a href="http://www.gperco.com/2014/04/automated-investigator.html">Robots that Slap People</a>. While this may just look like a collection of eccentric and technical projects, it has really helped me figure out what I might be good at (outside of science) and what I enjoy doing. I know my documentation here has helped a few people with their own projects, and I hope it continues to do so in the future. Thanks for reading!<br />
<script>
var rx = -10.0;
var ry = 10.0;
var rr = 15.0;
var gx = 40.0;
var gy = 30.0;
var gr = 18.0;
var bx = -40.0;
var by = -50.0;
var br = 20.0;
var armpos2, armtht2;
var armpos = 10.0;
var simrun2 = false;
var timeout;
var rval = new Array(150);
var gval = new Array(150);
var bval = new Array(150);
var pixels = new Array(90000);
var img;
function init2()
{
canv_sim01 = document.getElementById('canv_sim01');
canv_sim02 = document.getElementById('canv_sim02');
ctx1 = canv_sim01.getContext('2d');
ctx2 = canv_sim02.getContext('2d');
img = ctx2.createImageData(256, 146); // width x height
clear2();
draw2();
simrun2 = true;
precompute();
armpos2 = 0.0;
armtht2 = 0.0;
run2();
}
function precompute()
{
var i,j,ix,iy;
var ac, bc, cc;
var c = 140./400.;
for (i=0; i<150; i++)
{
rval[i] = rr*rr*c;
gval[i] = gr*gr*c;
bval[i] = br*br*c;
for (j=-5; j<=4; j++)
{
if (Math.abs(i+j - (rx+75)) < rr)
{
rval[i] -= Math.sqrt(rr*rr - ((i+j-75)-rx)*((i+j-75)-rx))*c;
}
if (Math.abs(i+j - (gx+75)) < gr)
{
gval[i] -= Math.sqrt(gr*gr - ((i+j-75)-gx)*((i+j-75)-gx))*c;
}
if (Math.abs(i+j - (bx+75)) < br)
{
bval[i] -= Math.sqrt(br*br - ((i+j-75)-bx)*((i+j-75)-bx))*c;
}
}
}
r2 = rr*rr;
g2 = gr*gr;
b2 = br*br;
for (ix=0; ix<256; ix++)
{
ac = Math.sin(ix*3.14159/256.);
bc = -Math.cos(ix*3.14159/256.);
norm = Math.sqrt(ac*ac+bc*bc);
for (iy=0; iy<146; iy++)
{
loc = (iy*256+ix)*4;
img.data[loc] = rr*rr*c;
img.data[loc+1] = gr*gr*c;
img.data[loc+2] = br*br*c;
img.data[loc+3] = 255;
rtemp = (ac*rx+bc*ry)/norm;
gtemp = (ac*gx+bc*gy)/norm;
btemp = (ac*bx+bc*by)/norm;
for (j=-5; j<=4; j++)
{
cc = (iy-75.0+j)/norm;
dist = Math.pow(rtemp+cc,2.);
if (dist < r2)
{
img.data[loc] -= Math.sqrt(r2 - dist)*c;
}
dist = Math.pow(gtemp+cc,2.);
if (dist < g2)
{
img.data[loc+1] -= Math.sqrt(g2 - dist)*c;
}
dist = Math.pow(btemp+cc,2.);
if (dist < b2)
{
img.data[loc+2] -= Math.sqrt(b2 - dist)*c;
}
}
}
}
}
function run2()
{
if (simrun2==true) timeout = setTimeout('run2()', 30);
armpos = armpos + 0.6;
if (armpos > 150) armpos = 0.0;
armpos2 = armpos2 + 10.0;
if (armpos2 > 150)
{
armpos2 = 0.0;
armtht2 += 3.14159/128.;
if (armtht2 > 3.14159) armtht2 = 0.0;
}
clear2();
draw2();
}
function draw2()
{
var i;
// image area
ctx1.fillStyle = '#222222';
ctx1.fillRect(20,25,150,150);
// things
ctx1.fillStyle = '#ff0000';
ctx1.beginPath();
ctx1.arc(rx+95, ry+100, rr, 0, 2*Math.PI, false);
ctx1.fill();
ctx1.fillStyle = '#00ff00';
ctx1.beginPath();
ctx1.arc(gx+95, gy+100, gr, 0, 2*Math.PI, false);
ctx1.fill();
ctx1.fillStyle = '#0000ff';
ctx1.beginPath();
ctx1.arc(bx+95, by+100, br, 0, 2*Math.PI, false);
ctx1.fill();
// image area
ctx2.fillStyle = '#222222';
ctx2.fillRect(20,25,150,150);
// things
ctx2.fillStyle = '#ff0000';
ctx2.beginPath();
ctx2.arc(rx+95, ry+100, rr, 0, 2*Math.PI, false);
ctx2.fill();
ctx2.fillStyle = '#00ff00';
ctx2.beginPath();
ctx2.arc(gx+95, gy+100, gr, 0, 2*Math.PI, false);
ctx2.fill();
ctx2.fillStyle = '#0000ff';
ctx2.beginPath();
ctx2.arc(bx+95, by+100, br, 0, 2*Math.PI, false);
ctx2.fill();
// arm
ctx1.fillStyle = 'black';
ctx1.fillRect(Math.max(armpos+20-5,20),25,
Math.min(Math.min(10,armpos+10), 154-armpos),150);
ctx2.fillStyle = 'black';
ctx2.save();
ctx2.translate(95,100); // move to hinge position
ctx2.rotate(armtht2); // rotate into arm coordinates
ctx2.fillRect(-110,70-armpos2-5,220,10);
ctx2.restore();
// cover overlapping edges of arm
ctx2.fillStyle = 'white';
ctx2.fillRect(3,3,240,22);
ctx2.fillRect(3,3,17,194);
ctx2.fillRect(170,3,60,194);
ctx2.fillRect(3,175,200,30);
ctx2.fillStyle = '#666666';
ctx2.fillRect(0,0,240,3);
ctx2.fillRect(0,0,3,200);
ctx2.fillRect(0,197,250,3);
// plot area
ctx1.fillStyle = '#555555';
ctx1.fillRect(220,25,260,150);
ctx1.fillStyle = 'white';
ctx1.fillRect(222,27,256,146);
ctx2.fillStyle = '#555555';
ctx2.fillRect(220,25,260,150);
ctx2.fillStyle = 'white';
ctx2.fillRect(222,27,256,146);
// axes
ctx1.font="12px Arial";
ctx1.fillStyle = 'black';
ctx1.fillText("Arm Position", 315, 190);
ctx1.fillText("Value", 185, 100);
ctx1.fillText("Scene", 75, 190);
ctx1.fillText("Current Color: ",260,18);
ctx2.font="12px Arial";
ctx2.fillStyle = 'black';
ctx2.fillText("Arm Angle", 315, 190);
ctx2.fillText("Arm", 190, 88);
ctx2.fillText("Pos", 190, 100);
ctx2.fillText("Scene", 75, 190);
// main image
ctx2.putImageData(img,222,27);
ctx2.fillStyle = 'white';
val = 256.*armtht2/3.14159;
ctx2.fillRect(222+val+2,27,256-val-2,146);
val2 = 146.*armpos2/150.;
ctx2.fillRect(222+val,27,3,146-val2);
// current color
ind = parseInt(armpos);
ctx1.fillStyle = 'rgb(' + parseInt(rval[ind]) +
',' + parseInt(gval[ind]) + ',' + parseInt(bval[ind]) + ')';
ctx1.fillRect(345,7,50,14);
// plot lines
ctx1.strokeStyle = '#ff0000';
ctx1.lineWidth = 3;
for (i = 1; i<armpos; i++)
{
ctx1.beginPath()
xpos = 222 + i*256./150.;
xpos0 = 222 + (i-1)*256./150.;
ctx1.moveTo(xpos0,173-rval[i-1]);
ctx1.lineTo(xpos,173-rval[i]);
ctx1.stroke();
}
ctx1.strokeStyle = '#00ff00';
for (i = 1; i<armpos; i++)
{
ctx1.beginPath()
xpos = 222 + i*256./150.;
xpos0 = 222 + (i-1)*256./150.;
ctx1.moveTo(xpos0,173-gval[i-1]);
ctx1.lineTo(xpos,173-gval[i]);
ctx1.stroke();
}
ctx1.strokeStyle = '#0000ff';
for (i = 1; i<armpos; i++)
{
ctx1.beginPath()
xpos = 222 + i*256./150.;
xpos0 = 222 + (i-1)*256./150.;
ctx1.moveTo(xpos0,173-bval[i-1]);
ctx1.lineTo(xpos,173-bval[i]);
ctx1.stroke();
}
// ctx1.stroke();
}
function clear2()
{
// white with border
ctx1.fillStyle = '#666666';
ctx1.fillRect(0,0,500,200);
ctx1.fillStyle = '#ffffff';
ctx1.fillRect(3,3,494,194);
ctx2.fillStyle = '#666666';
ctx2.fillRect(0,0,500,200);
ctx2.fillStyle = '#ffffff';
ctx2.fillRect(3,3,494,194);
}
init2();
</script>gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com2tag:blogger.com,1999:blog-5124010431457427473.post-31069557179818212222014-05-21T17:43:00.001-06:002014-05-22T15:06:55.551-06:00Quadcopter Stability and Neural Networks<script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>
<b style="font-size: small;">This is a fairly long post, so here's a TLDR: Quadcopters use an algorithm called a PID controller to not fall over in the air, but a more complicated algorithm involving neural networks may be just as stable if not more.</b><br />
<br />
<div style="text-align: center;">
<a href="#canvasjump">Skip to interactive part.</a></div>
<br />
<div style="text-align: center;">
<a href="https://github.com/bgreer/NNC">Code (in various stages of revision) on Github</a></div>
<br />
A few times now, I've mentioned my desire to write a post about the hexacopter I built long ago. The project started almost two years ago and was never really finished. The goal was not just to build a hexacopter for my own personal use, but to build and program my own flight controller from scratch.<br />
<br />
Building a hexacopter or quadcopter using a standard pre-programmed flight controller is not too hard. You need a frame, a battery, motors, propellers, speed controllers (motor drivers), a flight controller, and other miscellaneous electronics (radio, gps, etc). The flight controller tells the speed controllers how fast each motor and propeller should be spinning. It monitors the orientation of the craft using an accelerometer and gyroscope and continually adjusts the motor speeds to keep the platform stable. If for some reason the copter tips to the left, the flight controller needs to bump up the thrust on the left side to rotate the body back to a level position. The flight controller can also intentionally tip the body in some direction in response to commands received from the radio link.<br />
<br />
Now, why have I used the word quadcopter (four propellers) in the title of this post instead of hexacopter (six propellers)? Because in terms of the flight controller, there is little difference. Sure, the process of assembling a hexacopter is different from that of a quadcopter, but only in that the frame is a little different and there are two more of a couple of components. Quadcopter is the more generally recognized term, so I'll stick with that from now on.<br />
<br />
<b>1 - A Problem with a Solution</b><br />
<br />
So what's the most important part about making your own flight controller? Giving it the tools to keep the quadcopter flying. Handling radio commands, GPS coordinates, data logging, and pathfinding are really all extraneous tasks compared to the ability to stay in the air without flipping over and nose-diving into the ground. Any amount of force from wind or otherwise will cause the quadcopter to tip in a random direction. It's up to the flight controller to detect these forces and compensate for them by adjusting how fast each propeller turns.<br />
<br />
To simplify my discussion on how a quadcopter can stabilize itself, I'll reduce the problem some. Imagine a solid rod with a motor and propeller pointed up at one end, and an immovable hinge at the other end. Gravity pulls the arm down, and the thrust produced by the motor and propeller pushes the arm up. A sensor can measure the current angle of the arm as well as a few other related quantities. The goal is to develop an algorithm that will cause the arm to spend most of its time perfectly horizontal. The only way to influence the position of the arm is to adjust the motor speed, so you can think of the algorithm as having a single output.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-zc04-TrqR_M/U30Z8sMK2KI/AAAAAAAABbE/fiEZ7-eCjyQ/s1600/fig01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-zc04-TrqR_M/U30Z8sMK2KI/AAAAAAAABbE/fiEZ7-eCjyQ/s1600/fig01.png" height="206" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-qxJmRHdnez0/U30a3Rtcy0I/AAAAAAAABbU/W2zZbA3a7jE/s1600/fig02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-qxJmRHdnez0/U30a3Rtcy0I/AAAAAAAABbU/W2zZbA3a7jE/s1600/fig02.png" height="230" width="600" /></a></div>
<br />
<br />
This situation has many similarities to the problem of quadcopter stability, but also a few key differences. The primary difference is that the arm can only rotate itself around the hinge in one direction, while a quadcopter can leverage motors on opposite sides of the body to have controlled rotation in every direction. To make this simple problem more applicable, I'll allow the motor to spin either way, providing thrust either up or down.<br />
<br />
Instead of attempting to derive an optimal algorithm to achieve this goal based on the underlying physics, I'll go ahead and jump to a well-accepted solution: the <a href="http://en.wikipedia.org/wiki/PID_controller">PID controller</a>. This is an algorithm that takes sensor measurements of the arm and returns a motor speed. This process of turning sensor measurements into motor speed is typically done a few hundred times per second. How does a PID controller work? The best way to explain it is to explain the name:<br />
<br />
<b> P</b> = Proportional<br />
Adjust the motor speed proportionally to how far the arm is from the target position. The farther the arm is below level, the higher the motor speed is set. If the arm is perfectly level, the proportional term has nothing to add.<br />
<b> I</b> = Integral<br />
Adjust the motor speed to account for any systematic difference between the arm and the target. If the arm tends to droop below the target position, increase the motor speed. The quantity used to determine this is the integrated difference between the arm position and the target position.<br />
<b> D</b> = Derivative<br />
If the arm is rapidly approaching the target, slow it down to avoid overshooting. The derivative of the difference between the arm position and the target position is used here.<br />
<br />
These three components of the algorithm are computed at every time step and added together to come up with the 'optimal' motor speed that will get the arm to the right position. Each component has a tuning parameter that can increase or decrease its relative importance. Given proper tuning parameters, the PID controller can be a very effective method of stabilization. The whole update fits in a few lines of code, as sketched below.<br />
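<br />
Here is a minimal sketch of the whole controller, assuming a fixed update rate and a target angle of zero; the structure and gain names are mine, not some canonical implementation:<br />
<pre>
// Minimal PID update, called at a fixed rate with the latest measurement.
struct PID
{
    double kp, ki, kd;          // tuning parameters for P, I, and D
    double integral = 0.0;      // running sum of the error (I term)
    double prevError = 0.0;     // last error, for the derivative (D term)

    double update(double measuredAngle, double dt)
    {
        double error = 0.0 - measuredAngle;      // target is horizontal
        integral += error * dt;
        double deriv = (error - prevError) / dt;
        prevError = error;
        // The three components, weighted and summed into a motor command.
        return kp * error + ki * integral + kd * deriv;
    }
};
</pre>
Feed update() the latest angle measurement a few hundred times per second and send the result to the motor; that's the entire controller.<br />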
<br />
Why does the PID controller work? How do these three components make sense? To answer these questions, we start by describing the stability problem with math. The governing equation for the arm rotating about the base hinge is Newton's Second Law for Rotation:<br />
<br />
\[ \sum_i \mathscr{T}_i = I \ddot{\theta} \]<br />
<br />
where $\mathscr{T}_i$ is a torque exerted on the arm about the hinge, $I$ is the moment of inertia (has to do with the mass of the arm and its size), $\theta$ is the position/angle of the arm, and the double dots indicate the second derivative with respect to time. The left hand side is a sum over many torques because there will be multiple forces at work. In this language, our goal is to have $\theta = 0$, and we can apply a torque $\mathscr{T}$ using the motor and propeller to get us there. Gravity and wind can also apply torques, throwing the arm away from the solution we want.<br />
<br />
The first torque to include is gravity, which exerts itself the most when the arm is horizontal:<br />
<br />
\[ \mathscr{T}_{grav} = -g m L \mathrm{cos} \theta \]<br />
<br />
Here, we approximate the arm and motor as a mass $m$ at a length $L$ away from the hinge. I could elaborate this model and go back to express the moment of inertia $I$ in terms of the mass and length, but it adds little to the discussion. The next source of torque is that exerted by the motor and propeller. If we use the PID controller, the torque applied is a function of the arm angle along with its derivative and integral. Since the target angle is zero, the error the controller acts on is just $-\theta$, and I'll write the three gains as $K_P$, $K_I$, and $K_D$ so they don't clash with the moment of inertia $I$:<br />
<br />
\[ \mathscr{T}_{PID} = f \left( \theta, \dot{\theta}, \int \! \theta \, dt \right) = -K_P \theta - K_I \int_0^t \! \theta \, dt' - K_D \dot{\theta} \]<br />
<br />
Combining these pieces together, we get an equation describing the battle between gravity and the PID controller to stabilize the arm:<br />
<br />
\[ I \ddot{\theta} = -g m L \mathrm{cos} \theta - K_P \theta - K_I \int_0^t \! \theta \, dt' - K_D \dot{\theta} \]<br />
<br />
The $\mathrm{cos} \theta$ term makes things a little complicated, but if we assume the arm <a href="http://en.wikipedia.org/wiki/Small-angle_approximation">won't deviate very far</a> from horizontal, we can approximate this term as constant. Rearranging terms and collecting constants for readability, we end up with a nice textbook-looking 2nd order inhomogeneous <a href="http://en.wikipedia.org/wiki/Integro-differential_equation">integro-differential equation</a>:<br />
<br />
\[ a \ddot{\theta} + b \dot{\theta} + c \theta + d \int_0^t \! \theta \, dt' = e \]<br />
<br />
If there's one thing I've learned in my many years of schooling about integro-differential equations (and there was in fact only one thing I learned), it's that they are a pain and should be solved numerically. But before we give up on mathematical beauty and resort to number crunching, we can gain a bit of intuition for how the system acts under various conditions.<br />
<br />
If we turned off the PID controller completely, we would end up with a very simple equation to solve. Unfortunately, this solution involves the arm rotating a significant ways away from horizontal, which breaks our earlier assumption. In that case, the equation we would have solved would no longer be valid. Instead, we will start by turning off the I and D components of the PID controller. With that, we are left with:<br />
<br />
\[ a \ddot{\theta} + c \theta = e \]<br />
<br />
This is a simple inhomogeneous second order differential equation that has a correspondingly simple solution:<br />
<br />
\[ \theta(t) = A \mathrm{cos}(\sqrt{c/a} t + \tau) + e/c \]<br />
<br />
This is a simple <a href="http://en.wikipedia.org/wiki/Harmonic_oscillator">harmonic oscillator</a>. Depending on the initial conditions, the arm will bounce endlessly around some angle a little below horizontal. Plotting the arm angle as a function of time for some reasonable values of the various coefficients, we can see that this solution is not exactly optimal:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Bxw0ViHiInA/U3rAkHDkyKI/AAAAAAAABac/3HPXX0At-F8/s1600/arm_plot01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-Bxw0ViHiInA/U3rAkHDkyKI/AAAAAAAABac/3HPXX0At-F8/s1600/arm_plot01.png" height="360" width="600" /></a></div>
<br />
<br />
The next step in understanding the full PID solution is to use the P and D terms, but keep the I term off. This produces a similarly simple equation that can be solved using standard freshman-level differential equation solving methods:<br />
<br />
\[ a \ddot{\theta} + b \dot{\theta} + c \theta = e \]<br />
<br />
\[ \theta(t) = A e^{C_- t} + B e^{-C_+ t} + e/c \]<br />
\[ C_{\pm} = \frac{\sqrt{b^2 - 4 a c} \pm b}{2 a} \]<br />
<br />
While this solution might seem much more complicated than the previous, it is primarily because I have decided to express what could be a simple sine wave as a <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Relation_to_the_complex_exponential_function">complex exponential</a>. Plotting this solution for a particular set of initial conditions demonstrates the character:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-siGxGMY5QOY/U3rAsIxCBVI/AAAAAAAABak/Pe0xoSaEbk8/s1600/arm_plot02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-siGxGMY5QOY/U3rAsIxCBVI/AAAAAAAABak/Pe0xoSaEbk8/s1600/arm_plot02.png" height="360" width="600" /></a></div>
<br />
<br />
Including the D term in the PID controller has helped damp out the oscillations from the previous example. A lot is known about <a href="http://en.wikipedia.org/wiki/Damping">damped</a> oscillators, including the fact that certain values of the gains $K_P$ and $K_D$ will cause the system to be 'critically damped'. This means that the system will achieve a steady state as fast as possible. Below is a plot showing three different cases: under-damped, critically damped, and over-damped.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-UUnKVnY3Ejo/U3rAwchvUsI/AAAAAAAABas/fjfbF0nVxzQ/s1600/arm_plot03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-UUnKVnY3Ejo/U3rAwchvUsI/AAAAAAAABas/fjfbF0nVxzQ/s1600/arm_plot03.png" height="360" width="600" /></a></div>
<br />
<br />
This version of the algorithm does a decent job at stabilizing the arm, but still tends to settle on an angle slightly below horizontal. To fix this, we have to turn back on the I component of the PID controller. The full integro-differential equation can be solved for certain values of the coefficients, but it's difficult to gain a fundamental understanding of the system by looking at the solution. Instead, it's better to reason out what the final term does.<br />
<br />
By looking at the previous solutions, we can see that even when the arm was stable, it would settle at an angle below horizontal (the solid black line). If we sum up how far below horizontal the arm has been over time, that sum keeps growing as long as the arm sits below horizontal. The purpose of the I component of the PID controller is to bump up the motor speed in proportion to this accumulated error. If the extra thrust brings the arm back to horizontal, the accumulated error stops growing and simply holds its last value. In this way, the I component settles on a constant offset that counteracts the systematic droop. Comparing a properly tuned PID controller to the critically damped PD controller from before, we can see how the new component affects the solution.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-F7z6ohTAwWk/U3rA1NO0R9I/AAAAAAAABa0/Kb_-sSGhRGE/s1600/arm_plot04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-F7z6ohTAwWk/U3rA1NO0R9I/AAAAAAAABa0/Kb_-sSGhRGE/s1600/arm_plot04.png" height="360" width="600" /></a></div>
<br />
<br />
Finding the values of the PID coefficients that give an optimal solution such as the one shown requires either knowing the mechanical properties of the system to a high precision, or a decent amount of guesswork. The usual method for finding the appropriate coefficients for a real world system is the latter. The procedure is similar to what I have done here: turn on one component of the PID controller at a time, tuning them independently until the solution is as good as possible without the next component.<br />
<br />
<br />
<b>2 - Solution to the Wrong Problem</b><br />
<br />
To show how the PID controller is useful for stabilizing a system, I have had to simplify the problem and ignore various effects. This has allowed me to solve the governing equations and explain why the PID controller works, but my 'solution' is then only truly applicable to the idealized scenario. What happens to the PID controller when we add some complications, like wind or noise? Wind will act as a random torque on the arm in either direction at any time. The algorithm can't predict the wind, and it can't even measure the wind directly. All it can do is notice that the system has suddenly and unexpectedly gone off course and that corrections need to be made. Any sensor used to measure the arm angle and inform the algorithm will have noise, a random variation between the true answer and the reported answer. How well can the PID controller handle incorrect information? Finally, my model has assumed that the algorithm can assimilate sensor measurements and adjust the thrust instantaneously. In the real world, there will be a delay between the arm moving and the sensor picking up on it, a delay due to the computation of the motor speed correction, and a delay between telling the motor to change speed and the propeller actually generating a different amount of force.<br />
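<br />
To make these effects concrete, here is roughly how they can be bolted onto a simulated arm. The constants loosely mirror the ones in the source of the interactive demo below, and the bare proportional controller is a stand-in for whatever controller is under test:<br />
<pre>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

// Uniform random number in [-0.5, 0.5], used for both wind and noise.
double frand() { return (double)rand() / RAND_MAX - 0.5; }

int main()
{
    double theta = 0.2, omega = 0.0; // arm angle and angular velocity
    double thrust = 0.0;             // what the propeller actually produces
    double wind = 0.0;
    const double dt = 0.01;

    for (int step = 0; step < 10000; step++)
    {
        // NOISE: the controller only ever sees a corrupted measurement.
        double measTheta = theta + 0.05 * frand();

        // A bare proportional controller as a stand-in (swap in the
        // full PID struct from earlier to experiment).
        double command = -2.0 * measTheta;

        // DELAY: the command leaks into the real thrust through a
        // first-order lag instead of taking effect instantly.
        thrust += 0.2 * (command - thrust);

        // WIND: a slowly wandering random torque.
        wind = 0.99 * wind + 0.1 * frand();

        // Integrate I*theta'' = -g m L cos(theta) + motor + wind,
        // with all the physical constants folded into the numbers.
        double alpha = -0.3 * cos(theta) + thrust + 0.04 * wind;
        omega += alpha * dt;
        theta += omega * dt;
    }
    printf("final angle: %f rad\n", theta);
    return 0;
}
</pre>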
<br />
These three non-ideal effects (wind, noise, delay) are difficult to model mathematically. It's certainly possible to do so, but the amount of fundamental insight gained from such an analytic solution is limited. Instead, we can turn to a numerical simulation of our system. I've written a simple JavaScript code that simulates an arm and motor just as I have described in the equations above, but it can also add wind, noise, and delay in varying amounts to the system. I've initialized the PID coefficients to a reasonably stable solution so you can press RUN and see what happens. Clicking the circles next to WIND, NOISE, and DELAY will increase the amount of each present in the system, and clicking the POKE button on the left will nudge the arm in a random direction to test the stability. Clicking the circles next to P, I, and D will toggle each component. The sliders next to them determine the coefficients, but will only apply to the system if the circle next to them is green. Pressing RESET will put the arm back at horizontal, but will keep the PID coefficients the same.<br />
<br />
If you are feeling adventurous, try setting the PID coefficients randomly and then tuning the PID controller from scratch. If the arm just flails about randomly, press RESET to calm it down. The buttons next to PID and NN determine which controller is being used to stabilize the arm. I'll describe what the NN controller is doing in the next section.<br />
<br />
<a href="https://www.blogger.com/blogger.g?blogID=5124010431457427473" name="canvasjump"></a>
<br />
<div style="text-align: center;">
<canvas height="300" id="canv-app" width="500">Canvas not working!</canvas></div>
<br />
<b><br /></b>
<b>3 - A Smarter Solution</b><br />
<br />
The PID controller does a decent job at stabilizing the arm under certain conditions. If it gets pushed around too much by noise, gets too many flawed measurements, has too large of a delay, or simply ends up outside its comfort zone, it tends to have trouble. Is there another option?<br />
<br />
I, like many others, have recently finished up with <a href="https://www.coursera.org/course/ml">Andrew Ng's Coursera course on Machine Learning</a>. A significant fraction of my PhD research has utilized large nonlinear optimization codes, so I found most of the details of machine learning presented in the course to be pretty straightforward. The biggest concept I was able to take away from the course was that of an <a href="http://en.wikipedia.org/wiki/Artificial_neural_network">artificial neural network</a>. There seems to be a lot of mystery and awe when talking about artificial neural networks in a casual setting, but I think this is largely due to the name. If I were to rename neural networks to something a little more down-to-earth, I would call them Arbitrary Nonlinear Function Approximators. Not nearly as magical sounding, but a little more to the point. But until I am the king of naming things, I'll call them neural networks.<br />
<br />
What does a neural network have to do with stability? The PID controller was attempting to model the 'optimal' response to a set of inputs that would stabilize a rotating arm. We know that the PID controller is not a perfect solution, but it seems to have some ability to mimic a perfect solution in some cases. We might imagine that the truly optimal solution would be something far more complex than the PID controller, but we don't really have a way of knowing what that solution is. Not saying we can never know, it's just really hard to figure out. A neural network is a useful tool for approximating an unknown nonlinear function, as long as we have some examples of what the function looks like at various points.<br />
<br />
A neural network works by taking a set of inputs, combining them back and forth in all kinds of nonlinear ways, and producing a final output. The way in which inputs are combined with each other can be varied incrementally, allowing the network to 'learn' how to appropriately combine them to match a specified output. Given enough examples of the function it needs to approximate, the neural network will eventually (and hopefully) converge on the right answer. This is of course a gross simplification of how a neural network works, but the point of this post is not to delve into the specifics of sigmoid functions and backpropagation. If you want to know more, use the internet! That's what I did.<br />
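<br />
To make that a little less abstract, here is the forward pass of the sort of network used in this post: three inputs, one layer of sigmoid neurons, one sigmoid output. This mirrors the structure of the small network embedded in this page's demo; the weights themselves are whatever training produces:<br />
<pre>
#include <math.h>

#define NUM_IN  3   // angle, its derivative, and its integral
#define NUM_HID 20  // hidden sigmoid neurons

double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

// w0[h][0..2] weight the inputs into hidden neuron h; w0[h][3] is its bias.
// w1[0..NUM_HID-1] weight the hidden outputs; w1[NUM_HID] is the output bias.
double nnRun(double w0[NUM_HID][NUM_IN + 1], double w1[NUM_HID + 1],
             const double in[NUM_IN])
{
    double out = w1[NUM_HID];
    for (int h = 0; h < NUM_HID; h++)
    {
        double z = w0[h][NUM_IN];
        for (int i = 0; i < NUM_IN; i++)
            z += w0[h][i] * in[i];
        out += w1[h] * sigmoid(z);
    }
    return sigmoid(out); // scaled to a motor command by the caller
}
</pre>
Training is then just the business of nudging the w0 and w1 weights until this function matches the examples it is given.<br />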
<br />
So let's say a neural network could replace the PID controller. How do we train the network to know what the right answer is? The only example we have so far to learn from is the PID controller, but if that's the only information we give it, we might as well just use the PID controller and avoid the confusion the neural network will add. We need a way of letting the neural network controller (NNC) explore possibilities and recognize when it has come across a good part of the solution. To do this, we use reinforcement learning. Specifically, I've generally followed the procedure put forth in this paper by <a href="http://webdocs.cs.ualberta.ca/~vanhasse/papers/Reinforcement_Learning_in_Continuous_Action_Spaces.pdf">Hasselt and Wiering</a> for using neural networks to perform reinforcement learning in a continuous action space. It was possible to do this with a discrete action space, but I wanted to generalize the problem for other projects.<br />
<br />
With the knowledge that an artificial neural network can be used to approximate a complicated and unknown function given examples of the function for different inputs, how can we apply this to the problem of learning stability? The idea behind reinforcement learning is that you are training an 'actor' (the part of the algorithm that looks at inputs and decides on the optimal action) by giving it rewards or punishments. The reward is a single number that communicates how good the actor's previous action was. For example, you could reward the actor in this situation one point for every second that it keeps the arm near horizontal, but take away a hundred points if the arm falls down. The actor is continually building up an idea of the appropriate function that translates the current state of the arm into an optimal action through trial and error. At every step, the actor either picks the action it thinks is optimal based on its current neural network or picks a random action close to the optimal guess. Since the neural network can only provide an approximate answer for the optimal action while learning is in progress, the randomness allows for exploration of new possibilities that may be better. After performing an action, an appropriate reward is returned. If this reward is better than what the actor anticipated getting for a given action, then the action taken is reinforced. There are a few other details to this method that I'm not mentioning, but if you are interested, start by reading up on <a href="http://en.wikipedia.org/wiki/Temporal_difference_learning">Temporal Difference Learning</a>.<br />
<br />
Given enough attempts to find the optimal action to achieve stability, the artificial neural network controller will (hopefully) converge on a function that can take the same inputs given to the PID controller and output an optimal motor speed to keep the arm stable.<br />
<br />
<b>4 - Learning vs Knowing</b><br />
<br />
I wrote up a simple code to perform reinforcement learning on a simulation of the arm using the <a href="http://leenissen.dk/fann/wp/">FANN library</a> to simplify my use of artificial neural networks. As inputs, I provided the same quantities that the PID controller uses (current angle, time derivative of angle, integral of angle) and had the desired motor speed as the only output. To initialize the NN controller, I preconditioned the actor by teaching it to approximate a linear combination of the inputs, resulting in a standard PID controller. <br />
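<br />
My full training loop involves a lot of bookkeeping, but the heart of a single learning step, roughly in the spirit of the Hasselt and Wiering approach, looks like this. The fann_run() and fann_train() calls are FANN's actual C API; the constants, the reward function, and the toy simulateStep() are simplified illustrations of my setup:<br />
<pre>
#include <floatfann.h>
#include <math.h>
#include <stdlib.h>

#define GAMMA   0.95f  // discount factor for future rewards
#define EXPLORE 0.10f  // width of random exploration around the actor

float frandf() { return (float)rand() / RAND_MAX - 0.5f; }

// Per the example above: points for staying near horizontal,
// a large penalty if the arm falls.
float reward(const fann_type* s)
{
    if (fabsf(s[0]) < 0.1f) return 1.0f;
    if (fabsf(s[0]) > 1.5f) return -100.0f;
    return 0.0f;
}

// One Euler step of the toy arm physics under the chosen motor action.
// State is (angle, derivative, integral), matching the network inputs.
void simulateStep(const fann_type* s, fann_type action, fann_type* next)
{
    const float dt = 0.01f;
    float alpha = -0.3f * cosf(s[0]) + action;
    next[1] = s[1] + alpha * dt;
    next[0] = s[0] + next[1] * dt;
    next[2] = s[2] + s[0] * dt;
}

// One learning step: the 'actor' net maps state -> motor speed, the
// 'critic' net maps state -> expected long-term reward.
void learnStep(struct fann* actor, struct fann* critic, fann_type* state)
{
    // Try an action near the actor's current best guess (exploration).
    fann_type action = fann_run(actor, state)[0] + EXPLORE * frandf();

    fann_type nextState[3];
    simulateStep(state, action, nextState);
    float r = reward(nextState);

    // Temporal-difference error: did this turn out better than expected?
    float vNow = fann_run(critic, state)[0];
    float vNext = fann_run(critic, nextState)[0];
    float tdError = r + GAMMA * vNext - vNow;

    // Always nudge the critic's estimate toward what actually happened.
    fann_type target = r + GAMMA * vNext;
    fann_train(critic, state, &target);

    // Only reinforce the actor when the action beat expectations.
    if (tdError > 0.0f)
        fann_train(actor, state, &action);
}
</pre>
The key trick is the last conditional: the actor is only pulled toward actions that turned out better than the critic expected, which is what lets it improve past its PID-shaped starting point.<br />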
<br />
The following animation shows the progression of learning over many iterations. Since the actor starts without knowing anything about the appropriate action to take, it spends a lot of time completely failing to keep the arm anywhere near horizontal. As more simulations are run and the actor explores new possibilities, it picks up on the rewards reaped for keeping the arm in the right place and starts to make a concerted effort to keep the arm in line.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-K12Wl0Acpi0/U305SwOfrTI/AAAAAAAABbk/lmH4zJrhYP0/s1600/anim.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-K12Wl0Acpi0/U305SwOfrTI/AAAAAAAABbk/lmH4zJrhYP0/s1600/anim.gif" /></a></div>
<br />
<br />
It's hard to quantify the performance of this new controller relative to the PID controller in a way that is simple to understand and appropriate for a post like this. Instead of presenting you, the reader, with a definitive answer for which controller is better, I am providing you with the ability to try out a neural network controller that I have trained. If you scroll back up to the PID controller app and click the NN button, you will switch the algorithm from PID to NN (neural network). See how well it can keep the arm horizontal under various conditions. The neural network controller isn't doing any learning in this app; it is only applying lessons it has been taught beforehand. I trained it on a simulation almost identical to the one in the app here, and ran it through about 10k learning iterations to settle on an answer. Running on a single core of my desktop, the training procedure took around 5 minutes.<br />
<br />
As you may find, the new controller performs just as well as the PID controller at keeping the arm horizontal. When the PID controller is exposed to signal noise, it tends to amplify this noise in the output motor speed. The NN controller seems to react a little nicer to noise, allowing less noise on the final output. The NN controller also seems to be a little lighter on the trigger for correcting sudden jumps in arm angle. The PID controller will respond to a sudden change in state with a sudden change in output, while the NN controller seems to have a smoother response curve.<br />
<br />
It's hard to determine exactly why the NN controller acts the way it does, other than to say it has found an optimal solution to the training data I provided. While there is some mystery around how to interpret the non-linear solution the controller has found, I hope this post has cleared up some of the mystery around how the controller operates and how it is able to 'learn'. In the end, it's all about twisting and turning a complicated function into something that best matches the training data.<br />
<br />
So what does all of this mean for quadcopters? Should they all implement neural network stability algorithms with the ability to learn from current conditions? The process of learning is fairly computationally intense and can involve quite a few failures, so should they instead use pre-trained networks like in my app? I'm not sure. Neural networks are much more difficult to deal with than standard linear controllers. Not only are they harder to implement and tune, they are harder to understand. I believe that one of the greatest things about the growing quadcopter / <a href="http://en.wikipedia.org/wiki/Unmanned_aerial_vehicle">UAV</a> field is the falling cost of entry, both monetary and intellectual. I'm not advocating for everyone to start replacing simple controllers that work pretty well with much more complicated ones that work slightly better; I would leave that to the researchers who are <a href="https://www.google.com/search?q=quadcopter+neural+network">working out how best to implement</a> such complicated systems. Instead, I advocate people using this field as a testbed for their own experiments. There are many levels of complexity to an unmanned vehicle, and a lot can be gained from picking one and thoroughly examining it.<br />
<br />
<br />
<script>
function getOffset(e){var t=0;var n=0;while(e&&!isNaN(e.offsetLeft)&&!isNaN(e.offsetTop)){t+=e.offsetLeft-e.scrollLeft;n+=e.offsetTop-e.scrollTop;e=e.offsetParent}return{top:n,left:t}}function init(){canv=document.getElementById("canv-app");ctx=canv.getContext("2d");clear();mode=0;time=0;dt=.03;num_hist=100;hist_index=0;x_hist=Array(num_hist);act_hist=Array(num_hist);wind_hist=Array(num_hist);x=(Math.random()-.5)*.08;v=(Math.random()-.5)*.04;a=0;integral=0;action=0;thrust=0;P=2;on_p=true;I=.01;on_i=false;D=2;on_d=true;logp=Math.log(P)/log10;logi=Math.log(I)/log10;logd=Math.log(D)/log10;init_weights();windlevel=0;wind=0;wind_smooth=0;noiselevel=0;delaylevel=0;time=0;ptime=0;step=0;simrun=false;draw();console.log("done with init.");canv.addEventListener("click",handleclick);canv.addEventListener("mousedown",handlemousedown);canv.addEventListener("mousemove",handlemousemove);canv.addEventListener("mouseup",handlemouseup);console.log("event listeners attached.")}function handlemousedown(e){var t=e.pageX-getOffset(canv).left-document.body.scrollLeft;var n=e.pageY-getOffset(canv).top-document.body.scrollTop;dragging=true;slider=-1;if(t>=100&&t<=400&&n>=206&&n<=224)slider=0;if(t>=100&&t<=400&&n>=236&&n<=254)slider=1;if(t>=100&&t<=400&&n>=266&&n<=284)slider=2;if(slider>=0)moveSlider(t-100)}function handlemouseup(e){dragging=false}function handlemousemove(e){if(dragging&&slider>=0)moveSlider(e.pageX-getOffset(canv).left-document.body.scrollLeft-100)}function moveSlider(e){var t,n;n=e/300;if(n<0)n=0;if(n>1)n=1;n=n*5-3;t=Math.pow(10,n);if(slider==0){P=t;logp=n}if(slider==1){I=t;logi=n}if(slider==2){D=t;logd=n}if(simrun==false){clear();draw()}}function handleclick(e){var t=e.pageX-getOffset(canv).left-document.body.scrollLeft;var n=e.pageY-getOffset(canv).top-document.body.scrollTop;console.log("click: "+t+", "+n);if(t>=10&&t<=70&&n>=170&&n<=198){simrun=true;run()}if(t>=10&&t<=70&&n>=200&&n<=228&&simrun==true){v=(Math.random()-.5)*.4;if(v<0)v-=.1;if(v>=0)v+=.1}if(t>=10&&t<=70&&n>=230&&n<=258&&simrun==true)reset();if(t>=10&&t<=70&&n>=260&&n<=288&&simrun==true){simrun=false;draw()}if(Math.sqrt(Math.pow(t-420,2)+Math.pow(n-214,2))<=10)on_p=!on_p;if(Math.sqrt(Math.pow(t-420,2)+Math.pow(n-244,2))<=10){integral=0;on_i=!on_i}if(Math.sqrt(Math.pow(t-420,2)+Math.pow(n-274,2))<=10)on_d=!on_d;if(Math.sqrt(Math.pow(t-260,2)+Math.pow(n-180,2))<=10){windlevel+=1;if(windlevel==3)windlevel=0}if(Math.sqrt(Math.pow(t-330,2)+Math.pow(n-180,2))<=10){noiselevel+=1;if(noiselevel==3)noiselevel=0}if(Math.sqrt(Math.pow(t-400,2)+Math.pow(n-180,2))<=10){delaylevel+=1;if(delaylevel==3)delaylevel=0}if(Math.sqrt(Math.pow(t-110,2)+Math.pow(n-180,2))<=10)mode=0;if(Math.sqrt(Math.pow(t-170,2)+Math.pow(n-180,2))<=10)mode=1;if(simrun==false){clear();draw()}}function reset(){x=(Math.random()-.5)*.08;v=(Math.random()-.5)*.04;a=0;action=0;integral=0;thrust=0;wind=0;wind_smooth=0;x_hist=Array(num_hist);act_hist=Array(num_hist);wind_hist=Array(wind_hist)}function 
run(){if(simrun==true)timeout=setTimeout("run()",5);delaycoef=Math.pow(7,-delaylevel*.5);measx=x+(Math.random()-.5)*.05*noiselevel*1.3;measv=v+(Math.random()-.5)*.052*noiselevel*1.3;action=0;if(mode==0){if(on_p)action-=P*measx;if(on_i)action-=I*integral;if(on_d)action-=D*measv}else{action=1.5*(nn(measx/Math.PI,measv/20,integral/1e4)-.5)}thrust=delaycoef*action+(1-delaycoef)*thrust;x+=v*dt;if(x>Math.PI/2){x=Math.PI/2-.001;v=0}if(x<-Math.PI/2){x=-Math.PI/2+.001;v=0}integral+=measx;a=-.3*Math.cos(x)+thrust;if(Math.abs(v)>.001)a-=.1*v*v*v/Math.abs(v);a+=wind_smooth*.04*windlevel;v+=a*dt;wind+=Math.random()-.5;wind*=.99;wind_smooth=.99*wind_smooth+.01*wind;plength=20*Math.cos(ptime);pspeed=thrust*30;time+=dt;ptime+=dt*pspeed;step+=1;if(step%10==0){x_hist[hist_index]=x;act_hist[hist_index]=thrust;wind_hist[hist_index]=wind_smooth*windlevel;hist_index++;if(hist_index==num_hist)hist_index=0;step=0;clear();draw()}}function draw(){ctx.save();ctx.beginPath();ctx.arc(40,80,20,0,2*Math.PI,false);ctx.lineWidth=2;ctx.strokeStyle="#bbbbbb";ctx.stroke();ctx.translate(40,80);ctx.rotate(-x);ctx.fillStyle="#3f5872";ctx.fillRect(0,-3,100,6);ctx.fillStyle="black";ctx.fillRect(92,-3,8,-8);ctx.fillStyle="#672088";ctx.fillRect(96-plength,-10,plength*2,-2);ctx.restore();ctx.strokeStyle="#bbbbbb";ctx.lineWidth=2;ctx.moveTo(180,80);ctx.lineTo(380,80);ctx.stroke();ctx.lineWidth=1;ctx.moveTo(180,30);ctx.lineTo(180,130);ctx.stroke();ctx.strokeStyle="#3f5872";ctx.lineWidth=3;var e,t;for(var n=0;n<num_hist-1;n++){e=hist_index-n-1;if(e<0)e+=num_hist;t=hist_index-n-2;if(t<0)t+=num_hist;ctx.beginPath();ctx.moveTo(180+n*2,-80*x_hist[e]+80);ctx.lineTo(180+(n+1)*2,-80*x_hist[t]+80);ctx.stroke()}ctx.font="Bold 10px Arial";ctx.fillStyle="#3f5872";ctx.fillText("ANGLE = "+(x*57.2957795131).toFixed(2)+"°",390,60);ctx.strokeStyle="#672088";for(var n=0;n<num_hist-1;n++){e=hist_index-n-1;if(e<0)e+=num_hist;t=hist_index-n-2;if(t<0)t+=num_hist;ctx.beginPath();ctx.moveTo(180+n*2,-50*act_hist[e]+80);ctx.lineTo(180+(n+1)*2,-50*act_hist[t]+80);ctx.stroke()}ctx.font="Bold 10px Arial";ctx.fillStyle="#672088";ctx.fillText("THRUST",390,80);if(windlevel>0){ctx.strokeStyle="#208288";ctx.lineWidth=2;for(var n=0;n<num_hist-1;n++){e=hist_index-n-1;if(e<0)e+=num_hist;ctx.beginPath();ctx.moveTo(180+n*2,-5*wind_hist[e]+80);e=hist_index-n-2;if(e<0)e+=num_hist;ctx.lineTo(180+(n+1)*2,-5*wind_hist[e]+80);ctx.stroke()}ctx.fillStyle="#208288";ctx.fillText("WIND",390,100)}ctx.font="Bold 12px Arial";ctx.fillStyle="#76c7e5";ctx.fillRect(10,170,60,28);if(simrun==false)ctx.fillStyle="black";else ctx.fillStyle="gray";ctx.fillText("RUN",12,196);ctx.fillStyle="#76e5c2";ctx.fillRect(10,200,60,28);if(simrun==false)ctx.fillStyle="gray";else ctx.fillStyle="black";ctx.fillText("POKE",12,226);ctx.fillStyle="#d4e576";if(Math.abs(x)>.8)ctx.fillStyle="#e2f867";ctx.fillRect(10,230,60,28);if(simrun==false)ctx.fillStyle="gray";else ctx.fillStyle="black";ctx.fillText("RESET",12,256);ctx.fillStyle="#e59a76";ctx.fillRect(10,260,60,28);if(simrun==false)ctx.fillStyle="gray";else 
ctx.fillStyle="black";ctx.fillText("STOP",12,286);ctx.strokeStyle="#bbbbbb";ctx.lineWidth=5;ctx.moveTo(100,214);ctx.lineTo(400,214);ctx.stroke();ctx.moveTo(100,244);ctx.lineTo(400,244);ctx.stroke();ctx.moveTo(100,274);ctx.lineTo(400,274);ctx.stroke();ctx.lineWidth=1;ctx.strokeStyle="#dddddd";for(j=0;j<6;j++){ctx.moveTo(100+j*300/5,204);ctx.lineTo(100+j*300/5,224);ctx.stroke();ctx.moveTo(100+j*300/5,234);ctx.lineTo(100+j*300/5,254);ctx.stroke();ctx.moveTo(100+j*300/5,264);ctx.lineTo(100+j*300/5,284);ctx.stroke()}ctx.fillStyle="#555555";ctx.fillRect(97+(logp+3)*300/5,205,5,16);ctx.fillRect(97+(logi+3)*300/5,235,5,16);ctx.fillRect(97+(logd+3)*300/5,265,5,16);ctx.strokeStyle="#999999";ctx.lineWidth=3;if(on_p)ctx.fillStyle="#83e171";else ctx.fillStyle="#e1aa9d";if(mode==1)ctx.fillStyle="#dddddd";ctx.beginPath();ctx.arc(420,214,10,0,2*Math.PI,false);ctx.stroke();ctx.fill();if(on_i)ctx.fillStyle="#83e171";else ctx.fillStyle="#e1aa9d";if(mode==1)ctx.fillStyle="#dddddd";ctx.beginPath();ctx.arc(420,244,10,0,2*Math.PI,false);ctx.stroke();ctx.fill();if(on_d)ctx.fillStyle="#83e171";else ctx.fillStyle="#e1aa9d";if(mode==1)ctx.fillStyle="#dddddd";ctx.beginPath();ctx.arc(420,274,10,0,2*Math.PI,false);ctx.stroke();ctx.fill();ctx.fillStyle="#333333";if(on_p)ctx.fillText("P = "+P.toFixed(3),435,218);else ctx.fillText("P = 0.000",435,218);if(on_i)ctx.fillText("I = "+I.toFixed(3),437,248);else ctx.fillText("I = 0.000",437,248);if(on_d)ctx.fillText("D = "+D.toFixed(3),435,278);else ctx.fillText("D = 0.000",435,278);ctx.strokeStyle="#999999";ctx.lineWidth=3;ctx.beginPath();ctx.arc(260,180,10,0,2*Math.PI,false);ctx.stroke();ctx.fillStyle="#83e171";if(windlevel==1)ctx.fillStyle="#e1d271";if(windlevel==0)ctx.fillStyle="#e1aa9d";ctx.fill();ctx.fillStyle="#999999";ctx.fillText("WIND",275,185);ctx.strokeStyle="#999999";ctx.lineWidth=3;ctx.beginPath();ctx.arc(330,180,10,0,2*Math.PI,false);ctx.stroke();ctx.fillStyle="#83e171";if(noiselevel==1)ctx.fillStyle="#e1d271";if(noiselevel==0)ctx.fillStyle="#e1aa9d";ctx.fill();ctx.fillStyle="#999999";ctx.fillText("NOISE",345,185);ctx.strokeStyle="#999999";ctx.lineWidth=3;ctx.beginPath();ctx.arc(400,180,10,0,2*Math.PI,false);ctx.stroke();ctx.fillStyle="#83e171";if(delaylevel==1)ctx.fillStyle="#e1d271";if(delaylevel==0)ctx.fillStyle="#e1aa9d";ctx.fill();ctx.fillStyle="#999999";ctx.fillText("DELAY",415,185);ctx.strokeStyle="#999999";ctx.lineWidth=3;ctx.beginPath();ctx.arc(110,180,10,0,2*Math.PI,false);ctx.stroke();ctx.fillStyle="#83e171";if(mode==1)ctx.fillStyle="#e1aa9d";ctx.fill();ctx.fillStyle="#999999";ctx.fillText("PID",125,185);ctx.strokeStyle="#999999";ctx.lineWidth=3;ctx.beginPath();ctx.arc(170,180,10,0,2*Math.PI,false);ctx.stroke();ctx.fillStyle="#e1aa9d";if(mode==1)ctx.fillStyle="#83e171";ctx.fill();ctx.fillStyle="#999999";ctx.fillText("NN",185,185)}function clear(){ctx.fillStyle="#666666";ctx.fillRect(0,0,500,300);ctx.fillStyle="white";ctx.fillRect(3,3,494,294)}function end(){clearTimeout(timeout)}function sigmoid(e){return 1/(1+Math.exp(-e))}function nn(e,t,n){var r;for(r=0;r<numneurons;r++){neuronval[r]=theta0[r][3]+e*theta0[r][0]+t*theta0[r][1]+n*theta0[r][2];neuronval[r]=sigmoid(neuronval[r])}output=theta1[numneurons];for(r=0;r<numneurons;r++){output+=theta1[r]*neuronval[r]}return sigmoid(output)}function 
init_weights(){theta0=[[1.976275,-1.366728,1.700507,-3.611319],[-2.71836,.920997,-2.848787,-1.521753],[-.989081,1.298712,3.945118,3.95212],[-4.028871,-2.565233,4.868216,-1.538942],[-3.087944,.891396,4.424495,4.366303],[3.19838,4.393627,-.504058,-2.511775],[.551712,-4.743971,-3.173155,.151744],[-1.141901,-7.05375,1.027377,.189321],[.127989,.108622,1.252993,1.121588],[-5.228032,2.559772,-4.717364,-.981179],[-.188857,4.477013,-2.093361,.061968],[-3.43119,-1.92697,4.113407,-4.903769],[3.873991,5.828975,-1.476212,-.031697],[3.280022,2.635463,1.835312,-.036636],[3.150733,4.590076,-3.890529,.662972],[4.586281,3.076976,4.345586,-1.154782],[-2.727697,.866995,-4.155793,-5.323842],[-1.334027,-5.003085,.700603,2.820904],[4.252829,3.453271,-1.551479,-4.979583],[-4.047869,-1.751292,-2.201864,4.07338]];theta1=[-2.486062,1.175934,1.552187,-1.045564,-1.654554,-3.305333,3.115497,7.974597,-2.353098,1.547659,-1.515346,-3.339061,-4.721977,-2.425438,-2.810513,-1.993199,-4.358187,1.52532,1.003176,.796318,1.027897];console.log(nn(0,0,0))}var time=0;var ptime=0;var pspeed=0;var plength;var dt;var canv,ctx;var x,v,a,action,thrust,integral;var num_hist,hist_index;var x_hist,act_hist,wind_hist;var timeout;var step;var simrun=false;var P,I,D,logp,logi,logd;var log10=Math.log(10);var dragging=false;var slider=-1;var on_p,on_i,on_d;var noiselevel,windlevel,delaylevel;var wind,wind_smooth;var mode;var numstate=3;var numneurons=20;var theta0=[];var theta1=[];var state=Array(numstate);var neuronval=Array(numneurons);init()
</script>gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com12tag:blogger.com,1999:blog-5124010431457427473.post-59746050251349990422014-04-30T21:55:00.000-06:002014-04-30T22:03:32.923-06:00Automated Investigator<script type="text/javascript">
<!--
function toggle_visibility(id) {
    var e = document.getElementById(id);
    if (e.style.display == 'block')
        e.style.display = 'none';
    else
        e.style.display = 'block';
}
//-->
</script>
It's time for a new project post. What have I been doing since I finished the <a href="http://gperco.blogspot.com/2014/03/led-planet-hardware.html">LED Planet Project</a> last month? Mostly the things I actually get paid to do, but also a decent amount of slacking off. Without a project that has been handed to me by someone else as a challenge, I've had time to consider what types of projects I want to do for this blog. A while back, I designed and programmed my own flight controller for a hexacopter platform, so I guess I could say something about that. Unfortunately, that was a fairly lengthy project, so organizing my thoughts on the subject will take some time. I'd also like to do a few new experiments with flight controllers, which will add to the complexity of the write-up.<br />
<br />
So while I've been trudging along producing interesting content about flight, I figured I could use a little weekend project to have some fun. Just like my project from last November where I <a href="http://gperco.blogspot.com/2013/11/12-hour-project-arduino-mechanomyography.html">used an accelerometer to measure muscle activation</a>, I decided to limit my supplies to only the hardware I had sitting around my apartment. At the very least, this requirement would help reduce the electronic clutter I've accumulated.<br />
<br />
As for the goal of the project, I took a suggestion from my brother. He was brainstorming for an engineering project in college that needed to use physical sensing coupled with advanced computation on a computer. He had the idea of making a lie detector (<a href="http://en.wikipedia.org/wiki/Polygraph">polygraph</a>), where a set of sensors monitor a human subject and a computer determines whether or not the subject is lying based on the sensor measurements. I decided this was an idea worth stealing. So I chose to make a lie detector.<br />
<br />
Now, I don't want you thinking that I would try to one-up my own brother in his college engineering project just because I have the resources and free time. My plan was to boil his idea down to something much simpler while still retaining the core concept, then post my work on the internet to become famous. The polygraph was invented in the 1920's, so neither of us is doing anything particularly original anyways. But if the last two years of internet trends tell me anything, it's that original content is not required to get pageviews.<br />
<br />
<b>Sensing</b><br />
<br />
How does a polygraph work? The core concept is that a subject under interrogation will have some sort of uncontrollable bodily response when lying. The validity of this statement is debated in the scientific community, but we won't let a bunch of boring scientists stop us here. If the subject believes that there will be some kind of negative repercussion if they lie to the investigator, their body is likely to act differently when lying than when telling the truth. The reactions commonly associated with lying are an increased heart rate, sweating, changes in breathing pattern, eyes moving in a particular direction, etc. A polygraph will measure one or more of these effects in order to determine whether or not the subject has just told a lie.<br />
<br />
Heart rate sensors are easy to come by, and almost as easy to make. The transmission of infrared light through your finger or earlobe is modified by the amount of blood being pushed through. If you were to shine infrared light from one side of your finger to an infrared detector placed on the other side, the detector would see a series of pulses in the amount of light detected. The rate of these pulses is your heart rate. Unfortunately, I didn't have any infrared LEDs sitting around, so I didn't use heart rate sensing for my lie detector.<br />
<br />
Your body is a decent conductor of electricity, as long as you can get past the skin. Sweat is a good conductor, and can help reduce the electrical resistance of your skin. A skin conductivity sensor applies a positive and negative voltage to two parts of your body and measures the amount of current passing between those two points. The more current that gets through, the less resistance (more conductance) your body has, the more sweat you are assumed to have recently produced. The locations of the electrical contacts can vary, but a common choice is on two adjacent fingers. There isn't a whole lot of specialized equipment needed to make a skin conductivity sensor, so I decided to make one for my lie detector.<br />
<br />
There are of course other bodily signals that can be monitored and loosely associated with the act of lying, but many require specialized equipment. For a DIY lie detector, these other methods require significantly more effort than a skin conductivity sensor requires, which is significantly more than I was willing to provide.<br />
<br />
<b>Construction</b><br />
<br />
I decided to go with the two-finger approach to the skin conductivity sensor, where metal pads are attached to two adjacent fingers. I started by taping some wires to pieces of aluminum foil that could be wrapped around a finger.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-Q07cHFcA1x4/U1hy_C80JwI/AAAAAAAABX8/xqNJRrZEVWI/s1600/20140419_121812.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Q07cHFcA1x4/U1hy_C80JwI/AAAAAAAABX8/xqNJRrZEVWI/s1600/20140419_121812.jpg" height="193" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Starting real simple here.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-s6oD5spgj4I/U1hzVcnVQoI/AAAAAAAABYE/7pKFg2SMq0g/s1600/20140419_122743.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-s6oD5spgj4I/U1hzVcnVQoI/AAAAAAAABYE/7pKFg2SMq0g/s1600/20140419_122743.jpg" height="228" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">That was easy.</span></div>
<br />
Metal conducts electricity. If you attach two metal pads to your fingers with the intention of measuring the resistance between the pads (i.e., through your hand), you need to make sure the pads on each finger don't touch each other. The current that would be sent through your hand would instead prefer to travel directly between the two touching pads, causing the sensor to see an extremely high conductivity. This would throw off the lie detector and make it think your hand had suddenly turned into metal, which is probably a sign of lying. I needed a way of covering the metal pads to make sure they couldn't touch. I considered using medical tape to keep the pads on, but wanted a less permanent, less hair-removing solution.<br />
<br />
When I said that I was going to limit my building to only use parts I had lying around my apartment, I was telling the (polygraph approved) truth. What I didn't mention is that I have a 3D printer in my apartment and a few kilograms of printing plastic, so I wasn't really hurting for custom parts. I decided the best way to make sure the pads didn't touch was to print a custom finger-holder with the metal pads taped to the inside. I got to work on the design.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-al_I1LjpoPw/U1h1PLkNexI/AAAAAAAABYQ/-WFeJD0nkVI/s1600/finger_sensor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-al_I1LjpoPw/U1h1PLkNexI/AAAAAAAABYQ/-WFeJD0nkVI/s1600/finger_sensor.png" height="380" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Definitely the hardest project I've done. Yep.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-xUPShOiMMe8/U1h2JVhnE6I/AAAAAAAABYY/LF9l_41MsmM/s1600/20140419_125620.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-xUPShOiMMe8/U1h2JVhnE6I/AAAAAAAABYY/LF9l_41MsmM/s1600/20140419_125620.jpg" height="400" width="293" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Perfect fit.</span></div>
<br />
I attached the metal pads to the inside walls of each finger and started on the sensor electronics. Some basic resistance measurements using a multimeter told me that I should expect the resistance between the two pads to be anywhere from 30 to 300 kΩ. Since I was able to vary the measured resistance by more than 50% just by moving the contacts around, I decided to forgo any amplifiers in the sensing circuit and just use a <a href="http://en.wikipedia.org/wiki/Voltage_divider">voltage divider</a> to get a 0 to 5V signal from the sensor that a simple Arduino controller could read. It may not be the most elegant or sophisticated sensor setup, but that's lie detectors for you.<br />
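<br />
To make the divider concrete, here's a minimal sketch of the readout code. The analog pin and the 100 kΩ reference resistor are stand-ins, not necessarily the values I used:<br />
<pre>
// Minimal divider readout; pin and reference resistor are illustrative.
// Wiring assumed: 5V -- [skin pads] -- A0 -- [R_REF] -- GND.
#define SENSOR_PIN A0
#define R_REF 100000.0

void setup() {
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);    // 0..1023 maps to 0..5V
  if (raw > 0) {
    float vout = raw * (5.0 / 1023.0);
    // divider equation: vout = 5 * R_REF / (R_skin + R_REF)
    float r_skin = R_REF * (5.0 / vout - 1.0);
    Serial.println(r_skin);            // skin resistance in ohms
  }
  delay(50);                           // ~20 samples per second
}
</pre>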
<br />
<b>Data Collection</b><br />
<br />
With an Arduino reading the voltage from the sensor voltage divider, I recorded some data with the finger brace on and tried to manipulate the results with sheer willpower.<br />
<br />
<script src="https://gist.github.com/bgreer/32e0dabba6399c1189d5.js"></script>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-4DjV5FiTEsI/U1_tsw1nYsI/AAAAAAAABYo/TiB93FhwwYM/s1600/finger2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-4DjV5FiTEsI/U1_tsw1nYsI/AAAAAAAABYo/TiB93FhwwYM/s1600/finger2.png" height="421" width="600" /></a></div>
<br />
On the vertical axis of the previous plot, I've shown the resistance between the finger pads, inferred from the voltage divider measurement. The arrows indicate instances when I attempted to elicit a skin conductivity response without significant movement. Ideally, I would have tested this by lying to an interrogator. But since I had no imposing figures around at the time, I attempted to trigger a response by thinking about stressful things like paying taxes or that dream I had where I showed up at middle school in just my underwear. The plot above shows that I was indeed able to trigger a response.<br />
<br />
A stressful event seems to correspond to a decrease in skin resistance that persists long after the stressful event has passed. This is likely due to sweat that accumulates in small amounts underneath the sensor pads and cannot evaporate easily. To allow the lie detector to automatically determine whether or not a lie is being told, I put a condition on the first derivative of the skin resistance. When the resistance drops by a specified amount per second or more, the code decides I am lying.<br />
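<br />
In essence, the condition looks like this; the threshold is a placeholder that has to be tuned per person, and the full code is below:<br />
<pre>
// Flag a lie when resistance falls faster than DROP_THRESH ohms/second.
#define DROP_THRESH 5000.0  // placeholder value, tune to taste

float prev_r = 0.0;
unsigned long prev_ms = 0;

int is_lying(float r, unsigned long now_ms) {
  float dt = (now_ms - prev_ms) / 1000.0;   // seconds since last sample
  float drop_rate = (dt > 0.0) ? (prev_r - r) / dt : 0.0;
  prev_r = r;
  prev_ms = now_ms;
  return drop_rate > DROP_THRESH;           // resistance falling fast = "lie"
}
</pre>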
<br />
<a href="#codeld01" onclick="toggle_visibility('codeld01');">Show/Hide Code</a>
<br />
<div id="codeld01" style="display: none;">
<a href="https://www.blogger.com/null" name="codeld01"></a>
<script src="https://gist.github.com/bgreer/4731cd1a35c478b6e9a9.js"></script></div>
<br />
So I guess that's it. I set out to build a simple lie detector and I did it using some cheap parts and an Arduino controller. It took maybe 3 hours of work, plus an hour or so of printing time for the fancy finger brace. I've basically taken a project that has <a href="http://cranklin.wordpress.com/2011/09/22/how-to-build-a-homemade-lie-detector/">been done</a> quite <a href="http://tech-zen.tv/episodes/shows/make-it-work/episodes/make-a-lie-detector-with-an-arduino-episode-32">a few times</a> <a href="http://www.instructables.com/id/Make-a-portable-handy-lie-detector-in-Altoid-tin/">already</a> on <a href="http://mad-science.wonderhowto.com/how-to/diy-polygraph-machine-detect-lies-with-tin-foil-wire-and-arduino-0134599/">the internet</a> and added a dash of 3D printing to make it trendy. Is this really something that deserves a write-up? No, I can do better.<br />
<br />
<br />
<b>Good Cop, Bad Cop</b><br />
<br />
TV and movies have taught me that anytime a suspect is interrogated, there needs to be two people asking questions. One plays the part of the bad cop who tries to break the suspect's resolve and get to the bottom of things as quickly as possible. The other is the good cop who offers the suspect salvation from the bad cop if they would only give up the truth. In order for my lie detector to be an effective mechanism for divining the truth, it needs a set of interrogators.<br />
<br />
But before I dive headfirst into building a pair of human-sized bipedal robots with mustaches, I should try to boil down the idea of the two interrogators to their basic concepts. For example, boiling someone down into their constituent parts might be something the bad cop threatens to do to the suspect. They don't necessarily need to be strong with words, just strong with their hands. In fact, the only thing the bad cop really needs is a hand. To slap people with.<br />
<br />
To build the bad cop, I needed a motor, a motor driver, an arm, and a hand. I happened to have a set of decently powerful motors and an accompanying motor driver from an older project of mine where I tried to <a href="http://gperco.blogspot.com/2013/11/robobear-and-note-on-failure.html">motorize a large wooden cart</a>. The project was a failure, but I see no need for the motors to live in shame forever. I adapted the old Arduino controller I had used for that project to simultaneously control a motor and read skin conductivity measurements.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-KWBPLSON20o/U2ATcJGxFWI/AAAAAAAABY4/0tuSPqA8qlg/s1600/20140419_161325.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-KWBPLSON20o/U2ATcJGxFWI/AAAAAAAABY4/0tuSPqA8qlg/s1600/20140419_161325.jpg" height="282" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-08SNnL5sKxg/U2ATf85E2gI/AAAAAAAABZA/4-FddtGIuTg/s1600/20140429_120918.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-08SNnL5sKxg/U2ATf85E2gI/AAAAAAAABZA/4-FddtGIuTg/s1600/20140429_120918.jpg" height="302" width="400" /></a></div>
<br />
<br />
The driver could supply enough voltage and current to the motor to make it swing around at a good slapping speed, but I needed a way of attaching an arm. While I've had good luck finding a few motor adapters online to attach wheels and such to motors, I have yet to see a motor adapter for a slapper-arm. So I designed and printed one myself. The arm would be made of two thin plastic rods I had sitting around, so the adapter needed to have two mounting holes for the arm and a set of screw holes for the motor drive shaft.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-87i5Yhbbh2I/U2AUNSf5XZI/AAAAAAAABZI/b_-Htvh5J1Q/s1600/slapper_base2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-87i5Yhbbh2I/U2AUNSf5XZI/AAAAAAAABZI/b_-Htvh5J1Q/s1600/slapper_base2.png" height="395" width="400" /></a></div>
<br />
<br />
The strange V-shape is partially due to the limitations of 3D printing. Prints need to be fairly simple, strong, and preferably without any significant overhangs. The two plastic rods for the arm slide into the sets of three holes in the vertical parts, and screws are used to secure the bottom to an existing metal motor adapter.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-tg5AWi2810w/U2AVac_pGWI/AAAAAAAABZU/bbgCx2WYKiI/s1600/20140419_213530.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-tg5AWi2810w/U2AVac_pGWI/AAAAAAAABZU/bbgCx2WYKiI/s1600/20140419_213530.jpg" height="263" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">The wonders of 3D printing.</span></div>
<br />
I then glued the plastic rods in the adapter and glued a glove filled with cotton balls at the end of the rods. Ideally the glove should be filled with sand or rocks or something, but this was just a prototype. I would attach the motor to the wooden frame I made for my LED Planet project, since I'm all about reusing materials. Also, I didn't want to spend time or money on this.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ucCaPxKBB6M/U2AmpMApybI/AAAAAAAABZw/8jXZPrpXdqw/s1600/20140429_155915.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ucCaPxKBB6M/U2AmpMApybI/AAAAAAAABZw/8jXZPrpXdqw/s1600/20140429_155915.jpg" height="400" width="271" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">All it needs now is an awesome mustache.</span></div>
<br />
The programming for the bad cop was fairly simple. It would monitor the suspect, wait for a lie to be detected, then slap the suspect. After waiting a second or two, it would repeat this process forever. Testing the speed and intensity of the slapper was easy since it doubled as a high-fiving machine.<br />
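<br />
In outline, the loop amounts to the following; read_skin_resistance() and swing_arm() are placeholder names standing in for the divider readout and the motor driver calls:<br />
<pre>
// Outline of the bad cop's loop, built on the is_lying() sketch above.
float read_skin_resistance(void);  // placeholder: divider readout
void swing_arm(void);              // placeholder: motor driver call

void loop() {
  float r = read_skin_resistance();
  if (is_lying(r, millis())) {
    swing_arm();    // administer the slap
    delay(2000);    // wait a second or two before re-arming
  }
}
</pre>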
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-iL36nVnlsmM/U2GOL16ozqI/AAAAAAAABaA/2AZBsryQRx0/s1600/highfive.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-iL36nVnlsmM/U2GOL16ozqI/AAAAAAAABaA/2AZBsryQRx0/s1600/highfive.gif" /></a></div>
<br />
<br />
<b>Thoughtful Questions</b><br />
<br />
The next step was to create the good cop. This half of the interrogation team needed to offer questions to the suspect in a non-accusing way. It didn't need to sound threatening, as the threat came from a glove being swung around on a wooden frame. Not only did it need a calm, unwavering voice to question the suspect, it also needed to come up with appropriate questions to ask. To accomplish these tasks, I let my laptop play the part of the good cop. It had the processing capacity to form sentences and play them through the built-in speakers. You can't make a powerful investigative duo without subtle non-verbal language, so I used a USB-to-serial (FTDI) cable to allow the good cop and bad cop to communicate.<br />
<br />
The text-to-speech was done using the free and open source tool <a href="http://espeak.sourceforge.net/">eSpeak</a>. In a unix terminal, turning a string of text immediately into audio that plays through your default audio channel is simple:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">> espeak "Hello world."</span><br />
<br />
There is an <a href="http://espeak.sourceforge.net/speak_lib.h">API for eSpeak</a>, so commands like this can be translated into C code. Unfortunately, it's so much easier to just do a system() call and be done with it:<br />
<br />
<script src="https://gist.github.com/bgreer/9b04b7e68b415077608b.js"></script>
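<br />
Stripped to its essence, that shortcut is just a formatted command string handed to the shell; quoting is kept naive here, and real input should be sanitized:<br />
<pre>
#include <stdio.h>
#include <stdlib.h>

// The system() shortcut, stripped down.
void say(const char *sentence) {
    char cmd[1024];
    snprintf(cmd, sizeof(cmd), "espeak \"%s\"", sentence);
    system(cmd);
}
</pre>
<br />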
With that, my good cop code could talk in a calm and controlled voice to the suspect. To help placate the suspect, I added a simple visual to the code.<br />
<br />
The final step in constructing the good cop program was to give it a method for generating appropriate questions. First, I hand-coded a couple softball questions that the good cop could start out with just to get a baseline reading for the polygraph. Then, I handed the script off to my girlfriend, who wrote a few more lines that I couldn't read beforehand. If I had known when the harder questions were coming and what they were, I might have been able to skew the polygraph readings in my favor. I wanted a truly authentic experience, so surprise was important.<br />
<br />
Suspects need alibis. The interrogators need to ask for an alibi. It seems common to ask the suspect about their actions on a particular date, so I added an automatic alibi-asker. It picks a random date within some range of years and forms a sentence that asks the suspect about the date.<br />
<br />
<script src="https://gist.github.com/bgreer/d8e41cc71516d92fb108.js"></script>
I'm not entirely sure what instance the interrogation team would be referring to, but you can't go wrong with a random number generator asking the questions. Next was to add a few more human-generated questions that I didn't have to come up with. I needed a large repository of good, thoughtful questions, and the answer was obvious: <a href="https://answers.yahoo.com/">Yahoo Answers</a>. This is a site that allows users to pose questions to the public and rate submitted answers. It's full of all kinds of wonderful questions, some of which are collected in <a href="http://yahooanswerswtf.tumblr.com/">aggregate sites</a>. I wrote another block of code to load a specified page of Yahoo Answers, parse through it for questions, then ask them sequentially.<br />
<br />
<a href="#codeld02" onclick="toggle_visibility('codeld02');">Show/Hide Code</a>
<br />
<div id="codeld02" style="display: none;">
<a href="https://www.blogger.com/null" name="codeld02"></a>
<script src="https://gist.github.com/bgreer/822c6c8d7e633de8678a.js"></script></div>
<br />
With all of the questions queued up, I positioned the good and bad cops for optimal interrogative power. I bravely volunteered myself for questioning by my own creation and recorded the results:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="315" src="//www.youtube.com/embed/QGUw7HPC84o?rel=0" width="560"></iframe></div>
<br />
<br />
As you can clearly see, the full interrogation system worked perfectly. The skin conductivity sensor provided accurate measurements for the low-level polygraph hardware and it was able to trigger the slapping response at appropriate times.<br />
<br />
I think this project has shown that an effective polygraph can be constructed using everyday materials like aluminum foil and 3D printers. I fully support any hobbyist following my lead and using their home-made polygraph to seek out the truth. Just remember, a useful lie detector relies on the understood and sometimes fully-realized threat of physical or emotional harm.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com1tag:blogger.com,1999:blog-5124010431457427473.post-5717399042263629622014-03-19T12:00:00.003-06:002014-03-19T12:03:07.536-06:00LED Planet Software<script type="text/javascript">
<!--
function toggle_visibility(id) {
    var e = document.getElementById(id);
    if (e.style.display == 'block')
        e.style.display = 'none';
    else
        e.style.display = 'block';
}
//-->
</script>
In my <a href="http://gperco.blogspot.com/2014/03/led-planet-hardware.html">last post</a>, I discussed the process of building a spiral-sphere of LEDs that could be controlled from my laptop. The purpose was to create visuals for each movement of Gustav Holst's <i>The Planets</i> for an upcoming concert of the <a href="http://bouldersymphony.org/">Boulder Symphony Orchestra</a>. I wrote about the math that went into the design, the process of 3D printing the structure in many pieces, and powering the LED strip that was glued to the outside.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-dhBTHea_tNk/Uyje5prVy-I/AAAAAAAABW4/--nerLSekyc/s1600/20140304_001823.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-dhBTHea_tNk/Uyje5prVy-I/AAAAAAAABW4/--nerLSekyc/s1600/20140304_001823.jpg" height="327" width="540" /></a></div>
<br />
The work required to get this sphere ready for filming did not end there. In fact, building the sphere was fairly simple compared to what followed. While the sphere satisfied all of the physical requirements for the film, there was still a set of requirements on how the sphere would act. I've said that the LEDs would be controlled from my laptop, but in more concrete terms, the list of requirements was:<br />
<ul>
<li>Independent control over each LED from laptop</li>
<li>Full-sphere refresh rate of at least 30 Hz</li>
<li>Pre-coded, modifiable effects (pulsing, twinkling, etc)</li>
<li>Image and video projection capabilities</li>
<li>Effect mixing</li>
<li>Keyframed effect attributes</li>
</ul>
Each task on its own was a formidable challenge. Implementing all of these together would require code written on multiple platforms, from a tiny 8-bit microcontroller running at 16 MHz to my workstation-laptop with a 64-bit quad-core processor running at 2.4 GHz. Luckily, every platform I had to work with could be programmed with C, so I could stick to one language throughout the whole project. While the entire code for this post (as well as my last 3 posts) is hosted on <a href="https://github.com/bgreer/led-projector">github</a>, it's an ugly beast of a code. I'll try to include relevant code examples that have been isolated from the main code and cleaned up for legibility.<br />
<br />
I considered splitting this software post into multiple sub-posts talking about the different levels of software I had to write to get everything to work. Instead of doing this, I've decided to lump it all into this post. I figure that it is the collaborative effort of all of these levels that makes the software side of this project interesting. But to help organize this post, I've split it up into 5 sections:<br />
<br />
<b>Part 1 - Driving the LED Strip</b><br />
<b>Part 2 - Data Transfer</b><br />
<b>Part 3 - Parallel Protocol</b><br />
<b>Part 4 - Simple Effects</b><br />
<b>Part 5 - Sequenced Effects</b><br />
<br />
As I step through the different levels of the software, I will be talking about how long it takes to run certain segments of code. Since I had a target frame rate that I needed to hit, timing was of utmost importance. A frame rate of 30 Hz means that every bit of software has to work together to produce a new frame in less than 33 milliseconds. To get across the idea of how long various bits of code take relative to 33 ms, I'll be using a basic timing diagram:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-E6bCS1wTxb4/UyD0Mu_muoI/AAAAAAAABTg/waajYX7wrAY/s1600/timing00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-E6bCS1wTxb4/UyD0Mu_muoI/AAAAAAAABTg/waajYX7wrAY/s1600/timing00.png" /></a></div>
<br />
The idea here is to visualize how long different blocks of code take by displaying them as, well, blocks. The horizontal bar that contains the <span style="color: red;"><b>red</b></span>, <span style="color: #f1c232;"><b>yellow</b></span>, <span style="color: orange;"><b>orange</b></span>, and <span style="color: #999999;"><b>white</b></span> blocks represents what a given processor is doing. As time passes from left to right, the processor performs different tasks, each colored differently. The lower bar that remains the same color represents the LED strip. It does not do any appreciable data processing, so I'll treat it as a passive item. The vertical arrows from the processor to the strip indicate the flow of data, and are positioned in the center of a code block that handles the transfer. Finally, the vertical lines in the background give the time-scale of things. I have placed these 33 ms apart, since that is our target update time. Their spacing on the screen may change between images as I zoom in and out of different features, but they will always represent the same time step. The goal is to transfer new data to the LED strip at least every 33 ms in order to get an update rate of 30 Hz, so in this kind of diagram we want to see one of those black arrows occur at least as often as the background lines. In the example above, the processor serving data to the LED strip is not fast enough.<br />
<br />
I've measured the time needed to run various blocks of code by hooking up a Saleae Logic probe to a pin of the relevant computing platform and triggering responses by inserting commands to toggle the pin to a 1 or 0 in the code. In most instances, the toggling takes around 0.0001 milliseconds. Since I'll probably be rounding most timing figures to the nearest millisecond, I've treated the toggling as instantaneous.<br />
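<br />
For reference, the toggling itself is just a pair of direct port writes wrapped around the block being timed; the pin choice here is arbitrary (PB7 is pin 13 on the Mega):<br />
<pre>
#include <avr/io.h>

// Bracket the code under test with fast port writes and measure the
// pulse width on the logic analyzer. Set the pin as an output first
// with DDRB |= _BV(PB7).
void timed_block(void) {
    PORTB |= _BV(PB7);     // pin high: block starts
    // ... code being timed goes here ...
    PORTB &= ~_BV(PB7);    // pin low: block ends
}
</pre>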
<br />
<b>Part 1 - Driving the LED Strip</b><br />
<br />
As with many of my projects, I used a flexible strip of WS2812s as my LEDs. These are surface-mount RGB LEDs with on-board memory and driving. Data for what color the LED should be is shifted down the strip to each LED through a single line. The <a href="http://www.adafruit.com/datasheets/WS2812.pdf">datasheet</a> for these LEDs gives details about how data must be formatted in order to properly communicate with them. The key to reliable data transfer to the strip is precise timing. Luckily, there are a few freely-available libraries on the internet that handle this timing. I happen to prefer the <a href="https://github.com/adafruit/Adafruit_NeoPixel">Adafruit Neopixel library</a>, but I have heard that there are others just as good. This library includes sections of Assembly code that are hand-tuned for various architectures and processor speeds, all intended for use on an Arduino-compatible microcontroller. Since I'm just barely comfortable reading Assembly, I'm glad someone else has taken the time to work out this library for others.<br />
<br />
For testing, I used my workhorse Arduino Mega. It runs at 16 MHz and has plenty of memory and 5V digital I/O pins. The interface for using the library is very simple:<br />
<br />
<a href="#code11" onclick="toggle_visibility('code11');">Show/Hide Code</a>
<br />
<div id="code11" style="display: none;">
<a href="https://www.blogger.com/null" name="acode11"></a>
<script src="https://gist.github.com/bgreer/8da15238718cfa96b700.js"></script></div>
<br />
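If you don't feel like un-hiding the snippet, the library boils down to a handful of calls; the data pin here is illustrative:<br />
<pre>
#include <Adafruit_NeoPixel.h>

#define NUMPIXELS 233
#define DATA_PIN 6   // illustrative; whichever pin the strip hangs off of

Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUMPIXELS, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();                                   // initialize the strip
}

void loop() {
  strip.setPixelColor(0, strip.Color(255, 0, 0));  // first LED, full red
  strip.show();                                    // push colors out the wire
}
</pre>
<br />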
To reach the target frame rate, I didn't need to be concerned with the time it took to initialize things. All that mattered was how often I could call strip.show(). Since the timing of the data transfer is determined by the strip, and not the processor sending data, updating 233 pixels takes roughly 7 ms. In my timing diagram, the above code looks like:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-2yPsfnU4yRM/UyD7E48vjDI/AAAAAAAABTw/9xiVMhMHyeg/s1600/timing01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-2yPsfnU4yRM/UyD7E48vjDI/AAAAAAAABTw/9xiVMhMHyeg/s1600/timing01.png" /></a></div>
<br />
If all we want to do is keep telling the strip to update, we can get a frame rate of over 130 Hz. Unfortunately, this doesn't allow for changing any of the LED colors ever. Hardly seems worth the frame rate if the image never changes. What we want is for a separate computer to do the complicated calculations that determine what image should be displayed, then the computer sends the LED values to the Arduino Mega via USB. The data is parsed on the Mega and then the LED strip is updated.<br />
<br />
<b>Part 2 - Data Transfer</b><br />
<br />
I wrote a short C code that ran on my computer, calculated some LED values, and sent the values out to the Mega. It packaged the information for 233 LEDs into a 2048-byte packet, 5 bytes per LED plus some padding on the end. The color for any given LED is specified by 3 bytes, but I included a 'pixel start' byte and a 'pixel index' byte to reduce visual glitches. The USB specification indicates that rounding up to the nearest 64 bytes for my packet would have worked, but for some reason the nearest 1024 worked better. The Mega ran a code that waited for incoming serial data from the USB to serial converter, copied the incoming data to a processing buffer, parsed through the buffer to set LED values, then updated the LED strip. The additional code (neglecting some of the elements from the last snippet for brevity) looks like this:<br />
<br />
<a href="#code12" onclick="toggle_visibility('code12');">Show/Hide Code</a>
<br />
<div id="code12" style="display: none;">
<a href="https://www.blogger.com/null" name="acode12"></a>
<script src="https://gist.github.com/bgreer/8af611a428bda4123604.js"></script></div>
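<br />
To make the packet layout concrete, here is one LED's slot written as a struct; the field names are mine:<br />
<pre>
#include <stdint.h>

// One LED's 5-byte slot in the 2048-byte packet. 233 slots * 5 bytes
// = 1165 bytes, with the rest of the packet left as padding.
typedef struct {
    uint8_t start;    // 'pixel start' marker byte
    uint8_t index;    // which LED this slot belongs to (0..232)
    uint8_t r, g, b;  // color channels
} led_slot;
</pre>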
<br />
On my laptop, I wrote a short code to begin serial communications, package LED color values into a 2048-byte packet, and send them along. The code to open the serial port in Linux was ripped from the example code <a href="http://www.pjrc.com/teensy/benchmark_usb_serial_receive.html">here</a>:<br />
<br />
<a href="#code13" onclick="toggle_visibility('code13');">Show/Hide Code</a>
<br />
<div id="code13" style="display: none;">
<a href="https://www.blogger.com/null" name="acode13"></a>
<script src="https://gist.github.com/bgreer/88b7e9bb3c5ce510ee04.js"></script></div>
<br />
The data packaging code assumes it is given a struct called a 'strip' that contains arrays of length NUMPIXELS for the red, green, and blue channels:<br />
<br />
<a href="#code14" onclick="toggle_visibility('code14');">Show/Hide Code</a>
<br />
<div id="code14" style="display: none;">
<a href="https://www.blogger.com/null" name="acode14"></a>
<script src="https://gist.github.com/bgreer/6e3c4519bc3d640d15c7.js"></script></div>
<br />
The timing diagram for the Mega code and laptop code working together is as follows:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-hc7u30-PZb4/UyEB5xwoHYI/AAAAAAAABUI/_D-fqyA8z7Q/s1600/timing02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-hc7u30-PZb4/UyEB5xwoHYI/AAAAAAAABUI/_D-fqyA8z7Q/s1600/timing02.png" /></a></div>
<br />
I've added a new bar to show what my laptop was doing. The <span style="color: #3d85c6;"><b>blue</b></span> blocks are the USB transfer and the <b><span style="color: #999999;">white</span></b> blocks are delays introduced to keep things synchronized. On the Mega, the <span style="color: orange;"><b>orange</b></span> block still shows updating the LED strip, the <b><span style="color: #f1c232;">yellow</span></b> block is parsing the data buffer, and the <span style="color: red;"><b>red</b></span> block is receiving the USB data. As you can see, the data transfer rate is slow compared to the background vertical lines. My Mega was only able to copy data in at around 55 kB/s, resulting in almost 38 ms for just the data transfer. This is close to the peak transfer speeds found in <a href="http://www.pjrc.com/teensy/benchmark_usb_serial_receive.html">others' experiments</a>. My data parsing code (<span style="color: #f1c232;"><b>yellow</b></span>) also took around 20 ms. Not enough to push the frame time past 33 ms by itself, but it certainly wasn't helping. At this point, I was updating the strip at 15 Hz. Too slow by a factor of two.<br />
<br />
Luckily, I had a solution. If the Mega was too slow, I just had to use a faster microcontroller! I happened to have a few Teensy 3.1s sitting around, which boast USB transfer rates up to 20x faster than the Mega. The Teensy is Arduino-compatible, but uses a slightly different library for driving LED strips such as the one I was working with. Still, the interface was basically the same and it took the same amount of time to update the strip (again, determined by the strip, not the controller). I ported my Mega code over to the Teensy and ran it with the laptop still supplying USB data.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-b2vyMiA-fnM/UyEFeAEUvjI/AAAAAAAABUU/ygr6zCFL9dc/s1600/timing03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-b2vyMiA-fnM/UyEFeAEUvjI/AAAAAAAABUU/ygr6zCFL9dc/s1600/timing03.png" /></a></div>
<br />
Blindingly fast! Well, at least compared to the sluggish Mega. The code on the Teensy is similar to that running on the Mega, but colored differently. <b><span style="color: #bfff00;">Lime</span></b> is the strip update, <b><span style="color: #00ff59;">green</span></b> is the data parsing, and <b><span style="color: #00bbff;">light blue</span></b> is the data receive. With the higher performance USB communication and 96 MHz clock (versus 16 MHz on the Mega), both the data transfer and the data parsing have sped up immensely. The frame rate here is about 77 Hz, more than twice what I need.<br />
<br />
So that's it, right? I can run at 77 Hz, assuming I can get my laptop to churn out a new frame every 13 ms. Unfortunately, no. The Teensy runs at 3.3V as opposed to the Mega's 5V, so the data signal to update the LED strip needs to be buffered. I used a 74HCT245 octal bus transceiver as suggested by <a href="https://www.pjrc.com/teensy/td_libs_OctoWS2811.html">this site</a> to bring the 3.3V digital signal up to 5V. It didn't work. For some reason, the LED strip did not like this voltage level. The only way I could get the strip to accept data was to drop the power supply 5V line to around 4.5V, but this was not something I could do continuously due to the nature of my power supply. Without an oscilloscope, it was nearly impossible to determine what the quality of the data line was like. It was entirely possible that the transceiver was giving me a crummy signal, but I had no way of knowing for sure. I was under a strict deadline to get the entire LED system up and running two days after this unfortunate discovery, so I had to consider my options:<br />
<br />
<ul>
<li>the Mega could drive the strip, but couldn't get data fast enough</li>
<li>the Teensy could get data quickly, but couldn't drive the strip (for unknown reasons)</li>
</ul>
<br />
The answer was suddenly clear: use both! The Teensy would receive data from the laptop, pass it to the Mega, and the Mega would update the strip. The only catch was that I needed a way of passing data from the Teensy to the Mega faster than the Mega could handle USB data.<br />
<br />
<b>Part 3 - Parallel Protocol</b><br />
<br />
Let's start by looking at how you can read in data manually to the Mega. Using some bits of non-Arduino AVR-C, the code to quickly read in a single bit of data from a digital I/O pin is:<br />
<br />
<a href="#code15" onclick="toggle_visibility('code15');">Show/Hide Code</a>
<br />
<div id="code15" style="display: none;">
<a href="https://www.blogger.com/null" name="acode15"></a>
<script src="https://gist.github.com/bgreer/b2028f205cdef4ac4a6a.js"></script></div>
<br />
Not bad for two lines of code. The port values (DDRA, PINA) are found in <a href="http://public.dm1.livefilestore.com/y2pySG8rqRpWchtjGKYc1DR9t5r_khcWu85WU9usXGRHthv2DZr_nIL1h-nSTHf52T7R8ipnXiG_ENa6VBCG0Qc3Oc1c7hcvEFqb10_t0mLVro/PIN%20MAPPING%20ARDUINO%20MEGA.jpg?rdrts=67830957">tables</a> listing the internal registers of the Mega and the Arduino pin mappings. I use these bitwise commands instead of the standard Arduino digitalRead() in order to speed up the process of reading pin values. The above code isn't too useful, because the data being read in one bit at a time is being written to the same place in memory before the previous bit is stored anywhere. But before I go into how to manage data as it arrives, we can look at how to speed up the existing code that reads in data. You may think, how can you possibly speed up a single line of code that probably only takes a handful of clock cycles? Easy:<br />
<br />
<a href="#code16" onclick="toggle_visibility('code16');">Show/Hide Code</a>
<br />
<div id="code16" style="display: none;">
<a href="https://www.blogger.com/null" name="acode16"></a>
<script src="https://gist.github.com/bgreer/b06eafc67a9137d53830.js"></script></div>
<br />
By <i>removing</i> some of the code, we've sped up data input by a factor of 8. What exactly did I do? I removed the mask that blocked out the other 7 pins available on PORTA, allowing a mapping of 8 digital I/O pins to 8 bits of internal memory. Now the code will read in 8 bits simultaneously every time that line of code is executed. Not a bad improvement. How does this compare to the serial USB communication from before? The average data rate I could get before was 55 kB/s, or roughly 284 clock cycles of the Mega for each byte. The act of grabbing a whole byte of data using this new method takes only a single clock cycle, leading to an impressive 15 MB/s. This is, of course, completely unrealistic. The Mega doesn't only have to read the value of the port, but also store the value somewhere in memory. If I want to do something useful with the incoming data, the Mega also has to make sure each new byte of data gets stored in a unique location. Then there is the issue of synchronization with whatever is supplying the incoming data. You wouldn't want to miss an incoming byte of data, or even read the same byte twice before it is updated. Solving each of these issues creates overhead that will drastically slow down the data rate. The hope is that even with appropriate data management and synchronization, this method is still faster than using serial communication.<br />
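<br />
Reconstructing the two hidden snippets side by side (PORTA's pins are set as inputs beforehand with DDRA = 0x00):<br />
<pre>
#include <avr/io.h>
#include <stdint.h>

// My reconstruction, not the exact gist contents.
uint8_t read_one_bit(void)   { return PINA & 0x01; }  // single pin, masked
uint8_t read_full_byte(void) { return PINA; }         // all 8 pins at once
</pre>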
<br />
To manage where each new byte of data goes in the Mega, I used a 2048-byte buffer that could store the entire package sent for each update for the LED strip. As each byte is read in, it is placed in the buffer, and a counter is incremented to keep track of where in the buffer the next byte should go. To handle synchronization, I added two more data pins to the existing 8 pins of this parallel protocol. One pin is the Mega-Ready pin (RDY), which signals to the data source that it is prepared to accept data. The other new pin is the Data-Clock pin (CLK), which the data source uses to tell the Mega that a new byte has been made available and that it should read it in. I used an interrupt on the Mega triggered by the CLK signal to read in a byte. The Mega code to read in data and process it once there is enough looks like this:<br />
<br />
<a href="#code17" onclick="toggle_visibility('code17');">Show/Hide Code</a>
<br />
<div id="code17" style="display: none;">
<a href="https://www.blogger.com/null" name="acode17"></a>
<script src="https://gist.github.com/bgreer/79b4762251c6674f4059.js"></script></div>
<br />
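In rough form, the receive path looks like this; it's my reconstruction, and the hidden snippet above has the real details:<br />
<pre>
#include <avr/io.h>
#include <stdint.h>

#define PACKET_SIZE 2048

volatile uint8_t buffer[PACKET_SIZE];
volatile uint16_t count = 0;

// Attached to the CLK pin's interrupt (e.g. via attachInterrupt): on
// each pulse, grab the byte sitting on the 8 parallel pins.
void clk_interrupt(void) {
    buffer[count] = PINA;        // whole-port read: one byte per pulse
    count++;
    if (count == PACKET_SIZE) {  // full frame received; the main loop
        count = 0;               // parses it, updates the strip, and
    }                            // raises RDY for the next frame
}
</pre>
<br />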
I found that the process of jumping to the interrupt routine and running the code within took around 6 microseconds. Assuming the data source can instantaneously provide a new byte once the Mega has finished reading the previous one, this allows for an overall data rate of 162 kB/s. Not nearly as good as the overly-ideal 15 MB/s, but much better (3x) than the original serial 55 kB/s.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-3Wg-ED-mvi0/UycqhK8mVwI/AAAAAAAABWk/Gs9cy6ei9rw/s1600/20140317_094814.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-3Wg-ED-mvi0/UycqhK8mVwI/AAAAAAAABWk/Gs9cy6ei9rw/s1600/20140317_094814.jpg" height="280" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Teensy on left, Mega on right. 8 data lines plus RDY and CLK.</span></div>
<br />
The data source for the Mega is the Teensy controller. Since it can handle fast communication with the laptop and has a faster clock speed than the Mega, it handles reading in USB data and splitting each byte up into one bit per pin of the Mega.<br />
<br />
<a href="#code18" onclick="toggle_visibility('code18');">Show/Hide Code</a>
<br />
<div id="code18" style="display: none;">
<a href="https://www.blogger.com/null" name="acode18"></a>
<script src="https://gist.github.com/bgreer/793d08b0f67063f38c41.js"></script></div>
<br />
Going back to the timing diagrams, we can look at how the laptop, Teensy, and Mega work together to pass data along to the LED strip in an optimal way:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-nz9noJOEf4Q/UyYfl4NofKI/AAAAAAAABWU/n3dChTLGPcQ/s1600/timing04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-nz9noJOEf4Q/UyYfl4NofKI/AAAAAAAABWU/n3dChTLGPcQ/s1600/timing04.png" /></a></div>
<br />
Starting with the top bar, the laptop spends its time computing what color each LED should be (<b>black</b>), sending the 2048-byte packet over USB to the Teensy (<span style="color: #351c75;"><b>purple</b></span>), and sitting around waiting for everyone else to catch up (<b><span style="color: #999999;">white</span></b>). Next, the Teensy waits for data from the laptop (<b><span style="color: #999999;">white</span></b>), receives and stores the data (<b><span style="color: #3d85c6;">blue</span></b>), then starts arranging the data and sending it on to the Mega using the parallel protocol (<b><span style="color: lime;">green</span></b>). The Mega receives the parallel data from the Teensy (<span style="color: red;"><b>red</b></span>), parses it into LED values (<b><span style="color: #f1c232;">yellow</span></b>), then updates the LED strip (<span style="color: orange;"><b>orange</b></span>). The most important part of this figure is that by using the parallel protocol, the overall time taken to update the LED strip is less than the 33 ms vertical lines. This means the entire system can finally run at faster than the initial goal of 30 Hz.<br />
<br />
It's interesting to see the relative time it takes to do various tasks on different computing platforms. The laptop can perform all sorts of complicated mathematical operations to determine what pattern will appear on the sphere in less time than it takes the Mega to just pass on the LED values to the strip. It's also neat to see how each platform independently handles its own share of the work, then joins with another platform to ensure a steady flow of data.<br />
<br />
And with that, I had a reliable way of updating every LED on the spiral-sphere at right above 30 Hz. While it may not have been the simplest solution out there, I ended up very fond of this method. It was an excellent exercise in simple data transfer protocols and the limitations of different hardware platforms. For the rest of this post, I will move past the Mega+Teensy hardware and focus only on what the laptop has to do to produce the desired LED colors at each timestep.<br />
<br />
<b>Part 4 - Simple Effects</b><br />
<br />
With the communication protocol worked out between each computing platform, my laptop had full control over the color of every LED and could update every value at 30 Hz. All it had to do was produce a properly-formatted 2048-byte packet containing the color values for each LED and send it off to the Teensy once every 33 ms. The next step in this project was to create a series of 'effects' that could be executed on the laptop and would determine what color values to send along at every update. These effects would later be the building blocks of the final visual product, so I needed a fairly general set that could be combined later.<br />
<br />
I knew that while most effects would need a common set of parameters to determine their appearance (color, fade), they would often need their own specialized parameters based on the effect (pulsing rate, image location, etc). Instead of coding each effect to have a unique interface, I created a 'handle' struct in the code:<br />
<br />
<a href="#code19" onclick="toggle_visibility('code19');">Show/Hide Code</a><br />
<div id="code19" style="display: none;">
<a href="https://www.blogger.com/null" name="acode19"></a>
<script src="https://gist.github.com/bgreer/b0bb3e0f5afff30151d9.js"></script></div>
<br />
The handle contained all of the parameters that any effect could need. It was then up to the individual effect to use the different values within a handle appropriately. The same handle could be applied to multiple effects, although I rarely did this. Simpler effects would use the <i>fade</i> and <i>color</i> variables, while more complicated ones would use the <i>attr</i> and <i>file</i> arrays for other effect attributes. The 'effect' struct contains miscellaneous variables related to a single effect (start time, which handle to use, unique pixel buffer, etc):<br />
<br />
<a href="#code20" onclick="toggle_visibility('code20');">Show/Hide Code</a><br />
<div id="code20" style="display: none;">
<a href="https://www.blogger.com/null" name="acode20"></a>
<script src="https://gist.github.com/bgreer/2d7865e1eebb098a326b.js"></script></div>
<br />
When an effect is run, the appropriate function is called at every time step and passed an effect struct, a handle struct, and the current time so that the effect can know how far along it is. With this fairly straightforward interface worked out, I started creating effects.<br />
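<br />
In rough form, the interface looks like this; the names and array sizes are my reconstruction, and the exact definitions live in the github repository:<br />
<pre>
#include <stdint.h>

#define NUMPIXELS 233

typedef struct {
    float fade;                    // overall brightness multiplier (0..1 assumed)
    uint8_t color1[3];             // primary color
    uint8_t color2[3];             // secondary color
    float attr[16];                // effect-specific attributes
    char file[256];                // image path, for projection effects
} handle;

typedef struct {
    double start_time;             // when the effect began
    int handle_index;              // which handle this effect uses
    uint8_t buffer[NUMPIXELS * 3]; // the effect's own pixel buffer
} effect;

// every effect is a function of this shape, called once per time step
typedef void (*effect_func)(effect *e, handle *h, double time);
</pre>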
<br />
<u>Effect: Solid</u><br />
<a href="#code21" onclick="toggle_visibility('code21');">Show/Hide Code</a><br />
<div id="code21" style="display: none;">
<a href="https://www.blogger.com/null" name="acode21"></a>
<script src="https://gist.github.com/bgreer/4ba0ab22a63eb8571814.js"></script></div>
<br />
Set every LED to the same value based on <i>color1</i> and <i>fade</i>.<br />
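<br />
A minimal version, using the structs sketched above and assuming <i>fade</i> is a 0-to-1 brightness multiplier:<br />
<pre>
void effect_solid(effect *e, handle *h, double time) {
    int i;
    for (i = 0; i < NUMPIXELS; i++) {
        e->buffer[i * 3 + 0] = (uint8_t)(h->color1[0] * h->fade);
        e->buffer[i * 3 + 1] = (uint8_t)(h->color1[1] * h->fade);
        e->buffer[i * 3 + 2] = (uint8_t)(h->color1[2] * h->fade);
    }
}
</pre>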
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-sDEgDxHt-Ts/UyOXdQc-zaI/AAAAAAAABVI/H1vEoMM5uSk/s1600/IMG_1917.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-sDEgDxHt-Ts/UyOXdQc-zaI/AAAAAAAABVI/H1vEoMM5uSk/s1600/IMG_1917.jpg" height="359" width="540" /></a></div>
<br />
<u>Effect: Image</u><br />
<a href="#code22" onclick="toggle_visibility('code22');">Show/Hide Code</a><br />
<div id="code22" style="display: none;">
<a href="https://www.blogger.com/null" name="acode22"></a>
<script src="https://gist.github.com/bgreer/bf039dbf986b540ac3fc.js"></script></div>
<br />
Use <a href="http://gperco.blogspot.com/2014/02/led-projection-mapping-with-math.html">projection mapping</a> to display an image from <i>file</i>. The projection method (planar, cylindrical, spherical) and the projection direction are specified in <i>attr</i>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-RXKmJn2-ZQM/UyOXdSevLtI/AAAAAAAABVM/QAzBbl8ETEM/s1600/IMG_1945.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-RXKmJn2-ZQM/UyOXdSevLtI/AAAAAAAABVM/QAzBbl8ETEM/s1600/IMG_1945.jpg" height="359" width="540" /></a></div>
<br />
<u>Effect: Pulse</u><br />
<a href="#code23" onclick="toggle_visibility('code23');">Show/Hide Code</a><br />
<div id="code23" style="display: none;">
<a href="https://www.blogger.com/null" name="acode23"></a>
<script src="https://gist.github.com/bgreer/d35c7474a2c6da690d2a.js"></script></div>
<br />
Smoothly pulse the value of every LED between <i>color1</i> and <i>color2</i> at a rate determined by <i>attr</i>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-T7WWffiAhMs/UyOXgHaFh1I/AAAAAAAABVY/oQh4rHeHaKY/s1600/gif01.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-T7WWffiAhMs/UyOXgHaFh1I/AAAAAAAABVY/oQh4rHeHaKY/s1600/gif01.gif" /></a></div>
<br />
<u>Effect: Circle</u><br />
<a href="#code24" onclick="toggle_visibility('code24');">Show/Hide Code</a><br />
<div id="code24" style="display: none;">
<a href="https://www.blogger.com/null" name="acode24"></a>
<script src="https://gist.github.com/bgreer/6d15c4a8f94c4cf2d947.js"></script></div>
<br />
Set LEDs within a certain distance of a specified location on the sphere to <i>color1</i>. The location and size of the circle, as well as a tapering parameter, are specified in <i>attr</i>. This effect relies on the 3D location of each LED, which is loaded separately from the effect itself.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-xXh_gNMqwCA/UyOXkv4pSSI/AAAAAAAABVg/MtxADu3-qR8/s1600/Sequence+01_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-xXh_gNMqwCA/UyOXkv4pSSI/AAAAAAAABVg/MtxADu3-qR8/s1600/Sequence+01_2.gif" /></a></div>
<br />
<u>Effect: Flicker</u><br />
<a href="#code25" onclick="toggle_visibility('code25');">Show/Hide Code</a><br />
<div id="code25" style="display: none;">
<a href="https://www.blogger.com/null" name="acode25"></a>
<script src="https://gist.github.com/bgreer/f36e54608a346cb35651.js"></script></div>
<br />
Every time step, this effect tells some number of LEDs to begin lighting up to either <i>color1</i> or <i>color2</i>. The number of LEDs to light up per second, the fade-in time, and the fade-out time are all specified in <i>attr</i>. I've used a <a href="http://en.wikipedia.org/wiki/Poisson_distribution">Poisson distribution</a> to decide how many LEDs to light up after every time step.<br />
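<br />
For the curious, the Poisson draw can be done each time step with Knuth's algorithm. A minimal sketch, assuming the rate is expressed as LEDs per second times the step length:<br />
<br />
<pre>
#include &lt;math.h&gt;
#include &lt;stdlib.h&gt;

/* Knuth's algorithm: draw how many LEDs to trigger this time step,
   with an expected count of leds_per_second * dt. */
int poisson_count(double leds_per_second, double dt)
{
    double L = exp(-leds_per_second * dt);
    double p = 1.0;
    int k = 0;
    do {
        k++;
        p *= rand() / (double)RAND_MAX;
    } while (p > L);
    return k - 1;
}
</pre>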
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-_iXoHW0RjTA/UyOXrzB_GVI/AAAAAAAABVw/ffkIzdJoW0Y/s1600/Sequence+01_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-_iXoHW0RjTA/UyOXrzB_GVI/AAAAAAAABVw/ffkIzdJoW0Y/s1600/Sequence+01_2.gif" /></a></div>
<br />
<u>Effect: Packet</u><br />
<a href="#code26" onclick="toggle_visibility('code26');">Show/Hide Code</a><br />
<div id="code26" style="display: none;">
<a href="https://www.blogger.com/null" name="acode26"></a>
<script src="https://gist.github.com/bgreer/2fabcf2b624586191f95.js"></script></div>
<br />
Send a packet of light up or down the LED strip, ignoring the spherical geometry. The values in <i>attr</i> give the packet velocity, origin, and width.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-l4Us1PMYfBM/UyOXwEqLjuI/AAAAAAAABV4/gR2AcabnPaU/s1600/Sequence+01.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-l4Us1PMYfBM/UyOXwEqLjuI/AAAAAAAABV4/gR2AcabnPaU/s1600/Sequence+01.gif" /></a></div>
<br />
<u>Effect: Ring</u><br />
<a href="#code27" onclick="toggle_visibility('code27');">Show/Hide Code</a><br />
<div id="code27" style="display: none;">
<a href="https://www.blogger.com/null" name="acode27"></a>
<script src="https://gist.github.com/bgreer/eabad5ba88ceea459850.js"></script></div>
<br />
Given a start and end LED, this effect lights up the LEDs between them with a travelling wave of color. The start LED, end LED, wave speed, and wavelength are specified in <i>attr</i>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-KtHU20UgoVY/UyOX0MaPEWI/AAAAAAAABWA/8QBcAt4ESxQ/s1600/Sequence+01_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-KtHU20UgoVY/UyOX0MaPEWI/AAAAAAAABWA/8QBcAt4ESxQ/s1600/Sequence+01_2.gif" /></a></div>
<span id="goog_1341565839"></span><span id="goog_1341565840"></span><br />
I wrote three more effects that were mostly variations on the ones above. There is a Marching effect that is similar to Flicker, but synchronizes the LEDs into two groups instead of letting each be random. The Video effect uses the same projection mapping as Image, but projects a frame of a video based on the elapsed time of the effect. The Single-Line Packet is a slight variation on Packet, where the packet of light is constrained to move along a single loop of the spiral-sphere. These effects were all introduced with the specific intention of using them in the final video production.<br />
<br />
As mentioned at the beginning of this section, each effect that needs to be running is called upon at each time step. If multiple effects are called, their unique pixel buffers are added together (additive blending) to create the final frame that is sent out to the electronics. For the Image and Video effects, I had the code pre-load the appropriate files before starting into the main loop. This reduced the amount of time spent calculating these effects during playback.<br />
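<br />
The additive blending itself amounts to a saturating per-channel sum over each effect's pixel buffer. A minimal sketch, assuming 3 bytes per LED:<br />
<br />
<pre>
/* Sum the pixel buffers of all active effects into the final frame,
   clamping each channel at 255 (nbytes = 3 * number of LEDs). */
void blend_frames(unsigned char **buffers, int nbuffers,
        unsigned char *frame, int nbytes)
{
    int i, j, sum;
    for (i = 0; i < nbytes; i++)
    {
        sum = 0;
        for (j = 0; j < nbuffers; j++)
            sum += buffers[j][i];
        frame[i] = (sum > 255) ? 255 : (unsigned char)sum;
    }
}
</pre>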
<br />
<b>Part 5 - Sequenced Effects</b><br />
<br />
With a complete set of effects written up, the next step of this project was to sort out how to create a sequence of effects that could be run as a unit. I not only needed multiple effects to run simultaneously, but also to have them start and stop independently of each other. I also needed to be able to change the attributes within each handle as the sequence progressed. The simplest example of this would be a fade, where the <i>fade</i> attribute in a handle varies smoothly from 0 to 1 over the course of a few seconds, causing an effect to fade into view.<br />
<br />
Ideally, I wanted <a href="http://en.wikipedia.org/wiki/Key_frame">key-framing</a>. Every attribute of every handle would be able to change at any moment, and the changes would be defined by a set of key frames. Effects would be launched at specified times and produce visuals based on these key-framed attributes. Since I had two days to set up the framework for this, I went with text-based key framing instead of a GUI. This required creating a 'language' that my code would interpret and turn into key frames. To build the syntax for this language, I wrote out some pseudo-code for how I wanted it to work:<br />
<br />
<span style="color: #073763; font-family: Courier New, Courier, monospace;"># allow comments starting with a "#"</span><br />
<span style="font-family: Courier New, Courier, monospace;">at 0:00, set fade of handle 1 to 0.0</span><br />
<span style="font-family: Courier New, Courier, monospace;">at 0:00, set color1 of handle 1 to (255, 0, 0)</span><br />
<span style="font-family: Courier New, Courier, monospace;">at 0:00, launch effect "solid" using handle 1</span><br />
<span style="font-family: Courier New, Courier, monospace;">at 0:10, set fade of handle 1 to 1.0</span><br />
<br />
The above example would create a handle with <i>color1</i> set to red and <i>fade</i> set to 0. The effect Solid would be linked to this handle and launched at the beginning, but since the <i>fade</i> would be set to 0, it would not appear. Over the first ten seconds of the sequence, the <i>fade</i> would gradually increase until it equaled 1.0 (full on) at the end.<br />
<br />
This example shows the two main commands I needed in my language: adding a key frame to an attribute of a handle, and launching an effect linked to a handle. Shortening the above example to the final syntax I chose:<br />
<br />
<span style="color: #073763; font-family: Courier New, Courier, monospace;"># comments still work</span><br />
<span style="font-family: Courier New, Courier, monospace;">0:00 h1 fade 0.0</span><br />
<span style="font-family: Courier New, Courier, monospace;">0:00 h1 color1 255 0 0</span><br />
<span style="font-family: Courier New, Courier, monospace;">0:00 effect solid 1</span><br />
<span style="font-family: Courier New, Courier, monospace;">0:10 h1 fade 1.0</span><br />
<br />
Writing the code to parse these commands was fairly straightforward. Lines in a sequence file would start either with a #, indicating a comment, or a time stamp. In the latter case, the next bit of text after the space would either be "effect", indicating the launching of an effect at that time stamp, or "h*", indicating the addition of a key frame for an attribute of a handle. Since string parsing is an ugly matter in C, I'll save you the burden of looking at the code I wrote for this.<br />
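<br />
That said, here is a rough sketch of what handling one line can look like with sscanf. This is illustrative only; it prints what it parses instead of calling into the sequencer, and skips the multi-valued color attributes:<br />
<br />
<pre>
#include &lt;stdio.h&gt;

/* Sketch: parse one sequence line, e.g.
     "0:10 h1 fade 1.0"      (key frame on handle 1)
     "0:00 effect solid 1"   (launch effect "solid" with handle 1) */
void parse_line(const char *line)
{
    int min, sec, hnum, index;
    char word[64], attr[64], name[64];
    double value;

    if (line[0] == '#') /* comment line */
        return;
    if (sscanf(line, "%d:%d %63s", &min, &sec, word) != 3)
        return;
    double t = 60.0 * min + sec;

    if (sscanf(word, "h%d", &hnum) == 1 &&
        sscanf(line, "%*d:%*d %*s %63s %lf", attr, &value) == 2)
        printf("t=%g: key frame on handle %d, %s = %g\n", t, hnum, attr, value);
    else if (sscanf(line, "%*d:%*d effect %63s %d", name, &index) == 2)
        printf("t=%g: launch effect '%s' with handle %d\n", t, name, index);
}
</pre>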
<br />
An array of empty handles and effects would be created at the start of the main program and filled with information as the sequence file was parsed. To keep track of key frames, the handle struct was modified to include arrays of keys:<br />
<br />
<a href="#code28" onclick="toggle_visibility('code28');">Show/Hide Code</a><br />
<div id="code28" style="display: none;">
<a href="https://www.blogger.com/null" name="acode28"></a>
<script src="https://gist.github.com/bgreer/6c66f679ecb33eed32cd.js"></script></div>
<br />
At each time step during the execution of the sequence, every attribute of every handle would be updated to reflect the appropriate value based on the nearest key frames:<br />
<br />
<a href="#code29" onclick="toggle_visibility('code29');">Show/Hide Code</a><br />
<div id="code29" style="display: none;">
<a href="https://www.blogger.com/null" name="acode29"></a>
<script src="https://gist.github.com/bgreer/1b7833e655e6a30a0943.js"></script></div>
<br />
While this might not be the most efficient method of key framing, I only ever dealt with around 30 handles and key frames. If I had needed hundreds or thousands, I would have spent some time optimizing this algorithm. I wrote up a quick testing sequence and recorded the result:<br />
<br />
<a href="#code30" onclick="toggle_visibility('code30');">Show/Hide Code</a><br />
<div id="code30" style="display: none;">
<a href="https://www.blogger.com/null" name="acode30"></a>
<script src="https://gist.github.com/bgreer/b1293c5739e19170eb41.js"></script></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='540' height='340' src='https://www.youtube.com/embed/l-uuV6NTBto?feature=player_embedded' frameborder='0'></iframe></div>
<br />
Finally, with all of the effects playing nicely inside the sequencer, I began writing up the sequences for each movement of <i>The Planets</i>. I started by translating storyboards made by my <a href="http://julierooney.com/">collaborator</a> on this project, then filled in gaps and introduced fades between different effects. After about 20 hours of translating pictures to text, I had the sequences written down in my made-up language:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-PFPVASnIhvY/UyjomSNgxRI/AAAAAAAABXY/cAjVB3-VkcU/s1600/allseq.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-PFPVASnIhvY/UyjomSNgxRI/AAAAAAAABXY/cAjVB3-VkcU/s1600/allseq.png" height="675" width="540" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">In retrospect, maybe a GUI would have been worth an extra few days of work..</span></div>
<br />
For filming the sphere in action, I would tell the main code to execute one of the sequences. It would pre-load any images and videos, then wait for my command to begin playback. With fabric draped around the sphere and cameras rolling, I would tell the code to play through the sequence. The visuals were completely determined by the written sequences, so we could do multiple takes and get identical results. I could start the sequences at any time stamp and fast-forward through the playback to get a quick preview before we committed to anything.<br />
<br />
With around 55 minutes of content to produce, filming took quite a while. Writing sequences for the first time was a burden, but going through and editing them during the filming process was a nightmare. Even answering questions like "why is the sphere blue at 4:31" was difficult due to the nature of the sequencing language. The text-based key framing was a simple solution to a problem I had to solve in a short amount of time, but spending an extra few days implementing a graphical interface would have saved me quite a few headaches.<br />
<br />
When the final full-length video is posted online, I will link to it here. For now, I can give you two still shots.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-iZKiKkETp4A/Uyjh2jIMkeI/AAAAAAAABXA/rgRkiw3zMxg/s1600/Screenshot+from+2014-03-18+18:11:30.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-iZKiKkETp4A/Uyjh2jIMkeI/AAAAAAAABXA/rgRkiw3zMxg/s1600/Screenshot+from+2014-03-18+18:11:30.png" height="301" width="540" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Venus, Bringer of Peace </span></div>
<br />
In all, I'm happy with the way this project turned out. Some of the technical bits may have been a little rough around the edges, but I've learned that's what happens when you have only a few weekends to get things working and an ideal budget of $0.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-udYT7_16sBA/Uyjh5HivE2I/AAAAAAAABXI/vd1PqC7LDWA/s1600/Screenshot+from+2014-03-18+18:13:12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-udYT7_16sBA/Uyjh5HivE2I/AAAAAAAABXI/vd1PqC7LDWA/s1600/Screenshot+from+2014-03-18+18:13:12.png" height="302" width="540" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Uranus, the Magician</span></div>
<br />
While the Planets project is over, the LED sphere and accompanying electronics are still fully-functional. I'll have to see if I can incorporate them in some other personal project down the road. It might even make for a nice hanging lamp in my apartment.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-49324621538175021692014-03-10T16:30:00.000-06:002014-03-21T10:21:21.572-06:00LED Planet Hardware<script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>
Well, I <a href="http://gperco.blogspot.com/2013/12/robot-arm-reaching-for-stars.html">made a joke</a> about how my blog posts are ending up just being about blinking a couple of color LEDs, and here I am talking about them again. I suppose that's what I get for thinking I can avoid falling into the pit of trendy <a href="http://en.wikipedia.org/wiki/Maker_culture">maker</a> projects. This is a continuation of my <a href="http://gperco.blogspot.com/2014/02/led-projection-mapping-with-math.html">exploration</a> of <a href="http://gperco.blogspot.com/2014/03/led-projection-mapping-with-webcams-and.html">projection possibilities</a> with LED strips, but also the start of a specific project to utilize my new toys.<br />
<br />
A few weeks ago I was approached by my friend <a href="http://www.julierooney.com/">Julie</a> to collaborate on a new art project. The goal was to create a video that could accompany a live performance of <a href="http://en.wikipedia.org/wiki/The_Planets">Gustav Holst's <i>The Planets</i></a>. <i>The Planets</i> is a 55-minute, seven-movement orchestral suite where each movement is named after a planet of the Solar System. With Julie's expert artistic skills and my graduate education in astrophysics, we figured we could have a lot of fun with this challenge. As I write this post, the video is being prepared for its debut in one week, so I'll save my lengthy discussion about the video itself until later. In this post and my next one, I'll cover the hardware and software (respectively) that were built to produce the final product.<br />
<br />
Without dragging on about the artistic implications, I should explain briefly what our goal was. We wanted to film a spiral-sphere of LEDs hanging in mid-air, surrounded by fabric. Both the fabrics and the images displayed on the sphere would change based on the movement being played.<br />
<br />
<br />
The first step was to design the sphere. We wanted a spiral sphere, so I set off figuring out exactly what shape we wanted. I used parametric equations to describe the curve in 3D space:<br />
<br />
\[ x = \cos(2 \pi N t) \sqrt{r^2 - z^2} \\ y = -\sin(2 \pi N t) \sqrt{r^2 - z^2} \\ z = r (2 t - 1) \]<br />
<br />
Here, $t$ goes from 0 to 1, $N$ is the number of turns on the spiral from top to bottom, and $r$ is the sphere radius.<br />
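<br />
For reference, these equations transcribe directly into code. A minimal sketch in C:<br />
<br />
<pre>
#include &lt;math.h&gt;

/* One point on the spiral-sphere for t in [0,1], N turns, radius r;
   a direct transcription of the parametric equations above. */
void spiral_point(double t, double N, double r,
        double *x, double *y, double *z)
{
    *z = r * (2.0 * t - 1.0);
    double rho = sqrt(r * r - (*z) * (*z)); /* circle radius at this height */
    *x =  rho * cos(2.0 * M_PI * N * t);
    *y = -rho * sin(2.0 * M_PI * N * t);
}
</pre>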
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-bYngY4IAlf0/Uxykv7EmZpI/AAAAAAAABSI/eKhW_UPtSWI/s1600/sphere.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-bYngY4IAlf0/Uxykv7EmZpI/AAAAAAAABSI/eKhW_UPtSWI/s1600/sphere.png" height="269" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Spiral ends at bottom when gnuplot gets tired of plotting.</span></div>
<br />
I knew exactly how many LEDs I had on one strip (about 4 meters' worth), and we wanted around 8 turns of the strip as it went down the sphere. This pinned the size of the sphere to a diameter of 19.7cm (7.75in). Our first idea was to bend stainless steel wire into the right shape and then glue the LED strip on the outside. Since I had no method of bending wire very accurately, and the precision of the spiral-sphere shape was important to the overall visual effect, we figured we needed a method less prone to human error. Luckily, I had a small 3D printer sitting on my desk.<br />
<br />
Since they don't exactly offer pre-designed spiral-spheres of 19.7cm diameter on <a href="http://www.thingiverse.com/">Thingiverse</a>, I had to figure out how to design the 3D model myself. I saw online that <a href="http://www.openscad.org/">OpenSCAD</a> was a good choice for people comfortable with programming, so I went for that. The learning curve is not bad at all, at least for those with good spatial reasoning. Since I had already figured out the equations that described our spiral-sphere, I could get an incredibly accurate print of what we wanted. I've posted the .scad and .stl files for the entire sphere <a href="http://www.thingiverse.com/thing:277534">here</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-kTflHVUhvA4/UxvNysf3BTI/AAAAAAAABRI/JAam7PYg9IE/s1600/20140203_212127.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-kTflHVUhvA4/UxvNysf3BTI/AAAAAAAABRI/JAam7PYg9IE/s1600/20140203_212127.jpg" height="400" width="347" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">A spiral-sphere for ants.</span></div>
<br />
The build volume of the printer is small, about 4 inches on a side. There was no way of printing the entire sphere in one go, but I could print it piece by piece and glue it together in the end. I started by splitting the sphere into octants that could be joined along internal 'ribs'. Due to the curvature of each octant, I needed to allow for a lot of support material in the printing process.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-wTWbfSdnsGE/UxvOKzntc1I/AAAAAAAABRQ/WS3EFd4Y2HA/s1600/20140205_092632.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-wTWbfSdnsGE/UxvOKzntc1I/AAAAAAAABRQ/WS3EFd4Y2HA/s1600/20140205_092632.jpg" height="223" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">3D printers are mostly good for scribbling in air.</span></div>
<br />
The octant method did not go well. While my printer is a sturdy little fellow, it was not up to the task of printing for 16 hours on end without fault. This method of splitting up the sphere would not work. I decided a better way was to drop the support ribs and print small sections of the spiral one at a time. That way each print was only about an hour long, and a printing failure would no longer cost an entire eighth of the sphere. The spiral ended up in 32 pieces:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ETVpDFclPEQ/UxvUKpb5qEI/AAAAAAAABRg/w-afivGpS8M/s1600/20140206_163043.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ETVpDFclPEQ/UxvUKpb5qEI/AAAAAAAABRg/w-afivGpS8M/s1600/20140206_163043.jpg" height="400" width="317" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Suspiciously fewer than 32 pieces.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-Uu5SOy9qiaQ/UxvUcfxW3EI/AAAAAAAABRo/OvFMHw8kFS8/s1600/20140212_190949.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-Uu5SOy9qiaQ/UxvUcfxW3EI/AAAAAAAABRo/OvFMHw8kFS8/s1600/20140212_190949.jpg" height="400" width="348" /></a></div>
<br />
I then used hot glue to stick the LED strip on the outside of the sphere.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-vqlXriI1oRU/UxvU3eBdbGI/AAAAAAAABRw/8KBLMZ9tmJ4/s1600/20140212_193936.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-vqlXriI1oRU/UxvU3eBdbGI/AAAAAAAABRw/8KBLMZ9tmJ4/s1600/20140212_193936.jpg" height="400" width="327" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Slight deformation.</span></div>
<br />
Without the internal supports I had originally designed for the spiral-sphere, it would deform under its own weight. I decided to print out the ribs I had removed and glue them to the inside of the sphere to help keep the spiral in line.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-KAgXiJAu-Wo/UxvVKyHSGuI/AAAAAAAABR4/Ob476rOPwg4/s1600/20140216_143402.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-KAgXiJAu-Wo/UxvVKyHSGuI/AAAAAAAABR4/Ob476rOPwg4/s1600/20140216_143402.jpg" height="400" width="323" /></a></div>
<br />
With the LED spiral-sphere built, the next step was to power it. The <a href="https://www.google.com/search?q=ws2812+datasheet">datasheet</a> for the LEDs is readily available, so I could get the exact voltage and current requirements. The whole strip runs off 5V, and each color of each RGB LED draws about 20mA of current when on. With 233 LEDs, keeping the entire strip at full brightness requires about 14A (70W). This is a fair amount of power, certainly enough to cause some damage. It's far too much current to use a linear voltage regulator to drop any appreciable amount of voltage, and too low a voltage for most wall adapters. Luckily, I happened to have an old computer power supply sitting unused in the corner of my apartment.<br />
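<br />
As a sanity check on those numbers:<br />
<br />
\[ 233 \text{ LEDs} \times 3 \text{ channels} \times 20\,\text{mA} \approx 14\,\text{A}, \qquad 5\,\text{V} \times 14\,\text{A} = 70\,\text{W} \]<br />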
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-bwzEyb2XFsg/UxzSdEaddPI/AAAAAAAABSY/aH4BqazjtTU/s1600/S75QB_photo_03.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-bwzEyb2XFsg/UxzSdEaddPI/AAAAAAAABSY/aH4BqazjtTU/s1600/S75QB_photo_03.jpg" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Relevant connector circled.</span></div>
<br />
The power supply is capable of delivering a total of 30A on its 5V line, plenty for what I needed. There are quite a few tutorials online showing how to use a desktop power supply for hobby electronics, so I won't go into too much detail about how I got it to work. There is a standard 20ish pin plug (circled in red) on all power supplies like this one with a <a href="http://en.wikipedia.org/wiki/Power_supply_unit_(computer)#Wiring_diagrams">standard wiring scheme</a>. Black is ground, red is 5V, and the rest don't matter as much. If I had simply routed a pair of black and red wires to my strip and flipped on the power supply, nothing would have happened. Before the power supply will deliver anything on its main power lines, it has to be told to turn on by the circuit it is powering. There is a single green wire in the standard plug that, when shorted to ground, tells the power supply to turn on. I spliced apart another connector I picked up at my local hardware store and routed the wires I needed to a protoboard:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ToEzaO6Uiig/UxzTHQvU5RI/AAAAAAAABSg/PpsOQ9qBWdo/s1600/20140216_172152.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-ToEzaO6Uiig/UxzTHQvU5RI/AAAAAAAABSg/PpsOQ9qBWdo/s1600/20140216_172152.jpg" height="400" width="356" /></a></div>
<br />
When the power supply is on (but not 'running') and plugged into this board, the PS Status LED lights up. When I flip the 5V Enable switch, the green power supply wire is shorted to ground and the power supply starts running. The 5V Status LED lights up when the 5V line is active. I hooked up the LED strip power lines to the blue screw terminal and had a reliable power supply.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-s8mQxdIpeCk/UxzT7a-ZKdI/AAAAAAAABSs/K7JXYLSYCuI/s1600/20140218_194706.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-s8mQxdIpeCk/UxzT7a-ZKdI/AAAAAAAABSs/K7JXYLSYCuI/s1600/20140218_194706.jpg" height="340" width="600" /></a></div>
<br />
The strip uses a single wire for sending data in. Using 5V logic and reasonably precise timing, data for what color each LED should be is sent down the strip. I used my workhorse Arduino Mega to interpret commands sent from my computer and send the appropriate data to the LED strip. It takes about 30 microseconds to send the data for a single LED, so around 7.3 milliseconds for the entire strip, allowing for some overhead. If you just wanted to keep updating the strip with the exact same colors over and over, or modify the colors in a trivial way, you could update the strip at a wonderful 130 Hz. I would eventually need a 'frame rate' of at least 24 Hz, so just the act of telling the strip what colors to show would not be a problem. I'll go into the details about the software side of this project in a later post, but I can assure you there were enough other problems to make it interesting.<br />
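<br />
For reference, the timing arithmetic, assuming the WS2812's nominal 1.25 microsecond bit period:<br />
<br />
\[ 233 \times 24\,\text{bits} \times 1.25\,\mu\text{s} \approx 7.0\,\text{ms} \quad \Rightarrow \quad \frac{1}{7.3\,\text{ms}} \approx 137\,\text{Hz} \]<br />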
<br />
Using the projections methods I've developed over the last two blog posts, I projected my standard test image, the <a href="http://en.wikipedia.org/wiki/SMPTE_color_bars">SMPTE color bars</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/--l1U_Pzm2i0/Ux0XqmlwU9I/AAAAAAAABTE/6_rcnV5eF3w/s1600/20140226_174729.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/--l1U_Pzm2i0/Ux0XqmlwU9I/AAAAAAAABTE/6_rcnV5eF3w/s1600/20140226_174729.jpg" height="400" width="290" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Upside-down, but otherwise correct.</span></div>
<br />
With a working spiral-sphere of LEDs, the final step was to suspend it inside a wooden frame that could support the fabrics that would complete the picture. I grabbed a few 8' long 1x2 strips at Home Depot, dug out my power tools (hand tools where I yell "POWER" as I use them), and got to work. Since boxes have significantly simpler geometry than a spiral-sphere, it was a simple task.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-KbYp2gLBj0c/Ux0XVzf6P2I/AAAAAAAABS8/do-91NacFh0/s1600/20140301_195253.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-KbYp2gLBj0c/Ux0XVzf6P2I/AAAAAAAABS8/do-91NacFh0/s1600/20140301_195253.jpg" height="373" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Ignore the screws sticking out everywhere.</span></div>
<br />
And there we have it, an LED planet ready for its big film debut. My next post will cover the flow of data from computer to LEDs, and what software was needed to control the flow. After that (and after the film debut), I'll do a short writeup on the conceptual side of this planet project (artistic goals, scientific influence, historical context). For now, I'll leave you with a picture of the setup we used for filming.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-xcJ2dAxXRSs/Ux0Yi1NlnDI/AAAAAAAABTM/0vOJRzg9-xw/s1600/20140302_114855.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-xcJ2dAxXRSs/Ux0Yi1NlnDI/AAAAAAAABTM/0vOJRzg9-xw/s1600/20140302_114855.jpg" height="336" width="600" /></a></div>
<span style="font-size: x-small;">Battlestation nearly ready.</span><br />
<br />gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-46430571007044400532014-03-07T19:26:00.003-07:002014-03-17T11:09:44.256-06:00LED Projection Mapping with Webcams (and Math)<script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>
<script type="text/javascript">
<!--
function toggle_visibility(id) {
var e = document.getElementById(id);
if(e.style.display == 'block')
e.style.display = 'none';
else
e.style.display = 'block';
}
//-->
</script>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-fWarX6vTR9Y/Uv7FRihbCiI/AAAAAAAABQI/rLmrerTUdDI/s1600/20140214_183614.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-fWarX6vTR9Y/Uv7FRihbCiI/AAAAAAAABQI/rLmrerTUdDI/s1600/20140214_183614.jpg" height="147" width="200" /></a></div>
<br />
In my <a href="http://gperco.blogspot.com/2014/02/led-projection-mapping-with-math.html">last post</a>, I covered the initial steps to setting up an LED projection system that can handle arbitrary LED locations. The LEDs were all constrained to a single strip, mostly because I can't commit to dedicating the LEDs to any single project and cutting the strip up. I wrapped the strip around a cylinder and was able to project a few images and a video with reasonable success. The biggest problem was the imperfect placement of the LEDs.<br />
<br />
My projection system depends on knowing the 3D position of every LED in the display in order to properly project an image. Using simple math to describe the positions like I did before only works if the LEDs are placed very accurately. To allow for projections on to an LED strip that is not placed on any sort of simple mathematical curve, I've worked out a method of automatically locating every LED in 3D space using some webcams and some more math.<br />
<br />
<b>Single-Camera Mapping</b><br />
<br />
To start, let's look at how a webcam can be used to recognize an LED.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-7U-PO9sGm6Y/UvLf9JNGOLI/AAAAAAAABO0/xBF9KQ1F6cE/s1600/vlcsnap-2014-02-05-17h34m21s18.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-7U-PO9sGm6Y/UvLf9JNGOLI/AAAAAAAABO0/xBF9KQ1F6cE/s1600/vlcsnap-2014-02-05-17h34m21s18.png" height="300" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">An appropriately messy pile of LEDs.</span></div>
<br />
In an image recorded from the camera, the strip appears as a mostly white strand with very small non-white regions denoting the location of an LED. Not a great situation for automatic feature detection. What happens when we turn on an LED? This is done by sending the appropriate commands to my Arduino LED strip controller to turn on exactly one LED.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-ThQwDnVawAc/UvLgBBWnmmI/AAAAAAAABO8/Y9_ujRzfQy0/s1600/vlcsnap-2014-02-05-17h36m18s160.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-ThQwDnVawAc/UvLgBBWnmmI/AAAAAAAABO8/Y9_ujRzfQy0/s1600/vlcsnap-2014-02-05-17h36m18s160.png" height="300" width="400" /></a></div>
<br />
<br />
Now we have a bright spot showing us the location of a single LED. To automatically locate this LED, we could write an algorithm to search through the camera image to find the brightest spot and return its position. Unfortunately, this kind of method is easily confused by anything else in the camera image that is bright. One fix is to subtract the image taken with the LED off, which eliminates anything that hasn't changed between the two frames.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-3RBc67tpjNw/UvLgE4P-jkI/AAAAAAAABPE/PpQXoDgwtCI/s1600/Screenshot+from+2014-02-05+17:39:57.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-3RBc67tpjNw/UvLgE4P-jkI/AAAAAAAABPE/PpQXoDgwtCI/s1600/Screenshot+from+2014-02-05+17:39:57.png" height="300" width="400" /></a></div>
<br />
<br />
Now all we have left is a single bright point to search for. Using OpenCV to handle capturing and storing the webcam images, the bright spot can be located like so:<br />
<br />
<a href="#acode01" onclick="toggle_visibility('code01');">Show/Hide Code</a><br />
<div id="code01" style="display:none"><a name="acode01"></a>
<script src="https://gist.github.com/bgreer/45c46e13be9e430677e4.js"></script></div><br />
<br />
These positions are not the full 3D positions of each LED that we will eventually need, but just the projection of each LED position on to the webcam field of view. We can loop through each LED in the strip and determine this projected position:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-ttZinxLTbhk/UvLtz8c2XGI/AAAAAAAABPw/N9lnIM6f2BE/s1600/animation.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ttZinxLTbhk/UvLtz8c2XGI/AAAAAAAABPw/N9lnIM6f2BE/s1600/animation.gif" /></a></div>
<br />
You might notice that when an LED lights up a significant portion of the paper the strip is taped to, the LED-finding algorithm picks a point somewhere between the actual LED and the lit-up paper. This is because the algorithm finds the center of light in the image as opposed to the peak of light. I feel this is more appropriate here because some of the LEDs are pointed away from the camera and will only be seen in the light they shine on the paper. Instead of treating those LEDs as lost, I might as well register the lit-up paper as the source of light from the viewer's perspective and map it appropriately.<br />
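<br />
Finding the center of light is just an intensity-weighted average over the background-subtracted image. A minimal sketch, assuming an 8-bit grayscale difference image stored row-major:<br />
<br />
<pre>
/* Intensity-weighted centroid ("center of light") of a background-subtracted
   grayscale image, w pixels wide and h tall, stored row-major.
   Returns 0 on success, -1 if the difference image is empty. */
int find_center(const unsigned char *diff, int w, int h, double *cx, double *cy)
{
    double sum = 0.0, sx = 0.0, sy = 0.0;
    int x, y;
    for (y = 0; y < h; y++)
        for (x = 0; x < w; x++)
        {
            double v = diff[y * w + x];
            sum += v;
            sx += v * x;
            sy += v * y;
        }
    if (sum <= 0.0)
        return -1; /* LED not visible from this camera */
    *cx = sx / sum;
    *cy = sy / sum;
    return 0;
}
</pre>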
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-_IkydTU0ZD8/UvLgLW7ul2I/AAAAAAAABPM/ccGxzOrmOlk/s1600/pixel_locations_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-_IkydTU0ZD8/UvLgLW7ul2I/AAAAAAAABPM/ccGxzOrmOlk/s1600/pixel_locations_2.png" height="300" width="400" /></a></div>
<br />
<br />
This is enough information to perform a projection mapping that is only viewable from the perspective of the webcam.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-JpM-dTTD_ZE/UvLqSI4gDNI/AAAAAAAABPk/9UsI7sEQfsk/s1600/Screenshot+from+2014-02-05+18:45:29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-JpM-dTTD_ZE/UvLqSI4gDNI/AAAAAAAABPk/9UsI7sEQfsk/s1600/Screenshot+from+2014-02-05+18:45:29.png" height="132" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-Ii8F3EVH4qc/UvLp-c_vfVI/AAAAAAAABPc/Ljrx9R1kFcg/s1600/vlcsnap-2014-02-05-18h47m43s1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Ii8F3EVH4qc/UvLp-c_vfVI/AAAAAAAABPc/Ljrx9R1kFcg/s1600/vlcsnap-2014-02-05-18h47m43s1.png" height="300" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
This single-camera method is not able to determine the distance from the camera to any of the LEDs. All it can do is tell you the projected position of each LED on the image plane. In order to generalize this projection mapping, we need some method of supplementing the 2D position information provided so far with an estimate of the camera-to-LED distance.<br />
<br />
<b>Multi-Camera Mapping</b><br />
<br />
Consider a single camera looking at a single lit LED. The LED's position on the camera image gives us the angle, relative to the direction the camera is pointing, at which the LED is located. From the LED position (as determined in the above algorithm), we can work out a vector line that originates from the camera and hits the LED at some point:<br />
\[ \vec{f} = \vec{c} + t\hat{\vec{n}} \]<br />
Here, $\vec{c}$ is the position of the camera, $\hat{\vec{n}}$ is the normal vector originating from the camera and pointing towards the LED, and $t$ is the vector line distance parameter. The issue here is that we don't know how far along this vector the LED sits, or rather, what the appropriate value of $t$ is. But suppose we add a second camera that can see the single LED but is positioned elsewhere. Again we can define a vector line originating from this new camera that hits the LED. If we know the positions and orientations of the two cameras relative to each other, we know the equations for each line in a common coordinate system.<br />
\[ \vec{f_1} = \vec{c_1} + t\hat{\vec{n_1}} \]<br />
\[ \vec{f_2} = \vec{c_2} + s\hat{\vec{n_2}} \]<br />
Ideally, these two lines will intersect ($\vec{f_1} = \vec{f_2}$) exactly where the LED exists in 3D space, and solving for the intersection point with some high school math gives us the LED's location. In this way, we can use two cameras to determine the 3D position of each LED.<br />
<br />
Unfortunately, these two vector lines will rarely intersect. Due to imperfections in the camera positions, the measurements thereof, and the camera optics, the two vector lines will most often be <a href="http://en.wikipedia.org/wiki/Skew_lines">skew</a>. Instead of finding the point of intersection, we need to find the closest point between these two lines. To do this, we need to find the values for $s$ and $t$ that minimize the distance between the two lines. One way to solve for these values is to write out the distance between the two lines for any ($s$,$t$) and set its derivative with respect to each quantity equal to zero. This is a pain to write out, so here's an easier way of doing it.<br />
<br />
First we define the distance vector:<br />
<br />
\[ \vec{d}(s,t) = \vec{f_1} - \vec{f_2} = \vec{c_1} - \vec{c_2} + t\hat{\vec{n_1}} - s\hat{\vec{n_2}} \]<br />
<br />
We want to know the values of $s$ and $t$ that minimize the length of this vector. We also know from geometry that when this vector distance is minimized, it will be perpendicular to both original lines. We can express this by saying that there will be no component of the distance vector along the original lines:<br />
<br />
\[ \hat{\vec{n_1}} \cdot \vec{d}(s,t) = 0 \]<br />
\[ \hat{\vec{n_2}} \cdot \vec{d}(s,t) = 0 \]<br />
<br />
Expanding these out and expressing the two equations with two unknowns as a matrix problem,<br />
<br />
\[ \left( \begin{array}{cc}
\hat{\vec{n_1}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_1}} \cdot \hat{\vec{n_2}} \\
\hat{\vec{n_2}} \cdot \hat{\vec{n_1}} & -\hat{\vec{n_2}} \cdot \hat{\vec{n_2}} \end{array} \right)
\begin{pmatrix} t \\ s \end{pmatrix} =
\begin{pmatrix} -\hat{\vec{n_1}} \cdot (\vec{c_1} - \vec{c_2}) \\ -\hat{\vec{n_2}} \cdot (\vec{c_1} - \vec{c_2}) \end{pmatrix}
\]<br />
<br />
The diagonal of the 2x2 matrix can of course be simplified to 1 and -1. Solving this matrix problem for $s$ and $t$ allows us to compute where on each original vector line the distance vector hits. Since I have no reason to favor one camera over the other, I assume the LED is most likely located half-way between these two points.<br />
<br />
With the matrix inversion expanded out, the code to find the LED based on input vectors looks like this:<br />
<br />
<script src="https://gist.github.com/bgreer/abf29894e3d89437b51d.js"></script>
This of course depends on having a decent estimate of the two camera-to-LED vectors. I've decided to place my two cameras so that they are both looking at a common point in space that I decide is the origin of the coordinate system. This way, I can simply measure the location of each camera with a ruler, and know not only the vector positions but also the normal vector that is produced when looking at the middle of each camera image. When a camera locates an LED in its field of view, the normal vector is then determined relative to the one pointing to the origin. The process of finding these normal vectors is a little tedious and requires a few 3D <a href="http://en.wikipedia.org/wiki/Rotation_matrix">rotation matrices</a>. I'll leave the math and code for that process out of this post for simplicity.<br />
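<br />
For a rough idea of what that conversion involves, here is a sketch under a pinhole camera model, built from an orthonormal basis rather than explicit rotation matrices. The focal length and sign conventions here are assumptions, not values from my setup:<br />
<br />
<pre>
#include &lt;math.h&gt;

typedef struct { double x, y, z; } vec3;

static vec3 vsub(vec3 a, vec3 b) { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
static vec3 vadd(vec3 a, vec3 b) { vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
static vec3 vscale(vec3 a, double s) { vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
static vec3 vcross(vec3 a, vec3 b)
{
    vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static vec3 vnorm(vec3 a)
{
    double m = sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return vscale(a, 1.0/m);
}

/* Camera at position c, aimed at the origin; (px,py) is the LED's offset
   from the image center in pixels, f the focal length in pixels.
   Returns the camera-to-LED direction in world coordinates.
   Assumes the world z-axis is 'up' (fails for a camera looking straight down). */
vec3 pixel_to_normal(vec3 c, double px, double py, double f)
{
    vec3 origin = {0.0, 0.0, 0.0};
    vec3 up_world = {0.0, 0.0, 1.0};
    vec3 fwd = vnorm(vsub(origin, c));         /* toward the origin */
    vec3 right = vnorm(vcross(fwd, up_world));
    vec3 up = vcross(right, fwd);
    /* pinhole model: step (px/f) right and (py/f) up off the optical axis */
    vec3 d = vadd(fwd, vadd(vscale(right, px / f), vscale(up, py / f)));
    return vnorm(d);
}
</pre>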
<br />
To test out this method, I set up two identical webcams in two identical 3D printed frames pointed at a sphere of LEDs:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-_CU-mz1syDE/Uxp9XGMmfZI/AAAAAAAABQ4/fwK4QR1oMIg/s1600/20140216_143402.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-_CU-mz1syDE/Uxp9XGMmfZI/AAAAAAAABQ4/fwK4QR1oMIg/s1600/20140216_143402.jpg" height="320" width="266" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-small;">Spiral Sphere of LEDs, for another project</span></div>
<br />
Similar to before, I march through each LED turning it off and on to produce background-subtracted images that make locating the LED easy. Instead of mapping the image coordinates directly to a source image, I run the two estimates of LED location through the 3D mapping algorithm described above. The resulting point cloud looks promising:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ISZu_KDQDNY/Uxp5i0RRxTI/AAAAAAAABQs/Casjb5bSV3s/s1600/Sequence+01.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-ISZu_KDQDNY/Uxp5i0RRxTI/AAAAAAAABQs/Casjb5bSV3s/s1600/Sequence+01.gif" /></a></div>
A few pixels were poorly located by one or both cameras and ended up having strange 3D positions. The most obvious case is the single point floating outside of the sphere. A stronger criterion for whether or not a camera has successfully located an LED might eliminate these outliers. There is also a reasonable amount of distortion on what I expected to be one side of a perfect sphere. I'll be honest about that part: I think I messed up the math in my code that converts an LED location in image coordinates to a normal vector. Before I wrote up this algorithm in C, I prototyped it in an easier language to make sure it worked. Somewhere in translating to C I must have dropped a minus sign, but it seems to be hidden well. If I ever plan on using this method for a more serious project, I'll have to go back and find that bug to fix this distortion.<br />
<br />
The reason so few LEDs made it into the final point cloud is that only LEDs visible through both cameras can be located in 3D space. Spacing the cameras out more increases the accuracy of this method, but decreases the number of LEDs on the sphere they can both see. One solution to this problem would be to add more cameras around the sphere. This would increase the number of LEDs seen with two cameras, and introduces the possibility of using three cameras at once to locate a single LED. When using three or more cameras, finding the point in space that minimizes the distance to each camera-to-LED vector line becomes a <a href="http://en.wikipedia.org/wiki/Least_squares">least squares problem</a>, similar in form to the one I went through on my <a href="http://gperco.blogspot.com/2013/12/robot-arm-reaching-for-stars.html">robot arm project</a>.<br />
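<br />
For what it's worth, the many-camera version has a tidy closed form: the squared distance from a point $\vec{p}$ to line $i$ is $\| (I - \hat{\vec{n_i}} \hat{\vec{n_i}}^T)(\vec{p} - \vec{c_i}) \|^2$, and minimizing the sum of these over all cameras reduces to a single 3x3 linear system:<br />
<br />
\[ \left[ \sum_i \left( I - \hat{\vec{n_i}} \hat{\vec{n_i}}^T \right) \right] \vec{p} = \sum_i \left( I - \hat{\vec{n_i}} \hat{\vec{n_i}}^T \right) \vec{c_i} \]<br />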
<br />
The next step in this project on LEDs is to do something visually interesting with this mapping technique. So far I've just been projecting the colorbar test image to verify the mapping, but the real point is to project more complicated images and videos onto a mass of LEDs. In the next few weeks, I will be talking about the process of using this projection mapping to do something interesting with the 3D printed spiral sphere I've shown at the end of this post.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-8140079475002791892014-02-05T12:35:00.000-07:002014-02-05T12:35:07.487-07:00LED Projection Mapping with Math<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-fYo7TZkIctM/UvKRiZKihvI/AAAAAAAABOk/lBpeOxyjBCw/s1600/20140203_215638.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-fYo7TZkIctM/UvKRiZKihvI/AAAAAAAABOk/lBpeOxyjBCw/s1600/20140203_215638.jpg" height="149" width="200" /></a></div>
<br />
I have a love-hate relationship with RGB-LED strips. Before starting my <a href="http://gperco.blogspot.com/2013/10/daft-punk-helmets-lots-of-lights.html">Daft Punk Helmet project</a> last year, I was getting used to using individual 4-pin RGB-LEDs for my colored-lighting needs. Once I realized I would need 96 of these LEDs for the gold helmet (384 leads, 288 channels to multiplex), I broke down and ordered a strip of WS2812s. These are surface-mount RGB-LEDs with built-in memory and driving. With three wires at one end of the strip to provide power and data, you can easily control the color of each individual LED on the strip with an Arduino or similar controller. I was in love.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-a-uAoEvHxQw/Uu09KTN_yvI/AAAAAAAABMo/cYhxcSLOcUE/s1600/480659_10201129201982143_2108583368_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-a-uAoEvHxQw/Uu09KTN_yvI/AAAAAAAABMo/cYhxcSLOcUE/s1600/480659_10201129201982143_2108583368_n.jpg" height="315" width="400" /></a></div>
<br />
<br />
But then I started to see the dark side of easy-to-use LED strips like this. Like so many other LED strip projects floating around the internet, my ideas started to lack originality. Instead of exploring the visual capabilities of over 16 million unique colors possible per pixel, I was blinking back and forth between primary colors and letting rainbow gradients run wild. Maybe I was being overwhelmed with possibilities; maybe the low difficulty made me lazy. Either way, the quality of my projects was suffering, and I hated it.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<img border="0" src="http://1.bp.blogspot.com/-hbc6Lg5Nc4k/UuwKPOPNBGI/AAAAAAAABMY/c5IEIiqjpdY/s1600/20131215_125559.jpg" height="320" width="277" /></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-small;">Zero-effort xmas lights</span></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
It's entirely possible that I'm being overly dramatic, but this is probably not the case. Still, something has to be done to break the cycle of low-hanging fruit projects. To do this, I've decided to embark on a project to create an arbitrary-geometry LED display. I want to be able to bend, wrap, and coil an LED strip around any object and still be able to display an image with little distortion. I expect this project will lead me on a mathematical journey that will be covered in at least two (2!) posts.<br />
<br />
To start this off, let's differentiate the goal of this project from the projection mapping commonly used in art and advertising these days. In standard <a href="http://en.wikipedia.org/wiki/Projection_mapping">projection mapping</a>, one or more regular projectors are used to display a 2D image on an arbitrary 3D surface. Specialized software determines how to warp the projected images so that they appear normal on the projection surface. For my project, I have pixels that are placed arbitrarily on the arbitrary 3D surface. This added complication is like placing a glass cup in front of the projector in standard projection mapping and expecting the warped image to still yield a comprehensible picture. In a sense, two mappings have to be done: (1) find the mapping of pixels to real space, and (2) find the mapping of real space to source image space.<br />
<br />
While I'm using my laptop to do the heavy lifting of computing the appropriate pixel mapping, I need to use a microprocessor to interface with the LED strip. I'm using my prototyping Arduino Mega to interpret commands sent by the laptop via USB and send data to the LED strip. For my first arbitrary-geometry LED display, I wrapped about 4 meters of LEDs (243 total) around a foam roller.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-dgxf2al3Kyg/Uu23YPyCIfI/AAAAAAAABM4/yN50_YZvW44/s1600/20140201_200826.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-dgxf2al3Kyg/Uu23YPyCIfI/AAAAAAAABM4/yN50_YZvW44/s1600/20140201_200826.jpg" height="400" width="353" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Nothing says professional like duct tape and styrofoam.</span></div>
<br />
The first step in the process is to find the physical coordinates of each LED in real space. Luckily, I chose a configuration that can be described mathematically. The strip wraps around a cylinder in a helix with some spacing between each successive loop. In my projection code, I slowly march up and around an imaginary cylinder placing LEDs every now and then.<br />
<br />
<script src="https://gist.github.com/bgreer/1a46eba2028192f2f691.js"></script>
We can plot the locations of every LED to confirm this works:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-H3kHTOVW_f0/UvBwBPdg8rI/AAAAAAAABNQ/oiYUFEcOdzA/s1600/projection1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-H3kHTOVW_f0/UvBwBPdg8rI/AAAAAAAABNQ/oiYUFEcOdzA/s1600/projection1.png" height="320" width="286" /></a></div>
<br />
<br />
Next, we have to project the real-space location of each LED back on to the source image we want to display. The real-space locations are three-dimensional while the source image is two-dimensional, so we need a way of 'flattening' the LEDs down. The way in which we choose to do this depends on both the display geometry and how we want to view the display. For our cylinder example, we can start by unwrapping the LEDs:<br />
<br />
<script src="https://gist.github.com/bgreer/ef11486ad49303e2692c.js"></script>
Plotting the location of each LED on the source image gives:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-wAuZxbNjOAU/UvBxiiRWTJI/AAAAAAAABNc/5BRjw2DedRk/s1600/colorbar_full_projected1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wAuZxbNjOAU/UvBxiiRWTJI/AAAAAAAABNc/5BRjw2DedRk/s1600/colorbar_full_projected1.png" height="132" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Source, Image Coordinates, Result</span></div>
<br />
Here, I've chosen the LED colors based on the nearest color found in the <a href="http://en.wikipedia.org/wiki/SMPTE_color_bars">SMPTE color bars</a>. Displayed on the LED strip, this kind of mapping shows the source image stretched around the cylinder.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-4qGYAwKSD68/UvFxDYbpGGI/AAAAAAAABN8/0t6Q7CMN8kY/s1600/20140203_215756.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-4qGYAwKSD68/UvFxDYbpGGI/AAAAAAAABN8/0t6Q7CMN8kY/s1600/20140203_215756.jpg" height="400" width="282" /></a></div>
<br />
<br />
From any one direction, the source image is difficult to see. Another method of projecting the LED positions on to the source image is to flatten them along a single direction perpendicular to the cylinder axis.<br />
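<br />
A minimal sketch of this flattening, assuming the viewing direction is along the y-axis and a w-by-h source image (variable names are illustrative):<br />
<br />
<pre>
/* Flatten each LED's 3D position along the y-axis to find its pixel
   in a w-by-h source image. LEDs on the far side land on the same
   pixels, which is why the image appears mirrored from behind. */
void planar_project(const double *x, const double *z, int nleds,
        double xmin, double xmax, double zmin, double zmax,
        int w, int h, int *u, int *v)
{
    int i;
    for (i = 0; i < nleds; i++)
    {
        u[i] = (int)((x[i] - xmin) / (xmax - xmin) * (w - 1));
        v[i] = (int)((zmax - z[i]) / (zmax - zmin) * (h - 1)); /* image y runs down */
    }
}
</pre>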
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-4X-ayoIw8HY/UvBxmT1E4lI/AAAAAAAABNk/69O4UC_fPx0/s1600/colorbar_full_projected2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-4X-ayoIw8HY/UvBxmT1E4lI/AAAAAAAABNk/69O4UC_fPx0/s1600/colorbar_full_projected2.png" height="131" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Source, Image Coordinates, Result</span></div>
<br />
This allows the viewer to see the entire image from one direction.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-J3aZ6N3sUiQ/UvFwfTFpzGI/AAAAAAAABN0/9nXpp5eiOyg/s1600/20140203_215638.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-J3aZ6N3sUiQ/UvFwfTFpzGI/AAAAAAAABN0/9nXpp5eiOyg/s1600/20140203_215638.jpg" height="400" width="308" /></a></div>
<br />
<br />
Of course, the image is also projected on to the other side of the cylinder, but appears backwards. With this more useful projection, we can try to watch a short video:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-pTEr7cLPRLI/UvG_mZDAxVI/AAAAAAAABOM/JJU99W_2_GE/s1600/Sequence+01_9.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-pTEr7cLPRLI/UvG_mZDAxVI/AAAAAAAABOM/JJU99W_2_GE/s1600/Sequence+01_9.gif" /></a></div>
<div style="text-align: center;">
<a href="https://www.youtube.com/watch?v=tXY0UphFCLQ">Youtube version (with sound).</a></div>
<br />
In this example, I'm using OpenCV to grab frames from a source video, compute the projection on to display pixels, and send the pixel values out to the Arduino Mega. Without spending too much time on optimization, I've been able to get this method to run at around 15 frames per second, so the recording above has been sped up a bit to match the source video.<br />
<br />
An important thing to note in that recording is the low amount of image distortion. Even though the LEDs are wrapped around a cylinder and directed normal to the surface, the projection prevents the source image from appearing distorted when viewed from the correct angle. However, if you look at the various color bar tests I've done, you can see that vertical lines in the source image do not always appear completely vertical when projected. This is not due to any inherent flaw in the projection math; it is caused by incorrect assumptions about the LED locations. I've assumed that I wrapped the LED strip around the cylinder so precisely that my mathematical model of a helix matches the true positions. In reality, the spacing and angle of each loop around the cylinder were slightly off, introducing variations between the real positions and the programmed positions that got worse towards the bottom. To increase the accuracy of this projection exercise, these variations need to be dealt with.<br />
<br />
There are three ways I see of correcting for the issue of imperfect LED placement:<br />
<br />
<ol>
<li>Place them perfectly. Use a caliper and a lot of patience to get sub-millimeter accuracy on the placement of each LED.</li>
<li>Place them arbitrarily, but measure the positions perfectly. Use accurate measurements of each LED position instead of a mathematical model.</li>
<li>Place them arbitrarily, and get the computer to figure out where they are.</li>
</ol>
Since options 1 and 2 would most certainly drive me insane, I'm going with option 3. It's like I always say: why spend 5 hours doing manual labor when you can spend 10 hours making a computer do it?<br />
<br />
At this point, I've already made a decent amount of progress on letting the projection code figure out the location of each LED. I'm using a few stripped down webcams as a stereoscopic vision system to watch for blinking LEDs and determine their positions in 3D space. I'll save the details for later, but here's a quick shot of one of the cameras mounted in a custom printed frame:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-62bJQVQiHJY/UvKQzzaJyhI/AAAAAAAABOc/fjgvqMaR4zo/s1600/1723720_10201679316094652_950175106_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-62bJQVQiHJY/UvKQzzaJyhI/AAAAAAAABOc/fjgvqMaR4zo/s1600/1723720_10201679316094652_950175106_n.jpg" height="225" width="400" /></a></div>
<br />
The code for this project (including the stereoscopic vision part) can be found on <a href="https://github.com/bgreer/led-projector">my github</a>.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com6tag:blogger.com,1999:blog-5124010431457427473.post-43826584846755947152014-01-16T23:13:00.000-07:002014-01-16T23:26:34.030-07:00Autonomous Vehicles: Basic Algorithms<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-N-wLUrMzvdw/UtjMjpo-ojI/AAAAAAAABL4/gU4AQNQMVSc/s1600/IMG_0107.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-N-wLUrMzvdw/UtjMjpo-ojI/AAAAAAAABL4/gU4AQNQMVSc/s1600/IMG_0107.JPG" height="232" width="320" /></a></div>
<br />
Last June, I stopped by the Boulder Reservoir to watch the <a href="https://avc.sparkfun.com/2013">2013 Sparkfun Autonomous Vehicle Competition</a> (AVC). There were two races going on: an air race and a ground race. The basic idea is that teams design a vehicle, for either race, that is capable of navigating the course autonomously. At the start of the race, the teams are allowed to do something simple like pressing a "go" button on their vehicle, but the rest of the race must be completed without human intervention. Scores are based on time to complete the course, plus additional points for special actions such as avoiding obstacles or hitting small targets.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-A0DBOZnp8UQ/UtjLsQIpWDI/AAAAAAAABLw/H6PZNbCBQsg/s1600/IMG_0098.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-A0DBOZnp8UQ/UtjLsQIpWDI/AAAAAAAABLw/H6PZNbCBQsg/s1600/IMG_0098.JPG" height="204" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Chasing your robot, laptop in hand. A common sight that day.</span></div>
<div>
<br /></div>
<div>
The air race was actually not as exciting as I had anticipated. There were a couple custom flying rigs that gave a good show, but due to the recent popularity of GPS-enabled quadcopters, there were a few cookie-cutter entries. I'm not saying it's easy to get a quadcopter to fly in a specific pattern (I spent months tuning my own hexacopter before <strike>going insane</strike> getting tired of it), but the availability of advanced pre-made flight controllers for quadcopters reduced some of the excitement of the event. It's a relatively new event, so I'm hoping next year will see some changes.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-QhWJsZVCSwI/UtjMuTwgEgI/AAAAAAAABMA/5eam2cXrE7E/s1600/IMG_0050.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-QhWJsZVCSwI/UtjMuTwgEgI/AAAAAAAABMA/5eam2cXrE7E/s1600/IMG_0050.JPG" height="197" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Ready to fly over the water.</span></div>
<br /></div>
<div>
The ground race, however, was hilarious. I've never seen so many home-made robots try so hard to go around a parking lot at top speed while avoiding fences, barrels, and each other. The sheer variety of vehicles running the course offered some interesting scenes. The amount of experience and money needed to make each vehicle was fairly apparent, from a tiny LEGO Mindstorms car with pre-programmed directions to a sponsored RC car equipped with <a href="http://en.wikipedia.org/wiki/Real_Time_Kinematic">RTK GPS</a> (super-accurate GPS). In the middle were groups of students trying to do their best with limited budgets, hacking apart RC cars and adding Arduino controllers for brains and ultrasonic sensors to avoid obstacles. Most groups seemed capable of building a sturdy ground vehicle to run the course, but that was only the first step.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ZuZfJqf0RfA/UtjLeZkp_aI/AAAAAAAABLo/wVOorPWKlSw/s1600/IMG_0095.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ZuZfJqf0RfA/UtjLeZkp_aI/AAAAAAAABLo/wVOorPWKlSw/s1600/IMG_0095.JPG" height="202" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Needs more LEGO people.</span></div>
<br /></div>
<div>
How do you get a robot to win a race autonomously? Easy! You have to write up some code that the robot brain runs during the race that allows it to continually make progress despite obstacles. But how does the robot know what "progress" is? And how does it know what an "obstacle" is? What if it runs into a wall? What if it gets turned around? All of these questions must be considered when writing an algorithm for a race like this.</div>
<div>
<br /></div>
<div>
I'm interested in building a robot to compete in the next AVC, but before I commit any real resources, I figured I should see how hard it is to write an algorithm for an autonomous robot. Without a physical vehicle to test the algorithm on, I need a simulation that will run my code and give a decent guess as to how well it performs. This way I can start with a simple algorithm and work my way towards more advanced solutions while seeing the effect each addition has on the race outcome.</div>
<div>
<br /></div>
<div>
To make the implementation of the algorithm as realistic as possible, I went for a simulation environment that allows the use of pseudo-Arduino code. Each algorithm is written as if it were running on a generic Arduino controller with a setup() and loop() structure:</div>
<div>
<br /></div>
<script src="https://gist.github.com/bgreer/c8f66c255bfa4072d9e0.js"></script>
<br />
<div>
<br /></div>
<div>
The simulation initializes a robot at the starting position and runs the setup() method. At every timestep, the simulation moves the robot around the course based on the values for the motor speeds. It checks for collisions with the course boundaries and obstacles to make sure the robot stays on track. Throughout the course of my simulations, I'll add various physical effects that may change the behaviour of the robot. The loop() method is run as often as possible, and when the robot calls the delay() method, control is given back to the simulation for some number of time steps.<br />
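<br />
In outline, the simulator looks something like the sketch below: a simplified stand-in for the real thing, with a basic differential-drive model and the collision checks left as comments:<br />
<br />
<pre>
// Skeleton of the simulator: run the algorithm's setup() once,
// then alternate between the algorithm's loop() and a physics step.
#include <cmath>

struct Robot { double x, y, heading, leftSpeed, rightSpeed; };
Robot robot = {0, 0, 0, 0, 0};

// Provided by the algorithm under test, pseudo-Arduino style.
void setup() { robot.leftSpeed = robot.rightSpeed = 1.0; }
void loop()  { /* steering decisions go here */ }

// Advance the physics by one timestep.
void stepPhysics(Robot &r, double dt)
{
    double speed = 0.5*(r.leftSpeed + r.rightSpeed);
    double turn = r.rightSpeed - r.leftSpeed;
    r.heading += turn*dt;
    r.x += speed*cos(r.heading)*dt;
    r.y += speed*sin(r.heading)*dt;
    // ...check walls, barrels, and the finish line here...
}

int main()
{
    setup();
    for (double t = 0.0; t < 300.0; t += 0.01) {  // 5-minute limit
        loop();  // delay() would hand control back for several steps
        stepPhysics(robot, 0.01);
    }
    return 0;
}
</pre>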
<br />
<b>Algorithm 1: Pre-Programmed Route</b><br />
The first method of autonomous racing I wanted to try was where you tell the robot how far to go in each direction and just let it run. Assuming the robot starts facing the correct way, all you need to do is move forward and turn left a few times. How hard could that be?<br />
<br />
<script src="https://gist.github.com/bgreer/aaf02cb432d8e09c4ac7.js"></script>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-uUcSacPpfco/Uthy2nCwPtI/AAAAAAAABJQ/X75E97CzB3I/s1600/robot_plot0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-uUcSacPpfco/Uthy2nCwPtI/AAAAAAAABJQ/X75E97CzB3I/s1600/robot_plot0.png" height="291" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
I've drawn black lines showing the inner boundary of the course. It's not exactly the Sparkfun AVC course, but it's close enough for now. The outer boundary is at 0 and 75 in each direction. Not that any of the boundaries matter for this algorithm; it's perfect. You tell it where to drive and it goes there. The only way this method can fail is if you give it the wrong directions, right?<br />
<br />
In a perfect world, this would be true. But a few days ago I stepped in a puddle of ice water thinking it was frozen solid, so I know for a fact that this is not a perfect world. In the real world, you tell a robot to drive straight and it will drive <i>mostly</i> straight. There will be small deviations from the intended path due to motor imperfections, wheel imperfections, ground imperfections, etc. Imagine the robot driving over a rough patch of concrete: one wheel might lose traction a bit, turning the robot by a few degrees. To model these issues in my simulation, I made it so that every now and then, the heading of the robot is altered slightly in a random direction. Then, instead of testing the robot just once, I ran thousands of simulations to see how often the robot would still succeed.<br />
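<br />
The deflection model itself can be tiny. Something like this sketch, plugged into the simulation's physics step; the rate and kick size are arbitrary stand-ins for the values I tuned:<br />
<br />
<pre>
// Occasionally kick the robot's heading by a small random angle to
// mimic wheel slip and rough ground.
#include <cstdlib>

void applyDeflection(double &heading, double dt)
{
    const double rate = 0.5;      // expected kicks per second
    const double maxKick = 0.05;  // maximum kick size (radians)
    if ((double)rand()/RAND_MAX < rate*dt) {
        heading += maxKick*(2.0*(double)rand()/RAND_MAX - 1.0);
    }
}
</pre>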
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-oYtn3MoHppE/Uth1OzvTpXI/AAAAAAAABJc/lOz9EgNPKwA/s1600/robot_plot1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-oYtn3MoHppE/Uth1OzvTpXI/AAAAAAAABJc/lOz9EgNPKwA/s1600/robot_plot1.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=100%, Avg Time=149s</span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Here, I'm showing a logarithmic heatmap of the paths taken by 10,000 simulated robots. The warmer the color, the more often the robots went there. With a small amount of continuous deflection added, there is a bit of spread in where the robots end up, but 100% of them still make it to the finish line. What if our robot is not so robust and the deflection is larger?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-DfoEremdvmU/Uth2KtFyNgI/AAAAAAAABJk/8ZO1kFV1t88/s1600/robot_plot2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-DfoEremdvmU/Uth2KtFyNgI/AAAAAAAABJk/8ZO1kFV1t88/s1600/robot_plot2.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=45%, Avg Time=147s</span></div>
<br />
So much for perfection: less than half of our pre-programmed robots made it to the end. Of course, the real AVC course has a few obstacles, so what happens when we place some barrels on the first stretch?<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-VsV71tA8dNs/Uth3xgJ1PmI/AAAAAAAABJw/AjLETH-qH7s/s1600/robot_plot3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-VsV71tA8dNs/Uth3xgJ1PmI/AAAAAAAABJw/AjLETH-qH7s/s1600/robot_plot3.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=29%, Avg Time=147s</span></div>
<br />
Even though the barrels are placed away from the pre-determined path, a significant fraction of the robots were deflected into them. The barrels create neat shadows in the data where no robots ended up travelling. Another real-world effect that had a profound impact on some of the robots I saw in the 2013 AVC was collisions with other robots. I've modelled this as an event that happens randomly, but is more likely near the start of the race. When it happens, the robot is spun around in a random direction and moved by a few centimeters. This is somewhat optimistic, since a few robots in the AVC broke apart on impact with a competitor.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-cXY88est52U/Uth7a56i-EI/AAAAAAAABJ8/Og6blROrUtM/s1600/robot_plot4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-cXY88est52U/Uth7a56i-EI/AAAAAAAABJ8/Og6blROrUtM/s1600/robot_plot4.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=13%, Avg Time=147s</span></div>
<br />
I could keep adding more real-world effects that might lower the success rate of this first algorithm, but I think you get the point. Anything could go wrong and completely throw off the pre-determined path. It's time for a better algorithm.<br />
<br />
<b>Algorithm 2: Random Walk</b><br />
The first algorithm was too strict. It didn't allow for any freedom, and subsequently did poorly. It's time to give our robot a mind of its own and let it soar. In this algorithm, the robot drives forward for a few seconds, then proceeds to change its speed and direction randomly every now and then. It's perfectly adaptable to the course in that it doesn't care about anything. Even winning.<br />
<br />
<script src="https://gist.github.com/bgreer/8964f9ec01dc05334d5f.js"></script><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-eRvCDYhqBfE/Uth88W2MkzI/AAAAAAAABKI/Q1qdcEfWfNo/s1600/robot_plot5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-eRvCDYhqBfE/Uth88W2MkzI/AAAAAAAABKI/Q1qdcEfWfNo/s1600/robot_plot5.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=0%, Avg Time=N/A</span></div>
<br />
Well, that didn't go well. None of the 10,000 robots I sent out ever made it to the finish line. They all either got stuck on walls or ran past their 5-minute time limit. On the plus side, they didn't seem to be concerned with the barrels. Based on these results, I estimate that around one in a billion might make it, so there's a small chance that a robot set to move completely randomly could complete a course like this. As fond as I am of this algorithm, it's probably time to try something a little more intelligent.<br />
<br />
<b>Algorithm 3: Bumper Cars</b><br />
Let's try a classic algorithm. Let the robot drive in a straight line until it hits something. The hit can be detected in a few ways in real life: with a bumper sensor, or even with an ultrasonic proximity sensor. When the robot detects an obstacle, it turns itself around to point in a random new direction and continues from there. It repeats this process until it either wins or dies.<br />
<br />
<script src="https://gist.github.com/bgreer/985d7dd416e2e4205eef.js"></script><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-qPhdVwOgsZg/Uti78rzfnPI/AAAAAAAABLY/B94uX_em1w0/s1600/robot_plot6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-qPhdVwOgsZg/Uti78rzfnPI/AAAAAAAABLY/B94uX_em1w0/s1600/robot_plot6.png" height="393" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-small;">Success=11%, Avg Time=445s</span></div>
<br />
This time, we get some robots making it to the finish line. Still, this algorithm is pretty oblivious when it comes to taking corners the right way. How can we inform our robot as to which way is the correct way to go?<br />
<br />
<b>Algorithm 4: GPS Positioning</b><br />
So far the algorithms presented have been pretty dumb. They don't know anything about the course and they just guess which way is the right way to go. Of course, we can't expect much else since we haven't enabled the robots with any kind of sensors except one that sends an alert when an obstacle is hit. It's time to allow the robot to sense the environment a little.<br />
<br />
GPS is a common method of giving robots the ability to know where they are outside. If we know the position of the course we want to race around, then we can make the robot know how far it is from various parts of the course. For this algorithm, the robot has a set of way-points (one near each corner of the map) that it will head towards. When it gets close enough to the first one, it targets the next one. If the robot hits an obstacle, it backs up and tries again.<br />
<br />
<script src="https://gist.github.com/bgreer/ce2bbb353f5f1c812c78.js"></script><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-zg3sBCE_xPU/Uti42_l0RWI/AAAAAAAABLE/fsmHp9iPO74/s1600/robot_plot7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-zg3sBCE_xPU/Uti42_l0RWI/AAAAAAAABLE/fsmHp9iPO74/s1600/robot_plot7.png" height="393" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<span style="font-size: x-small;">Success=100%, Avg Time=154s</span></div>
<br />
And now we see why all those entries from 2013 had GPS on them. With the ability to know exactly where it is at any point in time, the robot is perfectly capable of finding its way around the course. But is this realistic? The GPS system is neither exact nor instantaneous, so what we have so far is a gross overestimation of the capabilities of this algorithm. I've seen claims of <1 meter accuracy with a standard GPS unit, but I'll put a random noise of 2 meters on the GPS position to be safe. The GPS heading isn't truly measured, but instead just estimated from comparing the current position to the previous one. Finally, I'll set the update rate to 1 Hz, a typical value for cheap GPS units.<br />
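<br />
As a sketch, the simulated GPS ends up looking something like this (uniform noise for simplicity; the exact noise model is a judgment call):<br />
<br />
<pre>
// Simulated GPS: report position once per second with a couple
// meters of noise, and estimate heading from successive fixes.
#include <cstdlib>
#include <cmath>

struct Fix { double x, y, heading; };

Fix readGPS(double truex, double truey, double t)
{
    static double lastT = -10.0, lastX = 0.0, lastY = 0.0;
    static Fix fix = {0, 0, 0};
    if (t - lastT >= 1.0) {  // 1 Hz update rate
        double nx = 2.0*(2.0*(double)rand()/RAND_MAX - 1.0);  // ~2 m
        double ny = 2.0*(2.0*(double)rand()/RAND_MAX - 1.0);
        fix.x = truex + nx;
        fix.y = truey + ny;
        fix.heading = atan2(fix.y - lastY, fix.x - lastX);
        lastX = fix.x;
        lastY = fix.y;
        lastT = t;
    }
    return fix;  // between updates, the fix is stale
}
</pre>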
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-5icHkSE2V6o/Uti5WAh1zLI/AAAAAAAABLM/osjMwxwlUY4/s1600/robot_plot8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-5icHkSE2V6o/Uti5WAh1zLI/AAAAAAAABLM/osjMwxwlUY4/s1600/robot_plot8.png" height="393" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Success=99%, Avg Time=275s</span></div>
<br />
With more realistic values for the GPS unit, the results are more believable. The success rate is still very high, but the amount of time the robots typically take has almost doubled. With less accurate measurements of the current position and heading, the robots tend to bounce around the course a little more and spend more time going around.<br />
<br />
<br />
<b>Other Methods</b><br />
So far I've only discussed a few simple algorithms for autonomously navigating a known course. While I've tried to add in a few realistic effects like obstacles, continuous deflections, and collisions with competing robots, there is an endless list of physical effects I could try out to see if they are important. The model I've created for the robots is also very simple: two motors, a method of detecting collisions (but no information on direction), and a GPS module.<br />
<br />
At some point I'd like to continue this simulation project to include more physical effects of driving a robot around a course and more ways for the robots to sense their surroundings. Some of the sensors I'd like to model are an <a href="http://en.wikipedia.org/wiki/Optical_flow_sensor">optical flow sensor</a>, an <a href="http://en.wikipedia.org/wiki/Inertial_measurement_unit">IMU</a>, and possibly a <a href="http://en.wikipedia.org/wiki/Computer_stereo_vision">stereoscopic vision sensor</a>. Eventually, I'd also like to actually build a simple wheeled vehicle and try my hand at implementing one of my algorithms on it.</div>
gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com1tag:blogger.com,1999:blog-5124010431457427473.post-70665468508854561912013-12-20T18:40:00.001-07:002014-01-23T22:22:46.035-07:00Robot Arm: Reaching for the Stars<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-_lE91D1yOP4/UrTr7Wv0dsI/AAAAAAAABIw/L2oaylAS_PA/s1600/IMG_1597_ed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-_lE91D1yOP4/UrTr7Wv0dsI/AAAAAAAABIw/L2oaylAS_PA/s200/IMG_1597_ed.jpg" height="142" width="200" /></a></div>
<br />
<br />
<script type="text/x-mathjax-config"> MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script> It's the holiday season, and this is a blog that has a history of talking about RGB LEDs, Arduino controllers, and simple algorithms. As such, I will now spend three hours writing up how I spent 20 minutes hooking up my randomized Christmas lights:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-4dEDAA4Tzuc/UrJNKxpVDmI/AAAAAAAABH4/SajAPLbTnhc/s1600/Sequence+01_1.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-4dEDAA4Tzuc/UrJNKxpVDmI/AAAAAAAABH4/SajAPLbTnhc/s1600/Sequence+01_1.gif" /></a></div>
<br />
Just kidding, Santa's bringing you coal this year. You get a post on the math behind Inverse Kinematics and Regularized Least Squares along with some code examples applying these concepts to an Arduino-controlled robot arm. Pour yourself a glass of eggnog with an extra shot of bourbon, it's going to be a long night.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-GXnYgLYK-cA/UaVXNNv1YOI/AAAAAAAAAvw/q9H0opuC84U/s1600/IMG_3430.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-GXnYgLYK-cA/UaVXNNv1YOI/AAAAAAAAAvw/q9H0opuC84U/s320/IMG_3430.JPG" height="212" width="320" /></a></div>
<br />
<a href="http://gperco.blogspot.com/2013/05/robot-arm-part-1.html">Months ago</a> I received a robot arm as a gift. I immediately set out to wire up an Arduino Prop Mini and some H-bridges to control the motors, along with a couple potentiometers to provide live feedback on the motor positions. I did a short write up <a href="http://gperco.blogspot.com/2013/05/robot-arm-part-1.html">here</a>. At the time I wrote some basic code to control the arm. All it would do is take a set of target angles and move the motors until the measured angles were close to the targets. You could tell the arm do do things like "point straight up" and it would do it with ease. Since the arm is not capable of rotating very far around any joint, I implemented a check that would prevent the arm from ever moving out of bounds. If it did find itself in an especially contorted state, it would slowly restore itself to a more proper state and then resume whatever it was attempting to do before.<br />
<br />
This method works just fine if all you want to do is tell the arm what angles to have. It's not super useful if you want the gripper to move itself to a specific position in space. Since all the arm knows so far is the angle each joint is at, it can't work backwards to determine how to get the gripper to a specific spot. Fortunately, there is a solution to this problem: <a href="http://en.wikipedia.org/wiki/Inverse_kinematics">Inverse Kinematics</a>.<br />
<br />
To understand how inverse kinematics works, first we must look at forward kinematics. We know the current angles of each joint based on our live measurements, and we know the shape and connectivity of the arm. Each segment of the arm can be seen as a simple rod of length $l_i$ connected to the previous joint and rotated by some angle $\theta_i$:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-wKYr5UocJe4/UrTKzJMwDvI/AAAAAAAABIM/5spG_0tSpRQ/s1600/20131220_155154.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-wKYr5UocJe4/UrTKzJMwDvI/AAAAAAAABIM/5spG_0tSpRQ/s400/20131220_155154.jpg" height="336" width="400" /></a></div>
<br />
From these quantities we can work out the position of the gripper ($\vec{p_{\mathrm{g}}}$) in physical space. For now I will restrict the math to two dimensions ($x$, $z$) since motion in the $y$ direction is determined by the base rotation motor, and is trivial to work out. However, I will soon generalize to vector positions and states so that it can be extended to any number of dimensions and angles.<br />
\[ \vec{p_{\mathrm{g}}} = \vec{f}(\vec{\theta}) = \begin{pmatrix} l_3 cos(\theta_1 + \theta_2 + \theta_3) + l_2 cos(\theta_1 + \theta_2) + l_1 cos(\theta_1) \\ l_3 sin(\theta_1 + \theta_2 + \theta_3) + l_2 sin(\theta_1 + \theta_2) + l_1 sin(\theta_1) \end{pmatrix}. \]<br />
Solving for the position of the gripper based on knowing the angles of the joints is the forward problem. We want the arm to solve the opposite, or inverse problem. Given a target position for the gripper, determine the angles that each joint must be at.<br />
<br />
If the function $f$ were simpler, we might have a chance of finding the inverse function and applying it to any target position $\vec{p_t}$ to find the target state $\vec{\theta_t}$. Unfortunately, we can't do this for our case so we must resort to other methods of solving inverse kinematics. Suppose our target position is not very far from our current position so that we may approximate the target as a small step in angle away from the current position:<br />
\[ \vec{p_{\mathrm{t}}} = \vec{p_{\mathrm{g}}} + J \vec{\delta \theta} + \epsilon \]<br />
where the $\epsilon$ indicates additional terms that will be neglected. This is a multi-dimensional <a href="http://en.wikipedia.org/wiki/Taylor_series">Taylor expansion</a> of the forward kinematics equation. The quantity $J$ is the Jacobian matrix containing the partial derivatives of each component of the function $f$ with respect to each component of $\theta$:<br />
\[ J_{ij} = \frac{\partial f_i}{\partial \theta_j}. \]<br />
We know the target position, the current position, and the derivatives of $f$ at the current location, so we are able to rearrange our approximation to get an equation for the step in angle required.<br />
\[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]<br />
Since this is just an approximation, we need to apply this formula to our current position many times in order to settle on the target position. The first iteration will hopefully provide a $\delta \theta$ that moves the gripper towards the target position, but will not be exact. Successive applications of a new $\delta \theta$ computed at each new position should cause the gripper position to converge on the target position. This is sometimes called the Jacobian Inverse method.<br />
<br />
The robot arm I'm dealing with has 5 motors. I haven't settled on a good way of providing feedback on the gripper motor, so I'm ignoring that one for now. The base rotation motor determines the direction the arm is pointing in the $x$-$y$ plane, but reformulating the math in cylindrical coordinates ($r$, $\phi$, $z$) makes the target angle of this motor trivial ($\theta_{base} = \phi$). This leaves two non-trivial dimensions for the positions ($r$, $z$) and three joint angles that need to be determined ($\theta_1, \theta_2, \theta_3$). For various math and non-math reasons, I added one further constraint on the position vector. To specify the angle that the gripper ends up at, I added a third 'position' $\psi$ defined as<br />
\[ \psi = \theta_1 + \theta_2 + \theta_3 \]<br />
From this, we can formulate our forward kinematic function $f$ as:<br />
\[ \vec{f}(\vec{\theta}) = \begin{pmatrix} l_1 cos(\theta_1) + l_2 cos(\theta_1 + \theta_2) + l_3 cos(\theta_1 + \theta_2 + \theta_3) \\ l_1 sin(\theta_1) + l_2 sin(\theta_1 + \theta_2) + l_3 sin(\theta_1 + \theta_2 + \theta_3) \\ \theta_1 + \theta_2 + \theta_3 \end{pmatrix}. \]<br />
The partial derivatives of each of these three components with respect to each joint angle gives the analytic form of the Jacobian matrix $J$.<br />
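<br />
Written out for this arm, carrying out those derivatives gives:<br />
\[ J = \begin{pmatrix} -l_1 sin(\theta_1) - l_2 sin(\theta_1 + \theta_2) - l_3 sin(\theta_1 + \theta_2 + \theta_3) & -l_2 sin(\theta_1 + \theta_2) - l_3 sin(\theta_1 + \theta_2 + \theta_3) & -l_3 sin(\theta_1 + \theta_2 + \theta_3) \\ l_1 cos(\theta_1) + l_2 cos(\theta_1 + \theta_2) + l_3 cos(\theta_1 + \theta_2 + \theta_3) & l_2 cos(\theta_1 + \theta_2) + l_3 cos(\theta_1 + \theta_2 + \theta_3) & l_3 cos(\theta_1 + \theta_2 + \theta_3) \\ 1 & 1 & 1 \end{pmatrix} \]<br />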
<br />
I coded this up in C as an IKSolve() function which reads and writes global variables for the current and target positions and angles. The function performs a single iteration of updating the target angles, and allows the rest of the code to work on moving the arm to those angles between iterations.<br />
<br />
<script src="https://gist.github.com/bgreer/12c47f62b89854b74499.js"></script>
<br />
The 3-by-3 matrix inversion is performed using a function adapted from a post found on Stack Exchange.<br />
<br />
<script src="https://gist.github.com/bgreer/cabe8dc31b410a1fbac1.js"></script>
I added this to my robot arm code and uploaded it to the Arduino Pro Mini. The target position has to be hard-coded, so each new target required uploading the code again. When given a target position that is within its reach, the arm typically does a good job moving to it. Here's a gif of the arm moving a pen down to touch a piece of paper:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-hDpsPWMxmUQ/UrTp_9o4FqI/AAAAAAAABIk/wOFpKc719qs/s1600/Sequence+01_3.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-hDpsPWMxmUQ/UrTp_9o4FqI/AAAAAAAABIk/wOFpKc719qs/s1600/Sequence+01_3.gif" /></a></div>
<br />
The algorithm performs fairly well. The arm may not have taken the optimal path to get to the target position, but we have not put any constraints on the path. The arm can also be told to move to a position that is outside its reach. Here's a gif of the arm trying to place the pen directly above the base, slightly out of its reach:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-kp5StzqwmjA/UrTpwHjM4_I/AAAAAAAABIc/UNFcbECWfb8/s1600/Sequence+01_5.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-kp5StzqwmjA/UrTpwHjM4_I/AAAAAAAABIc/UNFcbECWfb8/s1600/Sequence+01_5.gif" /></a></div>
<br />
As you can see, the algorithm isn't perfect. The arm seems to struggle to rotate itself to the correct angle, and instead of converging on a final resting place it seems to oscillate around the target. Finally, it goes unstable and wanders off. The Jacobian algorithm used in this example is attempting to converge on the exact answer provided, and when it physically cannot, it starts doing strange things. It will get the arm as close as it can get, but will keep iterating, thinking that it will just need a few more steps to get to the answer. While testing the arm I found this instability to show up much more often than I would have liked. If the arm could not move itself to exactly the correct position, it would start to shake and eventually drift away from the target completely. I needed a new approach.<br />
<br />
To fix this instability, I resorted to using a technique I am familiar with from my grad school research. I do <a href="http://en.wikipedia.org/wiki/Helioseismology">helioseismology</a>, which involves measuring oscillations on the surface of the sun to infer what is happening on the inside. Specifically, I'm interested in the flow velocities deep in the solar interior. Every flow measurement we make is related to the actual flow inside the sun by some reasonably <a href="http://en.wikipedia.org/wiki/Convolution">convoluted</a> math. The process of turning a set of measurements into a flow map of the solar interior involves solving an inverse problem. This is similar to what I have outlined for the robot arm, except the math for the forward problem is very different. We have a few methods of solving the inverse problem, one of which is Regularized Least Squares.<br />
<br />
<a href="http://en.wikipedia.org/wiki/Least_squares">Least squares</a> is often used in data analysis to 'fit' an analytical model of data to the measured data. The model is reliant on a user-defined set of numerical paramaters that are optimized during the fitting to produce a model that appropriately matches the data. The measure of how appropriate the match is depends on the sum of the difference between the data and the model squared. This way, the optimized parameters are the ones that provide the least deviance between the model and the data. In the case of solving inverse problems, we want to find the angles (the parameter set) which will position the arm (the model) closest to our target position (the data). Additional terms can be added to the model to bias the optimization to find a solution that we prefer (regularization).<br />
<br />
The <a href="http://en.wikipedia.org/wiki/Non-linear_least_squares">wikipedia page for non-linear least squares</a> provides a nice derivation of how to turn the problem into matrix algebra which can be iterated to find an optimal solution. In the end, you can express the iteration technique in a similar way to the Jacobian Inverse method outlined above:<br />
<br />
<b>Jacobian Inverse:</b> \[ \vec{\delta \theta} = J^{-1} (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]<br />
<b>Least Squares:</b> \[ \vec{\delta \theta} = (J^T J)^{-1} J^T (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]<br />
<b>Regularized Least Squares with Weighting:</b> \[ \vec{\delta \theta} = (J^T W J + R)^{-1} J^T W (\vec{p_{\mathrm{t}}} - \vec{p_{\mathrm{g}}}) \]<br />
<br />
Without regularization or weighting, the least squares method looks strikingly similar to the Jacobian method. Once the fancy extras are added back in, things look a little messier, but the essence is still there. Since I already had code set up to solve the Jacobian method, it was easy enough to modify it to handle a few more matrix operations:<br />
<br />
<script src="https://gist.github.com/bgreer/4ff0059892e78a9efabd.js"></script>
<br />
The weighting vector simply weights one or more components of the target position more or less than the others. If you want to make sure the gripper is in the right location but don't care so much about what angle it ends up at, you can de-weight the third component. I set the regularization to penalize solutions which caused excess bending at any of the joints. This way, if there happened to be multiple ways for the arm to position itself correctly, the algorithm would be biased to pick the solution that is 'smoothest'.<br />
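<br />
The extra matrices themselves can be very simple. Here's a sketch of one reasonable choice; the actual numbers in my code were hand-tuned, and a constant diagonal $R$ is the plain damped-least-squares choice (penalizing bending specifically takes a slightly fancier $R$):<br />
<br />
<pre>
// Example weighting and regularization matrices: care less about
// the final gripper angle (the third component), and damp large
// steps. All values here are placeholders.
double W[3][3] = {
    {1.0, 0.0, 0.0},   // r position: full weight
    {0.0, 1.0, 0.0},   // z position: full weight
    {0.0, 0.0, 0.1}    // gripper angle: de-weighted
};

double lambda = 0.01;  // regularization strength
double R[3][3] = {
    {lambda, 0.0,    0.0},
    {0.0,    lambda, 0.0},
    {0.0,    0.0,    lambda}
};
</pre>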
<br />
With this more advanced approach, the robot arm behaved much more sanely. I could still confuse it with particularly troublesome targets, but overall it was much more stable than before. If a target position was out of reach or required moving out of bounds for any joint, the arm would settle on a position that was fairly close. With high hopes, I stuck a pen in the gripper and told it to draw circles on a piece of paper.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-umnwVboJpeU/UrTsqgnSaaI/AAAAAAAABI4/s7tmFhfFr1A/s1600/IMG_1607_ed.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-umnwVboJpeU/UrTsqgnSaaI/AAAAAAAABI4/s7tmFhfFr1A/s400/IMG_1607_ed.jpg" height="291" width="400" /></a></div>
<br />
The code will need a little more tuning before I send the arm off to art school. For now, the dunce hat remains. I've been able to confirm that the arm generally works, but I think the next step is to see how well I can fine-tune both the code and the sensors to achieve a modest level of accuracy in the positioning (<1 cm).<br />
<br />
I don't always like to look up the solution to my problems right off the bat. Often I will see if I can solve them myself using what I know, then sometime near the end check with the rest of the world to see how I did. For this project, I was happy to find that different types of modified least squares methods are commonly used to do inverse kinematics. I certainly didn't expect to create a novel method on my own, but even more, I wasn't expecting to land on a solution that the rest of the world already uses.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com2tag:blogger.com,1999:blog-5124010431457427473.post-10638564257469961922013-11-30T22:23:00.000-07:002013-12-01T11:20:43.195-07:0012 Hour Project: Arduino Mechanomyography<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-fOPfH4sIsR0/UpjzZaM4yjI/AAAAAAAABFc/obGScFT0fG0/s1600/wt1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="127" src="http://1.bp.blogspot.com/-fOPfH4sIsR0/UpjzZaM4yjI/AAAAAAAABFc/obGScFT0fG0/s200/wt1.jpg" width="200" /></a></div>
<br />
It's the weekend after Thanksgiving. Not only are my grad student responsibilities minimal, but the servers I get most of my data from are down over the holiday. I've tried sitting around doing nothing, but that was too hard. Instead I've decided to challenge myself to a 12-hour electronics/programming project.<br />
<br />
Seeing as it's just past Thanksgiving, leftovers are the highlight of the week. I've gathered quite a few miscellaneous electronic parts over the last year, mostly from projects that never got off the ground. I've got leftover sensors, microcontrollers, passive components, various other integrated circuits, and a couple protoboards. To keep myself busy and to get rid of a couple of these spare parts, I've decided to do a short 12-hour project. I set out a few rules for myself:<br />
1) I could only use parts I had sitting around.<br />
2) I had 12 hours to complete it.<br />
3) I had to build something useful (OK, maybe just interesting).<br />
<br />
I did allow myself to think about the project before I started working. One topic I've been interested in for a long time but haven't yet pursued is biosensing: measuring some kind of biological signal (heart rate, blood pressure, brain waves, skin conductivity, etc.). Real medical equipment designed to measure these things is often very accurate, but is fairly expensive. As the cost of the sensor goes down, the accuracy usually goes down too. The goal is then to find a cheap method of measuring these biological signals well enough. While a hospital may need to know your O2 sats to two or three significant digits in order to save your life, an outdoor enthusiast might just be generally curious how their sats change as they hike up a 14,000 foot mountain.<br />
<br />
As an avid <a href="http://en.wikipedia.org/wiki/Bouldering">boulderer</a>, I've been particularly interested in muscle use while climbing. A big thing that new climbers notice is how much forearm strength is required to get off the ground. I wanted to see if I could measure forearm muscle activation using whatever I had lying around. The first method of measuring muscle activity that came to mind was <a href="http://en.wikipedia.org/wiki/Electromyography">electromyography</a> (EMG). This method measures the electrical potential generated by muscle cells. I figured this would be difficult to do without special probes, so I moved on to <a href="http://en.wikipedia.org/wiki/Mechanomyogram">mechanomyography</a> (MMG). This method measures vibrations created by activated muscles. The vibrations can be picked up using many different sensors, including microphones and accelerometers. Luckily, I had a few of the latter sitting around from a previous project. The night before starting any work on this, I set up a camera to take a time-lapse of my work area for the next day.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-upnlPc0mIaE/Upq2u9FCABI/AAAAAAAABGQ/eYJslauSLjM/s1600/G0010124.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="243" src="http://3.bp.blogspot.com/-upnlPc0mIaE/Upq2u9FCABI/AAAAAAAABGQ/eYJslauSLjM/s400/G0010124.JPG" width="400" /></a></div>
<br />
<br />
I started working bright and early at 10 am. I found an old MPU-6050 on a breakout board and soldered wires to the power and I2C lines so I could begin collecting data for analysis using an Arduino Mega. I wrote a quick sketch for the Mega that would grab raw accelerometer readings from the MPU-6050 and print them directly to the serial port every few milliseconds. I simply copied and pasted the data over to a text file that I could read with my own analysis codes.<br />
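<br />
The sketch itself is essentially the standard MPU-6050 recipe. Reconstructed from memory rather than copied from my code, it looks something like this:<br />
<br />
<pre>
// Read raw accelerometer values from an MPU-6050 over I2C and
// print them to the serial port. 0x68 is the default I2C address.
#include <Wire.h>

void setup()
{
    Wire.begin();
    Serial.begin(115200);
    Wire.beginTransmission(0x68);
    Wire.write(0x6B);  // PWR_MGMT_1 register
    Wire.write(0);     // wake the sensor out of sleep mode
    Wire.endTransmission();
}

void loop()
{
    Wire.beginTransmission(0x68);
    Wire.write(0x3B);           // ACCEL_XOUT_H register
    Wire.endTransmission(false);
    Wire.requestFrom(0x68, 6);  // two bytes per axis
    int16_t ax = (Wire.read() << 8) | Wire.read();
    int16_t ay = (Wire.read() << 8) | Wire.read();
    int16_t az = (Wire.read() << 8) | Wire.read();
    Serial.print(ax); Serial.print(' ');
    Serial.print(ay); Serial.print(' ');
    Serial.println(az);
    delay(2);  // a few milliseconds between samples
}
</pre>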
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-z31oEKaw1qU/UprCCO_TRZI/AAAAAAAABHI/YYQmgTDAiXM/s1600/20131130_104026.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="357" src="http://4.bp.blogspot.com/-z31oEKaw1qU/UprCCO_TRZI/AAAAAAAABHI/YYQmgTDAiXM/s400/20131130_104026.jpg" width="400" /></a></div>
<br />
For this project, I relied heavily on wavelet transforms to help interpret the sensor readings. Since the expected muscle signal would be a transient oscillation with unknown frequency and unknown amplitude, I needed a method that would break down any wave-like signal in both time and frequency. I was lucky enough to find an excuse to learn about wavelets for my grad school research (it ended up not being the best method, go figure) so I started this project with a few analysis codes to work from. Before I get into the results, I can try to give a quick run-down of why wavelets are useful here.<br />
<br />
<b><span style="font-size: x-small;">Interlude on Wavelet Transforms</span></b><br />
The wavelet transform is an <a href="http://en.wikipedia.org/wiki/Integral_transform">integral transform</a> in that it maps our signal from one domain into another. In many analysis problems, a signal can be difficult to interpret in one domain (time) but easier in another (frequency). An example would be a signal comprised of two sine waves of different frequency added together. The addition creates a strange looking signal that is difficult to interpret, but when mapped on to the frequency domain, the two component sine waves appear distinct and separated. Why is this useful for our problem? Well, I should start by talking about a simpler transform, the <a href="http://en.wikipedia.org/wiki/Fourier_transform">Fourier Transform</a>.<br />
<b><span style="font-size: x-small;">Interlude on Fourier Transforms (FTs)</span></b><br />
Just kidding, if I follow this I'll fall down a rabbit hole of math. All you need to know is the following: any signal can be thought of as the combination of many sine waves (oscillations) that are at many different frequencies and different amplitudes. An FT is a way of displaying which particular frequencies are prominent in the signal. The issue is that it analyzes the entire signal at once, so if the composition of the signal changes over time, this change gets mixed up with the rest of the signal and you get just a single measure of the components.<br />
<b><span style="font-size: x-small;">Back to Wavelets</span></b><br />
Suppose we have a signal we want to analyze in terms of simple oscillations (like a muscle vibrating), but the oscillating component we are interested in changes with time. Maybe the frequency changes, maybe the amplitude changes. A wavelet transform allows us to see the individual frequency components of the signal as a function of time. For an example, I'll use a microphone recording of a few notes played on a violin, which I made for the Music Box project.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-M7sn_pJQ84Y/Upjw7LmSTrI/AAAAAAAABFM/7xeeZo7EXRs/s1600/violin.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="http://2.bp.blogspot.com/-M7sn_pJQ84Y/Upjw7LmSTrI/AAAAAAAABFM/7xeeZo7EXRs/s400/violin.jpg" width="400" /></a></div>
<br />
I pulled the data into a little IDL code I adapted from examples <a href="http://paos.colorado.edu/research/wavelets/software.html">here</a> and plotted the results.<br />
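<br />
In spirit, the core of that code is a brute-force correlation of the signal against scaled, shifted wavelets. A minimal (and slow) version using the standard Morlet wavelet looks like this; my actual analysis used the FFT-based IDL code from the site above:<br />
<br />
<pre>
// Naive continuous wavelet transform with a Morlet wavelet: at each
// (scale, time) pair, correlate the signal with a scaled, shifted
// wavelet and record the power. O(N^2) per scale, fine for short
// recordings.
#include <cmath>
#include <vector>

std::vector<std::vector<double> > morletPower(
    const std::vector<double> &x,
    const std::vector<double> &scales, double dt)
{
    const double w0 = 6.0;  // standard Morlet frequency parameter
    const int N = (int)x.size();
    std::vector<std::vector<double> > power(
        scales.size(), std::vector<double>(N, 0.0));
    for (size_t j = 0; j < scales.size(); j++) {
        for (int n = 0; n < N; n++) {
            double re = 0.0, im = 0.0;
            for (int m = 0; m < N; m++) {
                double eta = (m - n)*dt/scales[j];
                double g = exp(-0.5*eta*eta);  // Gaussian envelope
                re += x[m]*g*cos(w0*eta);
                im += x[m]*g*sin(w0*eta);
            }
            power[j][n] = (re*re + im*im)*dt/scales[j];
        }
    }
    return power;
}
</pre>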
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-fOPfH4sIsR0/UpjzZaM4yjI/AAAAAAAABFY/6SM_1aYYh9E/s1600/wt1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="255" src="http://3.bp.blogspot.com/-fOPfH4sIsR0/UpjzZaM4yjI/AAAAAAAABFY/6SM_1aYYh9E/s400/wt1.jpg" width="400" /></a></div>
<br />
The warm/white colors show parts of the signal that have a large amount of power, or are particularly strong. The vertical axis gives the frequency of the component, and the horizontal axis gives the time during the original signal that it occurs. As time progresses, you can see that the microphone picked up a series of notes played at successively higher frequencies. For any of these notes, you can see multiple prominent components of the signal showing up at distinct frequencies. These are the harmonics of the base note that show up while playing any instrument. If you are used to seeing plots like this and are wondering why it looks so awful, that's because I 'recorded' this by sampling a microphone at about 5 kHz using an Arduino, printing the analog value to the serial port, then copying and pasting the results into a text file. I also may have gotten the sampling rate wrong, so the entire plot might be shifted a little in frequency.<br />
<br />
Now that we've got the math out of the way, how does this relate to the project at hand? Muscles vibrate at particular frequencies when moving. The frequency at which they vibrate depends slightly on which muscle it is and how much force the muscle is exerting. For more information on these oscillations, check out <a href="http://www.biomedical-engineering-online.com/content/4/1/67">this paper</a>. It's an interesting read and has quite a few useful plots. My hope is to identify both the frequency and amplitude of the signal I'm interested in by looking at the wavelet transform of the accelerometer output.<br />
<br />
Going all the way back to my simple MPU-6050 + Arduino Mega setup, I set the sensor down on my workbench next to me and recorded data for a few seconds. Near the end of the recording, I dropped a pen a few inches away from the sensor to see what it would pick up.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Nk3mYZLkUS4/Upp29JoEFAI/AAAAAAAABFo/MJjvUed8zgE/s1600/wt_noise_drop.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://2.bp.blogspot.com/-Nk3mYZLkUS4/Upp29JoEFAI/AAAAAAAABFo/MJjvUed8zgE/s640/wt_noise_drop.png" width="473" /></a></div>
<br />
I've broken the sensor analysis into the three distinct directional components from the accelerometer (x,y,z). The colors once again show relative amplitude of the signal, and anything near the dark cross-hatching is not to be believed.<br />
<br />
For the first 15 seconds, we see the noise signature of the sensor. There appears to be a slight frequency dependence, in that there is a little more power near 25 Hz and 150 Hz than in other places. I'm not sure where this is coming from, but it's low enough amplitude that I'm not concerned. At 15.5 seconds, the pen hits the desk and we see a spike in power at high frequency, along with a few of the bounces the pen made in the second after first impact.<br />
<br />
Next I duct taped the sensor to my forearm with the x-axis pointing down the length of my arm, the z-axis pointing out from my skin, and the y-axis obviously perpendicular to both. I started by recording data with my arm as limp as possible.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ArwFsXh3frQ/Upq8KenbJ3I/AAAAAAAABGg/lKToIZV7Uvg/s1600/wt_arm_resting.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-ArwFsXh3frQ/Upq8KenbJ3I/AAAAAAAABGg/lKToIZV7Uvg/s640/wt_arm_resting.png" width="464" /></a></div>
<br />
Similar to the previous plot, but with a constant signal between 2 and 10 Hz. I don't really know where this is coming from either, but it's roughly consistent with my heart rate when I have a sharp metal object taped tight to my arm.<br />
<br />
Next I recorded data while squeezing a hand exerciser twice to see if there was a significant difference between that and my relaxed arm.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-IJfk_6-Y5Vg/Upq9QzzI36I/AAAAAAAABGo/XF6Lv9tYg6c/s1600/wt_arm_flex2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://4.bp.blogspot.com/-IJfk_6-Y5Vg/Upq9QzzI36I/AAAAAAAABGo/XF6Lv9tYg6c/s640/wt_arm_flex2.png" width="468" /></a></div>
<br />
Luckily, it was obvious when I was flexing and when I wasn't. There was a significant increase in the power around 13 Hz whenever I engaged my forearm muscle, and a spike in power right when I relaxed it. After a few more tries, I found this signal to be fairly robust. Even when I was moving my whole arm around to try to confuse the accelerometer, I could see a distinct signal around 13 Hz whenever I activated the muscles underneath the sensor. I decided that the x-component of the accelerometer data was giving the highest signal-to-noise ratio, so for the rest of my analysis I only used that.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-9MLtDEqIr3g/UprCSG8QwiI/AAAAAAAABHQ/QkhyNpC0ox0/s1600/20131130_115006.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-9MLtDEqIr3g/UprCSG8QwiI/AAAAAAAABHQ/QkhyNpC0ox0/s400/20131130_115006.jpg" width="315" /></a></div>
<br />
<br />
Since the power at 13 Hz wasn't the only thing present in the accelerometer data, I had to filter out everything except a narrow band around it. I used a 4th-order Butterworth band-pass filter centered on the frequency I wanted. I found this <a href="http://www.schwietering.com/jayduino/filtuino/index.php?characteristic=bu&passmode=bp&order=4&usesr=usesr&sr=267&frequencyLow=10&noteLow=&frequencyHigh=17&noteHigh=&pw=pw&calctype=float&run=Send">nice site</a> that calculates the proper coefficients for such filters and even gives filtering code examples in C. After implementing the filter, I compared the new signal to the unfiltered one.<br />
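<br />
To show the structure of such a filter without copying the generator's output, here's a simpler 2nd-order band-pass biquad built from the Audio EQ Cookbook formulas; the filter I actually ran was the 4th-order Butterworth from the site above. The 267 Hz sample rate matches my setup, while the Q value is a placeholder:<br />
<br />
<pre>
// 2nd-order IIR band-pass biquad (Audio EQ Cookbook formulas).
#include <cmath>

struct Biquad {
    double b0, b1, b2, a1, a2;
    double x1, x2, y1, y2;

    Biquad(double f0, double Q, double fs)
    {
        double w0 = 2.0*M_PI*f0/fs;
        double alpha = sin(w0)/(2.0*Q);
        double a0 = 1.0 + alpha;
        b0 = alpha/a0;
        b1 = 0.0;
        b2 = -alpha/a0;
        a1 = -2.0*cos(w0)/a0;
        a2 = (1.0 - alpha)/a0;
        x1 = x2 = y1 = y2 = 0.0;
    }

    double step(double x)  // one sample in, one filtered sample out
    {
        double y = b0*x + b1*x1 + b2*x2 - a1*y1 - a2*y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};

// Biquad bp(13.0, 2.0, 267.0);  // 13 Hz center, 267 Hz sampling
</pre>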
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-C_CvKCB8DaA/UprAA9rjylI/AAAAAAAABG0/Oy_yIRUktp0/s1600/wt_adv_filter_cropped.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="365" src="http://4.bp.blogspot.com/-C_CvKCB8DaA/UprAA9rjylI/AAAAAAAABG0/Oy_yIRUktp0/s400/wt_adv_filter_cropped.png" width="400" /></a></div>
<br />
With the signal I wanted extracted from the rest of the accelerometer data, I implemented a quick algorithm for estimating how much power was in the signal at any given time. Over a 0.1 second window, the Arduino would record the maximum amplitude of the filtered signal and feed that into a running average of every previous maximum from past windows. This created a somewhat smooth estimate of the amplitude of the 13 Hz oscillations. I modified the Arduino code to only print this amplitude estimate to the serial port and recorded the result while flexing a few times.<br />
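<br />
The envelope tracker is only a few lines. A sketch: the window length assumes my ~267 Hz sample rate, and the exponential average is a stand-in for whatever running average you prefer:<br />
<br />
<pre>
// Track the peak of the filtered signal over ~0.1 s windows, then
// smooth the sequence of peaks to estimate the 13 Hz amplitude.
#include <cmath>

const int windowLen = 27;   // ~0.1 s of samples at 267 Hz
const double alpha = 0.2;   // smoothing factor

double updateAmplitude(double filtered)
{
    static double runningAvg = 0.0, windowMax = 0.0;
    static int count = 0;
    double mag = fabs(filtered);
    if (mag > windowMax) windowMax = mag;
    if (++count >= windowLen) {
        runningAvg += alpha*(windowMax - runningAvg);
        windowMax = 0.0;
        count = 0;
    }
    return runningAvg;
}
</pre>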
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-MU-LaXqJJS4/UprAzXCQScI/AAAAAAAABG8/DZch--WcsvE/s1600/amplitude_response.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="http://4.bp.blogspot.com/-MU-LaXqJJS4/UprAzXCQScI/AAAAAAAABG8/DZch--WcsvE/s400/amplitude_response.png" width="400" /></a></div>
<br />
I finally had a simple quantity that gave an estimate of muscle activation. The next step was to add some kind of visual response. I had an extra RGB LED strip sitting around from my Daft Punk Helmet project, so I went with that. The strip of 7 LEDs I planned on using only needed three connections: Vcc, Gnd, and Data. I had plenty of experience getting these strips to work from the helmet project, so that was a breeze.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-HBbXHFiqydY/UprDSdqK-cI/AAAAAAAABHY/QfsjKV-E0cw/s1600/20131130_155331.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="345" src="http://3.bp.blogspot.com/-HBbXHFiqydY/UprDSdqK-cI/AAAAAAAABHY/QfsjKV-E0cw/s400/20131130_155331.jpg" width="400" /></a></div>
<br />
The next step was to transfer the code over to a little 16 MHz Arduino Pro Mini and wire everything to it. To power the setup I added a little Molex connector so it would be compatible with the 5V battery packs I made for my helmets (a 3-cell LiPo dropped to 5V with a switching regulator).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-Oi8zhupdMkg/UprEihFbPRI/AAAAAAAABHg/eZW_NI0950I/s1600/20131130_220803.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="273" src="http://1.bp.blogspot.com/-Oi8zhupdMkg/UprEihFbPRI/AAAAAAAABHg/eZW_NI0950I/s400/20131130_220803.jpg" width="400" /></a></div>
<br />
I added a simple conversion between signal amplitude and LED color to the code and tried it out. Unfortunately, I couldn't find any plastic or cloth straps that would work for securing the sensor and display to my arm, so I duct taped them on. Over the course of the day I had ripped around 5 pieces of tape off my arm so I was getting used to the pain.<br />
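<br />
The amplitude-to-color conversion is about as simple as it sounds; something like this sketch, where the saturation point is a hand-tuned guess:<br />
<br />
<pre>
// Map the amplitude estimate to a color: green at rest, fading
// through yellow to red at full activation.
void amplitudeToColor(double amp, unsigned char &r,
                      unsigned char &g, unsigned char &b)
{
    const double maxAmp = 500.0;  // amplitude treated as "full effort"
    double f = amp/maxAmp;
    if (f > 1.0) f = 1.0;
    r = (unsigned char)(255.0*f);          // ramps up with effort
    g = (unsigned char)(255.0*(1.0 - f));  // fades out with effort
    b = 0;
}
</pre>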
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-pwSeXoStNCs/Upqp7S49VEI/AAAAAAAABGA/gfhDMqKGoZs/s1600/squeeze.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-pwSeXoStNCs/Upqp7S49VEI/AAAAAAAABGA/gfhDMqKGoZs/s1600/squeeze.gif" /></a></div>
<br />
At this point I declared the project a success. Not only had I built a contraption to measure muscle activity using components I had sitting around on my desk (and found a use for wavelet transforms), but I had done it within my 12-hour limit (actual time ~10.5 hrs). I've placed the final code on <a href="https://github.com/bgreer/sketchbook/blob/master/mmg/mmg.ino">github</a> for those interested.<br />
<br />
I've got close to 10,000 still frames from the time-lapse I recorded while working, but it will take me a day or two to get that compiled and edited. I'll update once it's live!gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com4tag:blogger.com,1999:blog-5124010431457427473.post-1376901502368147672013-11-17T23:41:00.002-07:002013-11-17T23:41:47.821-07:00Daft Punk Helmets: Controllers and Code<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-TSajzF_4dRE/UomxnMBXLYI/AAAAAAAABEY/H0tfhHQqIdY/s1600/IMG_1559.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="281" src="http://3.bp.blogspot.com/-TSajzF_4dRE/UomxnMBXLYI/AAAAAAAABEY/H0tfhHQqIdY/s400/IMG_1559.jpg" width="400" /></a></div>
<br />
I'm hoping this will be my last post about the Daft Punk helmets that I made for Halloween. While they were an incredibly fun project to design, build, and program, it was a fairly draining process. I'm ready to move on to a new project, but first I'd like to wrap this one up.<br />
<br />
Before I get into how I controlled the numerous lights embedded in each helmet, I should take a moment to discuss the power requirements. This is one area I was uncertain about, coming from a background of knowing absolutely nothing about electronics. The first problem was what the power source would be. I listed my options and their pros/cons:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-MSvecMd3ZdA/UoRTLpskoRI/AAAAAAAABCA/wRv17TsF2cE/s1600/list.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-MSvecMd3ZdA/UoRTLpskoRI/AAAAAAAABCA/wRv17TsF2cE/s400/list.jpg" width="275" /></a></div>
<br />
After much deliberation I settled on using some LiPo batteries I had sitting around from another project. I had two 3-cell (11.1V nominal) 4500mAh rechargeable batteries and two helmets to power. A perfect match. Unfortunately, all of the LED drivers needed 5V to operate. Since connecting them to the batteries as-is would most definitely end painfully, I needed a way of dropping the voltage from 11.1V to 5V. The beginner electrical engineer in me immediately thought of using the ubiquitous LM7805, a linear voltage regulator set up to drop an input voltage to a steady 5V on output. A quick check with the datasheet for the pinout and I was ready to go.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-V-WMTuA7kEg/UoRV21CaQwI/AAAAAAAABCM/wOubOiuA3Mg/s1600/Screenshot+from+2013-11-13+21:44:28.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="162" src="http://3.bp.blogspot.com/-V-WMTuA7kEg/UoRV21CaQwI/AAAAAAAABCM/wOubOiuA3Mg/s400/Screenshot+from+2013-11-13+21:44:28.png" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Uh oh.</span></div>
<br />
What does this mean? The datasheet for any electrical component typically provides a large amount of information on how to operate the component, what the standard operating characteristics are, and what the limits of operation are. Above, I've highlighted in appropriately fire-orange the thermal resistance of the component. The units for this are degrees Celsius per Watt, and the quantity describes how much the device will heat up when dissipating some amount of power. Why is this of any concern to me? Well, linear voltage regulators drop the input voltage down to the output voltage while maintaining some amount of current by dissipating the excess power as heat. If you don't understand that last sentence, either I'm explaining it poorly or you should read up on basic electronics (V=IR, P=IV). I prefer to believe the latter.<br />
<br />
So how much power will I need to dissipate? I know the voltage drop (6.1V), but what is the current draw? Let's consider the pseudo-worst case where every LED in each helmet is at full brightness, but the LED drivers and other components don't require any power. For the red LED panels, that makes 320 LEDs using 20mA of current each, or 6.4 Amps total. For the RGB LEDs, there are 96 LEDs with three sub-components each drawing 20mA, resulting in 5.8 Amps. So in dropping 6.1V while powering every LED, a single helmet's regulator would need to dissipate up to 40 Watts of power (~45% efficiency). Our handy datasheet tells us that the regulator will set up a temperature differential relative to the ambient air temperature of 65 degrees Celsius for every Watt we dissipate. This leaves us with... 2600 degrees. That's hot enough to melt iron. It's also hot enough to completely destroy the regulator, so using a linear voltage regulator is not an option. There is the option to use a heatsink to help dissipate the waste heat more efficiently, and the datasheet helpfully tells us that in that case, we only heat up by 5 degrees C per Watt. This gets us to a toasty 200 degrees C, still too hot to handle. We need another option.<br />
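<br />
Condensed, the worst-case arithmetic (the silver helmet's red panel) looks like this:<br />
<br />
<pre>
P = (11.1 V - 5.0 V) * 6.4 A = ~40 W of waste heat
bare regulator: dT = 40 W * 65 C/W = 2600 C   (molten)
with heatsink:  dT = 40 W *  5 C/W =  200 C   (still far too hot)
</pre>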
<br />
Enter the DC-DC <a href="http://en.wikipedia.org/wiki/Switched-mode_power_supply">switched-mode regulator</a>. I'm not enough of an expert on these circuits to give an intuitive explanation for how they work, but the end result is a drop in voltage with high efficiency (75-95%). The higher efficiency means we won't be dissipating nearly as much energy as heat compared to the linear regulator. I grabbed a couple of cheap step-down converters (a switched-mode regulator that can only step voltages down) from eBay and made two battery packs. Each converter had a small dial for picking the output voltage, so I tuned that to output 5V and glued it in place.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-yT9_v-7y-9w/UofPA_oTXNI/AAAAAAAABCc/YtWYoOYAS4w/s1600/20131116_125112.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="http://2.bp.blogspot.com/-yT9_v-7y-9w/UofPA_oTXNI/AAAAAAAABCc/YtWYoOYAS4w/s400/20131116_125112.jpg" width="400" /></a></div>
<span style="font-size: x-small;">Top: finished battery pack. Bottom: Just the switching regulator.</span><br />
<br />
The rest of the circuitry for each helmet was fairly simple. I used an Arduino Pro Mini 5V in each helmet to control the lights and hooked up a tiny push button to each to provide some basic user input. The LED drivers for the red LED panels needed three data lines connected to the Arduino, and the RGB strips just needed one per strip. With all of this hooked up, the helmets were ready to be programmed.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-KhWE04l4Xks/UomhQy2nBOI/AAAAAAAABD4/4aqF1jKkzy4/s1600/IMG_1578.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="http://1.bp.blogspot.com/-KhWE04l4Xks/UomhQy2nBOI/AAAAAAAABD4/4aqF1jKkzy4/s320/IMG_1578.jpg" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-t38M28pnaRM/UomhRAkgrpI/AAAAAAAABD8/ru9h73xFMU0/s1600/IMG_1580.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213" src="http://1.bp.blogspot.com/-t38M28pnaRM/UomhRAkgrpI/AAAAAAAABD8/ru9h73xFMU0/s320/IMG_1580.jpg" width="320" /></a></div>
<br />
Before I get into the details of the code, the whole thing can be <a href="https://github.com/bgreer/sketchbook/tree/master/helmets">viewed on my github</a>.<br />
<br />
There are a few steps to thinking about the code for each helmet, and I think it's easiest to work out the code from the lowest level to the highest. The first step is figuring out how to talk to the LED drivers. Luckily I had fairly complete drivers in each helmet that handled power and PWM and kept an internal data buffer. The first bit of code I wrote just interfaced with the different drivers. The second step is setting up an internal data/pixel buffer in each Arduino so that the code never has to compute which LEDs to turn on while it is talking to the LED drivers; doing both at once causes significant lag, and for the RGB LED drivers it is outright impossible (more on that later). The third step is deciding how to fill the internal pixel buffers. This is where I get to decide what patterns will show on each helmet, how quickly they update, etc. The code will basically 'draw' an image in the pixel buffer and expect it to be sent out to the LED drivers by whatever code was written for step two. The fourth and final step is writing the overall flow of the code. This is what handles which drawing mode is running, how the push button is dealt with, and how often the pixel buffers and LED drivers should be updated.<br />
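<br />
As a skeleton, the four steps slot together something like this. Every name and pin here is a stand-in, not the actual helmet code:<br />
<br />
<pre>
const byte NUM_MODES = 6;   // six display modes on the silver helmet
volatile byte mode = 0;     // bumped by the button interrupt (step 4)

void initDrivers() { /* step 1: set up pins, wake the LED drivers */ }
void stepMode(byte m) { /* step 3: draw the current mode into the buffer */ }
void pushBuffer() { /* step 2: ship the pixel buffer out to the drivers */ }

void onButton() { mode = (mode + 1) % NUM_MODES; }

void setup() {
  attachInterrupt(0, onButton, FALLING);  // push button on pin 2
  initDrivers();
}

void loop() {
  stepMode(mode);   // let the current mode take a 'step'
  pushBuffer();     // then push the result out to the drivers
  delay(5);         // every few milliseconds
}
</pre>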
<br />
While the code for both helmets roughly follows these steps, there were differences in implementation for each helmet. I'll go through a few of the key points for each step.<br />
<br />
<b>The Silver Helmet (Thomas)</b><br />
Step 1: The drivers were essentially acting as shift registers with 16 bits of space in each. The first 8 bits were an address and the second 8 bits were the data payload. The addresses 0x01 to 0x08 pointed to the 8 different columns of pixels attached to each driver. Sending a data payload of 0x24 (binary 00100100) with an address of 0x02 would turn on the third and sixth pixels of the second column (see the sketch after this list).<br />
Step 2: Since each pixel is either on or off, only a single bit is needed for each pixel. The display was conveniently 8 pixels high, so a single byte for each of the 40 columns was used as the internal buffer.<br />
Step 3: There ended up being 6 possible modes for this helmet. Some displayed text by copying individual characters from memory onto the pixel buffer, while others manipulated each pixel to create a unique effect. Below I have a gif of each mode except the 'blank' mode, which you may guess is not very interesting.<br />
Step 4: The button was coded up as an internal interrupt in the Arduino, so at any point, pressing the button would increment the mode counter. Every few milliseconds, the main loop would allow the code pertaining to the current mode to take a 'step' in modifying the pixel buffer, then the main loop would go ahead and push the pixel buffer out to the LED drivers.<br />
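<br />
To make Step 1 concrete, clocking one 16-bit word into a driver is just two shiftOut() calls. This is a sketch with arbitrary pin numbers, not the helmet's exact wiring:<br />
<br />
<pre>
const int PIN_DATA = 2, PIN_CLK = 3, PIN_LATCH = 4;  // arbitrary pins

void sendWord(byte address, byte data) {
  digitalWrite(PIN_LATCH, LOW);
  shiftOut(PIN_DATA, PIN_CLK, MSBFIRST, address);  // 0x01..0x08 = column
  shiftOut(PIN_DATA, PIN_CLK, MSBFIRST, data);     // one bit per pixel
  digitalWrite(PIN_LATCH, HIGH);                   // latch the word in
}

void setup() {
  pinMode(PIN_DATA, OUTPUT);
  pinMode(PIN_CLK, OUTPUT);
  pinMode(PIN_LATCH, OUTPUT);
  sendWord(0x02, 0x24);  // the example above: pixels 3 and 6 of column 2
}

void loop() {}
</pre>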
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-9OEQbu2lH54/UolJrSiiJlI/AAAAAAAABC8/a5FGe9O-O1Q/s1600/final01.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-9OEQbu2lH54/UolJrSiiJlI/AAAAAAAABC8/a5FGe9O-O1Q/s1600/final01.gif" /></a><br /><span style="font-size: x-small;">Robot / Human</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-76lGDTVYxDU/UolJptACDrI/AAAAAAAABCs/vy1HBitFPWA/s1600/final02.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-76lGDTVYxDU/UolJptACDrI/AAAAAAAABCs/vy1HBitFPWA/s1600/final02.gif" /></a><br /><span style="font-size: x-small;">Random Bytes</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-OZRq9ofWivE/UolJqFBztFI/AAAAAAAABCw/4NmxIaEXJz8/s1600/final03.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-OZRq9ofWivE/UolJqFBztFI/AAAAAAAABCw/4NmxIaEXJz8/s1600/final03.gif" /></a><br /><span style="font-size: x-small;">Around the World</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-tOWju-D1D5I/UolJsUWgoJI/AAAAAAAABDA/rJWGmXr_nKI/s1600/final04.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-tOWju-D1D5I/UolJsUWgoJI/AAAAAAAABDA/rJWGmXr_nKI/s1600/final04.gif" /></a><br /><span style="font-size: x-small;">Starfield</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-r5L09X0Wvf4/UolJs9lDrfI/AAAAAAAABDI/PI1BbnRisNI/s1600/final05.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-r5L09X0Wvf4/UolJs9lDrfI/AAAAAAAABDI/PI1BbnRisNI/s1600/final05.gif" /></a><br /><span style="font-size: x-small;">Pong</span></div>
<br />
The red pixel near the bottom of the display is not a stuck pixel. It's actually the on-board LED for the Arduino shining through from the backside of the helmet. A few notes on the modes: the Robot/Human mode would choose which of those two words to display at random after the previous one had finished scrolling by. The Random Bytes mode was done through bit-shifting, byte-swapping, and random bit-flipping. The starfield was boring, so I never left it on. The game of Pong actually kept score with itself on the edges, and the ball would get faster as the game progressed.<br />
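<br />
A single 'step' of the Random Bytes mode, for instance, can be built from exactly those three operations. Here's a sketch of how such a step might look, not necessarily how mine does:<br />
<br />
<pre>
byte buffer[40];   // the internal pixel buffer: one byte per column

void stepRandomBytes() {
  memmove(buffer, buffer + 1, 39);       // shift every column one step left
  buffer[39] = random(256);              // a fresh random byte on the right
  buffer[random(40)] ^= bit(random(8));  // flip a random bit somewhere
  if (random(10) == 0) {                 // occasionally swap two whole bytes
    int a = random(40), b = random(40);
    byte t = buffer[a]; buffer[a] = buffer[b]; buffer[b] = t;
  }
}

void setup() {}
void loop() { stepRandomBytes(); delay(50); }  // then push buffer to drivers
</pre>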
<br />
<b>The Gold Helmet (Guy)</b><br />
<br />
Step 1: The LED drivers found on each RGB LED require precise timing in order for them to accept data. Luckily, there is a <a href="https://github.com/adafruit/Adafruit_NeoPixel">wonderful library</a>, put out by the lovely folks at <a href="http://www.adafruit.com/">adafruit</a>, that handles this timing. Their library uses hand-tuned assembly code to ensure the timing is accurate and that the Arduino plays nicely with the drivers.<br />
Step 2: The NeoPixel library sets up a pixel buffer on its own, but I decided to also keep one outside the library. Each pixel requires 8 bits of data per color channel and I was using 8 strips of 6 LEDs on each side of the helmet. This is 96 LEDs (288 individual color channels), but I was only interested in having a single color for each strip of 6 LEDs. This limits the number of color values that need to be held in memory to 48. I kept a single array for each color and side (right-red, right-green, ..., left-red, ...), each 8 bytes long (see the sketch after this list).<br />
Step 3: There were only four modes (including a blank one) that I came up with. The gifs are below.<br />
Step 4: Again, the push button acted as an interrupt for the Arduino to switch modes. The main loop stepped forward the appropriate mode and copied my own pixel buffers to the NeoPixel buffers so the data could be sent out.<br />
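<br />
To make Step 2 concrete, copying those 48 color values out through the NeoPixel library looks roughly like this for one side. The pin and the array names are stand-ins:<br />
<br />
<pre>
#include &lt;Adafruit_NeoPixel.h&gt;

// One side of the helmet: 8 strips of 6 LEDs, data line on an assumed pin 6.
Adafruit_NeoPixel right(48, 6, NEO_GRB + NEO_KHZ800);
byte rightRed[8], rightGreen[8], rightBlue[8];  // one color per strip

void pushRightSide() {
  for (int i = 0; i &lt; 48; i++) {
    int s = i / 6;   // which strip this LED belongs to
    right.setPixelColor(i, rightRed[s], rightGreen[s], rightBlue[s]);
  }
  right.show();
}

void setup() { right.begin(); }
void loop()  { pushRightSide(); delay(20); }
</pre>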
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Y1-rFjR0OmE/UolN3UTE7TI/AAAAAAAABDo/T3Iq_7PDplU/s1600/final01_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-Y1-rFjR0OmE/UolN3UTE7TI/AAAAAAAABDo/T3Iq_7PDplU/s1600/final01_2.gif" /></a><br /><span style="font-size: x-small;">Classic Blocks</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-FVKpIREe7ew/UolN1P5c-_I/AAAAAAAABDc/Ulrh6BM6pc8/s1600/final02_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-FVKpIREe7ew/UolN1P5c-_I/AAAAAAAABDc/Ulrh6BM6pc8/s1600/final02_2.gif" /></a><br /><span style="font-size: x-small;">Cycling Colors</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-y_sESLEnncY/UolN1MyvWVI/AAAAAAAABDY/3CcKXKlDe9U/s1600/final03_2.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-y_sESLEnncY/UolN1MyvWVI/AAAAAAAABDY/3CcKXKlDe9U/s1600/final03_2.gif" /></a><br /><span style="font-size: x-small;">Slow Beat</span></div>
<br />
The Classic Blocks mode was created based on some videos I had seen of other similar helmets in action. I felt it had a nice 90s feel to it. The Cycling Colors mode was copied directly from my <a href="http://gperco.blogspot.com/2013/09/music-box-software.html">Music Box project</a>, but sped up by a few times. The Slow Beat mode used the colors from the Classic mode, but kept them mostly steady with a slow pulsation of the brightness.<br />
<br />
With that, the code to drive both helmets had been finished. The LED drivers had been wired to the LEDs; the LEDs had been soldered together; the visors had been tinted and melted into place; the helmets had been painted and glossed, sanded and molded, reinforced with fiberglass and resin; and some pieces of paper had been folded into the shape of two helmets. It had been a hectic but incredibly fun project for the eight weeks it took to get to completion. Not everything went as planned and some of the edges were rougher than intended, but I personally think the end result was well worth the effort. I hope these few posts covering the build progress have been interesting or even useful to you. To wrap this up, here are a few snapshots of the helmets being worn for Halloween 2013:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-VtJZBC0m5jQ/Uom2a_qFfII/AAAAAAAABEo/BweYWEE8NSA/s1600/603165_775910167862_53690579_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-VtJZBC0m5jQ/Uom2a_qFfII/AAAAAAAABEo/BweYWEE8NSA/s320/603165_775910167862_53690579_n.jpg" width="240" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-znIZm3fIYZA/Uom2bDQgDjI/AAAAAAAABEs/_TLjWRN8nm0/s1600/1381803_775910172852_1422648750_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-znIZm3fIYZA/Uom2bDQgDjI/AAAAAAAABEs/_TLjWRN8nm0/s320/1381803_775910172852_1422648750_n.jpg" width="240" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-Uww39L_uKLY/Uom2azrglTI/AAAAAAAABEw/ygyw5QvCzPI/s1600/1385254_775910212772_1508897995_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://4.bp.blogspot.com/-Uww39L_uKLY/Uom2azrglTI/AAAAAAAABEw/ygyw5QvCzPI/s320/1385254_775910212772_1508897995_n.jpg" width="320" /></a></div>
<br />gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-56424158526826117272013-11-11T20:33:00.002-07:002013-11-11T20:33:25.920-07:00Robobear and a Note on Failure<span style="font-size: x-small;">My next post will be something interesting, probably more about my Daft Punk helmets. But for now, something a little less flashy.</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-DoEnwt0dWuQ/UmWJiJRYEPI/AAAAAAAAA8w/tZdvMiGgPLA/s1600/2013-08-14_18-15-54_873.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="130" src="http://3.bp.blogspot.com/-DoEnwt0dWuQ/UmWJiJRYEPI/AAAAAAAAA8w/tZdvMiGgPLA/s200/2013-08-14_18-15-54_873.jpg" width="200" /></a></div>
<br />
<br />
I would like to talk about a recent project that ended up a failure. This is not the first failure I've had with projects related to this blog, but typically I just move on to a new project that has a better chance of succeeding. There's a lot that can be learned from failure, but often each lesson is small and not worth an entire written discussion. Collected together, these small lessons from failure provide an interesting look at personal development over time. Here are some of my own recent discoveries:<br />
<span style="font-size: x-small;"><br /></span>
<span style="font-size: x-small;">- Don't connect the terminals of a battery together to see if the battery is dead. </span><br />
<span style="font-size: x-small;">- When soldering, wait for the iron to reach the right temperature before you begin.</span><br />
<span style="font-size: x-small;">- Don't let a linear voltage regulator try to dissipate 10 Watts of power, they are not light bulbs.</span><br />
<span style="font-size: x-small;">- Know the structural limitations of your materials (duct tape, hot glue, epoxy, solder, nuts and bolts, wood, metal, acrylic, carbon fiber, fiberglass, etc.). You don't want them breaking at just the wrong time.</span><br />
<span style="font-size: x-small;">- Don't waste a microcontroller on something that can be done with a cheap IC (LED driver, motor driver, etc).</span><br />
<span style="font-size: x-small;">- Before you buy a bunch of electronic components, check eBay. Someone might be offering it for 10% the cost you would have payed elsewhere.</span><br />
<span style="font-size: x-small;">- If you need to make analog measurements (like a microphone), make sure the power source is stable and you've taken care to reduce electromagnetic interference.</span><br />
<span style="font-size: x-small;">- Prototype, collect data, and analyze your idea before committing time and money. This is like the advanced version of "measure twice, cut once".</span><br />
<br />
Failure in the context of a hobby is a very productive kind of failure, since the risk is low. All you can waste is your own time and money. There isn't a risk of disappointing someone else (usually), and there isn't a risk of getting fired. The reason behind a failed project can be discovered and the lesson digested on your own time. Working on my own projects in my free time has allowed me the opportunity to stop and smell the roses and then truly understand why soaking roses in chloroform is a bad idea.<br />
<br />
<b>WARNING</b>: The following is a long-winded story about the Robobear project and how it failed. If you are not interested in the details of how I attempted to motorize a wooden cart that would carry a bass player and a taxidermy bear, skip it. Alternatively, here is a <a href="http://en.wikipedia.org/wiki/Wikipedia:Too_long;_didn%27t_read">tldr</a> summary: my job was to motorize a big wooden cart so it could move silently across a stage; the project failed despite solving many problems along the way.<br />
<br />
A few weeks ago I was approached by my good friend <a href="http://julierooney.com/">Julie</a> to come up with some ideas on how to get a wooden cart to move across a stage with minimal human interaction. The cart was 9 feet long, 4 feet wide, and only about 3 inches high. The purpose of the cart was to ferry a string bass player and a taxidermy bear on to and off of a stage for an artistic performance. I've heard of strange requirements from clients, but this was certainly unique.<br />
<br />
I saw a few ways of moving the cart:<br />
1 - Have someone push/pull the cart<br />
2 - Have two ropes; one tied to the back for pulling, and one attached to a wind-up mechanism underneath the cart to make the cart move away when the second rope is pulled.<br />
3 - Motorize some of the wheels and use wired, wireless, or fully automated controls.<br />
After discussing the options, she chose to move forward with the motorized option and keep the other two as backup plans. And so, the idea of Robobear was born. I started planning, prototyping, and ordering parts. The budget was tight, so I had to plan carefully and keep cost in mind when designing the system.<br />
<br />
The primary concern throughout this whole project was torque. The motors have to be capable of exerting enough torque to push the cart. If we pretend the cart is in a frictionless world (except the friction that allows the tires to push against the ground) and the ground is completely flat, then any amount of torque will do. Having more torque available would just increase the possible acceleration of the cart. Since we unfortunately don't live in such a world, we need the motors to be able to overcome friction and bumps in the ground before they can even start to move the cart. The first step in this project was estimating the necessary torque to get the cart rolling. By pulling the cart by hand with various amounts of weight on it (very scientific), I was able to estimate that it needed about 30 pounds of horizontal force.<br />
<br />
To convert between force and torque, we need to know the lever arm distance (torque = force * distance). In this case that is the radius of the wheels that will be used. There wasn't a whole lot of space underneath the cart, so I had few options. Assuming 35mm radius wheels, the combined torque of the motors needed to be at least 4.7 Nm, or 660 oz-in. This is a fair amount of torque, I think. I decided to stick with DC gearmotors, and surfed around some websites I knew of that sold high-powered ones. On Pololu, the most powerful motors they offer have around 200 oz-in of torque. Four of <a href="http://www.pololu.com/catalog/product/1105">these</a> would be necessary to get the cart moving. The speed matched up well enough, since a 70mm wheel rotating at 150rpm gives a max speed of 0.55m/s (1.8 fps). I went with <a href="http://www.pololu.com/catalog/product/1420">plastic wheels</a> found on the same site, and modified some mounting hubs to connect the large motors to such small wheels. Of course, I could have tried for heftier motors, like a golf cart motor or electric scooter motors. Unfortunately the size limitation was fairly strict, so I had to stick with motors that could fit underneath the cart.<br />
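<br />
Condensed, the numbers work out as:<br />
<br />
<pre>
force  = 30 lbf * 4.45 N/lbf             = ~133 N
torque = 133 N * 0.035 m                 = ~4.7 Nm = ~660 oz-in total
speed  = (150 rpm / 60) * 2*pi*0.035 m   = ~0.55 m/s = ~1.8 ft/s
</pre>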
<br />
For the motor driver, I went with a <a href="http://www.pololu.com/catalog/product/2502">pre-built controller</a> that could supply 24 Amps of current total, just enough to allow each of the four motors to stall (6A stall current) if they got stuck. Making a motor driver myself would have probably been cheaper, but most likely could not have supplied as much current without serious heat issues. Sometimes it's best to pay for peace of mind.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-2dy_WeJIvFY/UmWJO8B_azI/AAAAAAAAA8k/TAZAOUM1eDc/s1600/2013-08-14_16-44-22_894.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://2.bp.blogspot.com/-2dy_WeJIvFY/UmWJO8B_azI/AAAAAAAAA8k/TAZAOUM1eDc/s400/2013-08-14_16-44-22_894.jpg" width="400" /></a></div>
<br />
Two 3S, 2200mAh LiPo batteries were taken from an old hexacopter project to supply power. I connected them in series, providing a whopping 22V with a possible 40A of current. If you have put those figures together in your head and decided this was a bad idea, then I congratulate you both on your ability to calculate electric power and your good sense. The motors would end up drawing only 10 Amps at peak for only a few seconds, so 2200mAh was enough capacity for the job. The brains of the operation was an Arduino Pro Mini running a program that would interpret serial commands and run the motor driver. With all of the parts in place, I started mounting things to the cart.<br />
<br />
<div style="text-align: center;">
<a href="http://4.bp.blogspot.com/-DoEnwt0dWuQ/UmWJiJRYEPI/AAAAAAAAA8s/PacbGVF9N7M/s1600/2013-08-14_18-15-54_873.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="260" src="http://4.bp.blogspot.com/-DoEnwt0dWuQ/UmWJiJRYEPI/AAAAAAAAA8s/PacbGVF9N7M/s400/2013-08-14_18-15-54_873.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-9v8JHKxu_FE/UmWKFEvgEiI/AAAAAAAAA80/rSd0P5NDuTQ/s1600/2013-09-08_17-23-37_702.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-9v8JHKxu_FE/UmWKFEvgEiI/AAAAAAAAA80/rSd0P5NDuTQ/s400/2013-09-08_17-23-37_702.jpg" width="281" /></a></div>
<br />
<br />
After only a few days of work, I was ready to test the system on the floor of Julie's studio. It didn't budge. The floor of the studio was a slick, polished concrete surface that was impossible to get any traction on. It was also difficult to dance on, but that's another story. When I lowered the motors a little to press harder on the floor, the motor brackets just bent back under the weight of the cart. The motors seemed capable of exerting the planned amount of torque, but that torque was unable to make it to the ground to move the cart. Things were not looking good. To increase the traction, I added an extra wheel to each motor, and reinforced the motor brackets with stainless steel wire.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-ZVb3USTF9Y4/UmWKYrbPXpI/AAAAAAAAA88/fmdsDmUC8U8/s1600/20130918_211444.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-ZVb3USTF9Y4/UmWKYrbPXpI/AAAAAAAAA88/fmdsDmUC8U8/s400/20130918_211444.jpg" width="311" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-dYdDHHfpcI4/UmWKtgyZB7I/AAAAAAAAA9E/AJSarCNhDqU/s1600/20130929_170858.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-dYdDHHfpcI4/UmWKtgyZB7I/AAAAAAAAA9E/AJSarCNhDqU/s400/20130929_170858.jpg" width="271" /></a></div>
<br />
With 8 wheels making contact with the slick ground, the cart was finally able to move itself. However, it couldn't yet handle moving the full weight of a human and a taxidermy bear. The wheels were still slipping, but I told myself that the stage the cart would actually perform on would not be so slick. The next step was to test the cart on the right floor and hope for the best. Unfortunately, the stage required special access and reservations, so by the time we were able to run the test, there were only 2 weeks left before the final rehearsals. We had to accept that if the cart could not move the required amount of weight by the end of the stage test, the motorization plan would have to be abandoned. We couldn't keep hoping for a fix right up until the end and risk having an inoperable cart with no backup plan in place.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-q3F2mOKThmk/UmWL6mQRddI/AAAAAAAAA9U/CUYdNLOrB_c/s1600/20130922_165834.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-q3F2mOKThmk/UmWL6mQRddI/AAAAAAAAA9U/CUYdNLOrB_c/s400/20130922_165834.jpg" width="263" /></a></div>
<br />
We finally got the cart on to the stage and I began testing. At first, it didn't work. I tightened some screws that had gotten loose during transportation and it worked a little better. I tuned the motor response in software to prevent slipping (like traction control on a car) and it worked a little better again. I adjusted the height of each motor so the pressure was applied evenly across all of them and it worked a little better still. I could add around 200 pounds to the cart and it could move a few feet in each direction. Finally, the cart seemed to work.<br />
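<br />
The 'traction control' was really just a slew limit on the commanded speeds: never ask the wheels to change speed faster than they can grip. A minimal sketch of the idea, with made-up numbers and a placeholder driver call:<br />
<br />
<pre>
int current[4];  // last commanded speed for each of the four motors

void setMotorSpeed(int m, int speed) { /* forward to the motor driver */ }

void rampTo(const int target[4]) {
  const int MAX_STEP = 3;  // max change per update tick (a tuning knob)
  for (int m = 0; m &lt; 4; m++) {
    int diff = target[m] - current[m];
    if (diff > MAX_STEP)  diff = MAX_STEP;
    if (diff &lt; -MAX_STEP) diff = -MAX_STEP;
    current[m] += diff;
    setMotorSpeed(m, current[m]);
  }
}

void setup() {}
void loop() {
  int target[4] = {100, 100, 100, 100};  // speed requested by the operator
  rampTo(target);
  delay(20);
}
</pre>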
<br />
I loaded up the cart with the human half of the intended payload and set it to move about 15 feet in one direction. It started up just fine, then stopped 5 feet short. I told it to reverse direction, thinking it was caught on a wire or bump in the floor, but all I got was a repetitive clicking sound from some of the motors. The motors had enough torque to move the cart, and the wheels had enough contact with the ground to exert enough horizontal force, but the rubber tires were getting ripped off the plastic wheels after a few seconds of use.<br />
<br />
I quickly came up with a few ideas that could fix this problem. One would be to fuse the tires to the wheels using heat or epoxy. Another way would be to create my own tires out of silicone rubber and mold them directly to the wheels. Unfortunately, every solution I came up with required another week or two of work and testing to confirm that it would fix the problem. We just didn't have the time to try something new. The project failed due to unforeseen complications and lack of time.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-2xS6CCKht74/UmWLR3w7LLI/AAAAAAAAA9M/ChoHBj0Awqo/s1600/20131006_182324.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="249" src="http://1.bp.blogspot.com/-2xS6CCKht74/UmWLR3w7LLI/AAAAAAAAA9M/ChoHBj0Awqo/s640/20131006_182324.jpg" width="640" /></a></div>
<br />
<br />
So the big question is: what did I gain from this failure? Some leftover motors and half-broken wheels? An intimate hatred for slick floors? No, despite my reluctance I probably learned something about getting a job done on someone else's time-frame. Most of my projects are done at my own leisure, so unexpected setbacks can last as long as I like. It was a very different experience to have a looming deadline for something I consider to be a hobby. Not only was the idea of a deadline new, but so was the idea of smaller deadlines along the way to make sure the project as a whole was worth continuing. The biggest one here was making sure, far enough in advance, that the wheels could get enough traction on the theater floor.<br />
<br />
In all I came away with two big lessons. They are so big, they deserve a big font.<br />
<b>1) Given a large goal, set many smaller goals that lead up to it.</b> This not only helps keep things on track and on time, but forces you to consider how realistic the main goal is.<br />
<b>2) Don't make your hobby into a job.</b> If you do something for fun, keep it fun. Don't set deadlines or have other people depending on you for their job. I say this not because I regret doing this project, but because I've learned that failing at something is easier when no one is watching.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-47710126750785391482013-10-30T19:37:00.002-06:002013-10-30T22:35:10.145-06:00Daft Punk Helmets: Lots of Lights<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-hiCliQEr7s0/UnGzAJo1mUI/AAAAAAAABAs/DKA6JQYGJqY/s1600/20131024_164132.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" height="117" src="http://4.bp.blogspot.com/-hiCliQEr7s0/UnGzAJo1mUI/AAAAAAAABAs/DKA6JQYGJqY/s200/20131024_164132.jpg" width="200" /></a></div>
<br />
<br />
Previously, I covered the construction of both of the Daft Punk helmets in a series of posts. Starting with cardstock, they were assembled with superglue, reinforced with fiberglass, molded with Bondo, painted with metallic paint, and visors were made from heated plastic. They were nice and shiny and ready to be worn.<br />
<br />
While the black-on-metal look matches the current era of Daft Punk helmets, they haven't always been so simple. Over the years they have gone through a series of visual changes, mostly involving the embedded electronics. You can find examples of when both helmets had an unreasonable number of LEDs hidden behind the visors doing all kinds of flashy things. The silver helmet can have a red LED panel stretching across the inside of the visor with other colored lights at the ends, while the gold helmet can have rainbow blocks of color running down the sides of the visor with a large yellow LED display panel covering the center. Mimicking these elaborate setups would take far too much of my own time and money, so I decided to boil down the lights to what I thought would represent the 'core' of the display.<br />
<br />
For the silver helmet I decided to make a 40x8 red LED panel that would hide behind the visor and display words and patterns. For the gold helmet I went with RGB strips lined up down the sides of the visor. Instead of trying to diffuse the colored lights to make a solid block of color, I went with gluing the lights right to the back of the visor to create a slightly harsher and more modern look.<br />
<br />
I'll start with the work that went into the 40x8 red LED display, since that one was much more difficult to assemble. The first step was to decide on how to control all of the LEDs. I went with a few MAX7219 controllers, each capable of controlling a single 8x8 LED panel. To prototype the controller, I set up a spare Arduino and pre-made LED panel:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-dJLlheOVex8/UnGwSY5y6oI/AAAAAAAAA_w/nyqFAqXB9pA/s1600/20130924_182033.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://1.bp.blogspot.com/-dJLlheOVex8/UnGwSY5y6oI/AAAAAAAAA_w/nyqFAqXB9pA/s400/20130924_182033.jpg" width="400" /></a></div>
<br />
Thanks to numerous examples found online of how to interface with the controller, I set off assembling the red LED panels. Since the display would sit right in front of my eyes inside the helmet, I needed the panel to have enough empty space between LEDs in order to allow me to still see out. To do this I set up a grid of holes in some cardboard and used it as a frame to solder the LEDs together. I made 5 8x8 panels where each column had common cathodes and every row had common anodes.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-FzZOKMsGS_I/UnGxBH5zrgI/AAAAAAAAA_4/Jlzti4sAops/s1600/20130927_185248.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://4.bp.blogspot.com/-FzZOKMsGS_I/UnGxBH5zrgI/AAAAAAAAA_4/Jlzti4sAops/s400/20130927_185248.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-7oWAHA5cKsI/UnGxG26x4QI/AAAAAAAABAA/PfQ5DG_kZQA/s1600/20130927_234416.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://1.bp.blogspot.com/-7oWAHA5cKsI/UnGxG26x4QI/AAAAAAAABAA/PfQ5DG_kZQA/s400/20130927_234416.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-FQnIvRmZm0Q/UnGxzx8If4I/AAAAAAAABAM/ReJvoqVsTfs/s1600/20131010_220228.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="http://2.bp.blogspot.com/-FQnIvRmZm0Q/UnGxzx8If4I/AAAAAAAABAM/ReJvoqVsTfs/s400/20131010_220228.jpg" width="400" /></a></div>
<br />
After quite a few hours of soldering, I had all of the LEDs soldered into panels. I wanted the LED drivers to sit on their own small protoboards near the panels.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-29rMOaD3wVY/UnGyEH_M9QI/AAAAAAAABAU/7xqcd8M5_Lg/s1600/20131002_112918.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="263" src="http://3.bp.blogspot.com/-29rMOaD3wVY/UnGyEH_M9QI/AAAAAAAABAU/7xqcd8M5_Lg/s400/20131002_112918.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-BcfOaYly7M4/UnGyVtMgkCI/AAAAAAAABAc/8ZsSrbbH2Lc/s1600/20131023_171729.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://4.bp.blogspot.com/-BcfOaYly7M4/UnGyVtMgkCI/AAAAAAAABAc/8ZsSrbbH2Lc/s400/20131023_171729.jpg" width="400" /></a></div>
<br />
With every 8x8 panel connected to a driver, I was able to hot glue all of the panels together side-by-side to make the single large display. I put little wooden sticks at the joints of each panel to allow the panel to be glued into the helmet without having to secure the LEDs themselves to the helmet.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-_VXUYd34dcg/UnGywgL_7ZI/AAAAAAAABAk/PeogXvGXDSA/s1600/20131023_215619.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://2.bp.blogspot.com/-_VXUYd34dcg/UnGywgL_7ZI/AAAAAAAABAk/PeogXvGXDSA/s400/20131023_215619.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-hiCliQEr7s0/UnGzAJo1mUI/AAAAAAAABAs/DKA6JQYGJqY/s1600/20131024_164132.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="235" src="http://4.bp.blogspot.com/-hiCliQEr7s0/UnGzAJo1mUI/AAAAAAAABAs/DKA6JQYGJqY/s400/20131024_164132.jpg" width="400" /></a></div>
<br />
Since the drivers could all be daisy-chained together, there were only 5 wires needed to fully control the panel. Two for power, one as a clock, one as a latch, and one as a data line. These would be hooked up to an Arduino controller, but more on that in a later post.<br />
<br />
The lights for the gold helmet were somewhat easier once I decided to buy an addressable RGB LED strip from eBay. The one I picked was 144 LEDs crammed into a single meter with a built-in driver on each LED. As long as every LED was daisy-chained, only one data line was needed to set every LED to a unique color. I cut the strip into segments that were 6 LEDs long and re-soldered them back into a chain with wires to space them out.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-5ZqenzyRfhs/UnGzNenpaZI/AAAAAAAABA0/b7LtUByBKBk/s1600/20131002_130613.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="218" src="http://4.bp.blogspot.com/-5ZqenzyRfhs/UnGzNenpaZI/AAAAAAAABA0/b7LtUByBKBk/s400/20131002_130613.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-OX5YnLBfgAo/UnGzv6X_QWI/AAAAAAAABBE/LL-ISbvUwL0/s1600/20131010_205109.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-OX5YnLBfgAo/UnGzv6X_QWI/AAAAAAAABBE/LL-ISbvUwL0/s400/20131010_205109.jpg" width="367" /></a></div>
<br />
This way, I could orient the strips to run parallel by bending each wire segment by 180 degrees, yet still only use a single data wire to control them all. Since the LED drivers were all on-board, I was able to test the LEDs at many stages of the assembly.<br />
<br />
The first thing I realized was how bright the strip could get. With every RGB LED running at full brightness, the helmet would pull over 5A of current just to light it up. I was looking for a colorfully lit helmet, not a head-mounted search beacon. I decided the best way to solve this was in software, so I just kept the brightness in mind while testing the hardware. I marked off on the visor where the lights needed to be glued and went to work.<br />
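<br />
As an aside: capping the brightness in software can be nearly a one-liner with the NeoPixel library's global brightness setting. Something like this, with the pin and the exact cap as placeholders:<br />
<br />
<pre>
#include &lt;Adafruit_NeoPixel.h&gt;

Adafruit_NeoPixel strip(96, 6, NEO_GRB + NEO_KHZ800);  // data on assumed pin 6

void setup() {
  strip.begin();
  // 64/255 of full scale cuts the worst-case current draw roughly
  // in proportion: from over 5 A down to a much friendlier level.
  strip.setBrightness(64);
  strip.show();
}

void loop() {}
</pre>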
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-fVf8mmzvtHc/UnG0KG1E8qI/AAAAAAAABBM/q4RaGuJbxio/s1600/20131030_184116.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="309" src="http://4.bp.blogspot.com/-fVf8mmzvtHc/UnG0KG1E8qI/AAAAAAAABBM/q4RaGuJbxio/s320/20131030_184116.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-q1TabavaajA/UnHd9bXEOiI/AAAAAAAABBk/dZDS-p-M38Q/s1600/IMG_1524.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-q1TabavaajA/UnHd9bXEOiI/AAAAAAAABBk/dZDS-p-M38Q/s400/IMG_1524.jpg" width="303" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
With all of the strips attached, I hooked up the power and data lines to a testing Arduino and lit it up.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-4hwxNQsgyps/UnG0SxdU_lI/AAAAAAAABBU/IpyJERSGlHE/s1600/20131030_185150.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-4hwxNQsgyps/UnG0SxdU_lI/AAAAAAAABBU/IpyJERSGlHE/s400/20131030_185150.jpg" width="312" /></a></div>
<br />
<br />
At this point, both helmets had lights installed and ready to be driven. The end of the project was in sight, with only the control electronics, power source, and coding to sort out. Once again, more on that later. gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com2tag:blogger.com,1999:blog-5124010431457427473.post-20958342316384111632013-10-25T13:47:00.002-06:002013-10-30T19:34:14.213-06:00Daft Punk Helmets: Vacuumed Visors<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-a5BF_eMBZp4/UmrIvVc19OI/AAAAAAAAA_I/FSIXkMPe0kY/s1600/20131020_190505.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://2.bp.blogspot.com/-a5BF_eMBZp4/UmrIvVc19OI/AAAAAAAAA_I/FSIXkMPe0kY/s200/20131020_190505.jpg" width="160" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Previously, I covered the process of shaping the helmets and getting them ready for painting. In doing that I removed the opaque visors, since they would be replaced with clear copies. The goal was to mold clear plexiglass on to each visor so that I could control the shape of each precisely. Once I had the clear replacements, I could shade them black. After estimating how much plexiglass would be needed for each visor, I collected my materials.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-BfLjSyQv1nQ/UmrEjYOVzEI/AAAAAAAAA9k/t0rZQERrSUg/s1600/20131012_162711.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="238" src="http://4.bp.blogspot.com/-BfLjSyQv1nQ/UmrEjYOVzEI/AAAAAAAAA9k/t0rZQERrSUg/s400/20131012_162711.jpg" width="400" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-h-t50Jxk03A/UmrEnwZycUI/AAAAAAAAA9s/ZqDiXxsttIQ/s1600/20131010_193826.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="290" src="http://3.bp.blogspot.com/-h-t50Jxk03A/UmrEnwZycUI/AAAAAAAAA9s/ZqDiXxsttIQ/s400/20131010_193826.jpg" width="400" /></a></div>
<br />
<br />
The basic procedure is to heat up the plexiglass until it is malleable, then while it is warm, let it 'slump' on to the opaque visors. As the plexiglass cools back down, it hardens into the desired shape. The felt is there to keep the plexiglass from sticking to anything. I had seen many people on the internet using vacuum formers to aid in the slumping process, but I had a hunch that it wasn't necessary. I was wrong, but more on that later. First I did the simple visor.<br />
<br />
<br />
I heated the 0.125" thick plexiglass in my oven at 285F for about 10 minutes. I could see that it was ready for slumping when it started to sag in the oven. I carefully took the first sheet out and placed it on the first visor. Using some extra felt to push the plexiglass down in the corners, I made sure it would solidify with the right shape. Once it cooled, I had a plexiglass replica of the opaque visor (along with some extra material around the edges).<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-EYryISMXu0U/UmrFGTCLCCI/AAAAAAAAA90/5ySgHAxL0dY/s1600/20131013_011728.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://1.bp.blogspot.com/-EYryISMXu0U/UmrFGTCLCCI/AAAAAAAAA90/5ySgHAxL0dY/s400/20131013_011728.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<img border="0" height="298" src="http://4.bp.blogspot.com/-pMRvpfasfN0/UmrFhOGQLzI/AAAAAAAAA98/s-YfA5Rw8aA/s400/20131013_012456.jpg" width="400" /></div>
<span style="font-size: x-small;">Using felt to avoid scratches</span><br />
<br />
I then carefully cut off the excess plexiglass from the edges and sanded the cuts smooth.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-X7lDoLuTDwo/UmrFzKcygtI/AAAAAAAAA-E/mgD4ABDCBBg/s1600/20131013_170509.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="253" src="http://1.bp.blogspot.com/-X7lDoLuTDwo/UmrFzKcygtI/AAAAAAAAA-E/mgD4ABDCBBg/s400/20131013_170509.jpg" width="400" /></a></div>
<br />
Since this visor was easy enough to make, I went ahead and tried making the more curved visor with the same method. I stuck a sheet of plexiglass in the oven, and prepared to mold it by hand.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-KxnjmKge-JE/UmrGOSeG-nI/AAAAAAAAA-M/VRjJrjmQPPg/s1600/20131021_173715.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-KxnjmKge-JE/UmrGOSeG-nI/AAAAAAAAA-M/VRjJrjmQPPg/s400/20131021_173715.jpg" width="305" /></a></div>
<br />
It failed. The plexiglass was too thick and would not bend in two directions without folding around the edges. I decided the plexiglass itself was the issue, so I bought a few sheets of 0.06" and 0.02" thick PETG. This plastic becomes malleable at a lower temperature, and I figured the thinness would also help. After heating up the oven, I followed the same procedure as before.<br />
<br />
<div style="text-align: center;">
<a href="http://4.bp.blogspot.com/-MiGsnuwM0wA/UmrGeaKBuwI/AAAAAAAAA-Y/19Rwrc6mUZU/s1600/20131021_173731.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-MiGsnuwM0wA/UmrGeaKBuwI/AAAAAAAAA-Y/19Rwrc6mUZU/s400/20131021_173731.jpg" width="333" /></a></div>
<br />
My process still wasn't working, but I could see I was on the right track. Kind of. I decided it was time to use a vacuum former for help. Since I was running out of time on the whole helmet project, I first looked at local plastic companies that do vacuum forming. I contacted one about my project; they said it would cost nearly $2000, and that even then they would rather not attempt it. That was without specifying a type of plastic or tolerance level, just for the job of vacuum forming anything around our model visor. With few options left, I decided to build a vacuum former as fast and as cheaply as possible.<br />
<br />
If you search "diy vacuum former" you can find quite a few good examples of cheap vacuum formers that can be made with scrap wood and a shop vac. After glancing through a few instructables on the subject, I went to work. I was lucky enough to be able to borrow a nice 6hp shop vac through some friends, so that would keep the cost down. I was able to get the rest of the materials at Home Depot for around $25.<br />
<br />
The basic idea behind a vacuum former is to have a flat surface that creates a uniform vacuum to pull a piece of heated plastic down. The model you want replicated sits on the vacuum surface and holds the plastic up away from the bottom. The plastic is heated so that it will bend and stretch around your model while being sucked down around it. Once the plastic cools, it will retain the shape of the model.<br />
<br />
The vacuum former has two parts. The first is the frame which holds the plastic while it is being heated. The second is the vacuum box which supports the model and creates an even vacuum surface. I built the frame out of some 1x2 wood pieces and a couple nuts and bolts to tighten the two sides around the plastic. I built the vacuum box out of MDF, 1x4 pieces, and some rubber weather strips.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-qesLb1ElOQQ/UmrG75yBIlI/AAAAAAAAA-c/vMecDDXg7s0/s1600/20131018_170726.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="313" src="http://4.bp.blogspot.com/-qesLb1ElOQQ/UmrG75yBIlI/AAAAAAAAA-c/vMecDDXg7s0/s400/20131018_170726.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<img border="0" height="296" src="http://4.bp.blogspot.com/-OggtgEXRLu4/UmrHZj4YZNI/AAAAAAAAA-k/yv-Du-VMA1I/s400/20131019_230708.jpg" width="400" /></div>
<br />
The edges of the box were sealed with duct tape. I drilled around 200 1/8" diameter holes in the top of the box on a rough grid I traced out and taped the shop vac hose to a hole I cut in the side of the box. The rubber strips had an adhesive backing, so I just stuck them right where the frame would sit.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-AGmH15rpliY/UmrH0-tFvMI/AAAAAAAAA-s/XmxNTolAgTc/s1600/20131020_151158.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-AGmH15rpliY/UmrH0-tFvMI/AAAAAAAAA-s/XmxNTolAgTc/s400/20131020_151158.jpg" width="297" /></a></div>
<br />
With the vacuum former ready to go, I loaded a new sheet of plexiglass into the frame and put the frame in the oven. This time I got a little fancier with my baking and did 8 minutes at 300F and finished with about 3 minutes of broiling. The plastic was sagging by about 3" when I took it out and had just started to create tiny bubbles in a few places. With the shop vac running, I quickly placed the frame over the vacuum box and pressed down to ensure a nice seal between box and frame.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-morhgOov8sY/UmrIFX1tpvI/AAAAAAAAA-0/70_an0BcekA/s1600/20131021_173749.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-morhgOov8sY/UmrIFX1tpvI/AAAAAAAAA-0/70_an0BcekA/s400/20131021_173749.jpg" width="270" /></a></div>
<br />
<br />
Yet another failure. At this point I was seriously worried for the helmets. Without a visor, the gold helmet was pretty useless. I took some time to think through how this attempt had failed, and luckily saw a reason to try again. A big issue for this attempt was that the plexiglass had fallen free of the frame in one corner during the heating process. I found this was because the plexiglass had slipped to one side while I was tightening the frame, so it was not being clamped down on that corner at all. This could be fixed by being more careful during the set-up. I also added some cardboard in between the frame and the plexiglass in order to get a better seal. I wondered if I had drilled enough holes, so I computed the collective cross-sectional area of every hole and compared it to the cross-section of the shop vac hose. The shop vac hose was about 5 square inches, while my holes only added up to around 2. I went through and widened most of the holes in the top of the vacuum box to bring that figure much closer to the hose area.<br />
<br />
With these modifications done, I set up for another attempt. Luckily the plexiglass sheets were only costing me about $7 a piece and I could get them cut to the right size at my local hardware store. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-gW2XejAzJd4/UmrIee279YI/AAAAAAAAA-8/B0c49qDsObA/s1600/20131020_175359.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-gW2XejAzJd4/UmrIee279YI/AAAAAAAAA-8/B0c49qDsObA/s400/20131020_175359.jpg" width="353" /></a></div>
<br />
<br />
It finally worked. Well, mostly. Under the pressure of the vacuum, the model visor actually bent out of shape, causing the bottom of the visor to become wider than I wanted. This was surprising because the model was cured Bondo reinforced with thick fiberglass. I decided this visor would work well enough with some bending, and I was too sick of molding plastic to try again. I cut out the visor and prepared it for tinting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-a5BF_eMBZp4/UmrIvVc19OI/AAAAAAAAA_E/CLQLYgGvIjs/s1600/20131020_190505.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-a5BF_eMBZp4/UmrIvVc19OI/AAAAAAAAA_E/CLQLYgGvIjs/s400/20131020_190505.jpg" width="320" /></a></div>
<br />
<br />
Both of the visors were tinted black using Nite-Shades, a spray intended for vehicle lights that makes them look black when no light is shining out from inside. I tested the spray on the many failed visor attempts to make sure I wouldn't accidentally ruin the visors I had finally made.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-bKNqWMGCIu4/UmrJAbfKJcI/AAAAAAAAA_M/tY5WJ1A3FCY/s1600/20131014_164902.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-bKNqWMGCIu4/UmrJAbfKJcI/AAAAAAAAA_M/tY5WJ1A3FCY/s400/20131014_164902.jpg" width="261" /></a></div>
<br />
<br />
Once both visors had been tinted, I began attaching them to the helmets.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-rJhPs91heKU/UmrJZiuY71I/AAAAAAAAA_U/W8Qg5wikjI8/s1600/20131021_211221.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="312" src="http://2.bp.blogspot.com/-rJhPs91heKU/UmrJZiuY71I/AAAAAAAAA_U/W8Qg5wikjI8/s400/20131021_211221.jpg" width="400" /></a></div>
<br />
<br />
The second visor did not fare so well. After a day or two the plexiglass became brittle and cracked down the side. I carefully glued it back together with special plexiglass adhesive, but once I tried bending it again, another crack formed. The visor was doomed. After 6 failed attempts at making a visor for this helmet, I was pretty let down. Luckily, all those failures meant I was getting better at making visors. I ordered a sheet of PETG to make a new one, with the intention of vacuum forming it just like the plexiglass.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-D87TKFTL6vA/UmrJthHUY9I/AAAAAAAAA_c/JMkfB1H4B9k/s1600/20131021_185202.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="278" src="http://3.bp.blogspot.com/-D87TKFTL6vA/UmrJthHUY9I/AAAAAAAAA_c/JMkfB1H4B9k/s400/20131021_185202.jpg" width="400" /></a></div>
<br />
I'll stop here now that I've covered my process for making and tinting both visors, even though the second one is not 100% done. I'll try to update with a picture or two when it has been fixed.<br />
<br />
UPDATE: It seems like 0.03" PETG is the way to go. I've formed a new visor that is reasonably strong and fairly flexible. The material was much easier to heat up (~300F for ~3 mins) and actually allowed me to redo the vacuum forming process twice. On the first attempt the PETG didn't get pulled down as well as the plexiglass did, probably due to human error in placing the frame onto the vacuum box. The sheet cooled into an almost-visor shape, but I put it back in the oven and watched the plastic return almost to its original shape. This was a life-saver.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-PQqRXP-1Bj0/UnGzjC8JbhI/AAAAAAAABA8/OEutZm2VD8k/s1600/20131030_165344.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="292" src="http://1.bp.blogspot.com/-PQqRXP-1Bj0/UnGzjC8JbhI/AAAAAAAABA8/OEutZm2VD8k/s400/20131030_165344.jpg" width="400" /></a></div>
<br />
I reinforced the fiberglass/Bondo visor mold this time to prevent the bottom from bending out, but the visor was still slightly misshapen. Luckily the formed PETG is flexible and allows me to bend it back into place. Next step is to attach lights to the visor, then glue the visor on to the helmet. gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com1tag:blogger.com,1999:blog-5124010431457427473.post-6457404730239593562013-10-17T15:47:00.001-06:002013-10-17T22:18:47.381-06:00Daft Punk Helmets: Icing a Toxic Cake<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-9JG2lVKpx5g/UmBYZgeOkSI/AAAAAAAAA70/UqVvT_7dunE/s1600/20131016_180945.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-9JG2lVKpx5g/UmBYZgeOkSI/AAAAAAAAA7w/Wxbo8HHBmEM/s1600/20131016_180945.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="179" src="http://3.bp.blogspot.com/-9JG2lVKpx5g/UmBYZgeOkSI/AAAAAAAAA7w/Wxbo8HHBmEM/s320/20131016_180945.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
At the end of my <a href="http://gperco.blogspot.com/2013/10/daft-punk-helmets-papercraft-to.html">last post</a>, both Daft Punk helmets had been formed with papercraft and fully reinforced with fiberglass. Once the fiberglass had cured, the helmets were rock hard, but fairly lumpy on the outside. The fiberglass cloth was applied to the inside, leaving the hard edges and seams of the original papercraft on the outside. To smooth out these bumps and produce a nice surface for painting, I used Bondo. Bondo is a putty-like material that starts as a two-part mixture, cures exothermically once mixed, and ends up as a hard, clay-like solid. The final state is relatively easy to sand, allowing for fine-tuning of the surface shape. The idea here is to iteratively shape each helmet by applying Bondo, letting it cure, then sanding it down. With each iteration, the helmets get closer to their final shape.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-n1723zcyT8o/UmBTTuG2GyI/AAAAAAAAA64/8tzwGERAqLk/s1600/20130930_194036.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="283" src="http://1.bp.blogspot.com/-n1723zcyT8o/UmBTTuG2GyI/AAAAAAAAA64/8tzwGERAqLk/s400/20130930_194036.jpg" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">It's a lot like icing a toxic cake.</span></div>
<br />
With some 60 grit sandpaper, I was able to shave off a lot of the goopy unevenness of the first layer of Bondo. My goal was to sand it down to the correct shape, and if I hit fiberglass, add more Bondo everywhere to allow for more even sanding. I enlisted some help for sanding, since two helmets lend themselves nicely to two people working.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-gpmVa3eXEyI/UmBUKvhwBxI/AAAAAAAAA7E/31881Wjed9g/s1600/20131005_145918.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-gpmVa3eXEyI/UmBUKvhwBxI/AAAAAAAAA7E/31881Wjed9g/s400/20131005_145918.jpg" width="342" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Helper with tiny power sander was a life saver.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-7mFhWWMIibs/UmBU6NPCD2I/AAAAAAAAA7M/og0m4V7VVMw/s1600/20131005_174035.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="335" src="http://4.bp.blogspot.com/-7mFhWWMIibs/UmBU6NPCD2I/AAAAAAAAA7M/og0m4V7VVMw/s400/20131005_174035.jpg" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Not quite what I had in mind..</span></div>
<br />
The biggest issue we ran into was air pockets. I was not at all careful with my first layer of Bondo, so as we sanded it down we found many dents and pockets caused by the uneven application. At first we tried filling these holes with more Bondo, applied with a small stick for precision. This took a long time and was not nearly as precise as I wanted. In some areas we also needed to add an even layer to get away from the fiberglass beneath, but doing so with Bondo would just end up creating even more pockets and dents.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-xRM-zFuqMF8/UmBVjuVi8uI/AAAAAAAAA7U/tL0RZsjyKbk/s1600/20131013_134223.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-xRM-zFuqMF8/UmBVjuVi8uI/AAAAAAAAA7U/tL0RZsjyKbk/s400/20131013_134223.jpg" width="365" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Fixing past mistakes and adding future ones.</span></div>
<br />
You'll notice in the pictures that the visors have been cut out already. In order to create the clear plastic visors later on, all we need is a solid block that mimics the desired shape. Since the mold doesn't need to be perfectly smooth, I went ahead and cut them out using a Dremel rotary tool. The key was using the right cutting bit and being very careful. I ended up breaking two different cutting bits during the process, so I won't claim I know which one is best to use.<br />
<br />
To get the rest of the helmets smooth faster, I tried a two-part solution. I bought some liquid nails that claimed to be easy to sand once cured and used a small nozzle applicator to fill in dents and holes. Unfortunately, each application was taking days to fully cure, and I didn't have that much time to waste. The second part of the solution was to use filler primer spray paint. This is a type of primer that goes on with a decent amount of volume, so it helps fill in small dents and holes. Once the helmets had all of the major bumps smoothed out, I switched to primer.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-UF65G2tsfzU/UmBXhkXxA3I/AAAAAAAAA7g/zDeFvbI4Hrk/s1600/20131013_152316.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="275" src="http://1.bp.blogspot.com/-UF65G2tsfzU/UmBXhkXxA3I/AAAAAAAAA7g/zDeFvbI4Hrk/s400/20131013_152316.jpg" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">The difference in surface area was surprising.</span></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/--RpGkbGfepA/UmBXyPGTkOI/AAAAAAAAA7o/TgDNTq34sNc/s1600/20131013_152308.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="252" src="http://4.bp.blogspot.com/--RpGkbGfepA/UmBXyPGTkOI/AAAAAAAAA7o/TgDNTq34sNc/s400/20131013_152308.jpg" width="400" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Dat back.</span></div>
<br />
To eliminate dents, we followed a simple pattern: apply two coats of primer, let dry, sand with 220 grit until Bondo shows, wash, let dry, repeat. After three or four iterations of this, most of the dents had disappeared. During this process I also cut out the mouth slit on the helmet that needed it, since the hole left by papercraft had been accidentally filled in during the Bondo process. Again I used my Dremel and switched between a few bits while cutting.<br />
<br />
This whole shaping ordeal took a long time and a lot of effort. Most of the helmets are large smooth surfaces, so hand sanding wasn't too hard. For the smaller details around the ears, I used a Dremel to carefully sand them down while maintaining the angles I wanted. While the final shape was not 100% perfect, it was good enough for a costume.<br />
<br />
<br />
<div style="text-align: center;">
<a href="http://3.bp.blogspot.com/-9JG2lVKpx5g/UmBYZgeOkSI/AAAAAAAAA70/UqVvT_7dunE/s1600/20131016_180945.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://3.bp.blogspot.com/-9JG2lVKpx5g/UmBYZgeOkSI/AAAAAAAAA70/UqVvT_7dunE/s400/20131016_180945.jpg" width="400" /></a></div>
<br />
<br />
At this point, both helmets were ready for real paint. In one of my next posts I'll cover the steps we go through to get the nice reflective silver and gold surfaces needed. Spoiler: there is lots of sanding involved.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-75285432593461492682013-10-01T14:18:00.003-06:002013-10-04T13:58:55.912-06:00Daft Punk Helmets: Papercraft to Fiberglass<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-GWZI4cNQW3c/UkiUZvmE0uI/AAAAAAAAA2g/XbHot5_-U-M/s1600/20130924_171516.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-GWZI4cNQW3c/UkiUZvmE0uI/AAAAAAAAA2g/XbHot5_-U-M/s200/20130924_171516.jpg" width="188" /></a></div>
<br />
Progress has been made on the helmets. As mentioned in the introductory post, the first steps were to create full-size paper versions of each helmet and then reinforce them with fiberglass. I've been following some guidelines put out by a few sites like <a href="http://kantikane.blogspot.com/2011/10/making-fiberglass-helmet-from-pepakura.html">this one</a>.<br />
<br />
The idea behind papercraft is that you can create a folded-paper version of a 3D model that has been designed on a computer. There is <a href="http://www.tamasoft.co.jp/pepakura-en/">software</a> that takes imported 3D models and splits up the geometric faces into pieces that can be cut out of paper and glued back together to re-form the 3D shape. This software outputs a series of 8.5"x11" images that are printed to cardstock. In order to connect the various 2D pieces together to form the 3D shape, the software adds flaps and fold lines that indicate which pieces need to be glued where.<br />
<br />
After finding nice-looking 3D models of the helmets on the internet, I set out to print, cut, and glue. I didn't have access to a printer that could handle cardstock, so instead I just printed to normal copy paper and glued each sheet to a piece of cardstock.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-eQQT6IY4_V4/UkiTs3PjBSI/AAAAAAAAA2E/L2HtyXLM9DI/s1600/20130914_174214.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-eQQT6IY4_V4/UkiTs3PjBSI/AAAAAAAAA2E/L2HtyXLM9DI/s320/20130914_174214.jpg" width="260" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Hints of a 3D structure.</span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-_NGDHkNh38Y/UkiTszaUVmI/AAAAAAAAA2Q/Z9WDD333Drw/s1600/20130914_190551.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://3.bp.blogspot.com/-_NGDHkNh38Y/UkiTszaUVmI/AAAAAAAAA2Q/Z9WDD333Drw/s320/20130914_190551.jpg" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Rising from the fold.</span></div>
<br />
The whole papercraft ordeal seemed daunting at first, but then I realized it was 1am, I had just spent 6 hours gluing my fingers together, and I had no intention of stopping. When I got busy cutting and assembling (with the help of a friend), I noticed it wasn't as bad as I had anticipated; the mindless work was actually a little addicting.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-xVBZ_KXxXgE/UkiULXjXKdI/AAAAAAAAA2U/lVAaYif5r4Y/s1600/20130922_222623.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="139" src="http://4.bp.blogspot.com/-xVBZ_KXxXgE/UkiULXjXKdI/AAAAAAAAA2U/lVAaYif5r4Y/s320/20130922_222623.jpg" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Went through a few X-acto blades.</span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-GWZI4cNQW3c/UkiUZvmE0uI/AAAAAAAAA2c/pzjrlqt5_lg/s1600/20130924_171516.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-GWZI4cNQW3c/UkiUZvmE0uI/AAAAAAAAA2c/pzjrlqt5_lg/s320/20130924_171516.jpg" width="301" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Hoping I can fix slight lumps later..</span></div>
<br />
Each helmet ended up needing about 6 hours of work. I was very happy with the final product, even if my occasional sloppiness in cutting caused some unwanted lumps on the helmets. Both helmets were lightweight, but still able to support their own shape.<br />
<br />
The next step was to reinforce the helmets with fiberglass. Fiberglass is a strong, lightweight material that is made of two components: resin (glue-like stuff) and fiberglass cloth (a cloth that is made of bits of glass). If handling a cloth made of glass shards sounds like a bad idea to you, then you are a reasonable person. Don't let the stuff touch your skin. I've made this mistake when installing insulating wrap on motorcycle exhaust pipes and had to cover my hands with duct tape in an attempt to pull the tiny glass fibers out of my skin. Also, the resin comes in two parts that are mixed together to initiate curing (hardening/drying). This is an exothermic reaction (gives off heat) and it can really heat things up. A thin coating of the stuff probably won't heat up noticeably, but a cup of the stuff can start to smoke and will burn your hand if you hold it. In all, don't try to make fiberglass things unless you are familiar with the hazards. Do it outside, wear protective gear, etc.<br />
<br />
The first step in reinforcing the helmets is to coat the inside of them with a thin layer of resin. I mixed the two parts together in a mixing cup and used a cheap paintbrush to coat each helmet.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-8cilHor95C0/UkiVA2_Bp_I/AAAAAAAAA2s/nyasnMw9GwI/s1600/20130921_165808.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-8cilHor95C0/UkiVA2_Bp_I/AAAAAAAAA2s/nyasnMw9GwI/s320/20130921_165808.jpg" width="298" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Outside looks wet due to the epoxy from the inside seeping through.</span></div>
<br />
Once the helmets were thoroughly coated, I let them dry in an outdoor closet. After a day they were much stiffer than before, but still not robust. This first coat of resin was primarily to ensure the paper helmets wouldn't fall apart or warp during the application of fiberglass cloth. To efficiently apply the fiberglass cloth to the inside of the helmets, I cut a large sheet into smaller strips (about 3"x1") and coated the inside of the helmets again with a thin layer of resin for stickiness. I would place 4 or 5 overlapping strips inside the helmet, then dab more resin on top until they looked soaked. I found that my resin would start to harden in the mixing cup after 10 or so minutes, so I made small batches in order to not waste materials.<br />
<br />
After being left alone for another day or two to cure, the fiberglass was hard as rock. My rough estimate is that each could withstand having about 40 pounds placed on top before buckling. Don't blame me if you try this and they collapse with less than that. I'm certainly not going to put my beloved helmets to the test.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-dqH7U67MoNE/UkiUuUcXqeI/AAAAAAAAA2k/bybUeoy_akE/s1600/20130929_102000.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="http://3.bp.blogspot.com/-dqH7U67MoNE/UkiUuUcXqeI/AAAAAAAAA2k/bybUeoy_akE/s320/20130929_102000.jpg" width="320" /></a></div>
<div style="text-align: center;">
<span style="font-size: x-small;">Ready for the next step!</span></div>
<br />
The next step will be to coat the outside in Bondo (or similar) and sand it down to fine-tune the shape.gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com3tag:blogger.com,1999:blog-5124010431457427473.post-18923280409517951302013-09-29T15:02:00.000-06:002013-10-25T14:30:26.784-06:00Daft Punk Helmets: IntroHalloween approaches, which means it's time to spend far too much of my free time working on a costume. <a href="http://gperco.blogspot.com/2013/01/embedded-halloween-costume.html">Last year</a>, I used my costume as an opportunity to refresh myself on basic electronics and get up to speed on Arduinos. I was pleased with the results, but was very much limited at the time by what I knew. Now I've had a whole year to experiment with embedded electronics, so I figure it's time to apply myself a bit more and see what happens.<br />
<br />
During this last year, Daft Punk released their new album <i>Random Access Memories</i>. In case you don't know, Daft Punk is a French electronic music duo consisting of two robots that are sometimes erroneously referred to as men. They look like this: <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-EkjiPeAwb6s/UmBbPOunJvI/AAAAAAAAA74/bPrXS22HgK4/s1600/daft-punk-ram.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="http://2.bp.blogspot.com/-EkjiPeAwb6s/UmBbPOunJvI/AAAAAAAAA74/bPrXS22HgK4/s200/daft-punk-ram.jpg" width="200" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Their heads contain all kinds of electronic goodies, most of which control LEDs that light up beneath the visors. The one on the left (which I will refer to as Thomas) has a 2D matrix of red LEDs that can display patterns or words that scroll across the visor. The one on the right (Guy) has a few horizontal strips of colored LEDs going up each side of the visor. Over the years, there have been a few iterations of the robot-heads that have shown different amounts of LED activity, but what I have described seems to be the most recognized set of lights.<br />
<br />
My goal this Halloween is to make helmet versions of both of these robot heads. Each will contain embedded electronics to recreate the LED patterns that make them identifiable.<br />
<br />
The first big challenge I see in this project is the creation of the non-electronically-enhanced helmets. Luckily, there is a fairly evolved online community of folks who are similarly interested in mimicking the looks of these fashionable robots. A common practice for creating props like these that I will use is to start with a papercraft model, reinforce it with fiberglass, cover with Bondo, then fine-tune the shape through sanding. This creates a hard helmet that can later be painted. To create the semi-transparent visors, I plan on replacing the fiberglass/Bondo visors with clear acrylic molded into the right shape.<br />
<br />
Once the helmets are forged, the electronics must be added. Both need LEDs, control electronics, and power. Since I anticipate the helmets impeding vocal communication (I assume the robots communicate via wifi instead), I'm interested in implementing some kind of 2-way communication between the helmets.<br />
<br />
In all, this is the intended order of operations:<br />
1 - Create full-size paper helmets (<a href="http://gperco.blogspot.com/2013/10/daft-punk-helmets-papercraft-to.html">post</a>)<br />
2 - Reinforce helmets with fiberglass<br />
3 - Shape helmet exterior with Bondo<br />
4 - Paint helmets<br />
5 - Replace solid visors with acrylic visors (<a href="http://gperco.blogspot.com/2013/10/daft-punk-helmets-vacuumed-visors.html">post</a>)<br />
6 - Install lights and control electronics<br />
7 - (Optional) Install 2-way communication<br />
8 - (Optional) Tack on any other neat electronic components I have sitting around.<br />
<br />
Taken one step at a time, I think this is a completely doable project, given a few weeks to complete. I will post updates to this project as I complete the steps listed above (assuming plans don't change, which they always do).gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-66409618188219937562013-09-18T13:00:00.001-06:002013-09-18T13:00:11.366-06:00Music Box (Software)In my previous post, I covered the build details of a 'music box' that could listen to music being played, compute a response, and display the output on a display of 12 RGB LEDs. The purpose of this post is to talk some more about the software and algorithms behind the device. There are a couple parts to this, so I'll try to introduce them in an order that makes sense.<br />
<br />
The code for the Music Box is available on <a href="https://github.com/bgreer/sketchbook/tree/master/musicbox">github</a>.<br />
<br />
1 - LED control<br />
There are 12 RGB LEDs that can each be set individually using three 16-channel PWM drivers. This means that 36 of the 48 available channels are wired directly to the LEDs, and each controls the brightness of a single color of a single LED. I've used TLC5940 drivers for this project, which basically act as shift registers. The output value of each channel is set by an internal 12-bit register that is filled by shifting values through each channel register sequentially. The drivers are daisy-chained together so that only one long string of bits needs to be shifted out to set the value for every LED.<br />
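<br />
To make the daisy-chaining concrete, here is a minimal bit-banged sketch of the idea. The pin numbers are placeholders chosen for illustration; the real implementation lives in the github repo linked above.<br />
<br />
<pre>// Shift 12-bit grayscale data through daisy-chained TLC5940s: the last
// channel's bits go out first, MSB first, then XLAT latches everything
// to the outputs at once. Pin numbers are placeholders.
const int SIN_PIN = 11, SCLK_PIN = 13, XLAT_PIN = 9;
const int NUM_CHANNELS = 48;       // 3 drivers x 16 channels
uint16_t grayscale[NUM_CHANNELS];  // one 12-bit value per channel

void shiftGrayscale() {
  for (int ch = NUM_CHANNELS - 1; ch >= 0; ch--) {
    for (int b = 11; b >= 0; b--) {
      digitalWrite(SIN_PIN, (grayscale[ch] >> b) & 1);
      digitalWrite(SCLK_PIN, HIGH);  // clock one bit into the chain
      digitalWrite(SCLK_PIN, LOW);
    }
  }
  digitalWrite(XLAT_PIN, HIGH);  // latch the new values
  digitalWrite(XLAT_PIN, LOW);
}</pre>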
<br />
The perceived brightness of an LED is roughly logarithmic in the current running through it. This means that the increase in apparent brightness per increase in PWM value decreases as you move up in value. While you may think this only matters if you plan on linearly dimming LEDs back and forth, it actually has a profound effect on color mixing. Because only a narrow range of PWM values makes an LED appear dimmed, linearly sliding through PWM values will cause each color of an RGB LED to mostly appear either on or off. Without correcting for this logarithmic response, sliding through hue on an RGB LED will look more like cycling through the primary and secondary colors (red-yellow-green-cyan-blue-magenta, and around again) with little mixing in between.<br />
<br />
To correct for this, I used a pre-computed 12-bit look-up table for each value in [0-255] that would approximately linearize the brightness. While I could only set the color values with 8 bits per channel, the output to the PWM drivers was 12 bit to maintain precision at the low end of the look-up table.<br />
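<br />
As a sketch of what generating that table can look like (the gamma of 2.2 is an assumption on my part; the actual curve in the music box code may differ):<br />
<br />
<pre>// Build a 256-entry table mapping 8-bit color values onto 12-bit PWM
// values along a power curve. Gamma 2.2 is a common choice, assumed here.
uint16_t gammaLut[256];

void buildLut() {
  for (int i = 255; i >= 0; i--)
    gammaLut[i] = (uint16_t)(powf(i / 255.0f, 2.2f) * 4095.0f + 0.5f);
}
// The midpoint gammaLut[128] lands near 900 rather than 2048, which is
// why the dim end of the table needs the extra 12-bit resolution.</pre>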
<br />
There are a few inputs on the drivers that need to be toggled while shifting data in, but this is well documented elsewhere. The only tricky part is getting the overall PWM frequency right. The grayscale clock (GSCLK) pin needs to be toggled 4096 times to complete one pulse cycle in our PWM setup. In order for the LEDs to appear continuously on, the pulse frequency needs to be at least 60Hz or so. This means the GSCLK line needs to be toggled at 4096*60Hz = ~246kHz or higher.<br />
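<br />
One convenient way to generate a clock like that on a Teensy 3.x is to borrow a hardware PWM output. A minimal sketch, with the pin choice being an arbitrary assumption (analogWriteFrequency() is a Teensyduino call):<br />
<br />
<pre>// Generate a ~246kHz GSCLK from a hardware PWM pin on a Teensy 3.x.
const int GSCLK_PIN = 3;  // any PWM-capable pin; this choice is arbitrary

void setupGsclk() {
  analogWriteFrequency(GSCLK_PIN, 4096 * 60);  // 245,760 Hz
  analogWrite(GSCLK_PIN, 128);                 // ~50% duty: a continuous clock
}</pre>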
<br />
When the music box is powered up, it begins by testing the LEDs and drivers by displaying a simple pattern. Then it jumps into Ambient Mode, where the value of each LED is set based on some simple math.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/cEfgAiqLlc8?feature=player_embedded' frameborder='0'></iframe></div>
<br />
The pattern cycles itself randomly forever, creating an endless sequence of new color patterns. After letting the music box sit around in ambient mode for a few hours, I've decided I should really make more of these. Someday..<br />
<br />
2 - Sound Reactive<br />
<br />
To react to music, the software first needs to respond to what the microphone picks up. The microphone is hooked up to one of the analog input pins of the Teensy3, so I just need an analogRead() to grab the current state. In order to sample at a constant rate, I used an interrupt that forces the code to take a microphone measurement at a fixed interval. After a measurement is made in the interrupt, the sound reactive algorithms perform some incremental calculations to update the frequency-dependent output.<br />
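<br />
A minimal sketch of that fixed-rate sampling on a Teensy 3.x (IntervalTimer comes with Teensyduino; the pin and variable names are placeholders, and the real code is in the github repo):<br />
<br />
<pre>// Sample the microphone at a fixed rate from a timer interrupt.
const int MIC_PIN = A0;  // placeholder pin
IntervalTimer micTimer;
volatile uint16_t micSample;

void onMicTick() {
  micSample = analogRead(MIC_PIN);  // keep the handler short
  // ...incremental frequency-bin updates happen here...
}

void setup() {
  micTimer.begin(onMicTick, 350);  // fire every 350 microseconds
}</pre>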
<br />
Since the microphone-sampling interrupt fires every few hundred microseconds (~350us), any computing that is done with a new microphone reading needs to complete in a very small amount of time.<br />
<br />
The following video shows the Party Mode. Six different frequency bins are updated as the microphone samples come in, and the amplitudes of these bins are displayed on the LED circle. This mode acts as a simple graphic equalizer.<br />
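<br />
For a sense of what an incremental per-sample bin update can look like, here is a Goertzel-style filter, a standard way to track a single frequency without computing a full FFT. This is a sketch of the general technique, not necessarily the exact math in the music box code:<br />
<br />
<pre>// Track one frequency bin, updated once per microphone sample.
struct Bin {
  float coeff, s1, s2;
};

void binInit(Bin &b, float freq, float sampleRate) {
  b.coeff = 2.0f * cosf(2.0f * M_PI * freq / sampleRate);
  b.s1 = b.s2 = 0.0f;
}

void binUpdate(Bin &b, float x) {  // call from the sampling interrupt
  float s = x + b.coeff * b.s1 - b.s2;
  b.s2 = b.s1;
  b.s1 = s;
}

float binPower(const Bin &b) {  // squared magnitude; reset s1/s2 every N samples
  return b.s1 * b.s1 + b.s2 * b.s2 - b.coeff * b.s1 * b.s2;
}</pre>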
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/Dx3EbuUBT0U?feature=player_embedded' frameborder='0'></iframe></div>
<br />
I'm not 100% happy with how it acts, but I think it's a great start.<br />
<br />
The final mode is Tuner Mode, which keeps track of 36 different frequency bins that are spaced much closer together in frequency. This is the mode I originally designed the music box to run in, and much of the math and theory for it was covered in my last post. Unfortunately this mode is also the least impressive right now, so I don't have a video at this point. I think it needs some careful adjustments in the code and possibly some circuit modifications to deal with electromagnetic interference (EMI) produced by the LED drivers. If I can get it working satisfactorily, I will update with a third video!gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-42872212768627264192013-08-08T19:12:00.000-06:002013-10-04T14:01:31.543-06:00Music Box (Hardware)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-lPY3KUOWqoo/UgQ-9dJkIHI/AAAAAAAAA1E/BrF2MMt6BCM/s1600/2013-08-01_23-29-30_800.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="179" src="http://4.bp.blogspot.com/-lPY3KUOWqoo/UgQ-9dJkIHI/AAAAAAAAA1E/BrF2MMt6BCM/s200/2013-08-01_23-29-30_800.jpg" width="200" /></a></div>
<br />
In this post, I'll be covering the build details of a project I've recently finished that has ended up as one of the more advanced things I've attempted. In another post, I'll cover some of the programming details, as those were equally challenging to sort out. The original purpose of this project was to make a small gift for a musician, but it ended up being much more rewarding for me as the builder/programmer than for the recipient (in my mind).<br />
<br />
The original concept was to have a small device that would react to music. Since the device would be given to a violin player, I wanted it to be specifically designed to react to someone practicing an instrument, rather than to regular music being played through speakers. The distinction I make here is that a single instrument (like a violin) being played on its own will create a fairly simple sound wave, consisting of one or two fundamental frequencies plus a limited set of harmonics, while 'regular' music is a complex mixture of many instruments (possibly including vocals) that creates a much more difficult waveform to analyze. There are endless examples on the internet of DIY graphic equalizers that are designed to handle this wide variety of sounds, but I wanted something a little more precise.<br />
<br />
At this point I would like to make a note on musical scales, but after a few minutes of research to make sure my language is precise, I realized that I am very much unqualified to give a small lecture on music theory. So I'll keep this short. In modern western music, there are 12 pitches in the chromatic scale that separate each octave (a factor of 2 in frequency). Each of these pitches has a letter name with a possible modifier (ex: D# / Eb). You can also add a number after the name to specify which octave you mean (ex: A4 is 440Hz, A5 is 880Hz). While more traditional tuning is done using various ratios of frequencies to construct the 12 pitches of an octave, a modern approach is to space them out evenly on a logarithmic scale:<br />
<br />
f = f_0 * 2^(n/12)<br />
<br />
Here, f is the frequency of a note you wish to calculate, f_0 is an initial frequency, and n is the number of pitches away you want to calculate. So by knowing one frequency (A4 is 440Hz), we can find any other one (A#4 is 440 * 2^(1/12) = 466.16Hz, B4 is 493.88Hz, etc).<br />
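<br />
In code, the whole mapping is a one-liner. A quick Arduino-style helper (the function name is just for illustration):<br />
<br />
<pre>// Equal-temperament pitch, n semitones away from A4 = 440Hz.
float noteFreq(int n) {
  return 440.0f * powf(2.0f, n / 12.0f);
}
// noteFreq(1)   -> ~466.16 Hz (A#4)
// noteFreq(2)   -> ~493.88 Hz (B4)
// noteFreq(-12) -> 220 Hz (A3)</pre>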
<br />
In a later post on the software I've written for this project I'll go into more details about how I've used this math to do some clever things, but for now the point I'd like to emphasize is the cyclic nature of the chromatic scale. Every 12 pitches, you return to the same note but an octave higher or lower. It was this idea that led me to think of a circle of 12 lights that would represent each pitch of the scale.<br />
<br />
To differentiate octaves of the same note, I would use RGB LEDs for each light and have each octave glow a different color. To drive these 12 LEDs, I sprang for a couple of 16-channel PWM drivers to make my life easier. In order to react to any sounds at all, the device would need a microphone, as well as a fast enough processor to handle the real-time calculations. For the mic I went with a MEMS microphone, and for the processor I sprang for a Teensy 3.0. I've been using one of these Teensy boards for another project and have really loved them. They are fully Arduino compatible, much smaller than most standard Arduinos, and also much faster (overclocking to 96MHz). In contrast to most Arduinos (not the Due), the Teensy is built around a 32-bit ARM Cortex-M4 processor which has built-in digital signal processing (DSP) instructions. I've never used any DSP capabilities before, but I figured this project would be a good one to learn with. As for the aesthetic, I decided to grab a few 6"x6" diffused acrylic panels like the ones used in my water level box project. This way the box will have a nice soft color glow when running and reacting to music.<br />
<br />
After testing the PWM drivers and microphone on a breadboard, I went ahead and soldered everything together on a few small protoboards I got for cheap on ebay. While the circuit looks fairly complicated, most of the wires are for daisy-chaining the PWM drivers and linking the outputs to the LEDs. Amazingly, everything seemed to work right off the bat.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ej0nUIR3HYw/UgQ8wWaHgLI/AAAAAAAAA0g/g92b5RzKVL8/s1600/2013-07-24_15-59-08_300.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="227" src="http://3.bp.blogspot.com/-ej0nUIR3HYw/UgQ8wWaHgLI/AAAAAAAAA0g/g92b5RzKVL8/s320/2013-07-24_15-59-08_300.jpg" width="320" /></a></div>
<span style="font-size: x-small;">At this point I start to wonder if I should plan these circuits before I solder them.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-eogZZUtmQiE/UgQ95f2nd6I/AAAAAAAAA0o/U9EvuBNwc9U/s1600/2013-07-24_17-05-24_219.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-eogZZUtmQiE/UgQ95f2nd6I/AAAAAAAAA0o/U9EvuBNwc9U/s320/2013-07-24_17-05-24_219.jpg" width="224" /></a></div>
<span style="font-size: x-small;">Obviously not done, mostly because the mic isn't connected to anything.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-p37NKdGr_LI/UgQ-MzVAugI/AAAAAAAAA0w/lyPCvoBxu2U/s1600/2013-07-29_22-17-23_48.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="http://1.bp.blogspot.com/-p37NKdGr_LI/UgQ-MzVAugI/AAAAAAAAA0w/lyPCvoBxu2U/s320/2013-07-29_22-17-23_48.jpg" width="320" /></a></div>
<span style="font-size: x-small;">All good things are held together with hot glue and prayers.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
While writing the waveform analysis algorithm for this, I realized that I was limiting the usefulness of this device to times when a single instrument is being played nearby. This seemed like a waste of so much soldering, so I decided that the whole device could operate in a few different modes. Some could react to instruments, some to general sound, and some could be oblivious to the environment and just display some pretty colors. In order to switch between these modes, I needed a method of user input. I wanted something sleek, so I went with two small pads of aluminum tape wrapping around the back panel. One is grounded, the other connected to an analog input of the Teensy with an internal pull-up resistor. When the pads are shorted together with a reasonably high resistance object (say, a finger), the Teensy registers a change in the analog read value and the mode is switched.<br />
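<br />
A minimal sketch of that input scheme, with the pin and threshold as assumed values:<br />
<br />
<pre>// Mode-switch touch pads: one pad grounded, the other on an analog pin
// with the internal pull-up enabled. Untouched, the pin reads near full
// scale (1023); a finger bridging the pads pulls the reading down.
const int TOUCH_PIN = A1;   // placeholder pin
const int THRESHOLD = 900;  // assumed threshold; tune to taste

void setupTouch() {
  pinMode(TOUCH_PIN, INPUT_PULLUP);
}

bool padTouched() {
  return analogRead(TOUCH_PIN) < THRESHOLD;
}</pre>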
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-hhTFtAWegfg/UgQ-cDRB0FI/AAAAAAAAA04/8_RMFpxlk5s/s1600/2013-07-30_10-28-07_241.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="180" src="http://4.bp.blogspot.com/-hhTFtAWegfg/UgQ-cDRB0FI/AAAAAAAAA04/8_RMFpxlk5s/s320/2013-07-30_10-28-07_241.jpg" width="320" /></a></div>
<span style="font-size: x-small;">Gluing everything together.</span><br />
<br />
Once completed, I had a plain white box with 12 individually addressable RGB LEDs in a circle with a microphone and processor to handle the music processing. Power is supplied by a 5V AC adapter dropped to 3.3V with a voltage regulator, and reprogramming can be done right on the Teensy with a micro-USB cable. It actually took longer to write the firmware for the music box than it did to build the whole thing, so I'll cover those details in another post. The goal was to give this box as a birthday gift, and I was pretty rushed to get it done. While I did have it completed in time (neglecting a few coding errors that were fixed post-birthday), I didn't get a chance to take nice pictures of it with my non-phone camera. Hopefully I will have nicer pictures and maybe a video in the next post as well.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-lPY3KUOWqoo/UgQ-9dJkIHI/AAAAAAAAA1A/6MbUO793gx0/s1600/2013-08-01_23-29-30_800.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="287" src="http://1.bp.blogspot.com/-lPY3KUOWqoo/UgQ-9dJkIHI/AAAAAAAAA1A/6MbUO793gx0/s320/2013-08-01_23-29-30_800.jpg" width="320" /></a></div>
<span style="font-size: x-small;">Party hard.</span><br />
<br />gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0tag:blogger.com,1999:blog-5124010431457427473.post-60301632866501356312013-07-17T22:04:00.005-06:002013-10-30T15:17:09.854-06:00LED Color Wand<span style="font-size: x-small;">I've got 6 or so posts drafted on reasonably advanced projects, but I figured it was time to log something that was extremely simple to make. In all, this project took about 3 hours on a Saturday morning.</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-wNgj_xsJF-o/UildA9K9gaI/AAAAAAAAA1o/1T84XCtmDgA/s1600/IMG_1309_anon.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="140" src="http://3.bp.blogspot.com/-wNgj_xsJF-o/UildA9K9gaI/AAAAAAAAA1o/1T84XCtmDgA/s200/IMG_1309_anon.jpg" width="200" /></a></div>
<br />
<br />
I do a lot of nighttime photography. I think the reason it appeals to me more than daytime photography is that you can capture an image that the human eye is incapable of seeing. Not only will a long exposure record the movement of light and present it as a still image, but it can reveal color in scenes where the eye would see nothing.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/--tgubR1VS38/UeWD_jPXuoI/AAAAAAAAAyw/hHmv6vANWGk/s1600/191423_10200331816007992_1855438583_o.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/--tgubR1VS38/UeWD_jPXuoI/AAAAAAAAAyw/hHmv6vANWGk/s320/191423_10200331816007992_1855438583_o.jpg" width="213" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
A fun thing to do with long exposures is 'paint' a scene using a light source. In the picture above, I stood below the rock arch waving a flashlight around in my hand to light up the bottom of the arch while the camera shutter was open. You can see the path of the flashlight with the circle of light surrounding my body. I ended up really liking this picture, but I wish I could have changed the color of the flashlight to something a little warmer. My flashlight can light up pure white or pure red, but I wanted something a little more orange-red.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-ZaEOXkMItAo/UeWD_gYRlnI/AAAAAAAAAys/BsbWV8tioVM/s1600/531133_4119018543761_595198782_n.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="http://3.bp.blogspot.com/-ZaEOXkMItAo/UeWD_gYRlnI/AAAAAAAAAys/BsbWV8tioVM/s320/531133_4119018543761_595198782_n.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
Instead of going out and finding a colored flashlight close enough to the color I wanted, I decided it was time to make my own 'color wand' for night photography.<br />
<br />
This little project ended up being the easiest of any I've shown on this blog so far. I wanted the wand to be extremely versatile, capable of a wide variety of colors. An important goal was having the ability to change colors quickly so I could combine multiple colors in one complicated exposure.<br />
<br />
For the light source I went with 2 diffused RGB LEDs, because I happened to have them sitting around (in fact, they were hooked up to my breadboard with current-limiting resistors already in place). Instead of working with a microcontroller to set the color through PWM, I decided to do things very simply by just using three potentiometers to set the current going through the red, green, and blue channels. This way, a color can be dialed in by setting the RGB levels just as in Photoshop or Paint. At my local electronics store I was lucky enough to find a couple linear 10k pots that would work perfectly.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-g0vyaXSFkU4/UednMZmjYEI/AAAAAAAAAzE/et5jq2LNT1A/s1600/IMG_0915.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="172" src="http://3.bp.blogspot.com/-g0vyaXSFkU4/UednMZmjYEI/AAAAAAAAAzE/et5jq2LNT1A/s320/IMG_0915.jpg" width="320" /></a></div>
<br />
<br />
Battery power is always an issue for me because I hate having to throw away dead batteries and buy new ones. Rechargeable batteries can be expensive and often require special circuitry to charge them without lighting things on fire. Luckily, I happened to have a handful of 3S (9.9V) 2100mAh LiFe batteries sitting around from a previous project. I already have a specialized charger for them, so I was good to go.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-bRGyK9jap0U/UednVMdNZ9I/AAAAAAAAAzM/gcclw7t29vc/s1600/IMG_0917.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="http://4.bp.blogspot.com/-bRGyK9jap0U/UednVMdNZ9I/AAAAAAAAAzM/gcclw7t29vc/s320/IMG_0917.jpg" width="320" /></a></div>
<span style="font-size: x-small;">Ignore the Teensy3, this was just testing the slider and an LED.</span><br />
<br />
The circuit is simple enough that I won't bother posting what I did. When the pushbutton is pressed, current runs through each color channel and is limited by a resistor and a potentiometer. At maximum, each color will have about 20mA running through it, and at minimum about 1mA.<br />
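<br />
As a rough sanity check on those numbers (the 390 ohm fixed resistor and ~2V LED forward drop are illustrative assumptions, not measured values):<br />
<br />
I = (V_battery - V_led) / (R_fixed + R_pot)<br />
pot at 0: I = (9.9 - 2) / 390 = ~20mA<br />
pot at 10k: I = (9.9 - 2) / 10390 = ~0.8mA<br />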
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-nEsU6770vUk/UedoNzHGbUI/AAAAAAAAAzc/VgohHsEYs9w/s1600/IMG_0932.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="142" src="http://2.bp.blogspot.com/-nEsU6770vUk/UedoNzHGbUI/AAAAAAAAAzc/VgohHsEYs9w/s320/IMG_0932.jpg" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-p3sWXs-l2zg/UedpK2kINpI/AAAAAAAAAzs/3ddI8S11gKI/s1600/Untitled-1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="153" src="http://2.bp.blogspot.com/-p3sWXs-l2zg/UedpK2kINpI/AAAAAAAAAzs/3ddI8S11gKI/s320/Untitled-1.jpg" width="320" /></a></div>
<br />
<br />
With everything soldered and taped together, I noticed two problems. The first is the nonlinearity of brightness as a function of current. While I designed the potentiometers to vary the current in each channel linearly, the nonlinear response of each LED means that most of the brightness change happens at the low-resistance end of the pots. There's not a whole lot I can do to change that at this point. If I were to redesign the whole thing, I would include a microcontroller that could translate the linear input into a nonlinear output for a smoother brightness response.<br />
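<br />
As a sketch of what that correction could look like, with a cheap squared curve standing in for a proper gamma table:<br />
<br />
<pre>// Map a linear 10-bit pot reading onto an 8-bit PWM value along an x^2
// curve, so equal knob travel gives more even-looking brightness steps.
int potToPwm(int raw) {      // raw: 0-1023 from analogRead()
  long v = (long)raw * raw;  // up to 1,046,529
  return (int)(v / 4096);    // scale back to 0-255 for analogWrite()
}</pre>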
<br />
The second problem with the wand is the LEDs I chose. Because they are diffused, the wand does not produce a beam of light, but instead acts as a beacon that spreads light in all directions. Since the individual colors within the LED package are separated by some small distance, the sides of the LED glow stronger in one color than the others depending on the angle. This means that even when you create an even mix of red, green, and blue to make white, the wand will appear slightly more red, green, or blue depending on which way you look at it. I'm guessing that clear LEDs might not have had this problem, but I'm not entirely sure.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-0aLkdr97B7E/UedoZUueTHI/AAAAAAAAAzk/1qSRMSiWLSE/s1600/IMG_0931.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="248" src="http://2.bp.blogspot.com/-0aLkdr97B7E/UedoZUueTHI/AAAAAAAAAzk/1qSRMSiWLSE/s320/IMG_0931.jpg" width="320" /></a></div>
<br />
<br />
At this point I have yet to use the color wand for any real night photography, but I will try to post an update to this post when I do.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-Qg9-wRdPLBY/Uedn7RKJmFI/AAAAAAAAAzY/lgZX8H3xRgk/s1600/IMG_0923.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://2.bp.blogspot.com/-Qg9-wRdPLBY/Uedn7RKJmFI/AAAAAAAAAzY/lgZX8H3xRgk/s320/IMG_0923.jpg" width="281" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
UPDATE: I've tested the wand, and it works wonderfully:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-JuQTl27jByQ/Ue2_6gcUWTI/AAAAAAAAA0A/o3L7HJO8nOQ/s1600/IMG_1018.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="http://3.bp.blogspot.com/-JuQTl27jByQ/Ue2_6gcUWTI/AAAAAAAAA0A/o3L7HJO8nOQ/s320/IMG_1018.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-wNgj_xsJF-o/UildA9K9gaI/AAAAAAAAA1g/cuPYAL0_cRU/s1600/IMG_1309_anon.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="225" src="http://1.bp.blogspot.com/-wNgj_xsJF-o/UildA9K9gaI/AAAAAAAAA1g/cuPYAL0_cRU/s320/IMG_1309_anon.jpg" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-W3xSdBXe-VI/UildA503vrI/AAAAAAAAA1k/uS4aYo_-XNA/s1600/IMG_1316_anon.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="250" src="http://3.bp.blogspot.com/-W3xSdBXe-VI/UildA503vrI/AAAAAAAAA1k/uS4aYo_-XNA/s320/IMG_1316_anon.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />gpercohttp://www.blogger.com/profile/07551742495712737214noreply@blogger.com0