Orientation Aware Camera

I started this project without any real idea of where it would end up. My initial plan was to build a device that would sample a 3-axis magnetometer and a 3-axis accelerometer and send that data to my computer via USB. Devices very much like that are often sold as a 'tilt-compensated compass', and go for $250-500. I made mine for about $150.

I happen to live in the same town as my new favorite hobby electronics site, SparkFun, so I ordered the accelerometer and magnetometer from them, and I was able to pick them up in person a couple days later. Next, I needed a way to get the data from the sensors to my computer. I wanted a USB connection, but I'd never done anything with USB before, and a little research into the controller chips told me that was a lot more trouble than I was willing to deal with. I also considered just having a microcontroller stand between the sensors and an off-the-shelf USB-to-serial adapter, but having the device appear as a COM port really didn't appeal to me. However, my research had also turned up another option: USBMicro's U421. The U421 is essentially just a breakout board with a microcontroller pre-programmed to relay data back and forth between its USB connection and its GPIO pins. It can speak SPI (same as my sensors), its USB interface just uses the HID drivers built into Windows, and it even came with a well-documented DLL that provided a nice simple interface for controlling it. It looked like it was just a matter of properly connecting it to the sensors, and then writing software to drive the whole thing.

Well, not quite. After I got the U421, I noticed that it does all its signaling at 5 volts, whereas the sensors require 3 volts. My roommate just happened to have a MAX3002 bi-directional level converter handy, which was just the right thing to fix the problem, except for its teeny little TSSOP (0.65mm pin pitch) chip package. So the whole project went on hold another week while I got a TSSOP/20 to DIP adapter from Logicalsys. I watched some tutorial videos on surface-mount soldering, bought myself some flux, and the soldering went remarkably easily.

With that problem solved, I worked out a layout to connect everything together on a bit of protoboard I bought, soldered everything together, wrote some code to have the U421 sample the sensors, and plugged it in to test... and it worked! Here's what it looked like:

(If you're wondering, that box is a Hammond Manufacturing 1455 series enclosure. I got mine locally, but Digi-key has quite a few sizes in stock.)

However, I soon found it only worked for a few seconds at a time. For some reason, the accelerometer kept getting itself into a funny state and would just start returning garbage values until I power-cycled it. After much frustration, its datasheet finally revealed why: its chip select pin wasn't really a chip select pin.

The U421 only provides one SPI port, so I had both of the sensors bussed together on it, and used each sensor's chip select pin to let them know which one I was talking to. As it turned out, the chip select pin on the accelerometer doesn't really disable its SPI port when it's not selected, it just switches it into I2C mode. So whenever I was trying to send a message to the magnetometer, the accelerometer would try to interpret it as an I2C message instead of ignoring it as it should have. The accelerometer provided no good way to have it ignore the bus completely. So, I ran out to the local electronic components shop and picked up a 74F32 OR-gate chip. Now instead of toggling the accelerometer's chip select pin, I just block its clock signal unless I really do want to talk to it. Problem solved.

Here's the finished layout. Sorry, no nice circuit diagrams or PCB layouts here. I drew these up just for my own benefit while I was planning out the layout. The color coding ought to be enough to see what's going on here, but if you want more detail, then refer to the datasheets for the U421, 74F32, MAX3002, accelerometer, and magnetometer.

In hindsight, I kind of regret going with the U421. Not that there's anything really wrong with it: it works as advertised, and it got this project off the ground fairly fast. The problem is that every sensor sample requires a lot of communication round trips between my software and the U421. Each sample goes something like this:

Enable the magnetometer's chip select
FOR each axis (x, y, z)
  Enable the magnetometer's reset pin
  Disable the magnetometer's reset pin
  Send command to measure axis by SPI
  Wait until the magnetometer's data ready pin goes high
  Read measurement result by SPI
Disable the magnetometer's chip select
Enable the accelerometer's chip select
Send command to return status and acceleration registers by SPI
Read registers by SPI
Disable the accelerometer's chip select

That's at least 21 round-trips. I did a lot of experimentation to find ways to optimize this process, but the best I ever got was 5 samples per second, a fraction of the sensors' maximum. What I should have done was program a simple microcontroller to handle all the logic of sampling the sensors, and then give it a USB interface with one of FTDI's chips. I only found out about FTDI later, but they look like a great option for adding USB to any device really easily.
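To see where that 21 comes from, here's a sketch of the sampling sequence above in Python, with a stub standing in for the real U421 driver so the USB round trips can be counted. The pin names, commands, and byte counts are placeholders of my own, not lifted from the actual code:

```python
class StubU421:
    """Stand-in for the U421 driver; each method call is one USB round trip."""
    def __init__(self):
        self.round_trips = 0

    def set_pin(self, pin, state):   # toggle a GPIO pin
        self.round_trips += 1

    def spi_write(self, data):       # send bytes over SPI
        self.round_trips += 1

    def spi_read(self, count):       # read bytes over SPI
        self.round_trips += 1
        return [0] * count

    def read_pin(self, pin):         # poll a GPIO pin
        self.round_trips += 1
        return True                  # pretend data is ready on the first poll

def sample_once(dev):
    dev.set_pin("MAG_CS", True)              # enable magnetometer chip select
    for axis in ("x", "y", "z"):
        dev.set_pin("MAG_RESET", True)       # pulse the reset pin
        dev.set_pin("MAG_RESET", False)
        dev.spi_write([0x00])                # command: measure this axis
        while not dev.read_pin("MAG_DRDY"):  # wait for data ready
            pass
        dev.spi_read(2)                      # read the measurement
    dev.set_pin("MAG_CS", False)             # disable magnetometer chip select
    dev.set_pin("ACC_CS", True)              # enable accelerometer chip select
    dev.spi_write([0x00])                    # command: status + acceleration registers
    dev.spi_read(7)                          # read the registers
    dev.set_pin("ACC_CS", False)             # disable accelerometer chip select

dev = StubU421()
sample_once(dev)
print(dev.round_trips)  # 21, when data is ready on the first poll
```

Every one of those trips pays the full USB HID transaction latency, which is why a dedicated microcontroller doing this loop locally would have been so much faster.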

At this point, I had a reliably working device that was more or less completely useless. I had a little console application I'd written printing out the data from the sensors, but it really cried out: so what? Well, I've been getting into OpenGL development (and recently read the Red Book cover to cover), so I started toying around with programs that would compute a transformation matrix from the sensor data and then rotate stuff on screen the same as the box. Still, so what? Next, on a whim, I dropped in some code I'd written for a previous project to connect to a webcam and get frames of video. Holding the box and webcam with one hand, I could rotate the camera, and the software rotated the video on screen in the same direction... and the image seemed to stay perfectly level. That's when I started to think I had something interesting.
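The basic idea behind turning the two sensor readings into an orientation is the classic tilt-compensated compass trick: the accelerometer tells you which way is up (gravity), and crossing the magnetic field vector with "up" gives you east, from which north follows. A minimal sketch of that step (one common convention, not necessarily exactly what my code does):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def orientation(accel, mag):
    """Return (east, north, up) unit vectors in the sensor's frame.

    accel: accelerometer reading while stationary (opposes gravity, so it points up)
    mag:   magnetometer reading (points north, with a vertical dip component)
    """
    up = normalize(accel)
    east = normalize(cross(mag, up))  # the field's vertical dip cancels in the cross product
    north = cross(up, east)           # orthogonal to both, already unit length
    return east, north, up

# Sensor lying flat, magnetic field pointing north and dipping downward:
east, north, up = orientation((0, 0, 1), (0, 1, -1))
print(east, north, up)  # east ~ (1,0,0), north ~ (0,1,0), up = (0,0,1)
```

Those three vectors are the rows (or columns, depending on convention) of the rotation matrix handed to OpenGL.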

So I decided to add a camera to the box. The webcam I already had wouldn't quite fit even with its casing removed, so I went out and checked around a few stores until I found one that would, a Microsoft Lifecam VX-7000. Personally, I prefer Logitech, but none of theirs were quite the right shape. So, having been out of the box barely five minutes for a quick test, the Lifecam had most of its casing removed or trimmed back. Then I just drilled a big hole in the end of the box, put a rubber grommet in it to hide the nasty edge the drill left, and stuck the camera inside with a bit of really strong double-sided tape. To get everything to work over one USB cable, I just got the smallest USB hub I could find, removed its casing and all of its ports, mounted it inside, and soldered the USB cable, U421, and the webcam directly to it.

A couple weeks of tinkering with the software led to what you see in the video above. It's far from a finished product, but right now I'm pretty content with what it can do. Here's an equirectangular projection of the scene I captured in the video:

So, how does the software work? For the sake of brevity, I'll stick to how it accumulates the panorama the way it does. For everything else, you're welcome to read my source code.

So imagine you're in your favorite spot, and you just happen to have a camera that takes square pictures with a perfect 90-degree field of view. You take one picture to the north, one east, one south, one west, one straight up, and one straight down. Then you make twelve-foot-tall prints of all those pictures and attach them at the edges. The 'down' picture goes on the ground, the 'up' picture becomes a ceiling, and the other four are walls. Now, if you stand inside this big cube, with your head right at the center, you should be able to look around in any direction and it will be almost like you're in the spot where you took the pictures. In 3D computer graphics, this is called a skybox. This is how my program keeps track of and draws the surrounding view.
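The bookkeeping for a skybox boils down to one operation: given a viewing direction, figure out which of the six faces it hits, and where. The dominant axis of the direction vector picks the face. A simplified sketch (the axis orientation and face labels here are my own arbitrary convention, and graphics APIs each have their own):

```python
def cube_face(d):
    """Map a direction vector to (face, u, v), with u and v in [-1, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)      # the dominant axis picks the face
    if m == ax:
        face = "east" if x > 0 else "west"
        u, v = z / ax, y / ax
    elif m == ay:
        face = "up" if y > 0 else "down"
        u, v = x / ay, z / ay
    else:
        face = "north" if z < 0 else "south"
        u, v = x / az, y / az
    return face, u, v

print(cube_face((0, 0, -1)))   # straight ahead: center of the north wall
print(cube_face((1, 0.5, 0)))  # to the right and a bit up: on the east wall
```

Dividing by the dominant axis is what projects the direction onto the plane of that face, exactly like the wall of the big cube intercepting your line of sight.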

Now imagine, still standing inside this big cube, you get out that camera again. If you take a picture of any one of the walls, it will exactly fill the camera's 90-degree field of view. So if you, say, hold your hand in front of the camera, take a picture of the north wall, and replace that wall with the picture you took, here's what you get:

The wall is the same as it was before, except your hand is on top of the old picture. You can do this again and again, adding something new every time, and all the parts that didn't get covered up stay the same.

This is where the analogy gets a little strained. Say you have, um, a television on a stick. Or a projector that... oh, I give up. Watch the end of the video again. See how we take the image from the webcam and draw it in that little rectangle in 3D space? It's always facing the center, and it's always the same distance away from the center, but it rotates around the same as the box is being turned. (Now I'm talking about the real box, with the electronics in it.)

Now, with this video image floating around, if we start updating all six sides the way I described, then the frames of video will get captured onto the sides. Then we can just turn the camera until it's covered every angle, and we're left with a panoramic image on the sides of the skybox. Those images can then be saved out and converted into other projections, like the equirectangular one above.

Here are the sides of the skybox from the capture I did in the video. This is a typical way of laying them out in one image, and the format preferred by HDRShop, the software I used to convert to the equirectangular image above.
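The cube-to-equirectangular conversion (which HDRShop handled for me) works per output pixel: turn the pixel's longitude and latitude into a direction vector, then sample whichever cube face that direction hits. A sketch of the direction step, under one common convention that I'm picking for illustration (longitude across the width, latitude down the height, looking north at the center):

```python
import math

def equirect_to_direction(px, py, width, height):
    """Map an equirectangular output pixel to a unit direction vector.

    Longitude runs -pi..pi left to right; latitude runs pi/2..-pi/2 top to bottom.
    """
    lon = (px / width) * 2 * math.pi - math.pi
    lat = math.pi / 2 - (py / height) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x: east
            math.sin(lat),                   # y: up
            -math.cos(lat) * math.cos(lon))  # z: -north, so lon 0 looks north

# The center pixel looks straight ahead; the top row looks straight up:
print(equirect_to_direction(512, 256, 1024, 512))  # (0.0, 0.0, -1.0)
```

Feeding each direction through a cube-face lookup and copying that pixel is all the conversion is; the stretching near the poles falls out of the projection automatically.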


Download C# source code (.zip, 555K)