Jared Sanson


Smart-watches are becoming the next big thing, but with increasing miniaturization it is becoming nearly impossible to build these sorts of devices yourself. This project aims to provide a "smart-watch" that can be built from readily available components (e.g. from element14, Digi-Key, etc.) and soldered by hand. Of course, processing power must be sacrificed, but who needs a camera and 60 FPS graphics on a watch?

I have been working on this watch since early 2013, and it has come a long way! It is still very much a work-in-progress though, and I am working on another hardware revision which will fix all the hardware bugs. (You can read more about those in my previous posts on the project.)

You can find more info and progress on my hackaday.io project page: http://hackaday.io/project/2263-OSHWatch

Features

Clock
Kernel Info
Accelerometer

The watch features a PIC24F microcontroller and a 128x128 RGB OLED display, along with an accelerometer and magnetometer. The goal is to have both USB-HID (driverless) and Bluetooth 4.0 LE connectivity, so I can sync the calendar quickly and easily.

Working features

  • Real-Time Clock - It can tell the time!
  • University Timetable - I can tell when and where my next class is
  • USB-HID comms and bootloader
  • Buttons
  • Basic accelerometer logging
  • Battery charging & monitoring
  • Anti-aliased font drawing (grayscale only at the moment, no sub-pixel "ClearType" rendering)
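The anti-aliased drawing above boils down to alpha-blending each glyph's grayscale coverage value between the text colour and the background. A minimal Python sketch of the idea (the real firmware does this in C against the OLED framebuffer; the glyph data here is made up):

```python
def blend_pixel(fg, bg, coverage):
    """Linearly interpolate each colour channel between background and
    foreground using the glyph's 0-255 coverage value."""
    return tuple(
        (f * coverage + b * (255 - coverage)) // 255
        for f, b in zip(fg, bg)
    )

# A made-up 4x4 anti-aliased glyph fragment: 0 = background, 255 = full ink
glyph = [
    [  0,  64, 192, 255],
    [  0, 128, 255, 255],
    [ 64, 255, 255, 128],
    [192, 255, 128,   0],
]

def draw_glyph(framebuffer, glyph, x, y, fg, bg):
    """Blend each coverage sample into the framebuffer at (x, y)."""
    for row, samples in enumerate(glyph):
        for col, coverage in enumerate(samples):
            framebuffer[y + row][x + col] = blend_pixel(fg, bg, coverage)
```

The same interpolation per colour channel is what sub-pixel "ClearType" rendering would do, except with separate coverage values for the R, G, and B stripes.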

Future features

  • Accelerometer 'tap-to-wake'
  • Magnetometer compass
  • Bluetooth 4.0 LE
  • Alarm clock with piezo buzzer
  • Desktop GUI for updating the calendar

Hardware

The hardware is a 2-layer design utilizing SMD components, and is fully open-source in case you'd like to build your own. Schematics and PCB layout were done using Altium, which unfortunately is quite expensive, but they recently announced they would be releasing a free version sometime in the future!

Known Issues (Rev 1)

Known Issues (Rev 2)

  • Watch sometimes resets when waking up (possibly supply decoupling issues)
  • USB comms task sometimes doesn't go to sleep (drains the battery)
  • OLED display has some weird ghosting artifacts
  • In the schematic I got the connector the wrong way around, so the OLED ended up on the opposite side of the PCB from where I wanted it.
  • I didn't assign the OLED data lines to the PIC24's TFT driver pins, so I can't make use of it and drawing performance is lower than it could be.
  • Bluetooth 4.0 is untested
  • There are a few minor "bodges" on the PCB required to make it function correctly

Future (Rev 3)

I am working on a revision 3, which will fix all of the above issues and make the PCB much more compact! Stay tuned... (Subscribe to the RSS feed to keep up-to-date)

Firmware

The firmware is available on my github: jorticus/zeitgeber-firmware
You will need the MPLAB-X IDE, the XC16 free compiler, and the Microchip Application Framework. Alternatively you can download the bootloader & firmware binaries here: oshwatch-binaries.zip
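USB HID transfers fixed-size reports, so a host-side tool typically frames each command into a padded buffer before handing it to the OS. A hedged sketch of that framing (the command IDs and packet layout here are hypothetical, not the actual zeitgeber protocol):

```python
import struct

REPORT_SIZE = 64  # HID interrupt reports have a fixed size

# Hypothetical command IDs -- the real protocol lives in the firmware repo
CMD_PING = 0x01
CMD_SET_TIME = 0x02

def build_report(cmd, payload=b""):
    """Pack a command byte, payload length, and payload into a
    fixed-size HID output report, zero-padded to REPORT_SIZE."""
    if len(payload) > REPORT_SIZE - 2:
        raise ValueError("payload too large for one report")
    body = struct.pack("BB", cmd, len(payload)) + payload
    return body.ljust(REPORT_SIZE, b"\x00")
```

A report built this way would then be written to the device with a generic HID library (e.g. hidapi), which is what makes the link driverless on the PC side.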

Features:

  • Custom RTOS, with power saving
  • Custom graphics library, with anti-aliased text drawing
  • HID bootloader
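The RTOS is cooperative: each task declares how often it needs to run, and between deadlines the MCU can drop into a low-power sleep. A rough Python sketch of that scheduling idea (names and structure are illustrative, not the actual kernel):

```python
class Task:
    def __init__(self, name, func, interval_ms):
        self.name = name
        self.func = func
        self.interval_ms = interval_ms
        self.next_run = 0  # due immediately on the first tick

class Scheduler:
    """Run each task when its deadline arrives. Between deadlines a
    real MCU kernel would sleep until the next one -- that is the
    power-saving part."""
    def __init__(self):
        self.tasks = []
        self.now = 0

    def add(self, task):
        self.tasks.append(task)

    def tick(self, elapsed_ms):
        """Advance time and run every task that is due."""
        self.now += elapsed_ms
        ran = []
        for t in self.tasks:
            if self.now >= t.next_run:
                t.func()
                t.next_run = self.now + t.interval_ms
                ran.append(t.name)
        return ran
```

A clock-face task might run once a second, while the accelerometer logger runs far more often; anything with no due task lets the chip stay asleep.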

Photos

Footnotes

All design files and source code are released under the OSHW license, i.e. you may modify, distribute, and manufacture the design, as long as you provide attribution.


This is a fun project I did for my computer vision research project at the University of Canterbury. It uses the Kinect face-tracking library to replace a user's face with a custom face, which warps to match their expression.

It is written in C++ and includes a few GLSL shaders.

Download

A stand-alone version is available here if you want to try it out:

face-replace.zip (12MB)

You'll need to download the Microsoft Kinect SDK Runtime which installs the required device drivers for the Kinect. Alternatively you can use a different depth sensor, as long as it works with OpenNI2.

Compiling

I developed this project in Visual Studio 2013 (MSVC++ 12), but it should work in other versions provided you can compile the required libraries.

Before you can compile this, make sure you set up the following libraries:

Operation

The program performs the following steps:

  1. Capture RGB + Depth video frame
  2. Detect head pose and face features using Kinect SDK
  3. Deform the Candide-3 mesh to the given head pose and face features
  4. Process the RGB + Depth frames using OpenCV
  5. Draw the RGB video frame
  6. Draw the texture-mapped candide-3 model in OpenGL, using a custom blend shader.

Side-note: This project uses a custom Candide-3 face model instead of the Kinect SDK's internal model, since it's not easy to match vertices with texture coordinates using the internal model. This functionality is provided through the WinCandide-3 project (all source code named 'eru' is part of that project).
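Step 3 above includes applying the tracked head pose rigidly to the mesh vertices; a sketch of just that part in Python (the full Candide-3 deformation also applies shape and animation units, which this omits):

```python
import math

def pose_vertices(vertices, yaw, pitch, roll, translation):
    """Apply a head-pose rotation (roll about Z, pitch about X,
    yaw about Y, in radians) and a translation to each (x, y, z)
    mesh vertex."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    tx, ty, tz = translation
    posed = []
    for x, y, z in vertices:
        # roll about the Z (view) axis
        x, y = x * cr - y * sr, x * sr + y * cr
        # pitch about the X axis
        y, z = y * cp - z * sp, y * sp + z * cp
        # yaw about the Y axis
        x, z = x * cy + z * sy, -x * sy + z * cy
        posed.append((x + tx, y + ty, z + tz))
    return posed
```

The posed vertices are then projected and texture-mapped in OpenGL, where the blend shader mixes the warped face texture over the live video frame.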

Future Work

It's unlikely I'll do much more on this project since I have other commitments, but here's a list of things that could be improved in the future:

  • Write a plugin for blender that can read and write the candide-3 model, so textures can be more accurately mapped. (I'm currently using the WinCandide-3 utility to approximately map the texture)
  • Add support for multiple people
  • Decrease tracking latency/improve face location. (Perhaps something like meanshift/optical flow + a kalman filter?)
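On the last point, even a simple constant-velocity alpha-beta filter (a fixed-gain cousin of the Kalman filter) would smooth a noisy face-position track; a sketch, with gains chosen arbitrarily, applied independently per coordinate:

```python
class AlphaBetaFilter:
    """Smooth a noisy 1-D track (e.g. a face's x coordinate) with a
    constant-velocity alpha-beta filter."""
    def __init__(self, alpha=0.85, beta=0.005, x0=0.0):
        self.alpha = alpha  # position correction gain
        self.beta = beta    # velocity correction gain
        self.x = x0         # position estimate
        self.v = 0.0        # velocity estimate

    def update(self, measurement, dt=1.0):
        # predict where the track should be now
        predicted = self.x + self.v * dt
        # correct the prediction using the measurement residual
        residual = measurement - predicted
        self.x = predicted + self.alpha * residual
        self.v = self.v + (self.beta / dt) * residual
        return self.x
```

A full Kalman filter would additionally adapt these gains from the measurement and process noise, which helps when the detector occasionally drops out.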

Source Code

Source code is available on my GitHub page: face-replace

My professional paper is available here: COSC428 Computer Vision - FaceReplace.pdf


So it's been a while since I last posted about my OLED watch, and I've done a lot of work on it! (And also broke it multiple times)

It's taken me a lot of work to get this far, and I developed EVERYTHING from the ground up. The electronics design, the PCB layout, the RTOS and firmware drivers, the graphics engine, the user-mode app code, and even USB communications apps. I've used C, C#, and Python extensively in this project, and Altium Designer for the schematic and PCB.

Overall it has been an awesome learning experience, and if I was to make another one I would do a lot of things differently!

Here's a few features of my firmware:

  • USB HID Communication (No PC drivers required!)
  • Watch face for telling the time (Kind of required...)
  • Date & Upcoming events
  • Accelerometer reading
  • RTOS Kernel debug info

And some features planned for the future:

  • Bluetooth 4.0 (Still need to get the IC for it though)
  • Magnetometer readings
  • Smart alarm clock that wakes me up at the ideal time, by detecting my sleeping patterns through the accelerometer
  • Auto screen on by rotating or shaking my wrist
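The smart alarm idea can be sketched as: log an activity level per minute from the accelerometer, then wake at the most restless minute inside the allowed window, since restlessness indicates light sleep. A toy version of that selection step:

```python
def pick_wake_time(activity, window_start, window_end):
    """Given per-minute accelerometer activity levels, pick the minute
    inside the alarm window [window_start, window_end] where movement
    peaks -- a light-sleep phase is the gentlest moment to wake up."""
    best_minute = window_end  # fall back to the hard deadline
    best_activity = -1.0
    for minute in range(window_start, window_end + 1):
        if activity[minute] > best_activity:
            best_activity = activity[minute]
            best_minute = minute
    return best_minute
```

On the real watch this would run against the accelerometer log already being captured, waking the piezo alarm early only if a light-sleep phase is detected before the deadline.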

SMD Rework

If you've read my last blog post, you'll know that I ripped the USB traces off my last PCB! I ended up using my hot-air reflow station to transfer the components to a spare PCB, and luckily it all powered up and worked after that! Here are some photos of my reflow process:

Removing Components
Applying flux for QFN chip
QFN chip reflowed
Cleaning flux with Meths
Soldering the MCU
Soldering
Reflowing accelerometer
Soldering the OLED

Notice how I soldered the OLED directly to the PCB: I had used a connector on my last board, but it added extra distance to the OLED cable, so it wouldn't fit into the case. Soldering directly to the PCB gave me a few more millimeters to work with!

Case

I was going to print a case for my watch on the 3D printer, but then I found something far more suitable - an aluminium case designed for iPod Nanos. After struggling with metal cutters and grinders, I managed to turn it into something my watch could fit into:

It's by no means waterproof, but it looks nice!

Hardware Issues

Unfortunately I found that the cable on the OLED screens doesn't like being bent at tight angles:

After breaking two screens this way I decided to try and fix it:

Preparing epoxyPreparing epoxy
Epoxy strengtheningEpoxy strengthening

This provided enough strength to the cable to prevent it from breaking, and it's working beautifully now!

I've also had a few bugs with my hardware design, and they are so much harder to debug than software! One particularly problematic issue was that my LiPo charger circuit wouldn't charge the battery if it was completely flat! The only way to power it back up was to apply voltage directly to the battery to "jump-start" it. Eventually, after many iterations of soldering in different resistors, I noticed I had set the charge current an order of magnitude too low (something like 10mA charge current instead of 100mA), so the battery would never charge. After replacing that resistor it came right.
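The post doesn't name the charger IC, but on a typical single-cell charger like Microchip's MCP73831 the fast-charge current is set by one programming resistor, I_REG = 1000 V / R_PROG, so being one resistor decade off gives exactly this order-of-magnitude error:

```python
def charge_current_ma(r_prog_ohms):
    """Fast-charge current for an MCP73831-style charger, where
    I_REG (mA) = 1000 V / R_PROG (kOhm). Assumes that particular
    part; other chargers use different programming formulas."""
    return 1000.0 / (r_prog_ohms / 1000.0)

# A 100k programming resistor gives only 10 mA;
# a 10k resistor gives the intended 100 mA.
```

At 10 mA the charger can barely keep up with the circuit's own quiescent draw, which is consistent with a flat battery never recovering.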

Future Work

Now that the hardware is mostly done, I just need to work on the user interface to make it more usable. Currently battery life is just over 24 hours with intermittent use, but I plan on extending that through better firmware.

Update

I've released the schematics, PCB layout, and firmware as open-source: OSHW OLED Watch


As part of a Summer Scholarship at the University of Canterbury, I developed a system for detecting fire hotspots using thermal imaging cameras and remotely piloted aircraft systems (RPAS). The project was part of a group of four fire-related research projects, organized by Tait Communications and SCION.

It was an amazing learning experience as I got to experiment with many different technologies, such as OpenCV, quadcopters, thermal imaging cameras, and mapping algorithms. I also got to talk to some interesting people within the fire industry about what it's really like to be out on the fireground.

Background

Fire hotspots are a real problem when fighting large rural fires, as they can remain undetected long after the fire has been extinguished, potentially re-igniting the fire and endangering homes and property. Currently, firefighters have to hire an expensive helicopter and trained crew, who then fly over with a thermal imaging camera and manually drop markers on top of the hotspots. Unfortunately helicopters are not always available after a fire, in which case the hotspots must instead be located manually using the backs of firefighters' hands, which can take a very long time over large firegrounds!

Fire hotspots are not easy to locate with normal vision, since they usually burn underneath ash or inside tree stumps. Thankfully, far-infrared thermal imaging cameras make them very easy to detect:

As you can see from the photos above, the thermal photo quite clearly shows the hotspots which were otherwise completely invisible! Some hotspots can be in excess of 150°C, more than enough to re-ignite the fire if a substantial wind picks up the embers.

Project Overview

The aim of my project was to make it easier, safer, and cheaper to detect fire hotspots, and hopefully with more accuracy than current methods. The basic idea was to combine a Remotely Piloted Aircraft System (RPAS) with a small, lightweight thermal imaging camera, and then detect and locate hotspots from the captured imagery and data.

The actual algorithm was split up into two parts: the hotspot detection, and the geo-location of the hotspots. Detecting hotspots is actually fairly easy to do, though I can't go into details about it here. Finding the real-world co-ordinates of the hotspots is a lot trickier to do, as you have to worry about things like the RPAS's rotation, the ground elevation, and the camera parameters. The calculations behind that part are actually quite similar to the algorithms behind raytracers!
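The geo-location step can be sketched as a ray-ground intersection, which is where the resemblance to raytracing comes from: cast a ray from the camera through the detected pixel and intersect it with the terrain. A simplified flat-ground version (the real system also had to account for ground elevation and full aircraft attitude):

```python
import math

def hotspot_ground_offset(altitude, heading, cam_pitch,
                          px, py, fx, fy, cx, cy):
    """Project image pixel (px, py) through a pinhole camera onto flat
    ground, returning the (north, east) offset in metres from the point
    directly below the aircraft. heading and cam_pitch are in radians;
    cam_pitch is the boresight angle below horizontal; fx, fy are focal
    lengths in pixels and (cx, cy) is the principal point."""
    # Ray in camera coordinates: x right, y down, z forward (unit focal)
    x = (px - cx) / fx
    y = (py - cy) / fy
    # Tilt by the camera pitch: split the ray into a horizontal-forward
    # component (h) and a downward component (d)
    cp, sp = math.cos(cam_pitch), math.sin(cam_pitch)
    h = cp - y * sp          # z = 1 along the boresight
    d = sp + y * cp
    if d <= 0:
        raise ValueError("ray never reaches the ground")
    # Rotate the horizontal components by the aircraft heading
    ch, sh = math.cos(heading), math.sin(heading)
    north_dir = h * ch - x * sh
    east_dir = h * sh + x * ch
    # Scale the ray so it descends exactly `altitude` metres
    t = altitude / d
    return (t * north_dir, t * east_dir)
```

Adding the offset to the aircraft's GPS fix gives the hotspot's world coordinates; errors in attitude, altitude, and terrain height are what limit the accuracy to tens of metres.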

My algorithm was able to automatically detect and locate hotspots to within 50 meters or so, with no user intervention. I tested it on a fly-over of a power station, which easily picked out some transformers that were running quite hot:

Hotspots detected in imageHotspots detected in image
Geo-located hotspots on mapGeo-located hotspots on map

Due to some difficulties with the university's quadcopter, we weren't able to set up a fully functioning system, but the algorithm worked well on the data I had at hand.

Overall, it was a good project and I learnt a lot about both programming and the industry. I am hoping to extend it into my final year research project for my degree, where I'd like to turn it into a fully functioning system.

The content on this page was reproduced with permission of Tait, and may not be used or copied.
