## Week-long hack: ESP8266 touchscreen WiFi light controller and clock

A couple of months ago I picked up cheap WiFi-controlled LED bulbs (one among dozens of very similar devices), after seeing them at a friend’s place.  This turned out to be an excuse to play with the ESP8266, which has inspired several hacks.

I was overall very happy with these bulbs: decent Android and iOS apps and, compared to fancier solutions (e.g., Philips Hue or Belkin WeMo), they do not require any proprietary base stations, and you can’t beat the price!  However, switching off the lights before falling asleep involved hunting for the phone, opening the app, and waiting for it to scan the network; not an ideal user experience.  I was actually missing our old X10 alarm clock controller (remember those?), so I decided to make one from scratch, because… why not?

Although the X10 Powerhouse controller’s faux-wood styling and 7-segment LED had a certain… charm, I decided to go more modern and use a touchscreen.  I also designed a 3D printed enclosure with simple geometric shapes and used it as a further excuse to play with 3D print finishing techniques.  Here is the final result:

And here it is in action:

If this seems interesting, read on for details.  The source code for everything is available on GitHub. Edit: You can also check the Hackaday.io project page for occasional updates.

## Comparing data storage options in Python

When it comes to numerical computing, I have always given in to the unparalleled convenience of Matlab, which I think is the best IDE for that purpose.  If your data consists of matrices or vectors and fits in main memory, it's very hard to beat Matlab's smooth workflow for interactive analysis and quick iteration.  Also, with judicious use of MEX, performance is more than good enough.  However, over the past two years I've been increasingly using Python (with numpy, matplotlib, scipy, ipython, and scikit-learn), for three reasons: (i) I'm already a big Python fan; (ii) it's open source, so it's easier for others to reuse your code; and (iii), most importantly, it can easily handle non-matrix data types (e.g., text, complex graphs) and has a large collection of libraries for almost anything you can imagine.  In fact, even when using Matlab, I kept a separate set of scripts to collect and/or parse raw data and turn it into a matrix.  Juggling both Python and Matlab code can get pretty messy, so why not do everything in Python?

Before I continue, let me say that, yes, I know Matlab has cell arrays and even objects, but still… you wouldn't really use Matlab for, e.g., text processing or web scraping. Yes, I know Matlab has distributed computing toolboxes, but I'm only considering main memory here; these days 256 GB of RAM is not hard to come by, and that's good enough for 99% of (non-production) data exploration tasks. Finally, yes, I know you can interface Java with Matlab, but that's still two languages and two codebases.

Storing matrix data in Matlab is easy.  The .MAT format works great: it is pretty efficient and can be read from almost any language (including Python).  At the other extreme, arbitrary objects can be stored in Python as pickles (the de-facto Python standard?); however, (i) they are notoriously inefficient (even with cPickle), and (ii) they are not portable.  I could perhaps live with (ii), but (i) is a problem.  At some point I tried SQLAlchemy (on top of SQLite), which is quite feature-rich but also quite inefficient, since it does a lot of things I don't need. I had expected to pay a performance penalty, but hadn't realized how large it was until I measured it.  So, I decided to do some quick-n-dirty measurements of the various options.
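This is not the actual benchmark code, but a minimal stdlib-only sketch of the kind of quick-n-dirty measurement I mean: round-trip the same records through a pickle blob and through an in-memory SQLite table, and time each (the record count and schema here are illustrative):

```python
import pickle
import sqlite3
import time

# Illustrative workload: N (int, float) records.
N = 100_000
records = [(i, i * 0.5) for i in range(N)]

# Round trip 1: serialize and deserialize with pickle.
t0 = time.perf_counter()
blob = pickle.dumps(records, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
t_pickle = time.perf_counter() - t0

# Round trip 2: insert into and read back from in-memory SQLite.
t0 = time.perf_counter()
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (k INTEGER, v REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?)", records)
rows = conn.execute("SELECT k, v FROM data").fetchall()
t_sqlite = time.perf_counter() - t0

print(f"pickle: {t_pickle:.3f}s  sqlite: {t_sqlite:.3f}s")
```

A real comparison would also vary record counts and measure on-disk sizes, but even a sketch like this makes relative overheads obvious.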

## Household hacks with a 3D printer

I’m often asked “what is a 3D printer good for, isn’t it just a novelty”?  So here are some examples of household hacks, in no particular order.  I’ve chosen examples that satisfy two criteria.  First, it didn’t take me more than an hour to whip up the CAD model (and, in many cases, it took just 10-15 minutes), so it qualifies as a “quick hack”.  Second, it’s of general household use, so mechanical assemblies, 3D printer parts, etc, were left out.  Some of these are published on Thingiverse (linked from the post headings).

#### Eyeglass frame fix

This is one of my favorites.  It was one of the quickest to make, but it has been used a lot.  My mother has her favorite eyeglasses and is loath to change them.  However, over time, the arm loosened and they would constantly slide down her nose; tightening the screws no longer helped. So, I quickly designed a clip that slides over the frame and has a tapered nub to apply pressure to the arm (printed in ABS, so it has some flexibility).  Guess you could call it an “eyeglass arm pretensioner attachment”.  She’s been using them for years, and asked for a pack in case she loses one (printing a set of six takes about 15 minutes; the example in the photo is an early print in black, instead of brown).

## Manufacturing @ Home: A rechargeable near-field mic, (almost) from scratch

Some time ago I backed the W-Ear kit on Kickstarter.  Even though they also offer the option of a fully assembled, rechargeable version, I opted for the through-hole kit, which went for much less and also shipped much earlier.  I was originally planning to just 3D print an enclosure, instead of using an Altoids tin.  However, on a whim, I decided to take this a bit further, because… why not?

TL;DR: I went from the PCB on the left, to the device on the right, without ever leaving home. Design files are available here (caveat: I’m not an EE, but I sometimes play one on the web! :).

## Quick hack: visualizing RBF bandwidth

A few weeks ago, I was explaining the general concepts behind support vector machine classifiers and kernels.  The majority of the audience had no background in linear algebra, so I had to rely on a lot of analogies and pictures.  I had previously introduced the notions of decision function and decision boundary (the zero-crossing of the decision function), and described dot products, projections, and linear equations, as simply as possible.

The overview of SVMs was centered on the observation that the decision function is, eventually, a weighted additive superposition (linear combination) of evaluations of “things that behave like projections in a higher-dimensional space via a non-linear mapping” (kernel functions) over the support vectors (a subset of the training samples, chosen based on the idea of “fat margins”).
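In symbols, that superposition is f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b.  A tiny numpy sketch makes it concrete (the support vectors and weights below are made up for illustration, not taken from a trained model):

```python
import numpy as np

def rbf_kernel(x, xi, gamma):
    """Radial basis function: exp(-gamma * ||x - xi||^2)."""
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def decision_function(x, support_vectors, weights, b, gamma):
    """Weighted superposition of kernel evaluations over the support vectors.
    Each weight plays the role of alpha_i * y_i."""
    return sum(w * rbf_kernel(x, sv, gamma)
               for sv, w in zip(support_vectors, weights)) + b

# Toy setup: one support vector per class.
svs = np.array([[0.0, 0.0], [2.0, 2.0]])
weights = [1.0, -1.0]   # positive class near the origin, negative near (2, 2)
b, gamma = 0.0, 1.0

print(decision_function(np.array([0.1, 0.1]), svs, weights, b, gamma))  # positive side
print(decision_function(np.array([1.9, 1.9]), svs, weights, b, gamma))  # negative side
```

The decision boundary is exactly where this sum crosses zero, which is what the animations below visualize.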

Most of the explanations and pictures were based on linear functions, but I wanted to give an idea of what these kernels look like, what their “superposition” looks like, and how kernel parameters change the picture (and may relate to overfitting).   For that I chose radial basis functions. I found myself doing a lot of handwaving in the process, until I realized that I could whip up an animation.  Following that class, I had 1.5 hours free during another midterm, so that’s what I did (Python with Matplotlib animations, FTW!!).  The result follows.

Here is how the decision boundary changes as the bandwidth becomes narrower:

For large radii, there are fewer support vectors and kernel evaluations cover a large swath of the space.  As the radii shrink, all points become support vectors, and the SVM essentially devolves into a “table model” (i.e., the “model” is the data, and only the data, with no generalization ability whatsoever).
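This effect is easy to reproduce with scikit-learn (the dataset below is a made-up pair of Gaussian blobs; note that scikit-learn’s `gamma` is an inverse squared radius, so a shrinking radius means a growing `gamma`):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two well-separated Gaussian blobs, 40 points each.
X = np.vstack([rng.randn(40, 2), rng.randn(40, 2) + 4.0])
y = np.array([0] * 40 + [1] * 40)

# gamma ~ 1 / radius^2: larger gamma means a narrower kernel.
counts = {}
for gamma in [0.5, 1000.0]:
    clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
    counts[gamma] = len(clf.support_)
    print(f"gamma={gamma:>7}: {counts[gamma]} of {len(X)} points are support vectors")
```

With a very narrow kernel, each point’s influence barely extends past itself, so every training point ends up a support vector: the “table model” degeneracy described above.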

This decision boundary is the zero-crossing of the decision function, which can also be fully visualized in this case.  One way to understand this is that the non-linear feature mapping “deforms” the 2D-plane into a more complex surface (where, however, we can still talk about “projections”, in a way), in such a way that I can still use a plane (z=0) to separate the two classes.  Here is how that surface changes, again as the bandwidth becomes narrower:

Finally, in order to justify that, for this dataset, a really large radius is the appropriate choice, I ran the same experiments on multiple random subsets of the training data and showed that, for large radii, the decision boundaries are almost the same across all subsets, but for smaller radii, they start to diverge significantly.
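The same resampling experiment can be sketched numerically instead of visually: fit a model on several random subsets, predict over a fixed grid, and measure how often all the subset models agree (again an illustrative toy dataset, and the grid resolution and subset fraction are arbitrary choices):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(60, 2), rng.randn(60, 2) + 4.0])
y = np.array([0] * 60 + [1] * 60)

# Fixed evaluation grid covering the data.
xx, yy = np.meshgrid(np.linspace(-3, 7, 50), np.linspace(-3, 7, 50))
grid = np.c_[xx.ravel(), yy.ravel()]

def subset_predictions(gamma, n_subsets=5, frac=0.6):
    """Fit an RBF SVM on random subsets and predict over the grid."""
    preds = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
        clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X[idx], y[idx])
        preds.append(clf.predict(grid))
    return np.array(preds)

# Fraction of grid points on which all subset models agree.
agreements = {}
for gamma in [0.5, 50.0]:
    preds = subset_predictions(gamma)
    agreements[gamma] = np.mean(np.all(preds == preds[0], axis=0))
    print(f"gamma={gamma}: full agreement on {agreements[gamma]:.1%} of the grid")
```

High agreement across subsets for a given radius is exactly the stability argument made above; alpha-blended animations of the per-subset boundaries would show the same thing visually.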

Here is the source code I used (warning: this is raw and uncut, no cleanup for release!).  One of these days (or, at least, by next year’s class) I’ll get around to making multiple concurrent, alpha-blended animations for different subsets of the training set, to illustrate the last point better (I used static snapshots instead) and also give a nice visual illustration of model testing and ideas behind cross-validation; of course, feel free to play with the code. ;)

## Towards laws of the 3D printable design web

With the explosive growth of 3D printing, and rapid manufacturing at the consumer level in general, physical objects can be designed and manipulated in a computer. However, as with other forms of digital content (e.g., documents, software, music), this is only part of the story: digital representation also enables online sharing and collaboration (as Chris Anderson has pointed out). A prime example of the potential of all these technologies combined with online sharing and collaboration is the initial design of consumer-grade 3D printers themselves which, perhaps unsurprisingly, was what many early adopters of the technology used it for.  Considering that the rest of us are where those early adopters were five or more years ago, the future should be interesting.

Although we hear about 3D printing daily, very few studies have looked at the digital content of physical things, and the processes that generate it. I collected data some time ago, and started off with this visualization, which I wrote about before. A further initial analysis of the data has some interesting stories to tell.

## Weekend hack: PortaPi arcade console

Some time last January I decided to back the PortaPi on Kickstarter. This is a mini arcade cabinet that runs several emulators, via the RetroPie project, on a Raspberry Pi. The kit arrived on time, and sometime in May I got around to assembling it.  Here’s how it looks:

The kit is great out of the box but, of course, I had to add some of my own tweaks.

## Weekend hack: surveillance on the cheap

Some time ago I bought a few Foscam MJPEG cameras and installed them in our apartment, originally for baby monitoring, and I wanted to set up a proper security surveillance system.  I already have a Netgear ReadyNAS box, so I thought this should be easy.  However, I soon found out that video surveillance solutions from major NAS vendors (e.g., Netgear, QNap, Synology) require per-camera licenses, in the range of $50–60 per camera. That would be over $200 to enable functionality already present in the device!  Although this is already an order of magnitude cheaper than hardware from traditional NVR (network video recorder) vendors, it still felt unreasonably high. Oh, and Foscam cameras still aren’t supported by ReadyNAS Surveillance.  There used to be a KMotion port for ReadyNAS, but it disappeared around the time Netgear’s official solution came out on the market.  Since ReadyNAS is almost Debian (with customizations), I gave installing KMotion from source a shot, but after an afternoon of fiddling with custom configurations, as well as tweaks for the low-power Atom CPU, I gave up.

Perhaps the NVR industry is ripe for “disruption”, but I wasn’t willing to wait. Last time I did that (for car stereos) was almost three years ago… and I’m still waiting.  Luckily, an NVR is a much simpler build than a custom car stereo (this was enough for me, thank you :).  There are several low-cost hardware options and ZoneMinder is a great open-source surveillance system that was originally built to scratch an itch (the original author’s power tools were stolen from his garage, and he couldn’t find any reasonably-priced commercial surveillance solutions he liked).  Here is what I got after about a day: