Quick hack: visualizing RBF bandwidth

A few weeks ago, I was explaining the general concepts behind support vector machine classifiers and kernels.  The majority of the audience had no background in linear algebra, so I had to rely on a lot of analogies and pictures.  I had previously introduced the notions of decision function and decision boundary (the zero-crossing of the decision function), and described dot products, projections, and linear equations as simply as possible.

The overview of SVMs was centered around the observation that the decision function is, ultimately, a weighted additive superposition (linear combination) of evaluations of “things that behave like projections in a higher-dimensional space via a non-linear mapping” (kernel functions) over the support vectors (a subset of the training samples, chosen based on the idea of “fat margins”).
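
In code, that superposition is just a weighted sum of kernel evaluations plus a bias.  Here is a minimal sketch (using scikit-learn and a made-up toy dataset, not the code from class) that reconstructs an RBF-SVM decision function by hand and checks it against the library:

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2D dataset: two slightly overlapping Gaussian blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 1, rng.randn(20, 2) + 1])
y = np.hstack([-np.ones(20), np.ones(20)])

gamma = 0.5
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

def decision(x):
    # Weighted superposition of kernel evaluations over the support vectors;
    # dual_coef_ holds the weights (alpha_i * y_i), intercept_ the bias.
    k = np.exp(-gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    return np.dot(clf.dual_coef_[0], k) + clf.intercept_[0]

# Should match the library's own decision function.
print(decision(np.zeros(2)), clf.decision_function([[0.0, 0.0]])[0])
```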

Most of the explanations and pictures were based on linear functions, but I wanted to give an idea of what these kernels look like, what their “superposition” looks like, and how kernel parameters change the picture (and may relate to overfitting).   For that I chose radial basis functions. I found myself doing a lot of handwaving in the process, until I realized that I could whip up an animation.  Following that class, I had 1.5 hours to spare during another midterm, so I did just that (Python with Matplotlib animations, FTW!!).  The result follows.

Here is how the decision boundary changes as the bandwidth becomes narrower:
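
If you want to reproduce something along these lines, here is a minimal sketch of such an animation (using scikit-learn, made-up data, and an arbitrary range of bandwidths; my actual, messier code is linked below):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) - 1, rng.randn(30, 2) + 1])
y = np.hstack([-np.ones(30), np.ones(30)])

# Grid over which to evaluate the decision function.
xx, yy = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
grid = np.c_[xx.ravel(), yy.ravel()]

# Narrower bandwidth corresponds to larger gamma (gamma ~ 1 / radius^2).
gammas = np.logspace(-1, 2, 60)

fig, ax = plt.subplots()

def frame(i):
    ax.clear()
    clf = SVC(kernel="rbf", gamma=gammas[i]).fit(X, y)
    Z = clf.decision_function(grid).reshape(xx.shape)
    ax.contour(xx, yy, Z, levels=[0.0])  # the decision boundary
    ax.scatter(X[:, 0], X[:, 1], c=y, cmap="bwr")
    ax.set_title("gamma = %.2f" % gammas[i])

anim = FuncAnimation(fig, frame, frames=len(gammas), interval=100)
plt.show()
```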

For large radii, there are fewer support vectors and kernel evaluations cover a large swath of the space.  As the radii shrink, all points become support vectors, and the SVM essentially devolves into a “table model” (i.e., the “model” is the data, and only the data, with no generalization ability whatsoever).
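
You can see this degeneration numerically by counting support vectors as the bandwidth shrinks (again a sketch, with scikit-learn and made-up data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 1, rng.randn(50, 2) + 1])
y = np.hstack([-np.ones(50), np.ones(50)])

for gamma in [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)
    # As gamma grows (radius shrinks), nearly every point becomes a support vector.
    print("gamma=%g: %d of %d support vectors"
          % (gamma, clf.n_support_.sum(), len(X)))
```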

This decision boundary is the zero-crossing of the decision function, which can also be fully visualized in this case.  One way to understand this is that the non-linear feature mapping “deforms” the 2D plane into a more complex surface (where, however, we can still talk about “projections”, in a way), in such a way that we can still use a plane (z = 0) to separate the two classes.  Here is how that surface changes, again as the bandwidth becomes narrower:
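
A sketch of how one might plot that surface, and the z = 0 slice through it, with Matplotlib (same caveats as above):

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2) - 1, rng.randn(30, 2) + 1])
y = np.hstack([-np.ones(30), np.ones(30)])
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

xx, yy = np.meshgrid(np.linspace(-4, 4, 100), np.linspace(-4, 4, 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(xx, yy, Z, cmap="coolwarm", alpha=0.7)  # the "deformed" surface
ax.contour(xx, yy, Z, levels=[0.0], offset=0.0, colors="k")  # boundary on z = 0
plt.show()
```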

Finally, in order to justify that, for this dataset, a really large radius is the appropriate choice, I ran the same experiments with multiple random subsets of the training data and showed that, for large radii, the decision boundaries are almost the same across all subsets, but for smaller radii, they start to diverge significantly.
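
That experiment is easy to replicate in sketch form: fit on random halves of the data and overlay the resulting boundaries (again scikit-learn with made-up data; the two gamma values are arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2) - 1, rng.randn(100, 2) + 1])
y = np.hstack([-np.ones(100), np.ones(100)])

xx, yy = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
grid = np.c_[xx.ravel(), yy.ravel()]

# Blue: large radius (small gamma) -- boundaries nearly coincide across subsets.
# Red: small radius (large gamma) -- boundaries diverge significantly.
for gamma, color in [(0.1, "blue"), (100.0, "red")]:
    for _ in range(10):
        idx = rng.choice(len(X), size=len(X) // 2, replace=False)
        clf = SVC(kernel="rbf", gamma=gamma).fit(X[idx], y[idx])
        Z = clf.decision_function(grid).reshape(xx.shape)
        plt.contour(xx, yy, Z, levels=[0.0], colors=color, alpha=0.3)
plt.show()
```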

Here is the source code I used (warning: this is raw and uncut, no cleanup for release!).  One of these days (or, at least, by next year’s class) I’ll get around to making multiple concurrent, alpha-blended animations for different subsets of the training set, to illustrate the last point better (I used static snapshots instead), and also to give a nice visual illustration of model testing and the ideas behind cross-validation; of course, feel free to play with the code. ;)


Mobile OCR input: “Fully automatic” and reality

Recently I’ve been toying around with WordSnap OCR (project page, source code, app on Android Market), an app for OCR-based camera input on Android. In the process, I found out a few things about “smart” versus “fast”.

At least in data mining, “fully automatic” is an often unquestioned holy grail.  There are certainly several valid reasons for this, for example, if you’re trying to scan huge collections of books such as this, or index images from your daily life like this.  In those cases, you use all the available processing power to make as few errors as possible (i.e., maximize accuracy).

However, if the user is sitting right in front of your program, watching your algorithms and their output, things are a little different. No matter how smart your algorithm is, some errors will occur. This tends to annoy users. In that sense, actively involved users are a liability. However, they can also be an asset: since they’re sitting there anyway, waiting for results, you may as well get them really involved. If you have cheap but intelligent labor ready and willing, use it! The results will be better or, at the very least, no worse.  Also, users tend to remember the failures. So, even if the end results are similar on average, allowing users to correct failures as early as possible will make them happier.

Instead of making algorithms as smart as possible, the goal now is to make them as fast as possible, so that they produce near-realtime results that don’t have to be perfect; they just shouldn’t be total garbage. When I started playing with the idea for WordSnap, I was thinking about how to make the algorithms as smart as possible.  However, for the reasons above, I soon changed tactics.

The rest of this post describes some of the successful design decisions but, more importantly, the failures in striking the balance between “automatic” and “realtime guidance”. The story begins with the following example image:

Original image

Incidentally, this image was the inspiration for WordSnap: I wanted to look up “inimical” but I was too lazy to type. Also, for the record, WordSnap uses camera preview frames, which are semi-planar YUV data at HVGA resolution (480×320). This image is a downsampled (512×384) version of a full-resolution photograph taken with the G1 camera (2048×1536); most experiments here were performed before WordSnap existed in any usable form. Finally, I should point out that OCR isn’t really my area; what I describe below is based on common sense rather than knowledge of prior art, although just before writing this post I did try a quick review of the literature.
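
For anyone unfamiliar with the format: in semi-planar YUV (NV21, the Android preview default), the first width×height bytes are the luminance (Y) plane, followed by interleaved chroma samples at quarter resolution, so the Y plane alone is essentially a grayscale image, which is all OCR needs. A quick sketch in Python (the app itself is in Java; this is just to illustrate the layout):

```python
import numpy as np

def nv21_luma(frame_bytes, width=480, height=320):
    """Extract the grayscale (Y) plane from an NV21 semi-planar frame.

    NV21 layout: width*height bytes of Y, then interleaved V/U samples
    at quarter resolution.  For binarization/OCR, Y alone suffices.
    """
    y = np.frombuffer(frame_bytes, dtype=np.uint8, count=width * height)
    return y.reshape(height, width)
```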
