Mobile OCR input: “Fully automatic” and reality

Recently I’ve been toying around with WordSnap OCR (project page, source code, app on Android Market), an app for OCR-based camera input on Android. In the process, I found out a few things about “smart” versus “fast”.

At least in data mining, “fully automatic” is an often unquestioned holy grail. There are certainly several valid reasons for this: for example, if you’re trying to scan huge collections of books such as this, or index images from your daily life like this. In such cases, you use all the available processing power to make as few errors as possible (i.e., to maximize accuracy).

However, if the user is sitting right in front of your program, watching your algorithms and their output, things are a little different. No matter how smart your algorithm is, some errors will occur. This tends to annoy users. In that sense, actively involved users are a liability. However, they can also be an asset: since they’re sitting there anyway, waiting for results, you may as well get them really involved. If you have cheap but intelligent labor ready and willing, use it! The results will be better or, at the very least, no worse. Also, users tend to remember the failures. So, even if the end results were similar on average, allowing users to correct failures as early as possible will make them happier.

Instead of making algorithms as smart as possible, the goal now is to make them as fast as possible, so that they produce near-realtime results that don’t have to be perfect; they just shouldn’t be total garbage. When I started playing with the idea for WordSnap, I was thinking about how to make the algorithms as smart as possible. However, for the reasons above, I soon changed tactics.

The rest of this post describes some of the successful design decisions but,  more importantly, the failures in the balance between “automatic” and “realtime guidance”. The story begins with the following example image:

Original image

Incidentally, this image was the inspiration for WordSnap: I wanted to look up “inimical” but I was too lazy to type. Also, for the record, WordSnap uses camera preview frames, which are semi-planar YUV data at HVGA resolution (480×320). This image is a downsampled (512×384) version of a full-resolution photograph taken with the G1 camera (2048×1536); most experiments here were performed before WordSnap existed in any usable form. Finally, I should point out that OCR isn’t really my area; what I describe below is based on common sense rather than knowledge of prior art, although just before writing this post I did try a quick review of the literature.

Binarization

A basic operation for OCR is binarization: mapping grayscale intensities between 0 and 255 to just two values, black (0) and white (1). Only then can we start talking about shapes (lines, words, characters, etc). One of the most widely used binarization algorithms is Otsu’s method. It picks a single, global threshold that minimizes the within-class (black/white) variance or, equivalently, maximizes the between-class variance. This is very simple to implement, very fast, and works well for flatbed scans, which have uniform illumination.
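To make this concrete, here is a minimal Java sketch of Otsu’s method on an 8-bit grayscale buffer (an illustration of the technique only, not code from WordSnap or ZXing):

// Otsu's method: pick the threshold t that maximizes the between-class
// variance of the pixels below and above t. Pixels are unsigned bytes,
// hence the & 0xFF mask.
static int otsuThreshold(byte[] gray) {
    int[] hist = new int[256];
    for (byte b : gray) {
        hist[b & 0xFF]++;
    }
    int total = gray.length;

    long sumAll = 0;
    for (int t = 0; t < 256; t++) {
        sumAll += (long) t * hist[t];
    }

    long sumBelow = 0;   // weighted sum of intensities at or below t
    int countBelow = 0;  // number of pixels at or below t
    double bestScore = -1;
    int bestThreshold = 0;

    for (int t = 0; t < 256; t++) {
        countBelow += hist[t];
        sumBelow += (long) t * hist[t];
        int countAbove = total - countBelow;
        if (countBelow == 0) continue;
        if (countAbove == 0) break;

        double meanBelow = (double) sumBelow / countBelow;
        double meanAbove = (double) (sumAll - sumBelow) / countAbove;
        double diff = meanBelow - meanAbove;
        // Proportional to the between-class variance; the argmax is the same.
        double score = (double) countBelow * countAbove * diff * diff;
        if (score > bestScore) {
            bestScore = score;
            bestThreshold = t;
        }
    }
    return bestThreshold;
}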

However, camera images are not uniformly illuminated. The example image may look fine to human eyes, but it turns out that even for this image no global threshold is suitable (click on image for animation showing various global thresholds):

Binarization with global threshold

If you looked at the animation carefully, you might have noticed that, at some point, at least the word of interest (“inimical”) is correctly binarized. However, if the lighting gradient were steeper, this would not be possible. Incidentally, ZXing uses Otsu’s method for binarization, because it is fast. So, if you ever wondered why barcode scanning sometimes fails, now you know.

So, a slightly smarter approach is needed: instead of using one global threshold, the threshold should be determined individually for each pixel (i,j). A natural threshold t(i,j) is the mean intensity μw(i,j) of pixels within a w×w neighborhood around pixel (i,j). The key operation here is mean filtering: convolving the original image with a w×w matrix with constant entries 1/w².

The problem is that, using pure Java running on Dalvik, mean filtering is prohibitively slow. First, Dalvik is fully interpreted (no JIT, yet). Furthermore, the fact that Java bytes are always signed doesn’t help: casting to int and masking off the 24 most significant bits almost doubles the running time.
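As a small illustration of the signed-byte issue, here is a sketch of pulling the luma plane of a preview frame into plain ints (assuming the usual semi-planar layout with the luma plane stored first; not WordSnap’s actual code):

// Java bytes are signed, so an intensity of, say, 200 comes back as -56.
// Masking with 0xFF clears the sign-extended upper 24 bits and recovers
// the 0..255 value; on an interpreter, this extra work adds up per pixel.
static int[] lumaToInt(byte[] yuvFrame, int width, int height) {
    int[] luma = new int[width * height];
    for (int i = 0; i < luma.length; i++) {
        luma[i] = yuvFrame[i] & 0xFF;
    }
    return luma;
}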

Method    Dalvik (msec)      JNI (msec)     Speedup
Naïve     109,882 ± 4,813    1,712 ± 261    64×
Sliding   2,435 ± 141        71 ± 19        34×

JNI to the rescue. The table above shows speedups for two implementations. The naïve approach uses a triple nested loop and has complexity O(w²mn), where m and n are the image height and width, respectively (m = 384, n = 512 in this example). The 1-D equivalent would simply be:

for i = 0 to N-1:
   s = 0
   for j = max(i-r,0) to min(i+r,N-1):
      s += a[j]

where w = 2r+1 is the window size. The second implementation updates the sums incrementally, based on the values of adjacent windows. The complexity now is just O(mn). An interesting aside is the relative performance of two implementations of the sliding window sums. The first checks the border conditions inside each iteration:

Initialize s = sum(a[0]..a[r])
for i = 1 to N-1:
   if i > r:
      s -= a[i-r-1]
   if i < N-r:
      s += a[i+r]

The second moves the border condition checks outside the loop which, if you think about it for a second, amounts to:

Initialize s = sum(a[0]..a[r])
for i = 1 to r:
   s += a[i+r]
for i = r+1 to N-r-1:
   s -= a[i-r-1]
   s += a[i+r]
for i = N-r to N-1:
   s -= a[i-r-1]

Of these two, the first one is faster, at least on a laptop running Sun’s JVM with JIT (I didn’t time them on Dalvik or via JNI). I’m guessing that the second one messes with loop unrolling, but I haven’t checked my guess.
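For reference, here is a minimal Java sketch of the full O(mn) mean filter, implemented separably: sliding sums along each row, then sliding sums of those down each column. Windows are clipped at the borders and divided by their actual area. This is only an illustration, not WordSnap’s JNI code:

// Sliding-window (box) mean with radius r, i.e., window size w = 2r+1.
// Pass 1 computes row sums; pass 2 slides down each column over those sums.
static int[] boxMean(int[] gray, int width, int height, int r) {
    int[] rowSum = new int[width * height];
    int[] rowCnt = new int[width * height];
    int[] mean = new int[width * height];

    // Horizontal pass: for every pixel, the sum over columns [x-r, x+r].
    for (int y = 0; y < height; y++) {
        int base = y * width;
        int s = 0, c = 0;
        for (int x = 0; x <= r && x < width; x++) { s += gray[base + x]; c++; }
        rowSum[base] = s;
        rowCnt[base] = c;
        for (int x = 1; x < width; x++) {
            if (x > r)         { s -= gray[base + x - r - 1]; c--; }
            if (x + r < width) { s += gray[base + x + r];     c++; }
            rowSum[base + x] = s;
            rowCnt[base + x] = c;
        }
    }

    // Vertical pass: the same sliding update over rows [y-r, y+r].
    for (int x = 0; x < width; x++) {
        int s = 0, c = 0;
        for (int y = 0; y <= r && y < height; y++) {
            s += rowSum[y * width + x];
            c += rowCnt[y * width + x];
        }
        mean[x] = s / c;
        for (int y = 1; y < height; y++) {
            if (y > r) {
                s -= rowSum[(y - r - 1) * width + x];
                c -= rowCnt[(y - r - 1) * width + x];
            }
            if (y + r < height) {
                s += rowSum[(y + r) * width + x];
                c += rowCnt[(y + r) * width + x];
            }
            mean[y * width + x] = s / c;
        }
    }
    return mean;
}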

It turns out that there is a very similar approach in the literature, called Sauvola’s method. Furthermore, there are efficient methods to compute it, using integral images. These are simply the 2-D generalization of partial sums. In 1-D, if partial sums are pre-computed, window sums can be computed in O(1) time using the simple observation that sum(i..j) = sum(1..j) − sum(1..i−1).
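To make the idea concrete, here is a minimal sketch of an integral image and an O(1) window-sum query (illustration only; as explained below, WordSnap deliberately avoids allocating this extra buffer):

// Integral image: ii(x, y) holds the sum of all pixels in [0..x] x [0..y].
static long[] integralImage(int[] gray, int width, int height) {
    long[] ii = new long[width * height];
    for (int y = 0; y < height; y++) {
        long rowAcc = 0;
        for (int x = 0; x < width; x++) {
            rowAcc += gray[y * width + x];
            ii[y * width + x] = rowAcc + (y > 0 ? ii[(y - 1) * width + x] : 0);
        }
    }
    return ii;
}

// Sum over the inclusive rectangle [x0..x1] x [y0..y1], in O(1) lookups.
static long windowSum(long[] ii, int width, int x0, int y0, int x1, int y1) {
    long a = ii[y1 * width + x1];
    long b = (x0 > 0) ? ii[y1 * width + (x0 - 1)] : 0;
    long c = (y0 > 0) ? ii[(y0 - 1) * width + x1] : 0;
    long d = (x0 > 0 && y0 > 0) ? ii[(y0 - 1) * width + (x0 - 1)] : 0;
    return a - b - c + d;
}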

Sauvola’s method also computes the local variance σw(i,j), and uses a relative threshold t(i,j) = μw(i,j)(1 + λ(σw(i,j)/127 − 1)). WordSnap uses the global variance and an additive threshold t(i,j) = μw(i,j) + λσglobal, but after doing a contrast stretch of the original image (i.e., linearly mapping the minimum intensity to 0 and the maximum to 255). Doing floating point math or 64-bit integer arithmetic is much more expensive, hence the additive threshold. Furthermore, WordSnap does not use integral images, because the same runtime can be achieved without the need to allocate a large buffer. Memory allocation on a mobile device is not cheap: the time needed to allocate a 480×320 buffer of 32-bit integers (about 600KB total) varies significantly depending on how much system memory is available, whether the garbage collector is triggered, and so on, but on average it’s about half a second on the G1. Even though most buffers can be allocated once, startup time is important for this application: if it takes more than 2-3 seconds to start scanning, the user might as well have typed the result.
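Here is a rough sketch of that recipe: a contrast stretch followed by the additive threshold. It is an illustration only; the local means are assumed to come from something like the box filter sketched above (computed on the stretched image), and λ is just a tuning knob, not a value taken from WordSnap:

// Contrast stretch: linearly map [min, max] of the image onto [0, 255].
static void contrastStretch(int[] gray) {
    int min = 255, max = 0;
    for (int v : gray) { if (v < min) min = v; if (v > max) max = v; }
    int range = Math.max(max - min, 1);
    for (int i = 0; i < gray.length; i++) {
        gray[i] = (gray[i] - min) * 255 / range;
    }
}

// Additive threshold t(i,j) = mu_w(i,j) + lambda * sigma_global:
// pixels darker than their local threshold are taken as text (black).
static boolean[] threshold(int[] gray, int[] localMean,
                           double globalSigma, double lambda) {
    int offset = (int) (lambda * globalSigma);
    boolean[] black = new boolean[gray.length];
    for (int i = 0; i < gray.length; i++) {
        black[i] = gray[i] < localMean[i] + offset;
    }
    return black;
}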

Anyway, here is the final result of locally adaptive thresholding:

Binarization with local mean filter

Conclusion: In this case we needed the slightly smarter approach, so we invested the time to implement it efficiently. WordSnap currently uses a 21×21 neighborhood.  Altogether, binarization takes under 100ms.

Skew

Another problem is that the orientation of the text lines may not be aligned with image edges.  This is called skew and makes recognition much harder.

Initially, I set out to find a way to correct for skew. After a few searches on Google, I came across the Hough transform. The idea is simple. Say you want to detect a curve described by a set of parameters. E.g., for a line, those would be its distance ρ from the origin and the angle θ of its normal. For each black pixel, find the parameter values for all possible curves to which this pixel may belong. For a line, that’s all angles θ from 0 to 180 degrees and all distances ρ from 0 to sqrt(m²+n²). Then, compute the density distribution of parameter tuples. If a line (ρ₀,θ₀) is present in the image, then the parameter density distribution should have a local maximum at (ρ₀,θ₀).
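For concreteness, here is a rough sketch of the bucketized Hough accumulator for lines, with one-degree angle buckets and one-pixel distance buckets (an illustration only, and exactly the kind of computation that, as discussed below, proved too expensive on the phone):

// Hough accumulator: every black pixel votes for all (theta, rho) pairs of
// lines that could pass through it, using the normal parametrization
// rho = x*cos(theta) + y*sin(theta). The strongest peak gives the dominant
// text-line orientation. Votes with negative rho are simply dropped here.
static int[][] houghAccumulator(boolean[] black, int width, int height) {
    int maxRho = (int) Math.ceil(Math.sqrt((double) width * width
                                         + (double) height * height));
    int[][] acc = new int[180][maxRho + 1];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (!black[y * width + x]) continue;
            for (int thetaDeg = 0; thetaDeg < 180; thetaDeg++) {
                double theta = Math.toRadians(thetaDeg);
                int rho = (int) Math.round(x * Math.cos(theta)
                                         + y * Math.sin(theta));
                if (rho >= 0) {
                    acc[thetaDeg][rho]++;
                }
            }
        }
    }
    return acc; // the argmax over (theta, rho) marks the dominant line
}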

If we apply this approach to our example image, the first maximum is detected at an angle of 20 degrees. Here is the image counter-rotated by that amount:

After rotating by angle detected using Hough transform

Success! However, computing the Hough transform is too slow! Typical implementations bucketize the parameter space. This would require a buffer of about 180×580 32-bit integers (for a 480×320 image), or about 410KB. In addition, it would require trigonometric operations or lookups to find the buckets for each pixel, not to mention the counter-rotation itself. There are obvious optimizations one can try, such as computing histograms at multiple resolutions to progressively prune the parameter space. Still, the cost implied by back-of-the-envelope calculations put me off even trying to implement this on the phone. Instead, why not just use the users:

Finder alignment guides

Conclusion: A simple approach with help from the user wins, and the computer doesn’t even have to do anything to solve the problem! Incidentally, the guideline width is determined by the size of typical newsprint text at the smallest distance at which the G1’s camera can focus.

Font size

Next, we need to detect individual words. The approach WordSnap uses is to dilate the binary image with a rectangular structuring element (of size 7×7 in the following image), and then expand a rectangle (shown in green) until it covers the connected component which, presumably, is one word.

Dilation with 7x7 rectangle
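As an illustration of the dilation step (not WordSnap’s optimized implementation), here is a naive Java sketch with a k×k rectangular structuring element; a real implementation would make it separable and incremental, much like the mean filter above:

// Binary dilation with a k x k rectangular structuring element: a pixel of
// the output is black if any input pixel within the k x k neighborhood
// centered on it is black. This merges nearby characters into one blob.
static boolean[] dilate(boolean[] black, int width, int height, int k) {
    int r = k / 2;
    boolean[] out = new boolean[width * height];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            boolean hit = false;
            for (int dy = -r; dy <= r && !hit; dy++) {
                for (int dx = -r; dx <= r && !hit; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < height && xx >= 0 && xx < width
                            && black[yy * width + xx]) {
                        hit = true;
                    }
                }
            }
            out[y * width + x] = hit;
        }
    }
    return out;
}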

However, the size of the structuring element should really depend on the inter-word spacing, which in turn depends on the typeface as well as the distance of the camera from the text.  For example, if we use a 5×5 element, we would get the following:

Dilation with 5x5 rectangular element

I briefly toyed with two ideas for font size detection. The first is to do a Fourier transform. Presumably, the first spatial frequency mode would correspond to inter-word and/or inter-line spacing and the second mode to inter-character spacing. But that assumes we apply the Fourier transform to a “large enough” portion of the image, and things start becoming complicated. Not to mention computationally expensive.

The second approach (which also appears to be the most common?) is to do hierarchical grouping: first expand rectangles to cover individual letters (or, sometimes, ligatures), then compute a histogram of horizontal distances and re-group into word rectangles, and so on. This is also non-trivial.

Instead, WordSnap uses a fixed dilation radius.  The implementation is optimized to allow near-realtime annotation of the detected word extent.  This video should give you an idea:

Conclusion: Simple wins again, but this time we have to do something (and let the user help with the rest). But, instead of trying to be smart and find the best parameters given the camera position, we try to be fast: fix the parameters and let the user find the camera position that works given the parameters. WordSnap uses a 5×5 rectangular structuring element, although you can change that to 3×3 or 7×7 in the preferences screen. Altogether, word extent detection takes about 150-200ms, although it could be significantly optimized, if necessary, by using JNI only, instead of a mix of pure Java and JNI calls.


I’m now looking into the possibility of moving OCR into the “live” loop: as you move the camera, the phone shows not only the word extent rectangle, but also the recognized word. Perhaps as a hyperlink to Google, or along with Google Translate results. Then I can justifiably use the buzzword of the day, “augmented reality”! It looks like it might just be possible, but let me get back to you in a week or two. :)

Postscript: Some of the papers referenced were pointed out to me by Hideaki Goto, who started and maintains WeOCR. Also, skew detection and correction experiments are based on this quick-n-dirty Python script (needs OpenCV and it ain’t pretty!). Update (9/2): Fixed really stupid mistake in parametrization of line.