The Story Of MultiTouch On Android G1

Posted on 13 January 2009 by Luke Hutchison

The real story behind multitouch

(including screenshots, video and working code for functional multitouch on the G1)

Short story: I have full multitouch scaling and panning working in specially-developed apps on a stock T-Mobile G1 Android phone with a change to just one system classfile (i.e. with no modifications to the kernel whatsoever).

MultiTouch G1
MultiTouch running on the G1 without kernel modification (red and green circles are drawn where touch points are detected)
Long story: read on for full details, including a video of the action, and full source code so that you can run this yourself (assuming you are a developer and understand the risks of doing this — this is NOT yet for end-users).

Touch screens and tinfoil hats

When the T-Mobile G1 / HTC Dream was released, it only supported single-touch rather than iPhone-style multitouch. Theories as to the lack of multitouch included hardware limitations, software support for it not being ready in the Android stack, and the threat of being devoured by Apple’s patent lawyers. Dan Morrill, a Google developer advocate for Android, made statements that the device was single-touch and the Android stack had no support yet for multitouch, but that Google would be willing to work together with handset manufacturers to develop multitouch software support when the hardware manufacturers were ready to release a multitouch handset. Eventually even one of HTC’s chiefs chimed in that the Dream was only ever designed to be a single-touch device.

img1.jpg

Recently though, videos started surfacing on the net that showed various experiments people were performing on ListViews with two fingers that seemed to indicate the screen supported multiple touchpoints — however the results of these tests were still pretty inconclusive. Finally, after the source of the Android stack was released, a developer, Ryan Gardner (aka RyeBrye), posted on his blog that he had managed to locate some commented-out lines in the kernel driver which indicated that multitouch was indeed possible on these devices — and he hacked together a demo of two-fingered drawing that proved it.

To use RyeBrye’s solution, you have to recompile your phone’s kernel. It works by removing the comments around some debug statements (lines 132-151 of the Synaptics I2C driver, synaptics_i2c_rmi.c) that dump motion events out to a logfile. He then wrote a user interface to read the logfile and draw dots on the screen.

Google, of course, continued to remain silent on the multitouch issue, and conspiracy theories grew thicker…

Enabling multitouch on the G1, the real way

RyeBrye did a great service to the Android hacker community by demonstrating that the screen is multitouch-capable. However there are some real limitations to his approach (which he fully acknowledged), such as having to recompile your kernel and having to get at the events by parsing a logfile. Also, it looks like nobody has yet picked up the ball and turned his work into a working system.

Actually, it turns out that if you read a little further down in the driver code (lines 187-200 of synaptics_i2c_rmi.c), you’ll notice that you don’t need to recompile your kernel at all to get multitouch working on the G1 — the kernel driver in fact already emits multitouch information! The driver emits ABS_X, ABS_Y and BTN_TOUCH values for position and up/down information for the first touchpoint, but also emits ABS_HAT0X, ABS_HAT0Y and BTN_2 events for the second touchpoint. Where are these events getting lost then?

I pulled apart the Android stack and scoured it for the location where these events are passed through to Dalvik via JNI. It turned out to be very difficult to pinpoint where input events were getting received and MotionEvent objects populated (because they are processed on an event queue, the objects are recycled rather than created, and it happens in non-SDK code — egrep wasn’t much help either). The exact point at which multitouch information is lost turns out to be $ANDROID_HOME/frameworks/base/services/java/com/android/server/KeyInputQueue.java. This class is the only code running on Dalvik that ever gets to see the raw device events — and it promptly discards ABS_HAT0X, ABS_HAT0Y and BTN_2. (It doesn’t seem to do so intentionally or maliciously, it just ignores anything it doesn’t recognize, and it is not coded to recognize those event symbol types.)
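To make this concrete, here is a rough sketch of what “recognizing” the second-finger codes would look like. This is not the real KeyInputQueue code (which reads events natively and recycles MotionEvent objects); the RawEvent and TouchState holders are invented purely for illustration, and the numeric constants are the standard Linux input-event codes from linux/input.h.

    // Illustrative only, NOT the real KeyInputQueue. The point is simply that
    // ABS_HAT0X/ABS_HAT0Y/BTN_2 carry the second finger and should be kept
    // rather than ignored.
    class SecondFingerSketch {
        static final int EV_KEY = 0x01, EV_ABS = 0x03;
        static final int ABS_X = 0x00, ABS_Y = 0x01;           // first finger position
        static final int ABS_HAT0X = 0x10, ABS_HAT0Y = 0x11;   // second finger position
        static final int BTN_TOUCH = 0x14a, BTN_2 = 0x102;     // first/second finger up-down

        static class RawEvent   { int type, code, value; }     // hypothetical holder
        static class TouchState { int x1, y1, x2, y2; boolean down1, down2; }

        void handle(RawEvent ev, TouchState s) {
            if (ev.type == EV_ABS) {
                switch (ev.code) {
                    case ABS_X:     s.x1 = ev.value; break;
                    case ABS_Y:     s.y1 = ev.value; break;
                    case ABS_HAT0X: s.x2 = ev.value; break;     // kept instead of discarded
                    case ABS_HAT0Y: s.y2 = ev.value; break;
                }
            } else if (ev.type == EV_KEY) {
                if (ev.code == BTN_TOUCH)      s.down1 = (ev.value != 0);
                else if (ev.code == BTN_2)     s.down2 = (ev.value != 0);
            }
        }
    }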

Now we’re getting somewhere. I recompiled the whole Android stack and tested detecting these events, and sure enough, I could now detect the second touchpoint — without recompiling the kernel (but, unfortunately, after having to modify part of the Android Java stack).

img3.jpg
Two touch points being detected, with blue bars indicating the column and row of each touch point

Implementing functional multitouch on the G1 in a backwards-compatible way

I wanted to find a way to pass multitouch events through to user applications that was as minimally invasive as possible, i.e. that didn’t require a major replumbing of the whole MotionEvent system, and that was backwards compatible with single-touch applications. It turns out that there is a field in MotionEvent, “size”, that does not appear to be used currently. The size field is actually populated from the ABS_TOOL_WIDTH attribute emitted by the Synaptics driver — however the value seems to be ignored by the Android UI, and the values themselves look pretty chaotic. I suspect the driver actually uses it to represent some attribute of the tool on similar Wacom-style tablet devices.

Anyway the driver specifies that ABS_TOOL_WIDTH can be in the range [0,15] (and this is mapped to the range [0.0,1.0] when it is placed in the size field), so we have four spare bits in each motion event that are unused. I modified KeyInputQueue.java to generate either one or two motion events depending on whether or not BTN_2 was down, and then marked each event with a bit (bit 0) signifying whether the event was for the first or the second touch point. I then used two more bits to attach the two touch point up/down states to each motion event, BTN_TOUCH and BTN_2, so that individual touch states of the two buttons could be known from either event type, and then, for backwards-compatibility purposes, I set the button-down state of each generated event to the state of (BTN_TOUCH || BTN_2). This is done to keep the semantics of the button-down status of MotionEvents consistent with what the event pipeline would expect, specifically so that the up/down status doesn’t alternate between emitted events.
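For concreteness, here is a minimal sketch of that encoding. Bit 0 is the touch-point index as described above; using bit 1 for the BTN_TOUCH state and bit 2 for the BTN_2 state is an illustrative labelling, and the patched KeyInputQueue.java is the authoritative version.

    // Minimal sketch of packing the multitouch flags into the MotionEvent "size"
    // field. The exact bit layout beyond bit 0 is illustrative.
    class SizeFieldEncoding {
        static float encode(boolean isSecondPoint, boolean btnTouch, boolean btn2) {
            int bits = 0;
            if (isSecondPoint) bits |= 1;   // bit 0: which finger this event describes
            if (btnTouch)      bits |= 2;   // bit 1: first finger up/down (BTN_TOUCH)
            if (btn2)          bits |= 4;   // bit 2: second finger up/down (BTN_2)
            return bits / 15.0f;            // ABS_TOOL_WIDTH range [0,15] maps to size [0.0,1.0]
        }
    }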

The result is an Android stack that behaves normally for single-touch, generates events that can be separated into two streams by multi-touch-aware applications, and at worst only generates a series of events that appear to jump back and forth between two points on the screen when two fingers are touched to the screen in a single-touch application — e.g. if you are using a standard ListView and hold down two fingers, the list will just jump up and down between the two fingers as you move them around.
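On the application side, a multi-touch-aware View can undo that encoding and split the incoming stream back into two points. Here is a minimal sketch; the bit layout follows the illustration above, and the field names are not part of any official API.

    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    // Sketch of app-side decoding of the marked events into two touch points.
    public class MultiTouchAwareView extends View {
        private float x1, y1, x2, y2;
        private boolean finger1Down, finger2Down;

        public MultiTouchAwareView(Context context) { super(context); }

        @Override
        public boolean onTouchEvent(MotionEvent ev) {
            int bits = Math.round(ev.getSize() * 15.0f);      // recover the raw [0,15] value
            finger1Down = (bits & 2) != 0;
            finger2Down = (bits & 4) != 0;
            if ((bits & 1) != 0) { x2 = ev.getX(); y2 = ev.getY(); }  // second finger
            else                 { x1 = ev.getX(); y1 = ev.getY(); }  // first finger
            // ...update pan/zoom state from (x1,y1) and (x2,y2) here...
            invalidate();
            return true;
        }
    }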

VIDEO OF WORKING MULTITOUCH ON THE G1

Here is a video of a multitouch application that I wrote to exercise the modified Android stack.

The REAL reason for no multitouch on the G1 at time of release

Note that I mention in the video that the multitouch screen for some reason “was disabled at the time of release”.  I do not at all believe this was an intentional curbing of the phone’s functionality — it just (1) was not in the design specs to have this feature for the first phone release, (2) would not have been ready in time (the hardware support for it is not polished, and the software support not started in the G1), and (3) was not central to the core mission of what Android was trying to achieve.  Honestly having looked through some of the ENORMOUS mass of source code in the Android stack, I don’t have any idea at all how it was all pulled together in time for release, and how the release happened with so few 1.0 problems.  The Android software stack is an incredibly well-engineered and well-brought-together stack — and it exhibits some amazing engineering and some amazing project management that all the pieces could have been developed separately and finally integrated into a single working product in such a short time.

As is probably clear from the video, there are some technical challenges to making multitouch work on this hardware. The main technical problem is that the Synaptics screen is not a true 2D multitouch device. It is a 2×1D device, i.e. it contains two sets of orthogonal wires and firmware that analyzes the resulting two 1D projection histograms of capacitance across the screen. This leads to a number of problems, in approximate decreasing order of severity:

1- When there are two touch points on the screen separated diagonally, there are two peaks in each projection histogram, but the hardware has no way of knowing whether this represents a forward-diagonal or a reverse-diagonal configuration (for example, fingers at the top-left and bottom-right of the screen produce exactly the same pair of X peaks and pair of Y peaks as fingers at the top-right and bottom-left). As a result, points that are being tracked can swap over each other (hard to explain, see the video).

img41.jpg

An example of the touch points crossing over each other

2- When points get too close together in one dimension, their histogram peaks merge together in that dimension, giving an undesirable “snapping” of the points to each other’s ordinates (one of the two coordinates). The radius of touch points on the screen is quite large (because the peaks in the projection histogram have to be quite well separated to be counted as separate peaks), so when fingers get close together, both points can merge into a single point, meaning your fingers can’t start really close together in a “zoom-in”/“pinch-out” gesture.

 img5.jpg
An example of “snapping” when two points get too close together horizontally or vertically (regardless of their separation in the other dimension)

3- If the second finger is kept down and the first finger is lifted, then suddenly the second point’s location is returned in the first motion event (this may cause problems for application writers)

4- The thresholding algorithm in the hardware is not calibrated well, so in multitouch mode the peak-detection threshold is slightly different for the two axes, and points can “lose an ordinate”, jumping across to align with the other point in one of the axes. This gives very messy sudden motion events when the finger is placed down and raised.

5- Several smaller problems also exist, such as the fact that adding a second finger decreases the overall pressure measurement returned in the event, because pressure has not been correctly calibrated for multitouch.

These problems, especially the first two, are serious for general multitouch usage. This is almost certainly one of the biggest considerations behind the decision to not support multitouch on the G1. (And there is probably a financial reason, patent worries or other. There’s always money involved in anything you don’t understand…) The biggest problem, the inability to distinguish between forward and reverse diagonal configurations, means that general multitouch gestures involving rotations simply won’t work in the general case. (But see motion estimation workarounds below.)

The good news

Actually though it turns out that you don’t need rotation gestures for most multitouch operations that people would be interested in, because we work mostly with axis-aligned documents — maps, word-processing docs, web pages… and as long as your fingers are not too close together in either axis, you can get all the info and resolution you need for iPhone-worthy zooming and scrolling from the G1’s hardware events.

img6.jpg
Scaling a map (at least, the image of a map) — note that the points have inadvertently swapped, but the scale factor is still chosen correctly
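This robustness is easy to see: the pinch scale factor depends only on the distance between the two points, and that distance is unchanged if the tracked points trade places. A minimal sketch (not the demo app’s actual code):

    // The scale factor depends only on inter-point distance, so it is unchanged
    // when the two tracked points swap over each other.
    class PinchScale {
        static float scale(float x1, float y1, float x2, float y2, float previousDistance) {
            float dx = x2 - x1, dy = y2 - y1;
            float distance = (float) Math.sqrt(dx * dx + dy * dy);
            return distance / previousDistance;   // >1 means zoom in, <1 means zoom out
        }
    }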

Additionally the G1’s touch screen has a slight advantage for two-fingered (axis-aligned) touch gestures, such as sliding two fingers down or across the screen: if the two touch points are almost aligned in one axis, it locks them into alignment, making two-fingered gesture detection more natural (ok, that’s a stretch :-) )

img7.jpg
Scaling an image, with points snapped horizontally. Scale factor is not affected too dramatically by point snapping, because the distance between snapped points and actual finger positions is fairly similar.

As is demonstrated in the video, the system should work fine for zooming and panning maps and web pages.

It turns out that the multitouch events generated by the driver are very noisy (i.e. not well tested or polished). I had to do a lot of complicated filtering of event noise to get the system usable to this level. In addition to the loss of accuracy around axis-crossings described above, quite a number of events give wildly inaccurate X and Y coordinates just after and just before a change in up/down state. There is still a little more tuning and polishing that needs to be done, but the code is below if you want to play with it and improve it.
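To give an idea of the kind of filtering involved (the demo code is more involved than this, and the thresholds below are invented purely for illustration), one simple approach is to discard samples immediately after an up/down change and low-pass filter the rest:

    // Illustrative noise filter: ignore samples right after an up/down change
    // (when coordinates tend to be wildly inaccurate) and smooth the rest.
    class TouchFilter {
        private static final long SETTLE_MS = 50;   // ignore samples this soon after a state change
        private static final float ALPHA = 0.5f;    // smoothing factor
        private long lastStateChangeMs;
        private float fx, fy;
        private boolean hasSample;

        void onUpDownChange(long nowMs) { lastStateChangeMs = nowMs; hasSample = false; }

        /** Returns filtered {x, y}, or null if the sample should be discarded. */
        float[] filter(float x, float y, long nowMs) {
            if (nowMs - lastStateChangeMs < SETTLE_MS) return null;
            if (!hasSample) { fx = x; fy = y; hasSample = true; }
            else { fx += ALPHA * (x - fx); fy += ALPHA * (y - fy); }
            return new float[] { fx, fy };
        }
    }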

What can be done to fix or work around the remaining problems

The system could be made more natural to use by building in motion estimation (inertia and damping) in the vicinity of the discontinuities where touch points cross over each other’s axes, so that if the user is in fact doing a rotation gesture by moving strongly towards the axis crossing point, events will continue to be generated that smoothly cross that point. Of course there is still the potential for error here, though, if the user stops or reverses direction.
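As a very rough sketch of that idea, in one dimension and with invented constants, one could track each point’s velocity and blend in its extrapolated position whenever it gets close to the other point’s ordinate:

    // Very rough 1D motion-estimation sketch; the same idea would be applied
    // independently to X and Y, and the constants are invented.
    class CrossoverEstimator {
        private static final float NEAR_CROSSING_PX = 30f;  // how close counts as "near the crossing"
        private float lastX, velocityX;

        float update(float rawX, float otherX, float dtSeconds) {
            if (dtSeconds <= 0f) return lastX = rawX;        // degenerate timestep: just accept the sample
            float predicted = lastX + velocityX * dtSeconds; // where inertia says the point should be
            float x = rawX;
            if (Math.abs(rawX - otherX) < NEAR_CROSSING_PX) {
                x = 0.5f * rawX + 0.5f * predicted;          // damp toward the prediction near the crossing
            }
            velocityX = (x - lastX) / dtSeconds;
            lastX = x;
            return x;
        }
    }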

Getting and running the code

So I mentioned that you wouldn’t have to recompile your kernel… but you still have to recompile one system class of the Android Java stack, or all you can do with the demo code is operate one touch point as normal (i.e. just drag, not stretch).

Unfortunately the version of the Android stack that made it onto the G1 was derived from a snapshot of the code taken quite a while before Android 1.0 was released, so you can’t just patch the one class, recompile that class’ .jar file, and re-install a single .jar on your phone — that .jar file, built from the publicly-available Android 1.0 source (or, worse, Cupcake/1.1), likely won’t work with the rest of the .jar files on your phone. So for now you need to build the entire 1.0 stack with the patch and then flash your entire phone.

Note the following:

  • The Android 1.0 source in git builds a system that is a little bit broken in a lot of ways. Expect things not to work, and as a result expect multitouch to not be available on your primary phone until someone produces a more polished release from source that you can use.
  • Cupcake is still not ready; it is very broken right now. Use 1.0, don’t use Cupcake.
  • If you try this, you take full responsibility for anything that goes wrong, and if it breaks you get to keep all the pieces. You agree to not hold me responsible in any way if you lose important data or brick your phone, or if anything else goes wrong.
  • This is not yet ready for mainstream. If you are not a developer then wait until someone develops a working system that you can use easily.

Steps to follow:

  1. Get the Android source here
  2. Get my modified KeyInputQueue.java and overwrite the original in the Android source at $ANDROID_HOME/frameworks/base/services/java/com/android/server/KeyInputQueue.java .
  3. Get root on your phone, build the whole patched Android stack, and flash it onto your phone by following these instructions (except that you should use the 1.0 branch in git, not the cupcake branch). You could consider using JesusFreke’s RC30 v1.3.1 instead of v1.2 that is specified in those instructions. NOTE: all of these steps are highly dangerous to your phone, you must know what you are doing before you attempt this, and you agree to take full responsibility if anything breaks.
  4. Download and run my demo application which receives the patched events and splits them into separate events for each touch point.  (This is the application that is demoed in the video.)

Using the demo application

  • Roll the trackball left and right to switch between the two views
  • Press the trackball down (center-press) on either screen to toggle extra debug info. (Debug info starts “on” on the first screen and “off” on the second.)
  • All other interaction is performed by dragging one or two fingers on the screen.

Future development

There is considerable work that could be done to polish this and tweak it for optimal usage. A lot of the demo code (event noise smoothing etc.) could be moved into the Android stack, and motion estimation could be added to this to make things smoother. There are still sometimes glitches when you lift one finger off the screen after a multitouch operation, as well as when one finger hits the edge of the screen (due to some edge-logic in the lowlevel driver, I think).

Getting this patch upstream is probably unlikely, because ultimately this is a hack, especially the hijacking of the MotionEvent size field — but the actual impact on single-touch applications is very low: just some weirdness/jumping when you have two fingers on the screen. Note though that the G1’s default software stack has its own weirdness here (as the very first grainy “we think there’s multitouch on the G1” YouTube videos showed), due to the hardware event noise when you lift one finger during a multitouch gesture.

I suggest someone write a .odex editor tool that can selectively excise one class from a .odex file and replace it with another Dalvik-compiled class — then “all” that you would need to do to get multitouch on your phone would be to get root and then patch your system. Everything else should keep working as normal.

Ideally someone would then graft this patched .odex file into JesusFreke’s RC30 image, so that all you had to do was reflash your phone and you’d have a phone that is fully working, but with multitouch support too.  (At the moment it’s either-or…)

I want to also put out there a challenge for someone to build a MultiTouch frontend for Google Maps and WebView.  In the demo, I just scale static images of a map and a webpage.
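For anyone taking up the WebView half of the challenge, here is one rough starting point. It assumes a pinch scale factor is already being computed (e.g. as in the PinchScale sketch earlier) and uses the stepwise zoomIn()/zoomOut() calls that WebView exposes in the 1.0 SDK; the step threshold is invented.

    import android.webkit.WebView;

    // Rough starting point: map an accumulated pinch scale onto WebView's
    // stepwise zoom. The 1.25 step threshold is arbitrary.
    class PinchZoomWebViewController {
        private static final float ZOOM_STEP = 1.25f;
        private final WebView webView;
        private float accumulatedScale = 1.0f;

        PinchZoomWebViewController(WebView webView) { this.webView = webView; }

        void onPinchScale(float scaleSinceLastEvent) {
            accumulatedScale *= scaleSinceLastEvent;
            while (accumulatedScale >= ZOOM_STEP)      { webView.zoomIn();  accumulatedScale /= ZOOM_STEP; }
            while (accumulatedScale <= 1f / ZOOM_STEP) { webView.zoomOut(); accumulatedScale *= ZOOM_STEP; }
        }
    }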

You can also use my code if you need a testbed to start developing your own multitouch software, so that you’re ready for the day that multitouch is officially supported by Google.

I am unlikely to do any more with this code myself, I just had to show it could be done :-)

That’s it!  Have fun.

Please discuss among yourselves in this Google Groups thread.

You can also read the comments from the original post here.
