Full Paper

Toward Compound Navigation Tasks on Mobiles via Spatial Manipulation
Michel Pahud, Microsoft Research, Redmond, USA
Ken Hinckley, Microsoft Research, Redmond, USA
Shamsi Iqbal, Microsoft Research, Redmond, USA
Abigail Sellen, Microsoft Research, Cambridge, UK
Bill Buxton, Microsoft Research, Redmond, USA
Time: Wed 16:24 - 16:48 | Session: Navigation and Selection | Location: Große Aula

We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.

MobileHCI 2013 Proceedings in the ACM Digital Library.
