ReachSense – What Happens Before The Screen Is Touched
Imagine a phone or tablet that fades out all distractions when you’re simply reading or watching a video, then quietly brings the controls back the moment your hand moves in to touch the screen. A screen that adapts, not to your touch, but to your intention. This was the starting point for a series of explorations into how we might build more context-aware interfaces using ultrasonic sensing.

When the finger approaches the screen, the UI appears; when it retracts, the UI is hidden
What follows is some of the thinking, along with the prototypes and lessons learned from working on a novel interaction model in which the user interface adapts to hand proximity before the screen is ever touched. It is an idea that has stayed with me for years, and one I still believe holds great potential.
Read more about the patent here
Static Interfaces in a Dynamic Context
Touchscreens today are still largely reactive. They respond to contact but do not anticipate it. Whether you are reading, browsing, or interacting with a photo, the interface remains the same until you physically touch the screen. This forces designers to compromise: either keep UI elements always visible to ensure discoverability, or show them on touch and hide them again with timer logic, at the risk of confusing users.
The challenge: how do you reduce visual clutter and maximize content space while still making interaction feel intuitive and seamless?
Interaction Modes and Hand Postures
The way we interact with devices varies significantly depending on how they are held. When a phone is used one-handed, it is typically cradled in the palm while the thumb performs all touch interactions. This is a compact and efficient posture, but the thumb often moves unpredictably close to the screen even when no interaction is intended.

Full screen mode. When the thumb hovers above the screen, the UI appears.
In contrast, when the device is held in one hand and operated by the index finger of the other hand, the motion is more deliberate and easier to distinguish. The interaction usually begins with the hand moving in from the front or side, making the gesture easier to detect and classify using proximity sensing.

Full screen mode. When the index finger approaches, the UI appears.
Just as important as detecting an approaching hand with low latency is recognizing when a hand or finger is retracting. Responsive and reliable classification of this motion is critical to avoid accidental UI changes and to maintain a fluid, intentional experience.
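To make the idea concrete, the sketch below shows one possible shape for such a classifier: an exponentially smoothed velocity estimate over a stream of distance readings, with separate approach and retract thresholds so a single noisy sample cannot flip the state. The class name, thresholds, and smoothing factor are illustrative assumptions, not the algorithm used in the actual prototypes.

```kotlin
// Illustrative sketch only: classify approaching vs. retracting motion from a
// stream of distance estimates. Thresholds and smoothing are assumed values.
enum class HandMotion { IDLE, APPROACHING, RETRACTING }

class MotionClassifier(
    private val approachThreshold: Float = -2.0f, // closing faster than 2 mm per sample
    private val retractThreshold: Float = 2.0f,   // opening faster than 2 mm per sample
    private val smoothing: Float = 0.3f           // exponential smoothing factor
) {
    private var lastDistanceMm: Float? = null
    private var velocity = 0f // smoothed change in distance per sample, in mm

    fun update(distanceMm: Float): HandMotion {
        val previous = lastDistanceMm
        lastDistanceMm = distanceMm
        if (previous == null) return HandMotion.IDLE

        // Smooth the per-sample velocity so one noisy reading cannot flip the state.
        val rawVelocity = distanceMm - previous
        velocity += smoothing * (rawVelocity - velocity)

        return when {
            velocity < approachThreshold -> HandMotion.APPROACHING
            velocity > retractThreshold -> HandMotion.RETRACTING
            else -> HandMotion.IDLE
        }
    }
}
```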
To be scalable and user-friendly, the interaction model had to work across multiple hand postures. It was not enough to detect one hand or the tip of a finger. The system had to perform reliably across a wide range of positions, movements, and device orientations, where other parts of the body can also trigger ultrasonic detections.
Ultrasound for Sensing Intent
The first prototypes used a dedicated ultrasonic transducer from Murata, along with four microphones placed in ideal corner positions on the device. These early setups were promising. We could detect an approaching finger at distances of up to 20 to 30 centimeters with reasonably good accuracy, tell whether the thumb was hovering over the screen or resting at the side in handheld mode, and respond accordingly. Concept demos were shown at Mobile World Congress 2014 and 2015.
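For context, the ranging principle behind such a setup is pulse-echo time of flight: an emitted ultrasonic signal reflects off the approaching hand, and the round-trip delay maps to distance. The minimal sketch below assumes a 48 kHz sample rate and a nominal 343 m/s speed of sound; it says nothing about the actual signal processing chain used in the prototypes.

```kotlin
// Minimal time-of-flight sketch: round-trip echo delay (in audio samples) to a
// one-way distance estimate. Sample rate and speed of sound are assumptions.
const val SPEED_OF_SOUND_M_PER_S = 343.0
const val SAMPLE_RATE_HZ = 48_000.0

fun echoDelayToDistanceMm(delaySamples: Int): Double {
    val roundTripSeconds = delaySamples / SAMPLE_RATE_HZ
    return SPEED_OF_SOUND_M_PER_S * roundTripSeconds / 2.0 * 1000.0
}

fun main() {
    // An echo arriving about 84 samples after emission corresponds to roughly 30 cm.
    println(echoDelayToDistanceMm(84)) // ≈ 300 mm
}
```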



But as we transitioned from controlled setups to more realistic conditions, limitations surfaced. A commercial implementation would need to rely on standard speakers and microphones optimized for audio rather than ultrasound, with microphone placements dictated by acoustic performance rather than by sensing geometry. The result was a higher rate of false positives, particularly in handheld scenarios where thumbs moved unpredictably close to the screen.
Despite these limitations, the exploration helped us better understand both the potential and the constraints of ultrasound as a proximity modality.
Microsoft Research & Sensor Fusion Hope
In 2016, Microsoft Research demonstrated Pre-Touch, which used self-capacitive touchscreens to detect a thumb or finger hovering above the screen. While limited to about 20 to 30 millimeters of range, it validated the basic premise: context-aware interfaces were possible, and even desirable.

Pre-Touch by Microsoft Research, using a self-capacitive touchscreen to sense thumb, grip, and close finger motion
Where ultrasound stands out is in its potential to bridge the sensing gap. Self-capacitive sensing excels at very close range, ideal for thumb detection. Ultrasound can detect intent further out, from 20 to 300 millimeters or more, and is best suited to detecting an approaching index finger.
Combining both technologies through sensor fusion seemed like a promising path, as it would bring the strengths of each into a cohesive whole. We explored the idea at various tradeshows and through informal partnerships, since we needed access to low-level raw data from the self-capacitive screen to build advanced sensor fusion detection algorithms. However, aligning technical roadmaps across companies proved challenging.
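To illustrate the fusion idea, here is a minimal sketch of how the two range estimates might be reconciled: trust the capacitive hover signal at very close range, fall back to ultrasound further out, and blend the two in the overlap. The data types, range cut-offs, and confidence values are assumptions for illustration, not anything that was built.

```kotlin
// Illustrative sensor fusion sketch. Ranges and confidence values are assumed.
data class FusedProximity(val distanceMm: Float, val confidence: Float)

fun fuse(capacitiveMm: Float?, ultrasoundMm: Float?): FusedProximity? = when {
    // Within self-capacitive range (roughly 0 to 30 mm), trust the hover signal.
    capacitiveMm != null && capacitiveMm <= 30f ->
        FusedProximity(capacitiveMm, confidence = 0.9f)
    // Both sensors see the hand in the overlap region: average the estimates.
    capacitiveMm != null && ultrasoundMm != null ->
        FusedProximity((capacitiveMm + ultrasoundMm) / 2f, confidence = 0.7f)
    // Beyond capacitive range, only ultrasound contributes.
    ultrasoundMm != null ->
        FusedProximity(ultrasoundMm, confidence = 0.5f)
    else -> null
}
```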
UX Implications and Trade-offs
From a UX perspective, the benefits are clear:
- Cleaner interfaces, with controls hidden when not needed
- Full-screen content, such as photos, videos, or text, presented without overlays or timer-driven hiding
- Natural interactions, where the system responds directly to subtle movement and intention
However, every new interaction model comes with trade-offs. When UI elements are no longer always visible, discoverability can suffer. Some users might not realize where or how to access controls, especially in unfamiliar apps or contexts. This places a burden on designers to create affordances, such as subtle visual or motion cues that hint at possible actions.
There is also a need for consistency. If the behavior differs between devices or apps, the experience quickly becomes fragmented and confusing.
For this interaction model to succeed, it must be implemented at the operating system level. The sensing layer, event triggers, and UI responses need to be part of the core system architecture, just like touch or swipe gestures are today. This would allow for a consistent developer framework and a shared design language across the ecosystem. Third-party apps would then need to adopt new UI patterns and behaviors that align with proximity-aware interaction, requiring both time and design maturity. Without this foundational support, any single-app implementation would risk feeling like a novelty rather than a meaningful advancement.
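As a thought experiment, such an OS-level framework could dispatch hand-approach events to apps much like touch events are dispatched today. The Kotlin-style interface below is purely hypothetical; no such API exists in current mobile operating systems.

```kotlin
// Hypothetical sketch of a proximity-aware UI callback. These types do not exist
// in any current OS; they only illustrate what a shared framework might expose.
interface ProximityListener {
    fun onHandApproaching(distanceMm: Float)
    fun onHandRetracting(distanceMm: Float)
}

class MediaViewerScreen : ProximityListener {
    override fun onHandApproaching(distanceMm: Float) {
        // Animate playback controls into view before the screen is touched.
    }

    override fun onHandRetracting(distanceMm: Float) {
        // Fade controls out and return to distraction-free, full-screen content.
    }
}
```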
Reflections and Future Outlook
Many years have passed since the first ultrasonic prototypes, but the vision still feels relevant. In fact, more than ever, I find myself wishing for this type of interaction in the devices I use today. The ability to blend content and control in a way that feels seamless and intuitive remains a valuable and largely untapped opportunity.
Introducing new interaction paradigms is not easy. Only a few companies in the world have the resources, reach, and craftsmanship to deliver such experiences at scale. Historically, Apple has done it successfully. Microsoft, Samsung and Google have made bold attempts. For this concept to reach consumers, it will take a combination of technical maturity, clear UX guidelines, and polished execution.
Technology should adapt to us, not the other way around. ReachSense is a small step toward that future, one where intention is understood and interaction begins with the subtle motion of the hand, before the screen is ever touched.