Redesigning how users point, aim, and interact in AR space
In the initial cursor implementation on Spectacles, hand-based targeting used a small 2D cursor rendered at a fixed depth — approximately 110 cm from the user. When the user's ray intersected a UI panel at a different distance, the cursor jumped abruptly to that depth. The jump was jarring, but more importantly it was confusing: users had no visual model for how raycasting worked. There was no ray — just a floating cursor that snapped unpredictably. Every depth jump felt like a malfunction. Users couldn't build a reliable mental model of how to target, or tell whether they were doing it right.
UX research confirmed what was observable in sessions: targeting was breaking core interactions across Spectacles.
Make targeting feel like a natural extension of intent — not a visible mechanism users have to think about. The system should communicate just enough to guide action, and disappear once the user knows what to do.
The new cursor only appears when the user is near something interactable — fading in smoothly at the depth of that object. When not targeting anything, it disappears entirely.
To make this work, every interactive object needed a second physics collider — approximately 2x the size of the object — used exclusively to trigger cursor visibility as the user's hand approached. This was a foundational change to the Spectacles Interaction Kit: a new requirement that all interactable objects carry this proximity collider.
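The proximity-collider idea can be sketched as a simple enlarged-bounds check: test whether the hand (or the ray's nearest point) falls inside a bounds roughly 2x the object's visible size, and use that to drive cursor visibility. This is an illustrative sketch only — the type and property names here (`Interactable`, `proximityScale`) are assumptions, not the Spectacles Interaction Kit API.

```typescript
// Hypothetical sketch: each interactable carries a second, enlarged
// bounds (~2x the render bounds) used only to decide whether the
// cursor should fade in as the hand approaches.

type Vec3 = { x: number; y: number; z: number };

interface Interactable {
  center: Vec3;
  halfExtents: Vec3;      // half-size of the visible object's bounds
  proximityScale: number; // ~2.0, per the design described above
}

// True when a point falls inside the enlarged proximity bounds,
// meaning the cursor should become visible at this object's depth.
function insideProximityBounds(i: Interactable, p: Vec3): boolean {
  const s = i.proximityScale;
  return (
    Math.abs(p.x - i.center.x) <= i.halfExtents.x * s &&
    Math.abs(p.y - i.center.y) <= i.halfExtents.y * s &&
    Math.abs(p.z - i.center.z) <= i.halfExtents.z * s
  );
}

const panel: Interactable = {
  center: { x: 0, y: 0, z: -1.1 },
  halfExtents: { x: 0.15, y: 0.1, z: 0.01 },
  proximityScale: 2.0,
};

// Just outside the visible panel but inside the proximity bounds:
console.log(insideProximityBounds(panel, { x: 0.2, y: 0, z: -1.1 })); // true
// Well outside even the enlarged bounds:
console.log(insideProximityBounds(panel, { x: 0.5, y: 0, z: -1.1 })); // false
```

In an engine, the enlarged bounds would be an actual trigger collider rather than a per-frame point test, but the visibility logic it drives is the same.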
A visible ray gives users something the cursor never could: a clear line of sight that explains how the system works. Users can see where the ray originates, where it points, and what it lands on. The ray length adjusts dynamically based on what's being targeted. Different objects can express different targeting visuals — a cursor for UI panels, a ray for objects in 3D space — letting the system communicate context without adding UI.
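Two of the behaviors above — the ray shortening to whatever it hits, and the target deciding which visual to show — reduce to a small amount of logic. The sketch below is illustrative only; the type names and the 5 m default reach are assumptions, not values from the shipped Interaction Kit.

```typescript
// Illustrative sketch of dynamic ray length and per-target visuals.

type TargetKind = "uiPanel" | "worldObject" | "none";

interface RayHit {
  kind: TargetKind;
  distance: number; // meters from the ray origin to the hit
}

const MAX_RAY_LENGTH = 5.0; // assumed default reach when nothing is hit

// The rendered ray stops at the hit and never overshoots past it.
function visibleRayLength(hit: RayHit): number {
  return hit.kind === "none"
    ? MAX_RAY_LENGTH
    : Math.min(hit.distance, MAX_RAY_LENGTH);
}

// Each target kind expresses its own targeting visual, letting the
// system communicate context without adding UI.
function targetingVisual(hit: RayHit): "cursor" | "ray" | "hidden" {
  switch (hit.kind) {
    case "uiPanel":
      return "cursor"; // flat cursor pinned to the panel surface
    case "worldObject":
      return "ray"; // full ray out to the object in 3D space
    case "none":
      return "hidden"; // nothing targeted: no targeting UI at all
  }
}

console.log(visibleRayLength({ kind: "uiPanel", distance: 1.1 })); // 1.1
console.log(targetingVisual({ kind: "worldObject", distance: 3 })); // "ray"
```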
In the balloon prototype, users transitioned naturally between cursor and ray within the same scene. In testing, they didn't notice the switch. That invisibility was the signal that the design was working.
UI panels needed a way to lock cursor depth to their surface — preventing the ray from overshooting past the panel into the scene behind it. The design concept I introduced for this became the Interaction Plane in the shipped Interaction Kit: an invisible depth surface built into interactable objects that constrains cursor behavior to the UI plane, keeping targeting precise and predictable regardless of what exists in 3D space behind the panel.
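Geometrically, locking cursor depth to a panel is a ray-plane intersection: project the targeting ray onto the panel's plane and pin the cursor at that point, so it can never land on scene content behind the panel. The sketch below is a minimal version of that math, with illustrative names — it is not the shipped Interaction Plane component's API.

```typescript
// Minimal ray-plane intersection sketch for the depth-lock idea.

type V3 = [number, number, number];

const dot = (a: V3, b: V3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const sub = (a: V3, b: V3): V3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const addScaled = (a: V3, d: V3, t: number): V3 => [
  a[0] + d[0] * t,
  a[1] + d[1] * t,
  a[2] + d[2] * t,
];

// Returns the cursor position pinned to the plane, or null when the
// ray is parallel to the plane or the intersection is behind the user.
function cursorOnPlane(
  origin: V3,
  dir: V3, // targeting ray direction (normalized)
  planePoint: V3,
  planeNormal: V3 // the panel's invisible depth surface
): V3 | null {
  const denom = dot(dir, planeNormal);
  if (Math.abs(denom) < 1e-6) return null; // parallel: no depth lock
  const t = dot(sub(planePoint, origin), planeNormal) / denom;
  return t >= 0 ? addScaled(origin, dir, t) : null; // behind user: no lock
}

// Panel facing the user at z = -1.1 m; ray straight ahead from the eye.
const pinned = cursorOnPlane([0, 0, 0], [0, 0, -1], [0, 0, -1.1], [0, 0, 1]);
console.log(pinned); // [0, 0, -1.1] — cursor held at the panel's depth
```

Whatever sits in the scene behind the panel, the cursor depth comes from this intersection, which is what keeps targeting on UI predictable.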
I built two functional prototypes — one with the original cursor behavior, one with the new system — and wrapped both in the same balloon-popping task to create a controlled comparison. The prototypes went through corridor studies with the UX research team.
The new targeting system produced 3x more successful hits than the original.
Beyond the numbers: participants stopped thinking about targeting. They focused on the task. I presented the findings and the prototype comparison directly to Evan Spiegel and received his sign-off to proceed.
The cursor and ray system shipped as part of the Spectacles Interaction Kit in Lens Studio, publicly available to all creators. The depth constraint concept became the Interaction Plane component in the shipped kit.