Designing the foundational pattern for moveable UI in AR space
When I joined the Spectacles product design team, there was an existing approach to moveable UI with several compounding problems. Targeting was difficult and fatiguing — the interaction surface was too small for the precision that hand tracking could reliably deliver at the time, and users struggled to grab and move things in space. It also offered no room for system-level functions: closing, scaling, and switching between world and body anchoring would all have to live somewhere else, each solved separately. The experience felt fragile rather than intentional.
There was no spatial UI primitive. There was no pattern. Everything was being solved ad hoc.
The brief: design a spatial container that makes AR panels feel like first-class objects in space — easy to grab, easy to move, and extensible enough to carry the system functions that flat UI in AR would need over time.
I started in Figma — not to produce deliverables, but to think visually. Sketching the frame, exploring what the affordances should communicate, working out where system functions would live. Once the concept was solid, I mapped the interactions: cursor states, the frame's visual feedback on hover, the feedback hierarchy between cursor and frame.
Then I wrote a scenario — a storyboard of interactions I needed the prototype to show. Three Containers in front of a user. Within that setup I choreographed interactions with all of them: hovering over frames, content, scale affordances, and system buttons to visualize feedback on both the cursor side (shape changes to communicate intent) and the frame side (a dynamic highlight showing exactly where the user is targeting).
I recorded myself acting all of this out, framed deliberately so I knew where the Containers would sit in the final composite. I tracked the footage and brought it into Blender and After Effects. Built the UI assets in 3D, animated the full interaction model, and comped everything together.
This was before agentic tools existed. Without coding skills, animation was the fastest way to validate a highly interactive spatial design — no Figma prototype could communicate what the interaction felt like in 3D space.
I presented the prototype to Evan Spiegel. He greenlit it for further development and real-time prototyping in Lens Studio.
The frame margins were deliberately generous — designed to be adjustable within a range, with the default tuned to balance grabbability against visual footprint. The Container is grabbable from any edge: left, right, top, or bottom. The large surface area made targeting forgiving, and as hand tracking improved over subsequent hardware generations, the defaults were refined accordingly.
The frame wasn't just a grab handle — it created real estate for the functions that spatial UI needed: closing, scaling, and switching between world and body anchoring. These could live on the frame itself, contextually, without cluttering the content inside.
In Lens Studio, developers can adjust border size dynamically — for example, tying affordance size to distance from the camera, so panels feel more grabbable as they move further away. Developers can also choose whether the Container billboards to face the user at all times, or stays fixed in world space. The Container is a system, not a fixed component.
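The distance-to-border-size relationship can be sketched as a simple clamp-and-interpolate mapping. This is an illustrative sketch only — the function name, thresholds, and units below are my assumptions, not the actual Spectacles Interaction Kit API:

```typescript
// Hypothetical helper: grow the frame's border as the panel moves
// away from the camera, so distant panels stay easy to target.
// All names and values are illustrative, not the SIK API.
function borderSizeForDistance(
  distanceCm: number,
  nearCm: number = 50,   // assumed distance at which the border is smallest
  farCm: number = 300,   // assumed distance at which the border is largest
  minBorder: number = 2, // assumed border size at or inside nearCm
  maxBorder: number = 6  // assumed border size at or beyond farCm
): number {
  // Normalize distance into [0, 1], clamped to the tuned range.
  const t = Math.min(Math.max((distanceCm - nearCm) / (farCm - nearCm), 0), 1);
  // Linearly interpolate between the min and max border sizes.
  return minBorder + t * (maxBorder - minBorder);
}
```

In a real Lens Studio script, a function like this would run per frame, feeding the computed size into the Container's border property; the billboard-versus-world-fixed choice is a separate toggle on the component.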
The Container was built internally first, refined through iterative testing, and shipped publicly in the Spectacles Interaction Kit approximately a year after the design phase. Neil Cline engineered the full real-time implementation in Lens Studio. Every moveable panel on Spectacles is a Container. On a system level, two are always present — Lens Explorer and System Settings. Developers can instantiate as many as their experience requires.
The visual design has evolved since — the team has refined the aesthetic across Spectacles generations. The interaction design has not changed. That stability is the thing I'm most proud of: a spatial UI primitive that became part of Snap OS and has held up.