PROXII is an interactive installation that projects an ASCII-rendered version of each participant onto a wall, letting users physically interact with their own words as they fall through space. Spoken language is translated into drifting text, which the on-screen figure can catch, push, or reject in real time. Built with creative-coding and machine-learning libraries, the experience was refined through iterative testing to achieve reliable motion tracking and immediate response. By turning speech into something tangible, PROXII explores the intersection of communication, emotion, and digital presence, inviting participants to see their words take form.
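The catch-and-fall mechanic above can be sketched as a simple per-frame physics step: each word accelerates downward until it overlaps the tracked figure's bounding box. This is a minimal illustration, not PROXII's actual source; the names (`makeWord`, `step`, the `GRAVITY` constant) and the box-overlap test are assumptions.

```javascript
// Minimal sketch of the falling-word mechanic: gravity plus a
// bounding-box "catch" test against the tracked figure.
// All names and constants here are illustrative assumptions.

const GRAVITY = 0.5; // px per frame^2 (assumed)

function makeWord(text, x) {
  return { text, x, y: 0, vy: 0, caught: false };
}

// One physics step: the word falls until it enters the figure's box.
function step(word, figure) {
  if (word.caught) return word;
  word.vy += GRAVITY;
  word.y += word.vy;
  const inside =
    word.x >= figure.x && word.x <= figure.x + figure.w &&
    word.y >= figure.y && word.y <= figure.y + figure.h;
  if (inside) {
    word.caught = true; // the on-screen figure "catches" the word
    word.vy = 0;
  }
  return word;
}

// Example: drop a word above a figure occupying y in [100, 300]
const figure = { x: 0, y: 100, w: 200, h: 200 };
const word = makeWord("hello", 50);
for (let i = 0; i < 60; i++) step(word, figure);
console.log(word.caught); // the word lands inside the figure's box
```

In a p5.js sketch this step would run once per `draw()` call for every on-screen word, with the figure's box supplied by the motion-tracking model.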
Experience PROXII here (microphone and camera input required).
Joshua Miller
Creative Coding
p5.js
MadMapper
Figma
Wall-mounting the trifold preserved the project's dimensional effect and prompted refinements to camera placement and projection mapping. MadMapper enabled precise alignment across the irregular surface, resulting in a more accessible and engaging installation.
Iterative development refined the speech-to-object translation and motion tracking, and AI assistance helped debug complex interactions, turning experimental concepts into a seamless real-time experience.
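The speech-to-object translation can be pictured as a pure function that splits a recognized transcript into individual word objects ready to animate. This is a hedged sketch under assumed names (`transcriptToWords`, the object fields); in the browser the transcript would come from a speech-recognition library, which is omitted here so the example stays self-contained.

```javascript
// Hedged sketch of speech-to-object translation: a recognized
// transcript becomes an array of word objects the sketch can drop
// from the top of the canvas. Names and layout are assumptions.

function transcriptToWords(transcript, canvasWidth) {
  return transcript
    .trim()
    .split(/\s+/)
    .filter(Boolean)
    .map((text, i, all) => ({
      text,
      // spread words evenly across the top edge of the canvas
      x: ((i + 1) / (all.length + 1)) * canvasWidth,
      y: 0,
      vy: 0,
    }));
}

const words = transcriptToWords("see your words take form", 800);
console.log(words.length);           // 5
console.log(words[0].text);          // "see"
console.log(Math.round(words[2].x)); // 400 (middle word near center)
```

Keeping this step pure makes it easy to test independently of the microphone, which suits the kind of iterative debugging the paragraph describes.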