Prototypes
This is a collection of work samples and design experiments that reflect how I approach prototyping: quick, scrappy, and hands-on. Each one explores different tools and mediums, from physical computing to interactive storytelling.
BreakoutME
Context
This project came from a small observation — the red, green, and blue subpixel layout of LCD displays looks a lot like the bricks in Atari's classic Breakout game. That spark led to a playful twist on the original: a game where the blocks are made from your own pixelated webcam image.
Solution
BreakoutMe is an interactive game that turns the player's live image into RGB-colored blocks. Using a webcam, the system captures the player's photo and breaks it down into red, green, and blue rectangles, just like how pixels work on a screen. These become the bricks the player has to clear, bringing a physical layer of self into the gameplay.
Players control the paddle using a physical knob, and can change the resolution of the pixelated image by moving their hand closer to or farther from the webcam. Once the game is completed, the original (unpixelated) image is revealed: a small reward that ties the whole experience together.
Prototyping Process
I built the breakout mechanic using p5.js, then modified it so each block represents a pixel from the live webcam feed.
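The core idea fits in a few lines of p5.js. The sketch below is a minimal illustration rather than the project's actual code; the canvas size and block size are placeholder values.

```javascript
// Minimal p5.js sketch of the core idea: sample a low-resolution webcam
// frame and draw each sampled pixel as three R/G/B bars, mimicking the
// subpixel layout of an LCD. Canvas and block sizes are placeholders.
let capture;
let blockSize = 20; // width of one full RGB block, in canvas pixels

function setup() {
  createCanvas(640, 480);
  capture = createCapture(VIDEO);
  capture.size(width / blockSize, height / blockSize); // one video pixel per block
  capture.hide();
  noStroke();
}

function draw() {
  background(0);
  capture.loadPixels();
  if (capture.pixels.length === 0) return; // webcam not ready yet

  for (let y = 0; y < capture.height; y++) {
    for (let x = 0; x < capture.width; x++) {
      const i = 4 * (y * capture.width + x); // RGBA index into the pixel array
      const sub = blockSize / 3;             // each channel gets a third of the block
      fill(capture.pixels[i], 0, 0);
      rect(x * blockSize, y * blockSize, sub, blockSize);
      fill(0, capture.pixels[i + 1], 0);
      rect(x * blockSize + sub, y * blockSize, sub, blockSize);
      fill(0, 0, capture.pixels[i + 2]);
      rect(x * blockSize + 2 * sub, y * blockSize, sub, blockSize);
    }
  }
}
```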
To capture analog input, I connected a potentiometer to a Circuit Playground Express (CPX) board, which sent readings through a serial connection. Since p5.js runs in the browser and can't read from serial ports directly, I wrote a simple Node.js server to read the serial input and pass it to p5.js via WebSocket.
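The bridge itself is only a handful of lines. The sketch below assumes the `serialport` and `ws` npm packages; the serial port path, baud rate, and WebSocket port are placeholders.

```javascript
// bridge.js: read potentiometer values from the board over serial and
// broadcast them to the p5.js sketch over a WebSocket. Assumes the
// `serialport` and `ws` npm packages; port path, baud rate, and
// WebSocket port are placeholders.
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');
const { WebSocket, WebSocketServer } = require('ws');

const serial = new SerialPort({ path: '/dev/tty.usbmodem1101', baudRate: 9600 });
const parser = serial.pipe(new ReadlineParser({ delimiter: '\n' }));
const wss = new WebSocketServer({ port: 8080 });

parser.on('data', (line) => {
  const value = parseInt(line.trim(), 10); // raw knob reading, e.g. 0-1023
  if (Number.isNaN(value)) return;
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify({ knob: value }));
    }
  }
});
```

On the p5.js side, a standard browser WebSocket connection listens for these messages and maps the knob value to the paddle's x position.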
For the hand-tracking feature, I used ml5.js's hand model to estimate hand distance and scale the block size in real time, letting players interact using just motion.
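In rough terms, the apparent size of the detected hand stands in for its distance from the camera. The sketch below assumes the older `ml5.handpose` API (method names vary across ml5 versions), and the mapping ranges and direction are illustrative.

```javascript
// Rough sketch: use the apparent size of the detected hand as a stand-in
// for its distance from the webcam, and map that to the block resolution.
// Assumes the older ml5.handpose API; ranges and direction are illustrative.
// (blockSize here is the same variable used in the earlier sketch.)
let handpose;

function modelReady() {
  handpose.on('predict', (hands) => {
    if (hands.length === 0) return;
    const box = hands[0].boundingBox;
    const handWidth = box.bottomRight[0] - box.topLeft[0]; // bigger hand = closer hand
    // Closer hand -> larger blocks (coarser image); farther -> finer detail.
    blockSize = constrain(map(handWidth, 50, 300, 10, 40), 10, 40);
  });
}

// In setup(), after createCapture():
//   handpose = ml5.handpose(capture, modelReady);
```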
AI Fortune-telling Installation
Client
Applause Entertainment
(via BORING Design Lab)
Context
In 2023, Applause Entertainment was looking for a playful, eye-catching way to promote their new film Hello Ghost! The story centers on four quirky ghosts helping the protagonist turn his life around. Our job was to bring that spirit to life (literally).
Solution
We created an interactive AI-powered fortune-telling machine inspired by Taiwanese temple rituals. Visitors chose from four ghosts, each with its own personality and theme, to ask questions about love, career, health, or social life.
The machine guided users through a short movie trailer, then printed a personalized fortune slip styled like traditional Taiwanese fortune sticks. All visuals and fortunes were generated by AI, making each result unique.
The installation drew over 4,000 visitors in just 10 days. It was the first large-scale marketing campaign in Taiwan to use generative AI, and it was picked up by three major media outlets.
Prototyping Process
I led both the concept development and prototyping. During early ideation, I facilitated brainstorming workshops that shaped the final direction. To bring the experience to life, I quickly built Figma and Unity prototypes to simulate the interaction and communicate the concept clearly to both the team and client.
On the tech side, I built the physical input using an Arduino Nano to connect the button to the computer. I also handled prompt engineering for the fortune-telling experience, using the ChatGPT API to craft responses that felt magical and tailored. The frontend server that powered the main interaction was developed by our engineering team, while the generative visuals came from Midjourney and the text responses from GPT-3.
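To give a flavor of the prompt side, here is a hedged sketch of how one ghost's fortune could be requested through the chat completions endpoint; the persona text, model name, and parameters are illustrative, not the production prompts.

```javascript
// Illustrative sketch of the fortune-generation call, not the production code.
// Persona text, model name, and parameters are placeholder assumptions.
async function generateFortune(ghost, topic) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [
        {
          role: 'system',
          content: `You are ${ghost.name}, a quirky ghost from the film Hello Ghost!. ` +
            'Answer in the style of a Taiwanese temple fortune stick: short, poetic, ' +
            'and ending with one piece of practical advice.',
        },
        { role: 'user', content: `Give me a fortune about my ${topic}.` },
      ],
      max_tokens: 200,
      temperature: 0.9, // a higher temperature keeps each printed slip feeling unique
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```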
Eye Contact Camera
Context
After COVID-19, virtual interviews became the default. But something always felt off. I found it awkward that I couldn't look participants in the eye: my webcam sat above the screen, and eye contact was always a little misaligned. I wanted to build a device that made remote interviews feel more natural, helping both researchers and participants regain a bit of the social presence that got lost in the shift to online.
Solution
This device makes remote interviews feel more natural by restoring eye contact. When the researcher looks into the device, they can speak to the participant while appearing to look directly at them.
A camera behind the device captures the researcher's image and sends it to Zoom. At the same time, the researcher sees a reflected image of the participant, allowing face-to-face eye contact, even through a screen.
The setup was shown to other HCI researchers and received positive feedback. In early testing, several researchers noted how natural it felt being able to “look down at a script, then look up at the participant” just like in an in-person interview.
Prototyping Process
The core of the device is a one-way mirror, a piece of glass that lets light pass through from one side while reflecting from the other. A camera behind the mirror captures the researcher's image, while a secondary monitor below the mirror displays the participant's video feed, which reflects back to the researcher.
I used an HDMI processor to flip the video feed so the reflection appears correct, and connected the camera output to Zoom via an image capture device. Getting the angles right was key. I calculated the spatial relation between the screen, mirror, and camera to align the participant's face with the researcher's line of sight.
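As a back-of-the-envelope illustration of that alignment (all measurements below are placeholders, not the actual build): with the mirror at 45 degrees, the monitor's reflected image appears straight ahead at a distance equal to the monitor-to-mirror gap, and the remaining gaze error is the small angle between the camera lens and the participant's on-screen eyes.

```javascript
// Back-of-the-envelope check of gaze alignment, with placeholder measurements.
// With a 45-degree mirror, the monitor's reflected (virtual) image appears
// straight ahead, at a distance equal to the monitor-to-mirror gap.
const monitorToMirror = 0.20; // meters, mirror surface to monitor (placeholder)
const eyeToMirror = 0.45;     // meters, researcher's eyes to mirror (placeholder)
const cameraOffset = 0.03;    // meters, camera lens to participant's on-screen eyes

// Distance from the researcher's eyes to the participant's virtual eyes.
const viewingDistance = eyeToMirror + monitorToMirror;

// Angular mismatch between "looking at the participant" and "looking into the camera".
const gazeErrorDeg = Math.atan(cameraOffset / viewingDistance) * (180 / Math.PI);
console.log(`Viewing distance: ${viewingDistance.toFixed(2)} m`);
console.log(`Gaze error: ${gazeErrorDeg.toFixed(1)} deg`); // small offsets like this tend to read as eye contact
```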
Meltdown Mission
Context
Meltdown Mission was a 9-week course project during my graduate studies at the University of Washington. Our challenge was to design a walk-up-and-play game that drives social impact. We chose to spotlight climate change — specifically, the harsh realities polar bears face due to melting sea ice — and turned it into an interactive storytelling experience.
Solution
We created a physically immersive game that helps players understand the exhaustion polar bears endure to survive. Using a custom-built controller, players swam, ran, and hunted, simulating the real behaviors of polar bears.
Each activity was mapped to a real-world challenge: dodging pollution, chasing prey, or swimming through melting ice. The difficulty of each part reflected how hard those tasks are for polar bears in reality. It taught players something real, but still kept the experience playful and hands-on.
The game was well received during public demos, where over 30 players pledged to take environmentally friendly actions. It later won 1st place at the 2024 HCII Student Design Competition.
Prototyping Process
I co-designed the core game mechanics and led development of both the game and hardware prototypes.
The controller was designed to capture swimming-like motions. Two wheels with embedded magnets were mounted to the frame and connected through rotary dampers. Reed switches detected magnetic changes as players moved the wheels. I used two Circuit Playground Express (CPX) boards to process input signals in parallel and avoid conflict, then routed the data to the game.
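The signal path is simple to sketch. The snippet below shows the general pulse-counting logic in JavaScript for readability; it is not the CPX firmware, and the window length and speed thresholds are illustrative.

```javascript
// Logic sketch (not the CPX firmware): estimate wheel speed from reed-switch
// pulses. Each pulse is one magnet passing a switch; pulses per second over
// a sliding window become the bear's in-game swim speed. Constants are illustrative.
const WINDOW_MS = 500;        // sliding window for counting pulses
const MAX_PULSES_PER_SEC = 8; // roughly "paddling as fast as you can"
let pulseTimestamps = [];

function onReedSwitchPulse() { // called once per detected magnet pass
  pulseTimestamps.push(Date.now());
}

function currentSwimSpeed() {  // polled by the game loop, returns 0..1
  const now = Date.now();
  pulseTimestamps = pulseTimestamps.filter((t) => now - t <= WINDOW_MS);
  const pulsesPerSec = pulseTimestamps.length * (1000 / WINDOW_MS);
  return Math.min(pulsesPerSec / MAX_PULSES_PER_SEC, 1);
}
```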
The game itself was built in Unity with a modular, object-oriented approach for quick iteration. It responded directly to the CPX input, allowing us to test gameplay changes on the fly.
Lunatic
Context
Lunatic tells the story of a moon enthusiast who falls in love with the moon and sets out to explore it. The goal was to build a storytelling device that felt surreal, something that blurred the boundary between physical space and imagination.
The story and visual style were led by Sichen Liu, our lead artist. I handled the technical design and prototyped the physical device.
Solution
We built a small storytelling machine that blended fiction with reality using light, sound, and interactive buttons. As the audience explores Lunatic’s hazy memories, the experience shifts between poetic and informational, like paging through a dream you "sort of" remember.
Prototyping Process
The machine featured a mini figure, a TFT display, and a control box. A prism was used to refract the video, creating the blended reality effect the artist envisioned.
I chose the ESP32 for its compact size and stronger media support compared to a standard Arduino board. Several buttons were wired into the board; each press triggered a state change that updated video playback, sound effects, and lighting in real time. The main story content was stored locally as video files.
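The interaction logic boils down to a small state machine. The sketch below expresses it in JavaScript for readability rather than as ESP32 firmware; the chapter names, cues, and stubbed output functions are all illustrative.

```javascript
// Logic sketch (not the ESP32 firmware): each button press selects a story
// chapter, and the chapter drives video, sound, and lighting together.
// Chapter names, cue values, and the stubbed outputs are illustrative.
const chapters = {
  1: { video: 'moon_meeting.mp4', light: [180, 180, 255], sound: 'hum.wav' },
  2: { video: 'moon_walk.mp4',    light: [255, 200, 120], sound: 'wind.wav' },
};
let current = null;

// Stubs standing in for the real outputs (TFT video playback, LEDs, speaker).
const playVideo = (file) => console.log('play video:', file);
const setLight  = (rgb)  => console.log('set light:', rgb);
const playSound = (file) => console.log('play sound:', file);

function onButtonPress(buttonId) {
  if (!(buttonId in chapters) || buttonId === current) return; // ignore repeats
  current = buttonId;
  const cue = chapters[buttonId];
  playVideo(cue.video);
  setLight(cue.light);
  playSound(cue.sound);
}

onButtonPress(1); // e.g. the first button starts the first memory
```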