
A Room of One's Own

An Interactive Documentary of Individual Narratives during the Lockdown

Residential Spaces and People who are Trapped in them


TAGS: VR/AR, Sound, Narrative/Storytelling, Social Good/Activism

Project Description

Over the past three years, the pandemic has transformed our lives and the world around us in irreversible ways. Amid one lockdown after another, I began to contemplate the living spaces that sustain our spirits, emotions, memories, and physical bodies but, at the same time, confine us and become our “personal prisons”.

A Room of One’s Own invited people who were under lockdown in Shanghai to co-create a virtual archive of their living spaces and individual narratives beneath the grander, collective, political narrative. From the collected 3D scans of people’s homes and audio clips of their reflections on the experience, I assembled an audiovisual, interactive virtual space in which the audience traverses and navigates among rooms and soundscapes.



My research focused on the use of virtual spaces as a vehicle for environmental storytelling and its effect on the empathy felt by the audience. From Cassandra Herrman and Lauren Mucciolo’s acclaimed 360-degree documentary “After Solitary” to Caitlin Robinson’s “Watertight,” a project on NYC’s different types of housing, the relationship between humans and their residential spaces is a recurring theme in the use of photogrammetry. Both works display 3D-captured scenes as an extension and projection of the person who lives there. Given the passive, observational mode of the two documentary projects, the audience gazes into the lives of others from an omniscient third-person perspective. Regardless of the degree of intimacy implied through this lens (extremely intimate in “After Solitary,” aloof in “Watertight”), the incorporation of photogrammetry generally provides a more immersive experience than the traditional documentary medium.

Furthermore, narrowing my research down to VRNF (virtual reality non-fiction), I found even more questions raised about the medium in regard to empathy. In the essay “Behind the Curtain of the ‘Ultimate Empathy Machine’: On the Composition of Virtual Reality Nonfiction Experiences,” the authors question whether virtual reality technology is truly the “ultimate empathy machine” it promises to be, and whether its 360-degree viewing format really surpasses traditional two-dimensional film and achieves an ultimate “realness” in allowing the audience to become immersed in the environment.

Rather than answering those questions, A Room of One’s Own only takes them further: with the surreal texture of photogrammetry and the poetry in each storyteller’s language, the audience goes on a journey from a ghost-like perspective, traversing a misty fog composed of people’s recollections of facts, emotions, dreams, and the subconscious. When “reality” and “surreality” melt together into one image, how can one define “realness”? Is the measurement of “realness” really a prerequisite of empathy? And when each subject becomes an active storyteller instead of a passive interviewee, how does that affect the degree of empathy felt by the audience?

Technical Details

A Room of One’s Own is an interactive web-based documentary built in Unity, with spaces 3D-captured via Polycam (with or without the built-in iOS LiDAR sensor). Through creative use of Cinemachine, post-processing, lightmapping, and scripting (C#), I aim to orchestrate a cinematic experience in the Unity game engine that preserves both the control and artistry of filmmaking and the interactive, immersive components offered by a virtual environment.


Over the past five months, A Room of One’s Own evolved both creatively and technically. In the process of building a collective virtual archive of individual narratives, I collected 3D models of residential spaces as well as audio recordings from twelve people who were experiencing the lockdown in Shanghai. After fine-tuning and relighting the 3D models, I designed a user journey map and assembled the models on a Cinemachine timeline. While setting predetermined camera movements to guide the users’ view, I also attached a custom script to the virtual cameras that lets users look around the scene by dragging the mouse. Balancing control and autonomy, I aim to make navigating the virtual environment simple and intuitive, yet not overly restrictive. As a result, the experience is a journey through an abstract landscape and soundscape infused with 3D-scanned interiors and fragments of individuals’ reflections, contemplations, dreams, and imaginings.
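The mouse-drag look-around attached to the virtual cameras could be sketched as a small Unity C# component like the one below. This is a minimal, illustrative sketch, not the project’s actual script: the class name, sensitivity values, and clamp angles are all assumptions.

```csharp
using UnityEngine;

// Illustrative sketch of a mouse-drag look-around component.
// Attach to a camera (or the object a Cinemachine virtual camera aims from);
// names and values here are hypothetical, not the project's actual script.
public class DragLook : MonoBehaviour
{
    public float sensitivity = 2f;  // degrees of rotation per unit of mouse movement
    public float maxYaw = 45f;      // clamp so users cannot stray far from the framed shot
    public float maxPitch = 30f;

    private float yaw;
    private float pitch;

    void Update()
    {
        // Rotate only while the left mouse button is held: a drag, not free look,
        // so the predetermined camera framing still guides the experience.
        if (Input.GetMouseButton(0))
        {
            yaw += Input.GetAxis("Mouse X") * sensitivity;
            pitch -= Input.GetAxis("Mouse Y") * sensitivity;
            yaw = Mathf.Clamp(yaw, -maxYaw, maxYaw);
            pitch = Mathf.Clamp(pitch, -maxPitch, maxPitch);
            transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
        }
    }
}
```

Clamping the yaw and pitch is one way to strike the balance described above: the user gains autonomy to look around, while the scripted camera movement retains overall control of the composition.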


The surreal, desolate texture of non-professional photogrammetry intrigued me deeply. If environmental storytelling manifests through displaying traces of use, photogrammetry accentuates that very aspect of architecture: it captures the fragmented, worn, and imperfect essence that results from human use and technological limitation. The willing acceptance and tolerance of such fragmentation, deformity, and distortion echoes the philosophical concept of “wabi-sabi,” which embraces beauty in the incomplete, impermanent, and imperfect.


In the process of refining the 3D models and incorporating them into the scene, I found that the quality of the scans differed greatly depending on whether the participant’s device was equipped with a LiDAR sensor, as well as on their individual fluency with 3D capturing. To compensate for those differences, I experimented with adding volumetric fog, baking the lightmaps, and altering the style and tone of each narration to match the visual aesthetics of the model.
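As one hedged illustration of this compensation, fog density could be varied per room so that rougher scans sit in a heavier haze. The sketch below uses Unity’s simple built-in distance fog via RenderSettings; the project’s actual volumetric fog setup (likely via the post-processing stack) and all names and values here are assumptions.

```csharp
using UnityEngine;

// Illustrative only: per-room fog settings to soften artifacts in rougher scans.
// RenderSettings drives Unity's simple built-in distance fog, not true volumetrics.
public class RoomFog : MonoBehaviour
{
    public Color fogColor = new Color(0.75f, 0.78f, 0.80f);

    // Hypothetical knob: denser fog for lower-quality scans, lighter for clean ones.
    [Range(0f, 0.2f)] public float density = 0.05f;

    void OnEnable()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.ExponentialSquared;
        RenderSettings.fogColor = fogColor;
        RenderSettings.fogDensity = density;
    }
}
```

Enabling a component like this as the user enters each room would let each scan carry its own atmospheric treatment while keeping a consistent overall palette.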


Initially, when collecting audio from my subjects, I asked for clips ranging from 5 to 20 minutes. However, after creating a demo scene and running initial user tests, I discovered that the visuals did not match the audio in the density of information they provide. I also found that in a virtual, interactive setting, users get bored and distracted more easily if there isn’t enough visual content to keep them engaged. Two minutes turned out to be the longest a user could linger in the same 3D-scanned room without losing interest. Based on this feedback, I trimmed each person’s audio clip to under two minutes and made sure to provide different camera perspectives within the same virtual space.


In future editions, I plan to work with a sound designer to refine the narrations and sound design of the project. I will also re-edit or re-scan some of the poorly captured models. Eventually, after incorporating all the elements and content, I will deploy a 3D web version of the project and possibly develop a VR version as well.

Project References

Bevan, C., Green, D., Farmer, H., Rose, M., Cater, K., Stanton Fraser, D., & Brown, H. (2019). “Behind the Curtain of the ‘Ultimate Empathy Machine’: On the Composition of Virtual Reality Nonfiction Experiences.”

Guo, ChunNing Maggie. “Hard Life with Memory: Prison as a Narrative Space in Animated Documentary and Virtual Reality.”

Rodríguez-Fidalgo, María Isabel, and Adriana Paíno-Ambrosio. “Use of Virtual Reality and 360° Video as Narrative Resources in the Documentary Genre: Towards a New Immersive Social Documentary?”

Blassnigg, Martha. “Documentary Film at the Junction Between Art, Politics, and New Technologies.”

“After Solitary” by Cassandra Herrman and Lauren Mucciolo

“Watertight” by Caitlin Robinson

Interactive Website:
