Creative Direction
VR / AR / Mixed Reality
R&D Prototyping
Experiential Installations 
Creative Technology
Engineering & Fabrication


 




NY Times R&D / Mixed Reality (Near)Future

What innovative tools, design patterns, and journalism experiences are possible in a world where Mixed Reality experiences are the norm?
2022


TL;DR

Experimenting with AR storytelling for an audience of millions, and inventing new tools and design patterns to explore the future of mixed reality journalism.



NYT R&D brought us in to investigate how news organizations and journalists could use Mixed Reality in their mission to seek the truth and help people understand the world.

From developing design patterns for mixed reality storytelling to creating practical tools for journalists, this six-month effort combined product development, experimentation, and the design of human-machine interactions that cultivate rich storytelling experiences.


 


Captured MR video of The Times’s photo archive overlaid on the 3D spatial reconstruction of the photo archive space.



Watch and Read More on NYT R&D below:


+ Developing spatial investigation and reconstruction tools for journalists’ information gathering   
+ Exploring design patterns for building mixed reality stories 


For a more detailed breakdown, keep scrolling or follow the links below for a shortcut:


+ Spatial Tools
+ MR Journalism for Open World
+ Design Patterns for MR Stories




TECHNIQUES
Mixed Reality Development
Human-Machine Interface Design
Systems Design
Shader Writing
C#, HTML, JavaScript
CLIENTS 
NY Times R&D



PLATFORMS 
Meta Quest Pro
Microsoft HoloLens
Unity
Node.js







Spatial Investigation and Reconstruction Tools for Journalists


TL;DR

We built a set of networked tools that allowed a reporter in the field to perform real-time spatial reconstruction and investigation of a physical site. 
It allowed journalists to create persistent spatial markers for video, photos, and voice memos overlaid onto the Mixed Reality world.




How?


The Spatial Reconstruction and Investigation Tool (SRST) we developed gave journalists the ability to walk through a physical site and automatically reconstruct a 3D model of it in real time.

The SRST gave journalists a Mixed Reality overlay on top of the world that allowed them to capture video, photos, and voice memos, and to create 3D spatial markers for tagging important details within the physical site. These could be details such as an important object or piece of evidence at a location.

All captured data was recorded and synced to a remote web interface, which allowed an off-site producer or journalist to watch or direct the capture in real time.

The captured content and spatial markers could also be reviewed and exported from the web interface at a later time. Users could navigate through the 3D reconstruction and play back the capture as if watching or scrubbing a video. All content could be exported, highlighted, and viewed within the spatialized 3D site model.

A web interface allows the reconstruction to be viewed live or at a later time. All spatial markers, such as photos, videos, and voice memos, can be reviewed, exported, and viewed at the spatialized location where they were taken.
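To make the marker pipeline concrete, here is a minimal Unity C# sketch of how a captured marker might be represented and pushed to the web interface. The class shape, field names, and endpoint are illustrative assumptions on our part, not the production SRST code:

```csharp
using System;
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// One spatial marker: a world-anchored annotation (photo, video, voice memo,
// or plain tag) captured in the MR overlay and synced to the review server.
[Serializable]
public class SpatialMarker
{
    public string id;
    public string type;        // "photo" | "video" | "voiceMemo" | "tag"
    public Vector3 position;   // world-space position within the reconstruction
    public Quaternion rotation;
    public float timestamp;    // seconds since the capture session started
    public string mediaUrl;    // uploaded media asset, if any
}

public class MarkerSync : MonoBehaviour
{
    // Hypothetical endpoint for the remote web interface.
    const string SyncEndpoint = "https://example.com/api/markers";

    // Called when the journalist drops a marker in the MR overlay.
    public void CaptureMarker(string type, Transform anchor, string mediaUrl)
    {
        var marker = new SpatialMarker
        {
            id = Guid.NewGuid().ToString(),
            type = type,
            position = anchor.position,
            rotation = anchor.rotation,
            timestamp = Time.realtimeSinceStartup,
            mediaUrl = mediaUrl,
        };
        StartCoroutine(Push(marker));
    }

    // POST the marker as JSON so off-site viewers see it appear in real time.
    IEnumerator Push(SpatialMarker marker)
    {
        byte[] body = Encoding.UTF8.GetBytes(JsonUtility.ToJson(marker));
        using (var request = new UnityWebRequest(SyncEndpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(body);
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();
            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogWarning($"Marker sync failed: {request.error}");
        }
    }
}
```

On the receiving end, the Node.js backend would broadcast each incoming marker to connected review clients; that half is omitted here.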




Live Off-site Review / Playback Interface



Live stream of a journalist wearing an MR headset with the SRST toolkit. Off-site viewers can rotate around the live spatial capture as the journalist captures the space.
Playback interface for recorded spatial and investigation data.




Research


Our initial research posed the following questions:


+ What tools could we create to expedite information gathering in breaking news situations?
+ How might we instantly create 3D models of spaces instead of needing to manually model them by hand?
+ Can mixed reality help reporters reconstruct a news event by spatializing found footage?



In collaboration with journalists who regularly report on stories that require visual analysis, we identified scenarios where mixed reality hardware could improve on-the-scene information gathering and promote deeper investigation.   















Designing Mixed Reality Journalism for the Open World


TL;DR

Mixed Reality offers new opportunities for storytelling that extend beyond the reach of screens and into our physical spaces.
We explored how location-based storytelling could allow readers to discover news as they roam the open world.



Curiosity and Exploration


The New York Times's 171-year-old archive contains millions of stories that could annotate the specific spaces readers occupy with rich, relevant reporting.

One of mixed reality's biggest strengths is its ability to sense and understand the world around the user, adding context to their space. What might news look like in this context? How could users engage with content in a way that feels natural yet not overbearing?

Using computer vision and geolocation, we designed a prototype to explore how we might deliver news to readers in a way that rewards curiosity and exploration.

Spatial capture of Washington Square Park overlaid with our mixed reality footage.
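The computer-vision side is too involved for a short excerpt, but the geolocation half can be sketched in a few lines of plain C#. The radius query below uses the standard haversine great-circle formula; the GeoStory type and its fields are hypothetical placeholders, not The Times's data model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical archive entry pinned to a latitude/longitude.
public class GeoStory
{
    public string Headline;
    public double Latitude;
    public double Longitude;
}

public static class NearbyStories
{
    const double EarthRadiusMeters = 6_371_000;

    // Great-circle (haversine) distance between two lat/lon points, in meters.
    public static double DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        double dLat = Radians(lat2 - lat1);
        double dLon = Radians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(Radians(lat1)) * Math.Cos(Radians(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * EarthRadiusMeters * Math.Asin(Math.Sqrt(a));
    }

    static double Radians(double degrees) => degrees * Math.PI / 180.0;

    // Return archive stories within radiusMeters of the reader, nearest first.
    public static IEnumerable<GeoStory> Near(
        IEnumerable<GeoStory> archive, double lat, double lon, double radiusMeters)
    {
        return archive
            .Select(s => (story: s, d: DistanceMeters(lat, lon, s.Latitude, s.Longitude)))
            .Where(x => x.d <= radiusMeters)
            .OrderBy(x => x.d)
            .Select(x => x.story);
    }
}
```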








Research

In the context of intelligent environments, we landed on the following design decisions:

  • Curious Exploration: Encouraging readers to inquire about a space by moving their bodies through it. As they look around their environment, their gaze triggers icons to appear. If they're interested, moving closer reveals even more detail.

  • Discreet Wayfinding: Using subtle UI/UX to draw interest toward specific areas of the space without interrupting the environment or telling the user where to look.

  • Device Tethering: Acknowledging that not all devices can (or should) perform all activities equally, we explored the ability to tether a mobile device to the headset so that articles could be transferred over for ease of reading.

  • Designing to the Platform: Mixed reality is inherently open-ended and we don’t need to force a linear story into an unbounded space. Embracing open world design allows many opportunities for novel discovery of Times journalism that can be consumed later on other platforms.

Wayfinding and article icons lead users to points of interest and geo-located stories.





Spatial Context for Articles


One challenge of mixed reality is something we call "visual saturation": too much content overlaid onto the already stimulating physical world produces a scene that loses readability and fatigues the user.

In the context of a world-scale MR experience, we focused on creating spatialized news articles in their most functional form: a simple icon, a headline, and, at close distance, a short byline to give more context.
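In Unity C# terms, that progressive disclosure might look something like the sketch below; the distance thresholds and component layout are illustrative, not the values used in the prototype:

```csharp
using UnityEngine;

// Progressive disclosure for a spatialized article: icon at a distance,
// headline when the reader approaches, byline only up close.
public class ArticleDisclosure : MonoBehaviour
{
    public GameObject icon;      // always-on minimal glyph
    public GameObject headline;  // revealed at mid range
    public GameObject byline;    // revealed up close

    // Illustrative thresholds in meters, not the prototype's values.
    public float headlineRange = 10f;
    public float bylineRange = 3f;

    Transform reader; // the MR camera stands in for the reader

    void Start()
    {
        reader = Camera.main.transform;
    }

    void Update()
    {
        float d = Vector3.Distance(reader.position, transform.position);
        icon.SetActive(true);
        headline.SetActive(d <= headlineRange);
        byline.SetActive(d <= bylineRange);
    }
}
```

Gating each layer on reader distance keeps the default view to a single glyph, which is what holds visual saturation down.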
 





Device Tethering: From Mixed Reality to Mobile Device



Not all devices can (or should) perform the same activities, and this includes reading news content. 

In an outdoor physical world filled with stimuli and happenings, we found that reading news in Mixed Reality was not ideal. 

Legibility, comfort, and visual awareness were the three aspects we focused on.

With this in mind, we devised a way for users to tether a mobile device to their headset, allowing articles to be sent to their phone for reading.

Using image tracking, the NYTimes R&D logo triggered a "Send to Phone" button; using gaze tracking, a reader could then send the article to their phone for reading.

This method improved legibility and comfort, and ultimately felt like a natural extension of how we already consume and experience content.
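A gaze-dwell trigger like that "Send to Phone" button could be built roughly as in this Unity C# sketch; the dwell time, raycast distance, and event wiring are our own illustrative assumptions:

```csharp
using UnityEngine;
using UnityEngine.Events;

// Gaze-dwell trigger: when the reader's gaze rests on this object
// (e.g. a "Send to Phone" button revealed by image tracking) for
// dwellSeconds, fire the send action. Requires a Collider on this object.
public class GazeDwellButton : MonoBehaviour
{
    public float dwellSeconds = 1.5f;     // illustrative timing
    public UnityEvent onActivated;        // e.g. push the article to the paired phone

    float dwellTimer;
    Transform gazeSource; // headset camera as a stand-in for eye tracking

    void Start()
    {
        gazeSource = Camera.main.transform;
    }

    void Update()
    {
        // Cast a ray along the headset's forward vector and check
        // whether it lands on this button.
        bool gazed = Physics.Raycast(gazeSource.position, gazeSource.forward,
                                     out RaycastHit hit, 5f)
                     && hit.transform == transform;

        // Accumulate dwell time while gazed at; reset the moment gaze leaves.
        dwellTimer = gazed ? dwellTimer + Time.deltaTime : 0f;

        if (dwellTimer >= dwellSeconds)
        {
            dwellTimer = 0f;
            onActivated.Invoke(); // e.g. POST the article URL to the tethered device
        }
    }
}
```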


“Everything is becoming science fiction. From the margins of an almost invisible literature has sprung the intact reality of the (21st) century.” - J.G. Ballard