[Cover image: DALL·E rendering of a cybernetic person in an augmented reality headset]

Life Documentation

Software systems for effective life documentation

Overview 

Life is finite, time is unidirectional, and humans itch to document everything: to freeze time, to immortalize moments and, by extension, versions of our past selves. The emergence and eventual mass adoption of AR technologies will create an explosion of life documentation data. Yet effective life documentation requires processing and filtering this sea of data to surface the most relevant moments for users.

The Life Documentation project is a collection of software modules that enables both automated and human-in-the-loop processing of life documentation data.

Goals


  1. Create software algorithms to select significant moments from life documentation data using semantic understanding.

  2. Develop human-in-the-loop feedback systems to enable deep/reinforcement learning for data selection (see the sketch after this list).

  3. Use and iteratively improve the system in our own lives.
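As a concrete illustration of Goal 2, here is a minimal sketch of how user feedback on suggested moments could be logged as a training signal for a learned selection model. Everything here (the Moment and FeedbackStore names, the fields, the example values) is a hypothetical illustration, not the actual system.

```python
from dataclasses import dataclass, field

@dataclass
class Moment:
    clip_id: str
    caption: str
    score: float  # model's predicted significance


@dataclass
class FeedbackStore:
    """Accumulates (moment, label) pairs for later supervised or RL fine-tuning."""
    examples: list = field(default_factory=list)

    def record(self, moment: Moment, kept: bool) -> None:
        # A user keeping a suggested moment is a positive label; dismissing it
        # is a negative one. These pairs become the reward/training signal.
        self.examples.append({
            "clip_id": moment.clip_id,
            "caption": moment.caption,
            "model_score": moment.score,
            "label": 1.0 if kept else 0.0,
        })


store = FeedbackStore()
store.record(Moment("clip_042", "Man throws ball", score=0.87), kept=True)
```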

Outcomes

  1. We are developing a web application that allows users to search and recall moments from their past. Users can input any search query, including names of people, places, objects, and actions, and our system performs semantic search to return relevant video clips from recorded content (see the first sketch after this list).

  2. We are developing a semantic video captioning system that labels the content of videos at various timescales, e.g. {0-10s: “Man plays football”, 2-5s: “Man throws ball”, 6-7s: “Ball flies through air”,…} (see the second sketch after this list).
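To make Outcome 1 concrete, here is a minimal sketch of the semantic search step, assuming each clip already has a caption; the embedding model, the tiny in-memory index, and the search helper are illustrative assumptions, not our production setup.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy index: clip filename -> caption (in practice these live in the database).
clips = {
    "clip_001.mp4": "Man plays football in the park",
    "clip_002.mp4": "Birthday dinner with friends at a restaurant",
    "clip_003.mp4": "Hiking along a coastal trail at sunset",
}

caption_embeddings = model.encode(list(clips.values()), convert_to_tensor=True)

def search(query: str, top_k: int = 2):
    """Return the clips whose captions are semantically closest to the query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, caption_embeddings)[0]
    ranked = scores.argsort(descending=True)[:top_k].tolist()
    names = list(clips.keys())
    return [(names[i], float(scores[i])) for i in ranked]

print(search("dinner with friends"))  # clip_002 should rank first
```

For Outcome 2, the multi-timescale captions can be represented as a list of time-stamped intervals mirroring the example above; the captions_at helper is hypothetical.

```python
captions = [
    {"start": 0.0, "end": 10.0, "text": "Man plays football"},
    {"start": 2.0, "end": 5.0,  "text": "Man throws ball"},
    {"start": 6.0, "end": 7.0,  "text": "Ball flies through air"},
]

def captions_at(t: float):
    """All captions whose interval covers time t, coarsest (longest) first."""
    hits = [c for c in captions if c["start"] <= t <= c["end"]]
    return sorted(hits, key=lambda c: c["end"] - c["start"], reverse=True)

print([c["text"] for c in captions_at(3.0)])
# ['Man plays football', 'Man throws ball']
```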

How We Built It

Our system consists of a web application through which users can perform search queries over their documented memory. This web application queries a cloud server that pulls data from a Firebase database and performs the semantic captioning and NLP computations needed to determine which clips are relevant to the search query. Specifically, searches are performed by combining facial recognition, which matches names in the query, with semantic similarity, which matches query phrases against video captions.
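The sketch below illustrates this combined scoring, under the simplifying assumptions that facial recognition has already tagged each clip with the names it detected and that captions are embedded as in the earlier search sketch; the name-match weighting and field names are illustrative choices, not the exact production logic.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy index: each clip carries a caption and the names found by face recognition.
clip_index = [
    {"clip": "clip_010.mp4", "caption": "Two friends play chess", "faces": ["Alice", "Bob"]},
    {"clip": "clip_011.mp4", "caption": "Man cooks pasta in a kitchen", "faces": ["Bob"]},
]

def score_clips(query: str, name_weight: float = 0.5):
    """Rank clips by caption similarity, boosted when a tagged face is named in the query."""
    q_emb = model.encode(query, convert_to_tensor=True)
    results = []
    for entry in clip_index:
        c_emb = model.encode(entry["caption"], convert_to_tensor=True)
        similarity = float(util.cos_sim(q_emb, c_emb))
        # Boost clips whose face tags match a name appearing in the query.
        name_hit = any(name.lower() in query.lower() for name in entry["faces"])
        results.append((entry["clip"], similarity + (name_weight if name_hit else 0.0)))
    return sorted(results, key=lambda r: r[1], reverse=True)

print(score_clips("Alice playing chess"))  # clip_010 should rank first
```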


Future Plans

We plan to conduct self-experimentation with our system to gain insight into its impact on the way we think, remember, experience, and behave. Specifically, we plan to integrate the system with the video feed from AR glasses that we wear daily. Before we do so, we will need to acquire a pair of sleek AR glasses and consider the ethical implications of recording everyday life.

Philosophical Considerations


As with any technology, questions of artificial vs. natural ways of existing arise:

  • Transience vs. permanence: Is there inherent value in the transience of moments and states of being?

  • Meta-reflection vs. presence: Is there a tradeoff between being present in a moment and thinking about the moment in meta-reflection? If so, what is the appropriate balance to strike between the two?

  • Role of documentation: Is it a tool for extending memory, augmenting experiences, or challenging perception?
