The UX of Time

TEAM: Daniela Navaes, Constance Ip, Nathassha Di Pasquale, Deepika Grover, Wan Li

For our fourth project of the term, we received the brief “The UX of Time: Design an experience that alters how time is perceived”.

We studied the concept of timekeepers of many kinds (biological, psychological, social, etc.) for different living beings. After that, we did three timeline exercises. First we individually made a timeline of our morning up to that moment; then, a timeline of our month; and finally, in a group, a timeline of our last year.

On the next day we had another workshop on timekeepers. In our groups, we were asked to quickly design a timekeeper and present it to the class. Our group did an experiment with water and tracing paper.

Testing the behaviour of tracing paper with water and humidity.



Territory Mapping
The first research method used on this project was the territory map. We listed our thoughts on the concept of time and made a map with all of them. Then we voted for our favourite concept in each category, and finally voted for one of the five. The concept chosen was “time as currency”.

territory map

After discussing, we strayed from the original concept and arrived at the relationship between time, life and death, and we designed some experiments with the concepts of “deadline” and “countdown”. The first is usually associated with stress and anxiety, while the latter usually consists of waiting for a happy occasion to happen. We wanted to test those assumptions.

After reflection, we came up with a new experiment, a board game, returning to the original concept of time and money.

Creating the concept and testing it among ourselves before testing with others


The board game consists of giving the participant a certain amount of fake money and watching how they behave when the currency represents money and when it represents time.

The game consists of three rounds of spending money/time. First, participants spend money; second, they flip the currency over and now it represents time, plus there is a time constraint (walls closing in on them) representing the end of the turn. Finally, there is only one unit of time left, and the time constraint represents life ending.
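The round structure above could be logged for analysis roughly as follows. This is a hypothetical sketch: the round names, spending categories and helper names are my own illustration, not the team's actual recording method.

```python
# Hypothetical sketch of logging one participant's board-game session.
# Each round maps to the list of things the participant spent units on.
from collections import Counter

ROUNDS = ["money", "time", "last_unit_of_time"]

def log_session(choices_per_round):
    """Tally what the participant spent each round's units on."""
    return {r: Counter(choices_per_round.get(r, [])) for r in ROUNDS}

# Example session (invented data, for illustration only)
session = log_session({
    "money": ["investing", "investing", "travel"],
    "time": ["family", "friends", "hobbies"],
    "last_unit_of_time": ["family"],
})
```

Comparing the tallies between the "money" and "time" rounds is what lets a pattern like the one we observed show up across participants.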

We tested it with nine people, and the results showed that, for the majority, when money was the currency they spent it on investments to get more money, but when it was time, they spent it on activities with other people.

Real-time board comparison between rounds

A sample of some of our results from the board game


Some quotes from the participants

While doing that, we also sent some people a digital booklet with reflective activities, to understand what they think of as productive time and what they consider valuable.

A sample of some of our results from the digital booklet

The results showed that although productivity was associated with work and study activities, people said that what they really valued was time spent with family and loved ones.

Experience Sampling Experiment / Timekeeper

For our experience sampling method, we decided to change the topic to “the value of time”. We divided the concept of value of time into four parts:

  • History of currencies and value attribution;
  • History of measurement of time;
  • Activities and objects associated with time and value;
  • The perceived value of time for people.

My individual brainstorming on the perceived value of time


Excerpt from article about mobile phone behaviour

For our timekeeper, we wanted to measure how consistent people’s everyday behaviour was with the activities they considered most valuable.

It was a small bag, into which participants would put a coloured ball each time they used their phone. With that, we wanted to measure how much of their phone time was spent on conversations with people.


The timekeeper

Participants would put a pink ball inside the bag each time they picked up the phone to talk with someone, and a white ball for anything else. They would do this for a period of three hours after dinner, and send us a picture of the results.
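The tally behind the bag of balls can be sketched in a few lines. This is only an illustration of the counting logic; the function name and example data are assumptions, not part of our study.

```python
# Minimal sketch of the bag-of-balls tally:
# "pink" = picked up the phone to talk with someone, "white" = anything else.

def summarise_bag(balls):
    """Return the counts and the share of pickups that were conversations."""
    pink = balls.count("pink")
    white = balls.count("white")
    total = pink + white
    return {
        "pink": pink,
        "white": white,
        "conversation_share": pink / total if total else 0.0,
    }

# e.g. one participant's (invented) three hours after dinner
result = summarise_bag(["pink", "white", "white", "pink", "white"])
```

The `conversation_share` ratio is the figure we were ultimately after: how much of someone's phone time goes to the activity they say they value.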

Some of the results


After we presented our research results, the feedback was that we had a lot of data but lacked direction, so the next step was to focus on the design.

The Design

After feedback on the presentation, we decided to work on the concept of a Value Tracker for our design outcome. It consists of a device that measures the perception of time and the feelings associated with it through visual cues like image saturation and speed of movement. To create the concept, we did some brainstorming and storyboards.

The device/system was envisioned to exist in a distant future, and it would allow users to tune into another person’s subjective perception of their time, and see the world “coloured” through their eyes.

The experience would not give any information about the user, or what they’re seeing; it would simply allow you to see the world coloured with their own subjective interpretation of what they’re feeling. If you can see how someone else is experiencing their time in the same situation as you, could it make you reflect upon how you’re valuing your own time?

We thought about which technologies could enable that (glasses, for example), but decided to focus not on the technology, but on the experience.

While designing it, we also considered a few issues, such as:
– Could it be damaging for people who suffer from low self-esteem or mental health issues?
– It should be easy to turn on and off.
– Privacy: users have to actively allow others to tune into their perception, rather than it being activated by default.

Final Presentation

In the feedback, people didn’t quite understand the concept, and we were told that we should have walked them through it before the role-play. Also, that designing an actual device would have helped the narrative.


The concept of time is very abstract and difficult to work with, because it is completely based on the perception of the individual experiencing it, their feelings and sensations. Our chosen topic, the value of time, makes it even more difficult, because it adds a layer of psychological complexity: the interpretation of one’s own perception. Nonetheless, I was satisfied with the final result, because the abstract nature of the concept matches the complexity of the subject. We ended up with an experience that is non-intrusive (albeit not perfect and with limitations) and, more importantly, one that not only alters one’s perception of time, but provokes reflection on the value one gives to one’s own time.


The UX of a Conversation

TEAM: Daniela Navaes | Patrick Bull | Ivy Wong | Reina Yuan

For the second and third weeks, the brief was: the user experience of a conversation. What constitutes a conversation? What are the entities and elements involved? Is it possible to have a conversation between a human and a non-human entity?

The goal was to design a conversation between a human and an artificially intelligent system.

Project Direction

We decided to explore the design of a non-verbal conversation. The UX/UI community is very excited about voice-based assistants, since being able to speak to a machine by voice alone is a very attractive idea. But what about those who cannot hear and/or speak?

The deaf community is somewhat invisible in society. Since deaf people are not as easy to spot as, for example, blind people, they are also easily forgotten.

For that reason, we asked ourselves: what if machines could learn to read and interpret body language and sign language to anticipate someone’s needs? We read several articles on the subject and found that many technologies enabling this are being developed and tested.

A Highly Proactive Smart Home  

We decided to explore the concept of how a smart home could interact with a person who is deaf/mute. The house would be proactive, suggesting and making changes in the environment according to what it interpreted from its human occupant.

Research Methods

We used two distinct research methods: AEIOU and Speed Dating.

The AEIOU framework consists of listing the following elements for the research subject: Activities, Environment, Interactions, Objects and Users.

As for the Speed Dating method, we drew storyboards for a series of interactions between the user and our system, and showed them to our colleagues in order to get feedback on the interaction: criticism, suggestions for improvement, feedback on language, realism, level of relatability, etc.



To get a real user perspective on the experience of deafness, we contacted a few organisations, of which only DeafPlus got back to us and arranged a meeting for the following week. Although it took place during the second week of the project (the design week), we took valuable insights from it that helped with the final outcome.

Research Findings
After presenting our findings, we got feedback that even though we did a lot of research about interaction between deaf people and AI, our results were vague and largely based on assumptions rather than real data.

Final Design

We then deliberated and decided to change our direction into designing a (still) wordless conversation between someone who has trouble sleeping and their smart bedroom.

For that, we ran short interviews with college students about their sleeping habits and came up with the final concept: an app that would connect with other apps in order to act as a sleep assistant. The conversation has two phases: first between the user and their phone, through a chatbot that defines their preferences, and later through gestures only.


The biggest learning outcome from this project is that it’s very easy to make assumptions about how someone with a disability might experience the world, especially when we are fully able-bodied. Although the intention is good, the road to truly empathetic design consists of much more than reading articles online and doing interviews. We also learned that the deaf community sees itself more as a culture than as a group of disabled people. As for the final design outcome, I was satisfied with it, especially considering it was all done in two days (new research and design).

Is Artificial Intelligence really smart?

In the present era, Artificial Intelligence (AI) is present at several moments of our day. Take the self-checkout counters at the supermarket, for example. They’re fast, convenient, practical. But they’re also a perfect example of why it’s likely to be a long time before we can fully rely on them for shopping. Although the experience may run smoothly most of the time, the machines often make mistakes, and there is always a human on patrol to correct them. I dare to assume that the people surveilling the automated checkout machines are there not only because of the company’s legal requirement to have a certain number of employees. Maybe supermarket employees are always watching the machines because they expect them to make mistakes.

The above scenario is a brief picture of our current relationship with AI: we know it exists and roughly what it can do, but we don’t expect it to solve all our problems. In fact, we expect it to create problems at some point, and that’s why we know we need to stay vigilant.

We don’t feel safe enough yet to give machines full control. One might argue that this is a healthy relationship, although a one-sided one. One party knows that the other has severe limitations, but still chooses to engage, not expecting it to be the bearer of their happiness. The other can only read zeroes and ones. And that is probably ok. Relationships are never 100% equal.

Of course, there are far more intelligent AI systems. Voice-based assistants like Alexa, for example, can recognise human speech, overlooking accents (sometimes) and responding appropriately to many commands. That requires an enormous amount of intelligence: quickly going through a massive database, interpreting the query and giving back an appropriate response in seconds (or an inappropriate one, like a laugh or a cheeky joke).

But what Alexa fails to understand is human complexity, nuance of speech and implied meaning. It can give me the weather report if I say “Alexa, tell me the weather”. But I wonder whether it would understand if I said “Alexa, should I wear a hat outside today?”. Is Alexa capable of understanding my question as a desire to know whether it’s going to be sunny and hot enough to make me want to wear a hat?


I did a little experiment with Siri. Yes, it interpreted my question as a desire to know the weather, but it didn’t really answer my question. It is sunny, but not hot enough to wear a hat, as one will not get sunburnt at 10 degrees. One might say that it’s cold enough to wear a hoodie or a beanie, but that’s not really what I asked.

Here’s what I got after asking Siri if I should wear a beanie today… 


This is another great example of how important it is that we recognise the limitations of AI and keep our expectations in check. First of all, it didn’t understand “beanie”, but “bikini”. Second, I was shocked that on my second try Siri pointed me to articles about body insecurity, which is something I wasn’t even thinking about. Not only did it make a mistake in recognising my words, it suggested content that could be damaging to one’s self-esteem and body image. I’m aware, however, that this was not a conscious choice by Siri. It simply searched the question on Google and showed me the most popular results.

Also, do we really want to establish that kind of codependent relationship with our voice-based assistants? Is it healthy to expect them to become more and more intelligent in order to fulfil our every need? Isn’t it better to ask those kinds of questions to a friend? Are we really that lonely?

The UX of a Sensory Experience

For our first project, we were assigned to design a sensory experience, in order to think about questions such as: what is an experience, and how do we apprehend the world through our senses?

Field Exploration Research

One of the research methods for this project was a field exploration day, in which we went for a stroll to specific locations in London in order to record sensory maps.

We were divided into two groups. My group met in the morning at Royal Victoria DLR station with Prof. Alistair, to visit the Emirates Air Line. That would be our first sensory exploration, but first the professor asked us to gather sensory data from the station itself. My exploration of the station consisted of walking through it while making a high-fidelity point-of-view video recording with high sound definition, in order to simulate the personal experience and identify the most characteristic sounds of each specific location.

Emirates Air Line

Then we boarded the Emirates Air Line, which was really pleasant, apart from the consistently annoying institutional video that played throughout the flight. It was a lovely experience nonetheless, and this time I chose to map it by drawing my impressions on paper, using form and a few key sentences as indicators of my perceptions, sensations and feelings.

Source: emiratesairlines.co.uk
Doodles of personal sensory map of the experience of flying the Emirates Air Line. Source: personal collection

Greenwich Market

Then we headed to Greenwich Market for lunch and to meet Group A. It was an interesting mix of colours from the many stalls of handmade art, and smells from different world cuisines.

Collage of Greenwich Market. Source: personal collection

Queen Elizabeth I’s “Mask of Youth” (The Queen’s House Greenwich)

After that, we had the unique experience of being the first (after the press) to see the newest installation at the Queen’s House in Greenwich, titled “The Mask of Youth” by Mat Collishaw: a 3D model of Queen Elizabeth I’s face, animated and programmed to blink, move her eyes and make discreet facial expressions.


The Greenwich Foot Tunnel

Then we explored the Greenwich Foot Tunnel. I decided to explore the environment using sound again, but this time sound only. So I went blindfolded, guided by one of my classmates (Timmy). As he walked through the tunnel, I kept my hand on his shoulder and audio-recorded the experience. Unfortunately, my phone crashed and the audio was not saved.

Source: greenwich-guide.org.uk

Overall sensory impressions

“Perceiving and imagining an object in a conscious state is the basis of human cognitive activity. As a multi sensory process, this never occurs with the participation of only one modality.” (Haverkamp, 2012)

Based on that experience of the tunnel, I can say that not only was my perception of space affected by being blindfolded, but also my perception of time.

The experience of the Queen’s death mask was another one with a big impact on my perception of what an experience is. I could experience my brain transforming feelings into sensory stimuli that didn’t physically exist: for example, I smelled formalin. Did that happen because it was triggered by the memory of previous experiences, in which the sight of something morbid or related to death was accompanied by the smell of formalin?

That said, the real outcome of all those experiences is that I was left with more questions than answers. Is it possible to separate perception from feelings? Does every sensation come with an emotional response? Is it possible to focus on only one sense, without the mind interfering to “compensate” for the “missing” one(s)? What is the role of memory and imagination in sensing?


For the presentation of the results of the exploration, we were divided into pairs. Shen and I designed a three-part presentation.

1 – Interoception
We started our presentation with a very quick breathing exercise, for two main reasons: to oxygenate our colleagues’ brains, and to introduce the concept of interoception (Haverkamp, 2012), the sense of being aware of your internal organs and overall body condition.

The exercise is very simple and is based on yogic breathing exercises called pranayamas (I am a trained yoga and meditation instructor). First, I asked people to close their eyes and get a sense of how their bodies and minds felt at that moment. Then I asked them to take a few deep breaths in and out; then to breathe in slowly through the nose, counting to 4; then to hold their breath, counting to 7 (or until uncomfortable); then to breathe out slowly through the mouth, counting to 8. This exercise is empirically known to reduce stress levels and help with sleep. Afterwards, I asked people to observe their overall body sensations and thought patterns again. Everyone who spoke up reported feeling much calmer and more relaxed than before.
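The 4-7-8 cycle above is simple enough to sketch as a timed prompt loop. The phase lengths come from the exercise as described; the function name, prompts and default number of cycles are my own illustrative choices.

```python
# Sketch of the 4-7-8 breathing exercise as a timed prompt loop.
import time

PHASES = [
    ("breathe in slowly through the nose", 4),
    ("hold your breath", 7),
    ("breathe out slowly through the mouth", 8),
]

def run_breathing(cycles=3, sleep=time.sleep):
    """Print each phase prompt, then wait for that phase's duration."""
    for _ in range(cycles):
        for prompt, seconds in PHASES:
            print(f"{prompt} ({seconds}s)")
            sleep(seconds)

# run_breathing()  # guides three full 4-7-8 cycles
```

Passing `sleep` as a parameter keeps the timing logic testable without actually waiting 19 seconds per cycle.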

2 – Experiencing sound and sight… isolated
The second part of our presentation involved showing a video. We wanted to present the contrast of isolating our two most used senses: sight and hearing. The video represents the tube ride of someone who is visually impaired. While they sit in the carriage surrounded by the Bakerloo Line’s loud noises, they have glimpses of imagination about other, unrelated experiences. In the video these are represented by very short soundless clips, and although the clips are visual, their ethereal nature is used to represent emotional states: for example, high-speed footage of people walking represents mental agitation, while slow-motion footage symbolises happy memories.

3 – Sound and imagination. Can you guess what sound this is?
In this part, Shen ran some sensory experiments with the audience, playing sounds without visuals and asking our colleagues to guess what they were. This was to see how much of a part imagination plays in a sensory experience when one or more senses aren’t being engaged.


It was an interesting and unexpected way to start the course. Asking myself what constitutes an experience, and how much each of my bodily entities – senses, body and mind – plays a part in it, was a good way to lay a foundation for my understanding of my role as a UX designer. Our bodies are the vehicle through which we experience the world as humans, and even though we are becoming more involved with digital experiences and excited about the latest technologies, we should constantly remind ourselves of what it is like to be a human in its rawest form.

Also, this first project was a great opportunity to reflect on what the experience of those who are sensory impaired might be like, and on how the senses work together to create an impression of the world.




It’s a non-binary world

The fact that 55.1% of the world’s population has access to the internet (Internet World Stats, 2018) and can freely express themselves has changed the experience of being human. Along with interconnectivity, we have been witnessing an increasing blurriness of the boundaries between genders, nationalities and races, to the point of them becoming practically non-existent. Are we more diverse now than ever, or were we always, but it’s now easier to see? The more connected we are, the more we can see our own diversity and complexity.

Along with this realisation, however, there should be a big disclaimer. This is absolutely not to say that the experiences of people identified as belonging to minority groups don’t count and should be dismissed: quite the opposite. More than ever, it is necessary to listen to and acknowledge what oppression feels like, through the direct speech of those who suffer it.

We now have first-hand access to the stories of people whom, in the past, we could only relate to through movies, songs and books. The difference is that, most of the time, those behind such works are, ironically, representatives of the majority: white, cis, northern-hemisphere-born males. Even today, works that intend to tell the story of, for example, a black woman and her experiences are mainly written by white men, because such women still don’t have a strong enough voice and place in our society.

That raises a big problem of inaccuracy of representation. Even I, a white woman, can’t possibly infer how a black woman experiences something, so how could a white man?

The problem with automation

So far, algorithms have shown themselves to be utterly ignorant when it comes to recognising the colourful spectrum of the experience of being human. As is quite clear in her article, Costanza-Chock shows how embarrassing and humiliating it can be for a transgender woman to go through airport security control. The AI can’t interpret the fact that the passenger’s registered gender and the body scanner output don’t quite match, so her existence is “flagged” by the machine as a threat. She then has to go through a very humiliating process of questioning and body search. That is just one example.

“At each stage of this interaction, airport security technology, databases, algorithms, risk assessment, and practices are all designed based on the assumption that there are only two genders, and that gender presentation will conform with so-called ‘biological sex.’ Anyone whose body doesn’t fall within an acceptable range of ‘deviance’ from a normative binary body type is flagged as ‘risky’ and subject to a heightened and disproportionate burden of the harms.” (Costanza-Chock, 2018)

The questions that arise from this subject are many. First of all, why do these algorithms exist, and whom do they benefit? Is it for practicality and making life easier? If so, whose life is becoming easier?

Another interesting concept presented in the article is Crenshaw’s concept of intersectionality, which basically explains how social injustice is often the correlational product of two or more factors that underlie someone’s outer identity. It means, for example, that black women suffer not only from sexism but also from racism, which merge to shape the experience of discrimination faced by a black woman more intensely than if she faced sexism alone.

As designers and communicators of ideas, we are largely responsible, directly or indirectly, for shaping society’s views. So it’s about time we paid attention to these concepts, especially if we are not affected by them. And not only to the concepts: we must listen to how different people experience the world, if we want to design smoother experiences for everyone.

At this point I ask myself whose lives are becoming easier with the design of algorithms that recognise and classify humans, for whatever purpose. Designers’? Companies’? I can certainly say there are a lot of people who are not happy with their experiences.
I wonder if it is even possible to create algorithms that can recognise human complexity. And do they really need to exist? Are we in such a rush that we cannot even pay attention to the people around us? How can we move forward with the development of AI without running people over with automation?



Costanza-Chock, S. (2018). Design Justice, A.I., and Escape from the Matrix of Domination. Journal of Design and Science [Online]. Available at: [Accessed 30 October 2018].

Internet World Stats (2018). Internet Usage Statistics: The Internet Big Picture, World Internet Users and 2018 Population Stats. [Online] Available at: [Accessed 29 October 2018].

