Milestone 6: Iterating

UI design iteration

To be delightful and useful, glmpse must be simple, clean, and actionable. Casual, low-friction video communication only works if the app stays out of your way: requesting, receiving, and sharing 4-second videos should feel seamless and effortless.
For any given user, the glmpse app is really just a wrapper around glmpse video requests (incoming and outgoing) and a way to initiate, respond to, and view them. The challenge is to organize and display the requests so that three things are immediately clear about each one:

  1. Who the request is with
  2. Whether it is incoming (they asked you) or outgoing (you asked them)
  3. Where it stands -- has it been answered, and has the response been seen?
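
To keep the discussion below concrete, here is a minimal sketch of the state a request would need to carry to answer those three questions. It's written in Swift purely for illustration; the type and property names are hypothetical, not taken from our actual code.

    import Foundation

    // Illustrative sketch only: names are ours, not the real glmpse models.
    enum RequestDirection {
        case incoming   // a friend asked me for a glmpse
        case outgoing   // I asked a friend for a glmpse
    }

    enum RequestStatus {
        case awaitingResponse   // no 4-second video recorded yet
        case answered           // a video has been attached to the request
        case viewed             // the requester has watched the video
    }

    struct GlmpseRequest {
        let otherUser: String           // who the interaction is with
        let direction: RequestDirection // incoming or outgoing
        var status: RequestStatus       // where the request stands right now
        var video: URL?                 // the 4-second clip, once it exists
        let createdAt: Date
    }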

Original Prototype

We started with a simple design that segregates requests by their kind and status:

Based on user testing, we identified several deficiencies with this design. The most serious is that the home page doesn't give you anything to do or see: users have to navigate to a sub-page to see any activity, so nothing is immediately actionable.

Home page
Each sub-page is a list like this.

UI prototypes

We prototyped a few different ways of organizing and presenting content using mock-ups and Flinto (to simulate the interactivity), then tested the prototypes with test users to identify what was working and what wasn't.

Unified Feed

In this design, the home screen is a feed of all activity -- new requests from friends, open requests you've sent, glmpses you've received, and glmpses you've sent. Check out the Flinto interaction prototype here.

Activity Feed by Friend

In this design, the home screen is a list of the users you're currently interacting with, each shown with the status of your interaction with them. Clicking on a user leads to a history of your activity with that user. Check out the Flinto interaction prototype here.

Activity Feed by Incoming/Outgoing

In this design, the home screen has two tabs: incoming and outgoing. The incoming tab holds new requests sent to you and video responses to requests you made; the outgoing tab holds requests you've initiated and glmpses you've sent. Check out the Flinto interaction prototype here.

We found that the unified feed was the most successful organization of content. It required the least navigation through the app to accomplish a user's goal, immediately surfaced the content that was actionable or required attention, and gave a clear overview of all current activity -- both the requests the user had received and the status of requests the user had initiated.
The downside of the design is that it takes some learning, because it is noisy. With all content on the same page, each individual item needs several indicators: who is this interaction with? Did they request a glmpse or did I? If I received the request, have I seen and responded to it? If I initiated the request, have I gotten a response? It took test users a while to figure out what all of our indicators meant.
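
To illustrate why the indicators added up, here is a hedged sketch, reusing the hypothetical GlmpseRequest type from above, of the per-item logic a unified feed has to communicate. Every branch is another state a new user has to learn to recognize; the label text is invented for illustration.

    // Hypothetical helper: maps a feed item to the short label shown beside it.
    func indicatorLabel(for request: GlmpseRequest) -> String {
        switch (request.direction, request.status) {
        case (.incoming, .awaitingResponse):
            return "\(request.otherUser) wants a glmpse -- record a reply"
        case (.incoming, .answered), (.incoming, .viewed):
            return "You sent \(request.otherUser) a glmpse"
        case (.outgoing, .awaitingResponse):
            return "Waiting on \(request.otherUser)"
        case (.outgoing, .answered):
            return "New glmpse from \(request.otherUser) -- tap to watch"
        case (.outgoing, .viewed):
            return "You watched \(request.otherUser)'s glmpse"
        }
    }
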
In an effort to simplify the design, we iterated on the unified feed prototype:

See the Flinto prototype here.

We were happy with the results of this UI iteration and decided to implement it. Here's a screenshot of the current home screen of our app:

Functional iteration

Text Prompts

A major feature we added was the ability to send text prompts with requests. This was in response to two related problems:

  1. Some users felt that the quality of the glmpses they received was low, or that they did not receive the kind of glmpse they wanted.
  2. Some users struggled to decide what to record when responding to a request.
Both are issues inherent to pull-oriented video communication. When someone pushes a video, they have already decided it is worth watching, which arguably raises the chance of that video being interesting. When you request a glmpse, there's a good chance the other person isn't doing anything particularly interesting at that moment, so he/she has to get creative -- and creativity can be hard. Text prompts give the requester more control over what they receive. For example, if I want to know where someone is, I can ask "where are you?" and hopefully not receive a glmpse of just a face. Meanwhile, the provider gets some guidance: depending on the prompt, he/she doesn't have to think as hard, and the requester gets what he/she wants. The prompt also allows for more of a conversation between users, since a small amount of specific information can be passed along, and as a constraint it can push users who send unsatisfying glmpses by default to be more creative.
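
In terms of data, the change is small. A hedged sketch, again building on the hypothetical types above, amounts to an optional text field attached to the request:

    // Hypothetical extension of the earlier sketch: a request may now carry an
    // optional text prompt such as "where are you?".
    struct PromptedGlmpseRequest {
        var base: GlmpseRequest
        var prompt: String?   // nil when the requester didn't attach a prompt

        // Guidance shown on the recording screen when responding.
        var recordingHint: String {
            prompt ?? "Show \(base.otherUser) four seconds of whatever you're doing"
        }
    }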

Friends

While the app is still small and in user testing, we want to keep it open and avoid limiting functionality. Our mentality is to see how users want to use the app and then figure out how best to facilitate that. With that in mind, we didn't want to restrict who users could request glmpses from. We did, however, want a better way to organize what used to be one long list of all users. On the page where you request a glmpse, we now have a friends list on top of the list of all users.
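
A rough sketch of that sectioning, using hypothetical User and friend-list types rather than our real models:

    // Illustrative only: split the request screen into a "Friends" section
    // on top, followed by the full list of all users.
    struct User {
        let id: String
        let name: String
    }

    func requestScreenSections(allUsers: [User],
                               friendIDs: Set<String>) -> [(title: String, users: [User])] {
        let friends = allUsers
            .filter { friendIDs.contains($0.id) }
            .sorted { $0.name < $1.name }
        let everyone = allUsers.sorted { $0.name < $1.name }
        return [("Friends", friends), ("All users", everyone)]
    }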

Friends List

Adding Friends

Continuing Iterations: What's Coming...

With our most recent iteration out for only a few hours, we are already keen to collect feedback on it. The most consistent feedback we've received over the last few days involves a few feature additions: