During my time working as a User Experience Designer for COBI I was responsible for a variety of tasks. My duties ranged from research (organising, conducting and evaluating user tests and surveys) and the creation of flow charts, screen flows and wireframes to prototyping, and most of all concept development and documentation for new features and optimisations.
As an example of my work at COBI, I give an insight into the process of creating a concept for synchronised feedback and notifications for the whole system.
Research & Fieldwork
When we started working on possible feedback and notifications for the system as a whole, the basic hub (the system's core element) states and events were of course already defined. As COBI is a new connected system for bikes and not yet comparable to other products on the market, the question was whether we would mostly need to rely on assumptions to create helpful and satisfying feedback, or whether we could also gain insights into possible needs from our target groups.
Research mostly focused on different types of feedback and the products, systems and environments they relate to, with a stronger focus on means of transportation and navigation apps. In interviews with cyclists we asked whether they had already heard of COBI and tried to find out whether they had expectations of the system's behaviour, including its feedback.
Some leading questions during that phase:
- What kind of systems are comparable?
- Can analogies be created?
- In which surroundings and systems are people used to which feedback and notifications?
- What kind of behaviour is expected from COBI?
- Are people missing feedback and notifications in the field of transportation?
Ideation & Concept Development
During the research and fieldwork phase we found that, although COBI is a system for bicycles, a connection to automotive systems can be made. Analogies could thus be created that were, among other things, used to build user stories and scenarios.
In several rounds of brainstorming and collaborative sketching, feedback types and priorities were defined. It was important to work with people from all involved teams from the beginning (design, concept, software and embedded development) to minimise adjustment processes and possible rework later.
As COBI can be used with and without a smartphone, it was really important when working on the ride-lifecycle feedback and notifications to keep in mind that all solutions must work regardless of whether the system is used with a phone or in “stand-alone” mode. Working together on solution ideas helped us narrow things down to a concept that respects all known possible errors and leaves out unnecessary elements for both use cases.
Once a basic concept was developed, the details needed to be defined. To get a better impression of how the feedback and notifications would look and feel on the hub’s status LED, front light, rear light and thumb controller, we worked on video prototypes together with our lead visual designer. These were also the basis for defining timings and, with some more technical input, for creating a well-conceived feedback concept with all necessary information, ready to be implemented.
When used with a phone, several different perceptible solutions were possible depending on the hub state, on whether the bike is standing or moving, and on the current app screen; these needed to be sketched, tried out and synchronised as well. Starting with wireframes, screen and interaction flows, and moving on to testing everything, it was important to always imagine scenarios as a whole, with all possible visual and auditory influences, so as not to get too loud in either sense. Especially while riding, safety plays a big role and unnecessary distractions should be avoided.
Some leading questions during that phase:
- What feedback can be given in general?
- Which components should be included?
- Which states and events need to be communicated to the user?
- How should feedback and notifications look for the respective states and events?
- Are there differences to be made depending on the use case?
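One way to picture the outcome of these questions is a simple mapping from states and events to feedback patterns per component, filtered by use case. The sketch below is purely illustrative: all event names, components and patterns are hypothetical and not COBI's actual specification; it only demonstrates the idea that the same event triggers different channel combinations with and without a phone.

```python
# Hypothetical sketch: mapping system events to per-component feedback.
# Event names, channels and patterns are invented for illustration only.

FEEDBACK_MAP = {
    "pairing_started": {"hub_led": "pulse_blue", "app": "banner", "hub_sound": "chirp_short"},
    "low_battery":     {"hub_led": "blink_red", "app": "warning_card", "hub_sound": "chirp_double"},
    "theft_alarm":     {"hub_led": "strobe_red", "rear_light": "flash", "hub_sound": "alarm_loud"},
}

# In "stand-alone" mode (no phone connected) only hub-side channels exist.
STANDALONE_CHANNELS = {"hub_led", "front_light", "rear_light", "hub_sound"}

def feedback_for(event, phone_connected):
    """Return the channel-to-pattern mapping for an event, filtered by mode."""
    channels = FEEDBACK_MAP.get(event, {})
    if phone_connected:
        return channels
    return {c: p for c, p in channels.items() if c in STANDALONE_CHANNELS}
```

For example, `feedback_for("low_battery", phone_connected=False)` would drop the app card and keep only the hub LED and hub sound, mirroring the requirement that every solution must also work without a phone.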
Testing & Iterating
Ongoing tests were conducted throughout all work phases. As some feedback can only be properly tested by using COBI directly and riding a bike with the system, most tests were conducted internally and with small groups of external testers. Some things, such as timings, could be tested and adjusted very quickly with the prototype videos. Others needed more rounds of iterating and testing.
Most test rounds were conducted to answer the following questions and find solutions to them:
- Is all feedback on the hub LED / the front light / the rear light / the thumb controller / in the app / in combination understandable?
- Are the sounds used too quiet or too loud for the respective type of feedback?
- Does sound feedback work in every environment?
- Is some feedback distracting?
- Apart from the Theft Alarm, is any feedback irritating or annoying?
By testing constantly we quickly learned what could be optimised or needed to be reconsidered.
Iteration rounds led us to further testing rounds and new test scenarios. Especially for the sound feedback, variations in volume and in the sounds themselves were created and tested in different environments and at different noise levels.
Through great collaborative work processes across all phases, several iteration rounds and a back and forth between testing, prototyping and implementation, a well-rounded, synchronised feedback concept covering all components of the COBI system was developed.
The final result can only be experienced by directly using COBI and running through some of the system's states and events, ideally in both possible modes: with a smartphone and in "stand-alone" mode.
Nevertheless, I can show a chart visualising the ride lifecycle with all states and events and the corresponding feedback types and components.
Of course, not all feedback will be understood immediately by all users. Especially when COBI is used without a phone, feedback, warnings and notifications are communicated only via light feedback and short hub sounds.
To help users learn and understand all feedback and get an impression of the feedback interplay between all components, even before it occurs, final feedback videos were created from the initial video prototypes.
All status feedback videos can be found here.