Yoguide is the result of my final student project at CIID: a 9-week individual project.
I have always been a firm believer in accessibility: the world should be equally accessible to everyone, regardless of disability. This is why I began my research with blind people, to understand the problem area and see if I could make a positive impact.
Yoguide is an app that helps the visually impaired do yoga at home, by themselves. It acts as a personal trainer and keeps an eye on you, providing specific instructions and correcting you when you go wrong.
Yoguide uses the phone camera with computer vision to detect the body's joints and their orientation, compares the detected pose to the correct one, and guides the user towards it. This way, it helps users learn and form mental models of the yoga poses so they can stay fit from home.
In the spirit of inclusivity, even though Yoguide was designed and developed with blind people, it can be used by anyone. It is not just a product for blind people; it is a product that can also be used by blind people. Designing for the niche of blind people helped make this a better, more accommodating product.
On a larger scale, research shows that visually impaired people are generally less fit and more likely to be obese than sighted people.
On a more local level, it is stressful and tiring to use public transport or arrange transport to get to fitness centers in the city, so they don't go. The problem space was well explained by an expert, Birgit, in the video below:
Exercising by themselves at home is not an option either: they have no way of visually confirming that they are doing something correctly, nor do they get the guidance of a fitness center.
From these problems, there was a clear tension and opportunity area I could see. People wanted to exercise at home, but had no way to do it in the existing ecosystem.
I learned that designing for yoga would be a good way to go as it requires minimal space and cost at home, which led to the opportunity:
Helping the blind do yoga at home
As shown in the video, one can select the exercise and time, place the phone somewhere it can see the person, and do the yoga session. This is all done on the app.
The phone camera can track the person's joints and the orientations between them. It can recognise when a pose is being done wrong and guide the person towards the correct one.
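To make the pose-checking idea concrete, here is a minimal sketch of the kind of comparison such a system could perform. It assumes the vision model returns 2D joint coordinates; the joint names, tolerance, and correction phrasing are illustrative, not Yoguide's actual implementation.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by points a-b-c, from (x, y) coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to avoid domain errors from floating-point noise
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def check_pose(detected, target, tolerance=15.0):
    """Compare detected joint angles to a target pose; return spoken corrections.

    detected/target map joint names to angles in degrees.
    """
    corrections = []
    for joint, target_angle in target.items():
        angle = detected[joint]
        if abs(angle - target_angle) > tolerance:
            # A smaller angle means the joint is more bent than it should be
            direction = "straighten" if angle < target_angle else "bend"
            corrections.append(f"{direction} your {joint}")
    return corrections
```

For example, `check_pose({"left knee": 120.0}, {"left knee": 175.0})` would suggest straightening the left knee, while a pose within tolerance yields no corrections.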
This helps them form a mental model of the exercise, since they can't see it, and helps them stay fit from home.
The working prototype was built on a Microsoft Kinect v2 and was demoed live in the CIID Final exhibition. It uses computer vision for pose detection, and gets the orientation between each joint, as you can see in the video below.
The process took place over 9 weeks and involved 7 participants: 4 blind people and 3 experts, to tap into their wealth of knowledge. The process was fairly structured, as the image below shows, with a lot of back and forth and iteration in each stage, especially in the "Generating the concept" phase.
In what ways can being blind reduce independence or autonomy of a person?
In order to understand more about this, I interviewed 3 people.
Talking to all 3 of them, I found 3 separate problem areas relating to navigation, social security, and exercise. I found the third one quite intriguing.
I learned about exercise as a possible problem area from Birgit (who works at the Institute of Blind and Partially Sighted) and decided to work on it, as a lot of work had already been done in the other two problem areas I’d looked at. I wanted to let the process guide me, and this seemed like a black box I could dive into and start prototyping and testing.
During this period, I tested and interviewed 2 experts and 2 users.
I held a co-creation session with Flemming Davidsen, an expert in mobility for blind people with 20 years of experience.
I took 4 extreme ideas to figure out what he liked and didn't like about each, and tried to understand why. From this, we came up with a set of principles which, if satisfied, would lead to a "successful" design.
I attended a yoga session taught by Nadia, a yoga instructor teaching the blind. I wanted to understand how she gives instructions and provides guidance.
From this quote and what I observed, I defined how Yoguide would instruct in order to mimic the instructors. It works as a decision tree: it gives a step and checks whether it is done correctly. If not, the user is corrected, and only then does it move on to the next step.
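The decision-tree flow above can be sketched as a simple loop. This is an illustrative model, not Yoguide's code: `get_user_state` and `speak` stand in for hypothetical pose-detection and text-to-speech callbacks, and each step pairs an instruction with a check.

```python
def run_pose(steps, get_user_state, speak):
    """Walk through a pose step by step, advancing only once each step is correct.

    steps: list of (instruction, is_correct) pairs, where is_correct is a
    predicate on the user's current state. get_user_state and speak are
    placeholder callbacks for pose detection and text-to-speech.
    """
    for instruction, is_correct in steps:
        speak(instruction)
        # Keep correcting until the step is done right, then move on
        while not is_correct(get_user_state()):
            speak("Not quite - " + instruction)
        speak("Good.")
```

Driving it with a canned sequence of states shows the behaviour: the instruction is given, repeated with a correction while the check fails, and acknowledged once it passes.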
I did two yoga sessions with Flemming and Inger to understand what kind of instructions they respond best to, and to understand the nuances better.
From these sessions came the metaphor that the app would behave like a personal trainer, as that is what the users gravitated towards naturally.
I got to understand the nuances of different aspects of the session, like how many times instructions should be repeated, the tone of voice, and the organic two-way communication between the app and the user.
Each instruction repeats a maximum of 3 times, after which the app moves on in order not to disrupt the flow.
Users can tell the app vocally when to stop, resume, and repeat instructions.
The tone of voice is important, as it is a yoga session. I tested different voices with a soundboard, and the most natural-sounding voice was chosen.
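These interaction rules - spoken stop/resume/repeat commands and the three-repeat limit - could be modelled as a small command dispatcher. This is a hedged sketch under my own assumptions: the command names, session fields, and cap of 3 mirror the findings above but are not Yoguide's actual implementation.

```python
MAX_REPEATS = 3  # repeat an instruction at most 3 times, then move on

def handle_command(session, command):
    """Dispatch a spoken user command against a session (held as a plain dict).

    Returns the instruction to speak again, or None if nothing should be said.
    """
    if command == "stop":
        session["paused"] = True
    elif command == "resume":
        session["paused"] = False
    elif command == "repeat":
        if session["repeats"] < MAX_REPEATS:
            session["repeats"] += 1
            return session["current_instruction"]
        # After 3 repeats, advance to the next step so the flow isn't disrupted
        session["step"] += 1
        session["repeats"] = 0
    return None
```

A fourth "repeat" in a row would silently advance the session rather than repeat again, matching the no-disruption rule.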
I wanted to understand what form the product should take. I tested options from wearing a suit, a separate device, or an app on a phone. The result was overwhelmingly positive for an app, with 4 out of 4 participants vouching for it.
In fact, 95% of blind people use apps. It is a large part of their independence and helps them navigate daily life more independently. This would fit into their current ecosystem and mental models well.
The screens were made after bouncing ideas about the content off the participants. The app was built after prototyping how blind people navigate phones with gestures using VoiceOver or TalkBack. It can be found on appyoga.webflow.io (open on your phone).
I experimented with motion capture suits and computer vision models like PoseNet and OpenPose. However, they were not as accurate or as fast as the Kinect, which is what I decided to go with for demonstration purposes.
The Kinect could best demonstrate the concept. It can read the body position in 3D space, which means the phone doesn't need to be at a specific angle for the yoga session to work. It can be kept anywhere as long as it has the whole body in view.
The prototype was tested with Inger, Flemming, and a few sighted users, and improved based on their feedback.
The working prototype was demoed live in CIID's final exhibition. These are the things I'd like to do next:
If the number of corrective instructions a user needs decreases over a week or month, the app is likely working: it means people are learning and using it regularly, which are two important parameters.
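One way this success signal could be operationalized, assuming hypothetical weekly usage logs of (sessions, corrections issued):

```python
def learning_signal(weeks):
    """weeks: list of (sessions, corrections_issued) tuples, oldest first.

    Returns True only if the app kept being used every week (regular use)
    and corrections per session trended down (learning).
    """
    if any(sessions == 0 for sessions, _ in weeks):
        return False  # a week with no sessions means it isn't being used regularly
    rates = [corrections / sessions for sessions, corrections in weeks]
    return rates[-1] < rates[0]  # fewer corrections per session than at the start
```

A user averaging 10 corrections per session in week one and 3 in week three would register as both learning and regular, while a lapsed or plateauing user would not.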
This project was all about letting the process guide me, and as a result I made something about yoga for blind people, two areas I knew nothing about beforehand. A lot of nuances still remain to be discovered through research and testing.
Designing for a niche also produced a solution that can be used by everyone, even people who can see. It can be helpful for any yoga practitioner to have a personal trainer always on their phone who can correct them at home.
The solution also works for squats, push-ups, pilates, posture correction, and other home exercises. Designing for accuracy has opened up a multitude of use cases.