LEGOVision is the result of a one-week programming course at CIID taught by Dennis Paul, Jacob Remin and Annelie Berner. The project was completed with Aditi Vijay and Aakash Aggarwal.
My main role was programming, and I was involved in every phase of the process, from brainstorming and prototyping to setting up the final installation for the game demo.
LEGOVision is an interactive mixed reality gaming experience. The reality you create in the physical world gets translated to your computer screen.
Gamers physically arrange LEGO blocks on a LEGO board. This configuration of blocks is translated onto the screen in the form of obstacles.
The objective is to hit the monsters at the bottom of the screen with a fireball released from the top. The LEGO blocks act as obstructions for the fireball, making it hard to hit the monsters accurately. The gamer has three lives: miss the monster three times and the game is lost; hit it before running out of lives and you win.
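The win/lose rules above boil down to a tiny state update per shot. Here is an illustrative sketch in Python rather than the actual Processing game code; `resolve_shot` and its return values are hypothetical names:

```python
def resolve_shot(hit_monster, lives):
    """Resolve one fireball shot.

    Returns (lives_left, outcome) where outcome is one of
    "win", "lose", or "playing".
    """
    if hit_monster:
        return lives, "win"            # a hit ends the game immediately
    lives -= 1                         # a miss costs one life
    return lives, "lose" if lives == 0 else "playing"
```

Starting with three lives, `resolve_shot(False, 3)` leaves the player with two lives and the game still in progress, while a third consecutive miss returns `"lose"`.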
We wanted to explore the idea of games beyond just computer screens.
Screen addiction has become a well-known problem in today's world. We wondered whether it would help to extend the digital world into the physical one.
In addition to that, we wanted to explore how interacting with the physical world can affect the digital world.
Everything has been coded in Processing with the help of supporting OpenCV libraries.
The game setup consists of a LEGO board, LEGO bricks, a mouse, a web camera and a projector. The gamer arranges the bricks on the LEGO board while the camera placed right over it captures a real time video feed and sends it to Processing.
Processing then detects the orientation and size of these bricks using an OpenCV blob-detection library and recreates the same configuration on the digital screen in real time.
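The core of that step is finding connected regions of brick pixels in the thresholded camera frame and mapping each region's bounding box into screen coordinates. The project used an OpenCV blob-detection library; the pure-Python flood-fill version below is a simplified, illustrative stand-in, not the actual implementation:

```python
from collections import deque

def find_blobs(frame):
    """Return bounding boxes (x, y, w, h) of 4-connected blobs
    in a binary grid, where 1 marks a brick pixel."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and not seen[r][c]:
                # BFS flood fill to collect this blob's extent
                q = deque([(r, c)])
                seen[r][c] = True
                min_r = max_r = r
                min_c = max_c = c
                while q:
                    y, x = q.popleft()
                    min_r, max_r = min(min_r, y), max(max_r, y)
                    min_c, max_c = min(min_c, x), max(max_c, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((min_c, min_r,
                              max_c - min_c + 1, max_r - min_r + 1))
    return boxes

def to_screen(box, cam_size, screen_size):
    """Scale a camera-space bounding box to screen space."""
    sx = screen_size[0] / cam_size[0]
    sy = screen_size[1] / cam_size[1]
    x, y, w, h = box
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```

Each resulting box becomes one rectangular obstacle on screen; running this per frame gives the real-time mirroring described above.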
Physics elements like gravity, acceleration, velocity and rigid bodies have been added to make the game feel more real and natural.
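The gravity and velocity elements amount to a per-frame integration step. A minimal semi-implicit Euler sketch, assuming pixel/frame units and a made-up gravity constant (the game's actual values and code differ):

```python
def step(pos, vel, dt=1.0, gravity=0.5):
    """Advance the fireball one time step:
    velocity picks up gravity, then position advances by velocity."""
    x, y = pos
    vx, vy = vel
    vy += gravity * dt                      # gravity pulls the fireball down
    return (x + vx * dt, y + vy * dt), (vx, vy)
```

Updating velocity before position (semi-implicit Euler) keeps the motion stable at game frame rates, which is why it is the usual choice for simple arcade physics.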
The process was quite generative: I held interviews, prototype sessions and feedback sessions with four participants to evolve the concept based on their expertise.
View the code on GitHub.
On Monday morning we were introduced to the design brief – Design a Gaming Experience. We began by asking ourselves the question – What type of game do we want to design? To answer it, we did some secondary research followed by brainstorming, discussion and affinity mapping. As a result, we concluded that we were all interested in the domain of mixed reality. We wanted to explore the idea of video games beyond just computer screens. This also became the guiding map for our project and facilitated decision making throughout the design process.
User will do something + in Physical System + and something will happen in the Digital System
We started on Tuesday by discussing our map with the instructors. Asking them for feedback early in the process helped us identify technical challenges and exposed us to new opportunity areas. One idea that emerged from the discussion was to detect colours from the physical world and put them up on the screen. We were not sure how much time it would take to make this work, so we decided to do a quick prototype. We were able to make it function very quickly using the OpenCV library in Processing. This got us excited and thinking about adding further complexity: what if we could also detect shapes from the physical world and translate them onto a digital screen?
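At its core, that colour-detection prototype is per-pixel thresholding. The project did it with the OpenCV library in Processing; the pure-Python stand-in below only illustrates the idea, with hypothetical names and an arbitrary threshold:

```python
def is_red(pixel, threshold=100):
    """Crude per-channel test: strongly red, weakly green/blue.
    Real colour tracking would threshold in HSV instead of RGB."""
    r, g, b = pixel
    return r > threshold and g < threshold and b < threshold

def red_mask(frame):
    """Turn a frame of (r, g, b) pixels into a binary mask of red pixels."""
    return [[1 if is_red(p) else 0 for p in row] for row in frame]
```

The resulting mask is exactly the kind of binary image the later shape-detection step consumes, which is why this quick prototype led naturally to the brick-detection idea.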
Wednesday morning, we began by reviewing our map and the "How Might We" statement. We gathered as a group and made some sketches. This helped us identify some of the unanswered questions regarding the overall gaming experience and prioritize the next steps based on the criteria of importance and effort.
We wrote the code using the Blob Detection library to detect the shape of the LEGO bricks. Doing this accurately took a lot of iterations. By the end of the day, we had a fully working prototype on the input side: the user's interactions with the LEGO bricks were being translated onto the digital screen in real time.
On Thursday, we started by having a quick chat about the fidelity we were targeting for the project. This helped us focus on tasks that required immediate attention. We decided to complete the entire loop of the gaming experience before refining the already functional elements. We spent the rest of the day prototyping: we built the digital game interface, set up the physical game installation, and fine-tuned the controls.
On Friday morning, we tested the game with a few classmates. This helped us identify pain points and improve some of the core interactions. We ended up adding sound feedback to enhance the overall gaming experience before the final demo.