Sunday 5 February 2012

The Prototype Post Mortem

After much early experimentation with different AR SDKs, it was finally decided to use Qualcomm's Vuforia SDK. This entry will be a brief rundown of the development of the prototype.

Goals

  • Familiarise myself with the iPhone and Vuforia SDKs
  • Develop a mock up of what I expect the final output to be
  • Analyse it to work out procedures and development strategies for the main project

Development

The development of the prototype was a necessary step towards the final application. Whilst none of the code produced for it will be used in the final product, it has been a valuable learning experience. Development started by taking one of the simple samples in the Vuforia SDK and expanding it into a basic version of the functionality I want in the final application. The initial sample rendered a teapot; I removed that code and replaced it with code to render the planets, wrapping it in a function so that multiple planets could be spawned easily (a sketch of this idea follows below). I also developed an overlay menu using Interface Builder, which handles input to manipulate the planetary data provided for the prototype. In the prototype the only data manipulated is the scale of each individual planet, and whether all the planets are displayed or not.
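As an illustration of that wrapping step, here is a minimal C++ sketch of the idea. The Planet struct and the renderPlanet/drawSphere names are hypothetical stand-ins rather than the actual prototype code, and the fixed-function OpenGL ES 1.1 calls are just one possible way to apply the per-planet transform.

    #include <OpenGLES/ES1/gl.h>  // fixed-function OpenGL ES 1.1 on iOS
    #include <vector>

    // Hypothetical per-planet data manipulated by the overlay menu.
    struct Planet {
        float orbitRadius;  // distance from the marker origin
        float scale;        // user-adjustable via the overlay menu
        bool  visible;      // toggled by the show/hide switch
    };

    // Stand-in for the mesh submission that replaced the sample's teapot.
    void drawSphere() { /* e.g. glDrawArrays over a pre-built sphere mesh */ }

    // Wrapping the draw calls in one function means any number of
    // planets can be spawned by looping over the data set.
    void renderPlanet(const Planet& p)
    {
        if (!p.visible) return;
        glPushMatrix();                           // work relative to the marker pose
        glTranslatef(p.orbitRadius, 0.0f, 0.0f);  // offset the planet from the marker
        glScalef(p.scale, p.scale, p.scale);      // per-planet scale from the menu
        drawSphere();
        glPopMatrix();
    }

    void renderAllPlanets(const std::vector<Planet>& planets)
    {
        for (size_t i = 0; i < planets.size(); ++i)
            renderPlanet(planets[i]);
    }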

Issues

There are several issues with this prototype. Firstly, there is no design to speak of: whilst it uses the basic concepts of Apple's Model View Controller (MVC) methodology, it does not implement a full MVC design, especially as the AR tracking, planetary simulation, and rendering code all live in a single view. To follow proper MVC convention, the planetary simulation would be part of the model component and the tracking would have parts in both the controller and the model; a rough sketch of that split follows below.
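This is purely an illustrative sketch of the intended separation (all class names are made up for the example): the simulation sits in the model, tracker and UI events flow through the controller, and the view is reduced to rendering.

    #include <vector>

    // Model: owns the planetary simulation and the latest tracked pose.
    struct PlanetaryModel {
        std::vector<float> planetScales;  // one entry per planet
        float markerPose[16];             // column-major model-view matrix
        void stepSimulation(double dt) { /* advance orbital positions here */ }
    };

    // Controller: routes tracker and UI events into the model.
    struct ARController {
        PlanetaryModel* model;
        void onTrackerUpdate(const float* pose16) {
            for (int i = 0; i < 16; ++i) model->markerPose[i] = pose16[i];
        }
        void onScaleChanged(size_t planet, float s) {
            if (planet < model->planetScales.size())
                model->planetScales[planet] = s;
        }
    };

    // View: reads the model and issues OpenGL ES draw calls, nothing more.
    struct ARView {
        void render(const PlanetaryModel& m) { /* draw each planet at m.markerPose */ }
    };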

Lessons

  • Working around the sample framework allowed me to learn a lot about how the Vuforia SDK interacts with iOS
  • Taught me about the MVC design pattern and how Apple's Interface Builder works
  • Familiarised me with OpenGL ES on the iPhone
  • Helped me get a handle on how the final application should be designed
  • Helped me learn how to set up AR tracking

Further Development

  • Create a rough design outline of the different components required for the final application
  • Familiarise myself with Apple's Core Motion library to add the further features needed for the app
  • Research a method to load data from a data file on the iPhone (see the sketch below for one possible starting point)
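On the data-file point, one simple option would be a plain comma-separated file bundled with the app, parsed with standard C++ streams. The file format and field names below are purely illustrative assumptions, not a decided design.

    #include <cstdlib>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct PlanetRecord {
        std::string name;
        double orbitRadius;  // e.g. in astronomical units
        double scale;
    };

    // Parses lines of the form "Name,orbitRadius,scale".
    // On the iPhone the path would come from the app bundle.
    std::vector<PlanetRecord> loadPlanets(const std::string& path)
    {
        std::vector<PlanetRecord> out;
        std::ifstream in(path.c_str());
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream row(line);
            PlanetRecord p;
            std::string field;
            if (!std::getline(row, p.name, ',')) continue;
            if (!std::getline(row, field, ',')) continue;
            p.orbitRadius = std::atof(field.c_str());
            if (!std::getline(row, field, ',')) continue;
            p.scale = std::atof(field.c_str());
            out.push_back(p);
        }
        return out;
    }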

Thursday 2 February 2012


Here is a video of the prototype.

Wednesday 1 February 2012

The Project and Augmented Reality

Project Scenario

A manager of a science museum wishes to create a new exhibit using augmented reality. The exhibit allows the end user to explore the room with a mobile device, with planets overlaid onto the real environment, providing a virtual tour of the solar system. It would allow basic interaction between the user and the augmented environment. The exhibit must be easy to set up, calibrate, and maintain, and should be relatively inexpensive.

Project Aims

The goal of this project is to examine and implement the applications of augmented reality in an educational and entertainment context, by producing an application which generates an augmented reality planetarium on a mobile device. The project will be an exploration of the practical applications of various techniques using both vision- and motion-based augmented reality. The aim is to produce a hybrid method of implementing augmented reality, using both markers and simultaneous localisation and mapping (SLAM) based tracking, that is easy for the end user to set up and calibrate. The application will also allow augmented reality objects to remain tracked even when the marker that spawned the object is partially or totally obscured from the camera's viewfinder, by using the previous scene data and the mobile platform's accelerometer data (a minimal sketch of this idea follows below).
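To make the marker-loss idea concrete, here is a minimal sketch of the kind of fallback intended, assuming a fixed update rate and an accelerometer sample already rotated into the marker's frame of reference. All names are illustrative, and accumulated drift means this can only bridge short occlusions.

    // While the marker is visible the tracker supplies the pose directly;
    // while it is lost, dead-reckon from the last known pose using
    // accelerometer data.
    struct PoseEstimator {
        float position[3];
        float velocity[3];

        // Marker visible: trust the vision-based pose and reset drift.
        void onMarkerPose(const float* p3) {
            for (int i = 0; i < 3; ++i) { position[i] = p3[i]; velocity[i] = 0.0f; }
        }

        // Marker occluded: integrate acceleration twice to update position.
        void onMarkerLost(const float* accel, float dt) {
            for (int i = 0; i < 3; ++i) {
                velocity[i] += accel[i] * dt;     // acceleration -> velocity
                position[i] += velocity[i] * dt;  // velocity -> position
            }
        }
    };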

Augmented Reality

Augmented Reality (AR) is a method for integrating computer generated imagery (CGI) with a real world video feed, with the correct pose and perspective. There are several methods to achieve this effect.

  • Camera tracking with fiducial markers, which estimates pose from known patterns in the image
  • Camera tracking with computer vision and natural feature tracking, which calculates pose on the fly without markers
  • GPS tracking
  • Inertial tracking using gyroscopes and accelerometers
  • Tracking using sonar sensors

Example fiducial marker

All of these methods have advantages and disadvantages with regard to accuracy and the computing power required to implement them. There have been some applications that use a hybrid of different tracking methods, although these appear to be few in number and closed source. The most commonly used method in commercial phone applications is camera tracking with fiducial markers, as it offers the best balance between accuracy and the computing power available on mobile phones.

AR has been in development since the mid-1990s. However, due to the cost of custom hardware and tracking software, and the difficulty of calibration, AR did not see much commercial use until recently. This has changed because mobile devices with cameras and gyroscopes are now commonplace, and several SDKs have been developed to aid in the creation of AR apps.

ARToolKitPlus

ARToolKit is one of the oldest augmented reality libraries that use fiducial markers for pose detection. It is well known and commonly used as it is released under the terms of the GPL, which means a potential design using this SDK could integrate directly with the original source code. If ARToolKit were used in this project it would be the ARToolKitPlus variant, the last version released before the project behind it became closed source; because of this, ARToolKit has not been officially updated since 2006, although unofficial updates do continue. ARToolKit was originally designed for the PC, but as it is open source it can be cross-compiled for mobile platforms. However, experimentation with the marker tracking on the iPhone showed that the library did not respond well to the sudden disappearance of the marker, leaving the augmented image on screen for several seconds. Finally, ARToolKitPlus is incredibly sensitive to the fiducial marker being obscured and will cease tracking if even a tiny piece of the marker is missing.

Parallel Tracking and Mapping (PTAM)

PTAM is a tracking library that does not use fiducial markers; instead it uses natural feature tracking to generate the pose estimates needed for augmented reality.

OpenCV

OpenCV is not an augmented reality marker tracking system but a digital image analysis and processing library. It can, however, be adapted to work as a marker tracking system that allows for six degrees of freedom. Furthermore, it could allow easier integration of further image tracking techniques, would allow more flexibility than ARToolKit in how the markers and environment are tracked, and, if implemented correctly, can use more complex fiducial markers than ARToolKit. The primary disadvantage of using this library is that it would probably take longer and require more research to get the basic application operational.
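For example, once the four corners of a square marker have been found in a camera frame (the corner-detection step is assumed here), OpenCV's solvePnP can recover a full six-degree-of-freedom pose. The marker size and calibration inputs below are placeholders.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Recovers the camera pose relative to a square marker of known size.
    // 'corners' are the marker's four image-space corners (detected elsewhere);
    // K and distCoeffs come from a prior camera calibration.
    void markerPose(const std::vector<cv::Point2f>& corners,
                    const cv::Mat& K, const cv::Mat& distCoeffs,
                    cv::Mat& rvec, cv::Mat& tvec)
    {
        const float s = 0.05f;  // marker half-size in metres (illustrative)
        std::vector<cv::Point3f> objectPoints;
        objectPoints.push_back(cv::Point3f(-s,  s, 0));
        objectPoints.push_back(cv::Point3f( s,  s, 0));
        objectPoints.push_back(cv::Point3f( s, -s, 0));
        objectPoints.push_back(cv::Point3f(-s, -s, 0));
        // Six degrees of freedom: rvec (rotation) and tvec (translation).
        cv::solvePnP(objectPoints, corners, K, distCoeffs, rvec, tvec);
    }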

Qualcomm AR SDK

Qualcomm has released an augmented reality SDK for both the Android and iPhone platforms. This toolkit, whilst not open source, is free to use and distribute, and contains many of the features included in the closed source version of the ARToolKit SDK. It also allows for more complicated fiducial markers than ARToolKit and handles partial occlusion of the marker. Furthermore, this SDK is optimised to work specifically on mobile platforms. A sketch of its per-frame tracking loop follows below.
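From memory of the SDK's ImageTargets sample, the per-frame tracking loop looks roughly like this; exact class, header, and method names may differ between SDK versions, so treat this as an approximation rather than verbatim API.

    #include <QCAR/Renderer.h>   // header names as in the 1.x SDK samples
    #include <QCAR/State.h>
    #include <QCAR/Tool.h>
    #include <QCAR/Trackable.h>

    // Approximate per-frame loop based on the QCAR (Vuforia) samples.
    // For each trackable the SDK reports, convert its pose into an
    // OpenGL model-view matrix and render content on top of it.
    void renderFrame()
    {
        QCAR::State state = QCAR::Renderer::getInstance().begin();
        for (int i = 0; i < state.getNumActiveTrackables(); ++i) {
            const QCAR::Trackable* trackable = state.getActiveTrackable(i);
            QCAR::Matrix44F modelView =
                QCAR::Tool::convertPose2GLMatrix(trackable->getPose());
            // ... draw the augmented content using modelView ...
        }
        QCAR::Renderer::getInstance().end();
    }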

Friday 13 January 2012

ARPlanetarium

Welcome

Hello and welcome to the development blog for Augmented Reality Planetarium (ARPlanetarium). Over the next few months I will be providing insight into the development of this application and will show demonstrations of my project in action.

The Project

ARPlanetarium is a new way to interact with our solar system that makes learning about it fun. The project is an iPhone application which uses Augmented Reality. Augmented Reality uses a device's camera and/or motion sensing information (gyroscope, accelerometer, etc.) to realistically insert computer generated imagery (CGI) into a video feed of the real world, and allows the user to interact with it in a number of different ways.

The Overview

Over the next couple of weeks there will be entries discussing the tools used, the planned features and design of the app, and a post mortem of the prototype build. After that there will be regular updates on the main build of the project.