Planning Report - Running with Sound: Android Application Simulating Sound Sources at GPS Coordinates Using Smartphone Sensors
The general purpose of the project is to make an application that:
Registers running activity and presents its statistics.
Uses sound and sensor techniques in a meaningful way.
Makes it enjoyable and motivating for people to exercise.
The device sensors will limit how accurately the user orientation can be measured, thereby limiting the user experience.
At the time of writing this report, members of the group were still researching possibilities to enhance the experience with 3D positional audio. There is an API called OpenSL ES that claims to make it easy to position audio binaurally (as well as process it in other ways) (Khronos Group, 2014). However, as of 2012, no actual Android device seemed to support that specific feature (Ratabouil, 2012), and neither did any of the project group's own phones. It turned out that a device has to implement a profile provided by OpenSL ES in order to make use of all its functions.
Instead, if the sound is supposed to be heard from behind the user, it will appear in one of the ears and gradually move towards the center as the user rotates towards the source. A voice or sound informing the user that he/she is heading in the wrong direction might be a good alternative if the former turns out to be difficult.
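As a rough illustration of this fallback, the angle between the user's heading and the bearing to the source could be mapped onto a stereo pan value, with sources behind the user clamped to one ear. The following Java sketch assumes that simple model; the class name, the clamping rule and the pan range are illustrative, not final design decisions:

```java
// Hypothetical sketch: map the angle between the user's heading and the
// bearing to the sound source onto a stereo pan value in [-1, 1].
public class StereoPan {

    /** Normalize an angle in degrees to the range [-180, 180). */
    static double normalize(double deg) {
        return ((deg % 360) + 540) % 360 - 180;
    }

    /**
     * @param heading user's heading in degrees (0 = north)
     * @param bearing bearing from the user to the source in degrees
     * @return pan in [-1, 1]: -1 = fully left, 0 = center, 1 = fully right.
     *         Sources behind the user are clamped to one ear, matching the
     *         fallback described in the text.
     */
    static double pan(double heading, double bearing) {
        double rel = normalize(bearing - heading);
        if (rel > 90) return 1.0;    // behind to the right: lock to right ear
        if (rel < -90) return -1.0;  // behind to the left: lock to left ear
        return Math.sin(Math.toRadians(rel)); // in front: smooth panning
    }
}
```

As the user rotates towards the source, `rel` shrinks towards zero and the pan glides towards the center, which is exactly the behavior the paragraph above describes.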
When developing the application, the assumption that people might listen to music while running will be taken into account. Ideally, the user should be able to listen to music while using the application. Alternatively, the experience could perhaps be made fun enough that the user does not want to listen to music while using the application.
Initially, information will be gathered on how to use the Android APIs in the most efficient way for this kind of application. This will include how to use activities, sensors and maps. Since the application is developed to run entirely on the Android platform, this part of the research will be of great importance for the outcome.
Alongside the coding aspects mentioned above, information will be gathered on how specific areas (GPS, orientation sensor, audio, etc.) work and how they are implemented in Java for Android. Most of the information will probably come from e-books, as well as Android's developer pages on the Internet. During this research, a proof of concept of the sound experience will be made, to confirm that the sound behaves as expected in relation to its location.
Once the information has been gathered, the structure of the app will be decided. UML models will be drawn to make the structure clear. After the modelling decisions are made, sketches will be drawn to decide how the GUI might look.
Then the coding process will start. The parts to be coded will be the ones mentioned in . Alongside the coding, testing will be done to make sure that everything works as expected. When possible, the tests will be written as test cases. It will also be important to test with real values, such as going out running and seeing how the application works.
An evaluation will be performed when the first beta version is finished. This will be done by letting a test group use the app and conducting interviews with the test participants. The findings will help establish requirements and come up with new design alternatives, which can later be implemented.
The Scrum method will be used when developing the application: a flexible Agile software development framework where the team works more as a unit, as opposed to a traditional, sequential approach. The work will be divided into cycles (sprints). Each work session will begin with a scrum meeting, a short meeting where the group discusses what is going on, what is about to happen and possible problems.
To handle the coding part of the development, Git will be used: a version control system that makes it easy to collaborate with others on coding projects. It makes it possible for each team member to work in the same classes and merge the changes when needed. Each specific part of the application (audio, GPS, etc.) will be implemented in an individual branch.
For the testing part of the development, both the emulator included in the Android development environment and the group's own phones will be used.
The literature studies will involve how the human ear perceives sound, and how it’s possible to imitate sounds coming from specific directions using stereo headphones (Roads, 1996).
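One standard concept from this literature is the interaural time difference (ITD): sound from one side reaches the nearer ear slightly before the far one, which is one of the cues the brain uses to localize a source. A minimal Java sketch of the well-known Woodworth approximation, where the head-radius constant is an assumption for illustration rather than a project decision:

```java
// Sketch of the Woodworth approximation for the interaural time difference
// (ITD), a classic model of how direction is encoded between the two ears.
public class Itd {
    static final double HEAD_RADIUS_M  = 0.0875; // average head radius (assumed)
    static final double SPEED_OF_SOUND = 343.0;  // m/s in air at ~20 degrees C

    /**
     * ITD in seconds for a source at azimuth theta (radians, 0 = straight
     * ahead, positive = to the right). Positive result means the sound
     * arrives at the right ear first.
     */
    static double itd(double theta) {
        return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + Math.sin(theta));
    }
}
```

For a source directly to the side (theta = 90 degrees) this model gives an ITD of roughly 0.65 ms, which matches the order of magnitude usually cited for human listeners.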
Literature about Android and how to best develop the app will also be studied. This is necessary to make the app enjoyable for the user and flexible enough to work on different kinds of phones. According to the Android API Guide on Fragments, fragments can decompose the functionality of an app into smaller, reusable parts that, depending on the screen size, can be shown in different quantities at a time.
Alongside the audio and Android studies, the sensors considering position and orientation need to be studied. While GPS is the most natural way to measure the position (Sood, 2012), there are various ways to measure the user orientation.
The orientation while in motion could be determined through the GPS bearing (Android Developers, 2014), which is calculated as the direction the phone is travelling in. However, while standing still and only rotating on the spot, the magnetic field sensor (compass) and accelerometer might be used to provide the orientation of the phone itself (Sood, 2012). This approach has problems, though, since the orientation of the phone relative to the actual user is not always known (it depends on how the user is holding the phone).
Initial testing suggests that some combination of the two is probably preferable.
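The combination hinted at here could be as simple as a speed-gated switch: trust the GPS bearing while the user is moving, and fall back to the compass azimuth when (nearly) standing still. The following Java sketch illustrates that idea; the threshold value and all names are assumptions, not measured results:

```java
// Hypothetical fusion rule: use the GPS travel bearing while moving, and the
// compass (magnetometer + accelerometer) azimuth while standing still.
public class HeadingSource {
    static final double MIN_SPEED_MPS = 1.0; // "moving" threshold (assumed)

    /**
     * @param gpsBearing     travel direction from GPS, in degrees
     * @param compassAzimuth phone orientation from the compass, in degrees
     * @param speedMps       current speed reported by GPS, in m/s
     * @return the heading estimate the app would act on
     */
    static double heading(double gpsBearing, double compassAzimuth,
                          double speedMps) {
        return speedMps >= MIN_SPEED_MPS ? gpsBearing : compassAzimuth;
    }
}
```

A hard switch like this avoids the compass's phone-in-hand ambiguity while running, yet still gives an orientation when the GPS bearing is undefined because the user is stationary; a smoother blend between the two sources could be evaluated during the planned field tests.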