James Alfei edited Our Prototype.tex  about 10 years ago

Commit id: 34c1f6beecc307224d5e8e9f4e8de6d0ab61e05e


When Google Earth has focus and the system has booted completely, the hardware can be used immediately. The user will likely first want to zoom in for a closer view of the planet and its satellite imagery. To do this, the user pushes the slider forward, which emulates a key press on the keyboard and zooms the view in; pulling the slider back similarly zooms the view out. To stop zooming, the user places the slider somewhere in the middle of its range of motion. This region lies outside both thresholds, so it causes no action and leaves the other phidgets free to drive the system (a sketch of this mapping is given below).

The next action a user may want to perform is navigating around the map, much as with the arrow keys. We implement this with the circular phidget, which is logically split in software into eight distinct sections. These sections replicate the up, down, left and right keys, along with the diagonal movements produced by combining them (see the sector sketch below). The circular phidget can be very sensitive, so before actioning any commands we first check that a finger has made physical contact with the device. Once contact is confirmed, the current position is checked against our rules and the corresponding actions are performed. When the user releases the phidget, the virtual keys are released.

Now that the user can zoom and navigate, the joystick lets them exploit the 3D imagery and height mapping available in Google Earth by looking around vertically and rotating about the z axis. This movement is triggered when the joystick is pushed beyond a certain threshold on the x or y axis: the camera points up or down (by pushing the joystick up or down respectively) and looks left or right with the corresponding horizontal pushes (sketched below). The movement is fluid and feels natural in use.

One of the more interesting features of the system is the RFID sensor. Three tags are available, and each is assigned a predefined location in the software on first run. When a tag is scanned, an event fires in the software identifying the unique tag that was read, and the software looks up its stored location (e.g. Swansea University). We then use a combination of virtual mouse clicks and keyboard presses to effectively "search" for the location (see the final sketch below). This operation is hidden behind the physical screen cover of the system.
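The slider-to-zoom mapping can be summarised by the following Python sketch. The normalised [0, 1] reading range, the threshold values and the press_key/release_key helpers are all assumptions standing in for the prototype's actual Phidget readings and keyboard-emulation layer.

    ZOOM_IN_THRESHOLD = 0.7   # assumed: slider readings normalised to [0, 1]
    ZOOM_OUT_THRESHOLD = 0.3  # readings between the thresholds form the dead zone

    def press_key(key):       # hypothetical keyboard-emulation stub
        print(f"press {key}")

    def release_key(key):     # hypothetical stub
        print(f"release {key}")

    current_key = None        # the zoom key currently held, if any

    def on_slider_change(value):
        """Map a slider reading to assumed zoom-in/zoom-out key bindings."""
        global current_key
        if value >= ZOOM_IN_THRESHOLD:
            wanted = "+"               # push forward: zoom in
        elif value <= ZOOM_OUT_THRESHOLD:
            wanted = "-"               # pull back: zoom out
        else:
            wanted = None              # dead zone: no action
        if wanted != current_key:
            if current_key is not None:
                release_key(current_key)
            if wanted is not None:
                press_key(wanted)
            current_key = wanted

    # Example: push forward, return to centre, pull back.
    for reading in (0.9, 0.5, 0.1):
        on_slider_change(reading)

Keeping a dead zone between the two thresholds is what frees the other phidgets: no key is held while the slider rests near the centre of its travel.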
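A minimal sketch of the eight-section split follows, assuming the circular phidget reports an angular position in degrees together with a touch flag; the callback shape and the arrow-key names are illustrative, not the device's real API.

    SECTOR_KEYS = [
        ("up",),            # sector 0: centred on the top of the ring
        ("up", "right"),    # sector 1: diagonal
        ("right",),
        ("down", "right"),
        ("down",),
        ("down", "left"),
        ("left",),
        ("up", "left"),
    ]

    held_keys = set()

    def press_key(key): print(f"press {key}")     # hypothetical stubs
    def release_key(key): print(f"release {key}")

    def on_touch(touched, angle_degrees):
        """Hold the arrow keys for the touched sector; release all on lift-off."""
        global held_keys
        if touched:
            # Rotate by half a sector (22.5 degrees) so sector 0 is centred on "up".
            sector = int(((angle_degrees + 22.5) % 360) // 45)
            wanted = set(SECTOR_KEYS[sector])
        else:
            wanted = set()          # finger lifted: release everything
        for key in held_keys - wanted:
            release_key(key)
        for key in wanted - held_keys:
            press_key(key)
        held_keys = wanted

    on_touch(True, 40.0)    # diagonal: up + right
    on_touch(False, 0.0)    # release the phidget

Checking the touch flag before acting reflects the sensitivity issue noted above: no keys fire until contact is confirmed, and all virtual keys are released the moment the finger lifts.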
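The joystick thresholding might look like the sketch below; the [-1, 1] axis range, the DEADZONE value and the Shift+arrow bindings for tilting and rotating the camera are assumptions rather than the prototype's confirmed values.

    DEADZONE = 0.25   # assumed threshold before any look action fires

    def held_look_keys(x, y):
        """Return the set of look keys implied by the joystick position."""
        keys = set()
        if y > DEADZONE:
            keys.add("shift+up")      # tilt the camera up
        elif y < -DEADZONE:
            keys.add("shift+down")    # tilt the camera down
        if x > DEADZONE:
            keys.add("shift+right")   # rotate about the z axis
        elif x < -DEADZONE:
            keys.add("shift+left")
        return keys

    print(held_look_keys(0.8, 0.1))    # rotate right only
    print(held_look_keys(-0.5, 0.6))   # tilt up and rotate left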
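Finally, the RFID lookup can be sketched as below. Only the Swansea University mapping comes from the text; the tag IDs, the other two locations and the automation helpers are hypothetical placeholders for the real values and input layer.

    TAG_LOCATIONS = {
        "tag-01": "Swansea University",   # example mapping from the text
        "tag-02": "London",               # remaining locations are assumptions
        "tag-03": "New York",
    }

    def click_search_box(): print("click search box")   # hypothetical stubs
    def type_text(text):    print(f"type {text!r}")
    def press_key(key):     print(f"press {key}")

    def on_tag_scanned(tag_id):
        """Fired when a tag is scanned: look up its location and search for it."""
        location = TAG_LOCATIONS.get(tag_id)
        if location is None:
            return                   # unknown tag: ignore
        click_search_box()           # focus the search field
        type_text(location)
        press_key("enter")           # fly to the stored location

    on_tag_scanned("tag-01")         # flies to Swansea University

Because the search box is driven entirely by these virtual clicks and key presses, the whole operation can stay hidden behind the physical screen cover.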