Visualization and Target Users:
We are going to build a system to help monitor users’ activities outdoors, particularly in mountainous areas. The system displays information including trail records, speed, ambient light intensity, and the weather forecast (see the prototype figure below). The intended audience consists of people we are calling “savers.” These individuals monitor the hiking routes of people out trekking in the field while collecting data about both the person and the surrounding context. Savers can communicate safety and precautionary information to hikers, such as flood warnings and rockslides, and they can help guide hikers back to the appropriate trail should they wander off into the wilderness. With the information the system provides, savers acquire an understanding of the hikers’ past and current positions and of the surrounding environment.


How we’re going to visualize it:
The system will be built in a dashboard style; the visualization mock-up is on our website and is also shown below. For the implementation, we will use JSP + Java. Java will be used for the back-end of the system, to retrieve sensor data remotely from the mobile phone. JSP will be used for the front-end visualization. Here we may use one or more JavaScript libraries to help with the visualization, for example Protovis and the Google Maps API.
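Since the JSP front-end will need to pull readings from the Java back-end, a minimal sketch of what that hand-off could look like is below, assuming a servlet endpoint with hard-coded placeholder values (the class name, field names, and units are all illustrative, not part of our current implementation):

```java
// Hypothetical sketch: a servlet that the JSP front-end could poll for the
// latest sensor reading. Names and values are illustrative placeholders.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LatestReadingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // In the real system these would come from the data store that the
        // mobile client reports into; here they are hard-coded placeholders.
        double speed = 1.4;   // m/s, assumed units
        double light = 820.0; // lux, assumed units
        resp.setContentType("application/json");
        resp.getWriter().printf(
            "{\"speed\": %.2f, \"light\": %.1f}%n", speed, light);
    }
}
```

The JSP page (or a JavaScript library such as Protovis) would then poll this endpoint and redraw the charts with each response.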


High-Fidelity Prototype for Mountain Hiker Contextual Data


Description of the Prototype’s Interface:
The interface mock-up directly above centers on a satellite image from Google Earth depicting the Glacier Point hike to the top of Half Dome in Yosemite National Park, California, U.S.A. We selected this destination both because our group is familiar with the layout of the park and because this particular trail is known to be long, arduous, and challenging. Dangers abound on long, strenuous hikes, and this is exactly the sort of situation we are designing the system for. The starting point of the trail is in the lower left, displayed as a blue bubble with a star in the middle. Similarly, the end point, on top of Half Dome, is marked with a blue bubble containing a star. The hiker, shown on the map in blue, is working his way down the trail in this view. The displays on the right are, from top to bottom: speed/acceleration, ambient light, a timeline slider for retrieving recent data, and the weather conditions throughout the day.


The Source of the Data:
The entire architecture of our system has been set up. The mobile and desktop clients work together with the server to pass data back and forth, and we are currently able to collect live sensor data from our device. We will also collect terrain and route data from Google Earth and Google Maps. The sensors will provide acceleration, orientation, and light-intensity readings along with GPS coordinates, and weather and forecast data will be updated regularly.
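As a rough sketch of the GPS portion of the collection, the Android location API could be used along the following lines (the class name and the upload step are placeholders for our own code):

```java
// Sketch of how the mobile client might obtain GPS fixes on Android.
import android.app.Service;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class GpsCollector implements LocationListener {
    public void start(Service context) {
        LocationManager lm =
            (LocationManager) context.getSystemService(Service.LOCATION_SERVICE);
        // Request a fix roughly every 5 seconds or 10 meters of movement.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, this);
    }

    @Override
    public void onLocationChanged(Location loc) {
        // Queue latitude/longitude (plus a timestamp) for upload to the server.
        double lat = loc.getLatitude();
        double lon = loc.getLongitude();
        // upload(lat, lon, System.currentTimeMillis()); // hypothetical helper
    }

    @Override public void onStatusChanged(String p, int s, Bundle e) {}
    @Override public void onProviderEnabled(String p) {}
    @Override public void onProviderDisabled(String p) {}
}
```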


Evaluation of the Final Result:
We will actually deploy the system to see whether the saver can interpret the hiker’s contextual information correctly, though we are not going to travel to any mountainous areas at this time. Realistically, given the lack of challenging hiking trails in our part of Michigan, we may evaluate the result by tracking one another’s movements in a place with plenty of hills, such as the UM Arboretum.


Division of Labor:
Jessie and Gary have collected research data on the ways in which sensor data can be helpful to users, and they made the prototypes based on our group discussions. Zhenan and Sang have been exploring the technical requirements for building the system; they constructed the back-end data-capture functionality, which is now ready to feed the visualization. They will also use the Google Earth/Maps APIs to present the information to users.


High-level plan to complete our project:
We need to explore how to present the sensor data to the saver so that he can actually help the hiker, and we need to figure out how to visualize both live and historical contextual data. We would also like to explore the ways in which the saver can communicate with the hiker; the obvious modes of communication are cell phones and possibly a Bluetooth device. We would also like to figure out how Google Earth/Maps might be used to scan the path ahead of the hiker for potentially dangerous or impassable obstructions. We would like to collect data to point out refreshment areas and the distances to them (to guard against dehydration), and to anticipate rest stops (to avoid pulled muscles). Of course, with the high-level overview of the hiker’s route that the saver has access to, he can also recover the hiker’s last route and the safest route back to the main trail should the hiker become disoriented or lost.
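For the refreshment-area distances mentioned above, the standard haversine great-circle formula applied to two GPS fixes would suffice; a minimal Java helper might look like this (the class and method names are ours, not final):

```java
// Standard haversine great-circle distance, e.g. for reporting how far the
// hiker is from the next refreshment area. Inputs in degrees, result in meters.
public final class Geo {
    private static final double EARTH_RADIUS_M = 6371000.0;

    public static double distanceMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }
}
```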

One of the most substantial challenges in designing a system for monitoring wilderness hikers is that we have no wild areas in close proximity. Since we cannot currently collect data while traveling through actual mountains, with their risks, challenges, and dangers, we will have to devise another scenario that approximates the target conditions as closely as possible.

What We Want to Visualize:

We are going to build a system to help monitor the activities of outdoor hikers. In our particular scenario, the users of the data visualization are not the hikers themselves. The hikers do carry the phone that collects and reports the data, but the end users who see the visualization are the people watching out for a hiker’s safety as he travels across risky or dangerous terrain.


Usage Scenario:

When the hiker goes out on a mountainous adventure, he carries his mobile phone with its built-in sensors turned on. The person viewing the visualization, whom we will refer to as the saver, collects the sensor data from the hiker’s phone and uses the data visualization to understand the hiker’s contextual information. This contextual data provides the clues necessary for the saver to locate and assist the hiker. For example, the saver may find from the accelerometer readings that the hiker is engaging in dramatic or risky movements. Using this contextual data together with the GPS readings, he may report to the hiker and suggest where stable, steady terrain is located. The GPS data will be shown using a histogram or an accumulation of lines. Because the saver may be monitoring several hikers at the same time, we will try to make the visualization pre-attentively accessible.
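As a sketch of how “dramatic or risky movements” might be detected from the accelerometer, one simple heuristic is to compare the magnitude of the acceleration vector against gravity; the threshold below is an assumed placeholder that we would tune against real data:

```java
// A minimal, assumed heuristic for flagging "dramatic" movement: compare the
// magnitude of the acceleration vector to gravity and flag large deviations.
public class MovementFlagger {
    private static final double GRAVITY = 9.81;    // m/s^2
    private static final double THRESHOLD = 6.0;   // m/s^2, placeholder value

    public static boolean isDramatic(double ax, double ay, double az) {
        double magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
        return Math.abs(magnitude - GRAVITY) > THRESHOLD;
    }
}
```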


Tools We Are Using:

The tools we will use to process and visualize the data are JSP + Java. Java will be used for the system’s back-end and will collect sensor data remotely from the mobile phone. JSP will be used for the system’s front-end, to build the visualization. Here we may also use one or more JavaScript libraries to help with the visualization, for example Protovis and/or the Google Maps API.


The Source of the Data:

The entire architecture of our system has been set up. The mobile client, server, and desktop client work together to pass data. We are currently able to collect live sensor data from our device.


How We Might Evaluate the Final Result:
We will actually deploy this to see whether the saver can interpret the hiker’s contextual information correctly, though we are not going to travel to any mountainous areas.


Role Division:
Jessie and Gary have collected research data on the ways in which sensor data can be helpful to users, and they made the prototypes based on our group discussions. Zhenan and Sang have been exploring the technical requirements for building the system; they constructed the back-end data-capture functionality, which is now ready to feed the visualization.


High-level Plan:

We need to explore how to present the sensor data to the saver so that he can actually help the hiker, and we need to decide how to visualize live contextual data alongside historical contextual data. So far, we have determined that the saver can assist the hiker by viewing and communicating about terrain quality and distance, weather conditions, geographical coordinates, time, and light intensity. Such data is helpful because it provides a high-level, comprehensive view of the hiker’s context, and as such it can make hiking safer and more enjoyable for explorers.

High-Fidelity Prototype

March 31, 2010

Between the submission of the lo-fi prototype and the deadline for the hi-fi prototype, our ideas evolved into a different design project. We have identified the target audience as hiker monitors: people who watch the progress of a hiker while collecting contextual data from the hiker’s mobile phone and viewing that data on a large-screen PC.

The mock-up of our interface is displayed below. The left-hand side shows a Google Earth map of Yosemite and Half Dome, and the right-hand side contains data visualizations for speed, ambient light, and weather conditions. Time is represented with a scroll bar as well.

High-Fidelity Prototype for Mountain Hiker Contextual Data



When you click on the image above to view the full-sized interface, several features become apparent. The map is a satellite image of a trail hike in Yosemite National Park, with the trail outlined in red. The hiker’s current position is displayed with the icon of a man carrying a backpack. On the right-hand side are data visualizations for the hiker’s speed, the ambient light, and the weather. Positioned between the weather and ambient-light displays is a slider that can be used to adjust the time: by sliding the bar, data from a time frame spanning 24 hours can be viewed in the graphic displays.
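Behind the slider, the selection logic could be as simple as filtering the buffered 24-hour readings by timestamp. A hypothetical helper (the per-reading format is an assumption, not our final data model):

```java
// Hypothetical helper behind the time slider: given 24 hours of timestamped
// readings, return only those inside the selected window.
import java.util.ArrayList;
import java.util.List;

public class TimeWindow {
    public static List<double[]> select(List<double[]> readings,
                                        long windowStartMillis,
                                        long windowEndMillis) {
        // Each reading is assumed to be {timestampMillis, value}.
        List<double[]> inWindow = new ArrayList<double[]>();
        for (double[] r : readings) {
            if (r[0] >= windowStartMillis && r[0] <= windowEndMillis) {
                inWindow.add(r);
            }
        }
        return inWindow;
    }
}
```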

Lo-Fi Prototype

March 10, 2010

Live Visualization for our lo-fi prototype

Our lo-fi prototype designed with a computer


The illustration above displays the interface of our visualization. The five components are: ambient light, measured as the intensity of the sunlight in the upper right-hand corner; magnetic field, illustrated with the red triangle and blue waves extending toward the person; orientation, measured in terms of north, south, east, and west; the accelerometer, measured as the speed at which the phone is being moved; and proximity, the distance at which the phone is held from the person’s body.

The selection items in the upper right of the diagram determine which data is displayed in the graph positioned below it. For example, selecting ‘Ambient Light’ from the list will display the detailed and continuously updated data in the graph below (positioned in the lower right).


Hand-Drawn Sketch of our lo-fi prototype

This is a hand-drawn sketch of our lo-fi prototype.


This hand-drawn sketch is the intermediate phase between brainstorming and the detailed mock-up designed with the use of the computer (at top).

Webcam Visualization

Webcam Visualization


The image above shows a possible area for exploration: live capture of users’ activities by camera, a visualization using augmented reality. We may combine two techniques, brightness tracking and color sorting, to build our visualization. By using a color to represent each of the five sensors, we can visually display the data for each one. The benefit of using a camera is that the visualization becomes a more realistic and visually alive representation of the data.
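As a first cut at the brightness-tracking half of that idea, a single captured frame could be scanned for its brightest pixel in plain Java (frame capture itself is omitted; the class name and the simple brightness proxy are assumptions):

```java
// Sketch of brightness tracking on a single webcam frame: find the brightest
// pixel in a BufferedImage and return its coordinates.
import java.awt.image.BufferedImage;

public class BrightnessTracker {
    public static int[] brightestPixel(BufferedImage frame) {
        int bestX = 0, bestY = 0, best = -1;
        for (int y = 0; y < frame.getHeight(); y++) {
            for (int x = 0; x < frame.getWidth(); x++) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int brightness = r + g + b; // crude proxy for luminance
                if (brightness > best) {
                    best = brightness;
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return new int[] { bestX, bestY };
    }
}
```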


Use Cases

Our system is composed of three parts: (1) a mobile client that runs in the background on the mobile device, collecting sensor data and sending it back to a remote server; (2) a server that acts as middleware, storing the data and keeping it ready to be queried; and (3) a desktop application that fetches live data from the server and visualizes it for the purpose of understanding human behavior.
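To make part (1) concrete, the mobile client’s reporting step could be as simple as an HTTP POST; the sketch below assumes a placeholder URL and a comma-separated payload format, neither of which is final:

```java
// Sketch of the mobile client posting one reading to the server.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReadingUploader {
    public static void upload(String sensor, double value, long timeMillis)
            throws Exception {
        URL url = new URL("http://example.com/sensors/upload"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        String body = sensor + "," + value + "," + timeMillis; // assumed format
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();
        conn.getResponseCode(); // forces the request; check for 200 in practice
        conn.disconnect();
    }
}
```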

Our system can help in situations such as the following design case. Brian is developing a messaging application for mobile devices, and he wants his application to be able to adapt to the surrounding environment. Following traditional methods, he would conduct a contextual inquiry, build personas, and construct scenarios to support his design process. In our proposed approach, Brian instead installs the mobile client on users’ phones and collects data passively, without interrupting users’ regular tasks. With our system, he notices that users sometimes send text messages on the go under intense light, which reduces their typing performance. He can then devise a way (e.g., automatic brightness tuning or voice input) to make his application context-aware and adaptive to the changing environment.

Progress

The entire architecture of our system has been set up. The mobile client, server, and desktop client all work together to pass data, so at this stage we can obtain live, continuously updated sensor data. The next step is to present our data in an aesthetically pleasing and creative visualization. We will post additional ideas about the visualization as our brainstorming and planning progress.

March 10, 2010

We are currently using an Android phone from Google to collect sensor data and convey it to the computer. We plan to design an original visualization, most likely using Processing. We are exploring the possibilities surrounding the incorporation of video into the visualization, as well as alternatives, such as using Flash animation encoded within a Processing application.

March 10, 2010

The goal of our project is to design a system for monitoring individuals’ activities for research purposes. Many mobile computing projects require contextual data in order to design the desired application. As mobile devices become increasingly powerful, we can now obtain contextual data without a dedicated sensor network, which would be more expensive and complex.

Here, contextual data refers to users’ environmental and activity data.  Our project can help developers obtain this form of data when designing context-aware systems.

March 9, 2010

We are currently working on the lo-fi prototype, which will be posted by 5:00 pm on March 10th, 2010.

This is the website for the SI 649 team working on the Smart Phone visualization. Our project has five main components that we plan to visualize with data (a sensor-registration sketch follows the list):

1. Accelerometer data

2. Data from an orientation sensor

3. A light sensor, to measure the level of light

4. A magnetic field sensor

5. A proximity sensor
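As referenced above, the five components map directly onto Android’s built-in sensor constants, so a single listener could be registered for all of them. A hedged registration sketch (the class structure is illustrative only):

```java
// The five components above correspond to Android's sensor constants;
// register one listener for each available sensor.
import android.hardware.Sensor;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class SensorSetup {
    private static final int[] TYPES = {
        Sensor.TYPE_ACCELEROMETER,   // 1. accelerometer
        Sensor.TYPE_ORIENTATION,     // 2. orientation
        Sensor.TYPE_LIGHT,           // 3. light level
        Sensor.TYPE_MAGNETIC_FIELD,  // 4. magnetic field
        Sensor.TYPE_PROXIMITY        // 5. proximity
    };

    public static void registerAll(SensorManager sm, SensorEventListener l) {
        for (int type : TYPES) {
            Sensor s = sm.getDefaultSensor(type);
            if (s != null) { // skip sensors the handset does not have
                sm.registerListener(l, s, SensorManager.SENSOR_DELAY_NORMAL);
            }
        }
    }
}
```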

Our goals and objectives with regard to the practical application of these components will be posted shortly.