ABSTRACT

Over the past two decades, we have witnessed enormous growth in the use of online digital maps. Google Maps, Bing Maps, and OpenStreetMap rank as the top three in the online maps market in the United States. Based on visual perception theories, we established several evaluation metrics and conducted experiments to assess how these maps’ default roadmap views differ from one another, and which map makes it easier to visually locate destinations at different zoom levels, without using the search function. By integrating user testing and surveys, we collected quantitative and qualitative data for analysis. The findings indicate that, in general, Google Maps received better ratings for its visual design and was perceived as easier for locating destinations, while the performance of Bing Maps and OpenStreetMap varied across zoom levels. Finally, we compare the visual design differences among these maps and categorize design strategies for improving the visual design of digital maps.

Author Keywords

Online maps; visual perception; design strategies; evaluation.

INTRODUCTION

Cartography, the science of making maps, has a history spanning thousands of years. It combines science, technique, and aesthetics. Advances in computer and internet technology launched the digital revolution of maps: spatial information began to be digitized and shared across the internet.
Released by Esri in the early 1980s, ArcInfo was the first commercially available GIS software package for inputting, processing, visualizing and maintaining geospatial data [2].  
In 1993, Steve Putz developed the Xerox PARC Map Viewer, considered to be the first online map-building application [3]. Launched in 1996, MapQuest offered the first online wayfinding service. In 2005, Google Maps introduced extensive search features, making it the most prominent map service on the World Wide Web. Its tile-based and API-based mapping methods were quickly adopted by competitors such as Microsoft’s Bing Maps, OpenStreetMap, and Baidu Maps [17]. In 2012, Apple released its own mapping service to replace Google Maps in the Apple operating system; however, as of the time of this study, Apple’s MapKit API was not yet available for public website use [12].
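The tile-based approach mentioned above serves the world as a pyramid of square raster tiles, with 2^z tiles per axis at zoom level z. As an illustration (this is the widely used Web Mercator "slippy map" convention, not code from any system cited here), a coordinate can be converted to a tile index as follows:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to Web Mercator (x, y) tile indices."""
    n = 2 ** zoom  # number of tiles per axis at this zoom level
    lat = math.radians(lat_deg)
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

print(latlon_to_tile(0.0, 0.0, 1))  # → (1, 1)
```

Because every provider that adopted this scheme computes the same indices, map clients can request any provider's tiles with a simple URL template, which is part of why the approach spread so quickly among competitors.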
According to Datanyze, a leading company in technographics, the top three players in the online maps market for website usage in the United States are Google Maps, Bing Maps, and MapQuest, with Google Maps holding more than 90% of the market share [7]. It should be noted that HERE is one of the main map data providers for Bing Maps, and OpenStreetMap is the data provider for MapQuest.
Figure 1. Online map market share in the United States in November 2016, provided by Datanyze.
Multiple factors may contribute to a map’s popularity, such as functionality, interactivity, accuracy, and visual interface. How do the top three online maps differ from one another? In this study, we examine the advantages and disadvantages of these three map services, specifically from the perspective of cartographic visual design principles and their effectiveness for locating destinations in the default roadmap views.

BACKGROUND

In digital cartography design and evaluation, multiple preceding studies provided different perspectives, mediums, and use cases. For example, in 1991 Anthony J. Aretz published a paper in Human Factors presenting an experiment evaluating pilots’ navigation performance on two map displays: track-up and north-up [1]. In 1992, animation and the role of map design for large spatio-temporal data analysis was studied [8]. In 1998, Brooks and Green published a report on experiments concerning the amount of information displayed on electronic maps while driving [4]. In 2000, Wickens discussed human factors in vector-based map design, stating that “no one map is best suited for all tasks” [15]. In 2005, a study of map design on mobile displays evaluated the effectiveness of map design on a comparatively small screen [9]. In 2007, a usability test was conducted to evaluate web maps’ zoom and pan functions [6]. In 2013, a study on visual perception and map design was published in the Cartographic Journal [16]. These studies helped pave the way for establishing cartographic design principles in digital maps.

HYPOTHESIS

Based on its established loyalty among users, we predicted that Google Maps would be the most preferred and highest-rated map for its overall visual design and ease of use during visual search tasks.

METHODS

The following subsections (Subjects, Procedure, and Data Collection and Analysis) describe the experiment that was conducted and the methods used to do so.

Subjects

Due to time constraints, this study used a convenience sample of students and faculty from Purdue University. There were thirteen subjects in total. Subjects were given an alphanumeric identification code to submit for each survey they completed in order to maintain anonymity.

Procedure

Pre-Experiment Survey
Participants were asked to complete a short survey before the experiment in order to collect information on their demographics, map usage and preferences, familiarity with visual design terms, and whether they had any visual impairments.
Experiment
There were three different randomized comparative tests designed to evaluate and compare the three maps’ visual interfaces. Each test had three rounds. Within each round were three task series, each representing a different map platform (Google Maps, Bing Maps, or OpenStreetMap) at a different static zoom level (level 1 = 9–11 depending on the city, level 2 = 12, level 3 = 14). Only roadmap views were tested. Each test’s three rounds were ordered differently in order to remove potential bias caused by the sequence in which the maps were presented. Each task series contained three to four tasks, resulting in about 12 tasks per round: 1. find a road, 2. find a building, 3. find a water body, 4. find a park. The name of each task item (road, building, water body, park) was specified during the experiment so the subject knew which one to look for on the map being displayed. The map views and task items for each map platform were drawn from three different cities: Birmingham, AL, US; Singapore; and Bristol, UK. We used three cities from three different countries in order to account for any familiarity that subjects might have with one region over another, and for any city layout that might favor certain map platforms over others. Figure 2 illustrates the model followed when designing each test.
Figure 2. The diagram followed for the design of the three different test versions for the experiment.
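The round ordering described above can be counterbalanced with a Latin square, in which each platform appears exactly once in each presentation position across the three rounds. The paper does not specify the exact ordering scheme used, so the construction below is a minimal illustrative sketch:

```python
PLATFORMS = ["Google Maps", "Bing Maps", "OpenStreetMap"]

def latin_square(items):
    """Rotate the item list so each item appears once in each position."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]

rounds = latin_square(PLATFORMS)
# Every platform occupies each presentation position exactly once:
for pos in range(len(PLATFORMS)):
    assert {order[pos] for order in rounds} == set(PLATFORMS)
```

A design like this ensures that no platform is systematically seen first (when subjects are fresh) or last (when fatigue sets in), which is the bias the differing round orders were meant to remove.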
Using Microsoft PowerPoint as a quick and easy solution, we designed a series of click-through slides for each testing group. Each destination to be located, such as a road, water body, park, or building, was covered with a clickable hotspot over its name. Upon finding a location, subjects clicked on its name, which linked to the next task slide. The experiment was conducted in a large computer lab at Purdue University (see Figure 3). Three subjects participated at a time (separately), with one researcher per subject taking notes and making observations. Although three different test versions were created for random assignment, due to the small number of participants only two of the versions were issued to subjects.
Figure 3. A researcher observing and subjects participating in the experiment in a computer lab at Purdue.
End-of-Round Surveys
After completing each round within a test, subjects completed a brief survey that asked about their perception of how easy or difficult it was to complete the tasks for the maps they had just viewed. They were also asked to rate the quality of the map’s visual contrast of roads, boundaries, and color; the legibility of names and symbols; and the hierarchical organization of information. Figure 4 shows what one of these surveys looked like.
Figure 4. A sample of what part of the End-of-Round Surveys looked like.
Post-Experiment Interview
Upon completion of the test, subjects were briefly interviewed by their assigned researcher to find out which map they thought was the easiest to search from, which map was the most difficult to search from, and their overall impressions of the experiment. They were also offered the chance to provide any additional comments, feedback, or suggestions for improving the experiment. The researchers took handwritten notes of this feedback.

Data Collection and Analysis

The pre-experiment survey and end-of-round surveys were designed in Google Forms. This application automatically collects submitted data and turns it into totals, percentages, visual charts, tables, and diagrams, which saved the time and effort that would have been spent documenting and calculating results had the surveys been on paper. The main drawback of this method is that the surveys are not linked together, so comparing end-of-round surveys to the corresponding demographic data from the pre-experiment survey had to be done manually. After retrieving the results from these forms, comparisons were made between each map’s ratings. Averages and totals were calculated manually for each zoom level, task, and map and compared against the other zoom levels, tasks, and maps. Some interesting patterns revealed in the data analysis are discussed in the “Results” section of this paper. The rest of the data were recorded by hand.
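The manual aggregation described above amounts to grouping ratings by (map, zoom level, category) and averaging within each group. A small sketch of that computation over hypothetical survey rows (the field names and values are invented for illustration, not taken from our actual data):

```python
from collections import defaultdict

# Hypothetical rows exported from the end-of-round survey forms: each row
# holds a map platform, zoom level, rating category, and a 1-5 rating.
rows = [
    {"map": "Google", "zoom": 1, "category": "legibility", "rating": 5},
    {"map": "Google", "zoom": 1, "category": "legibility", "rating": 4},
    {"map": "Bing",   "zoom": 1, "category": "legibility", "rating": 3},
]

# Group ratings by (map, zoom, category), then average each group.
groups = defaultdict(list)
for r in rows:
    groups[(r["map"], r["zoom"], r["category"])].append(r["rating"])

averages = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(averages[("Google", 1, "legibility")])  # → 4.5
```

Scripting the aggregation this way would also sidestep the unlinked-surveys problem, since pre-experiment and end-of-round rows could be joined on the subjects' identification codes.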

RESULTS

Pre-Experiment Surveys

Two of our thirteen subjects’ pre-experiment surveys failed to submit, so the demographic information reported here does not reflect their responses. The results from the remaining eleven surveys revealed the following about our subjects: 90.91% were between the ages of 18 and 34, and the other 9.09% were between 45 and 54. 81.8% were international students, mostly from Asia. There were slightly more female (54.5%) subjects than male (45.5%). Figures 5 through 7 present results from the questions about map application usage.
Figure 5. When asked about map application usage, 100% of the subjects reported that they use Google Maps and another question response revealed that 100% of the subjects use that application most often.
Figure 6. Locating buildings, getting driving directions, and determining the distance between two locations were the top-rated activities when using map applications.  
Figure 7. Subjects reported using map applications on a daily or weekly basis, with most using them weekly.  

Experiment

During the experiment, many observations were made of the subjects’ body language. Due to inconsistent screen resolutions, some subjects were observed leaning in toward their screens to see better, and a few even squinted their eyes. One subject remarked that one of the maps (Bing Maps) looked a bit blurry and that the road names were difficult to read, while the other maps were fine to them. One interesting observation about the visual designs is that Bing Maps uses an italic font for building names and some road names while the other two maps do not (see Figure 8). It also surrounds all text with white outlines. One or both of these choices may have contributed to its poor legibility.
Figure 8. Bing Maps has italicized names for buildings and white outlines for all text.
Several participants said it was very hard to see and find objects when viewing OpenStreetMap because the map showed an overload of information (colors, boundaries, labels, etc.) with poor information hierarchy. Figure 9 illustrates the congestion of OpenStreetMap.
Figure 9. OpenStreetMap shows many colors, boundaries, labels, and regions, making it very difficult for subjects to locate buildings and roads.

End-of-Round Surveys

The end-of-round surveys produced a large amount of data. To condense it into something more readily understood, the results were aggregated into charts for side-by-side comparison. The images below in Figure 10 show some of those comparisons for each zoom level. The numbers in the charts represent the total number of subjects who marked the category at each ranking level.
The data indicate that Google Maps had the best overall ratings for all zoom levels and categories tested. Microsoft Bing Maps had poor legibility and hierarchy at zoom level 2, but the best legibility of symbols and strong hierarchical organization at zoom level 3. OpenStreetMap had the worst ratings overall. At zoom levels 1 and 2, OpenStreetMap ranked lowest in all categories; however, at zoom level 3 its performance increased dramatically in all categories except the legibility of symbols.
Figure 10. The three images show the visual design rankings subjects gave for Google Maps, Bing Maps, and OpenStreetMap, and the number of subjects who ranked each map in each category. The top image shows the results for all maps at zoom level 1, the middle image at zoom level 2, and the bottom image at zoom level 3.
Figure 11. The three images show the destination-finding rankings subjects gave for Google Maps, Bing Maps, and OpenStreetMap, and the number of subjects who ranked each map in each category. The top image shows the results for Google Maps at all zoom levels, the middle image for Bing Maps, and the bottom image for OpenStreetMap.
The images above in Figure 11 show the ease and difficulty ratings for the destination-finding tasks. The aggregated data suggest that, in general, for all maps at all zoom levels, finding a road is the most difficult task and finding a park is the easiest. Google Maps again had higher ratings for ease of finding destinations than the other two maps. For Google Maps, at zoom level 1, all four tasks were rated easy/very easy; at zoom levels 2 and 3, finding a road was considered hard/very hard, while most of the other tasks were rated easy/very easy. For Bing Maps and OpenStreetMap, finding a road was hard at almost all zoom levels; however, at zoom level 3, finding a water body, building, or park was considered easier than at zoom levels 1 and 2.

Post-Experiment Interviews

After each test, the experiment facilitator conducted a short interview with the subject to collect qualitative feedback on their experience completing the tasks, as well as opinions on the experiment design. Most subjects said that OpenStreetMap made it hard to finish the destination-finding tasks, especially finding a road. Some also argued that “if I were looking for a building or place, I would just use the search bar, not scan the map,” and some mentioned that they used colors or shapes as a reference to find a place, such as parks and buildings. Subjects also noted that they initially assumed one of the maps would always be best; in fact, the visual design quality and helpfulness for finding a location varied across zoom levels. One subject stated, “I liked this experiment. It felt like I was playing a game,” yet some participants complained that the tests were too long, taking them more than half an hour to complete. One subject suggested, “you should use eye tracking and evaluate cognition to find out the strategies people use to find things. It’s not just a visual thing, but also a cognition thing.” This feedback gave us valuable insights, especially for improving the experiment design.

DISCUSSION

Suggestions for Improving the Visual Design of Maps
Based on the results of our experiment, it is evident that the visual design of Google Maps was the best rated among our subjects for visual search tasks at static zoom levels. Google successfully uses text-size variation, restrained color variation, and text styles that are easy to read and see, creating an information hierarchy that helps the viewer complete visual search tasks. That said, if the experiment had allowed users to use the search feature or other features of the maps, the results might have been different. Likewise, had we used the satellite view or other views of each map platform, the results might have differed, and perhaps a different map would have come out on top. Given the context of our tests, the ratings for the visual design and ease of task completion of Bing Maps might improve with more appropriate text styles and greater variation in text sizes to create a clearer information hierarchy. OpenStreetMap, while it has attractive colors and a wealth of information, might improve its ratings on a test like this by reducing colors and removing unnecessary information from its default roadmap view, which would improve the information hierarchy.
Constraints and Ideas for Future Improvement of This Experiment
Due to the time allotted for the experiment and the lack of researchers needed to carry out a larger-scale testing plan, there were constraints preventing further exploration of design theories and other variables affecting visual perception. For example, during one of the post-experiment interviews, a subject suggested that we use eye-tracking software to examine the paths subjects take to locate places. Subjects may have different searching strategies that influence their perception of how easy or difficult something is to find, depending on how compatible the visual design of the map they are viewing is with their searching strategy. Additionally, it was suggested that we ask subjects during the interviews and end-of-round surveys what their searching strategies were. Understanding searching strategies addresses cognitive factors that influence visual perception. If we were to repeat this experiment, there are several adjustments we would make that we previously overlooked:
  1. Shorten the tests: some of the tasks were redundant and possibly not necessary to repeat for every zoom level. The length of the tests caused fatigue in some subjects, which could have influenced the way they responded during their surveys if they tried rushing through them in order to finally finish. Fatigue could have also influenced subjects’ perceptions about how easy or difficult it was to locate places on the maps.
  2. Record time on task: due to the software we used for the test, it was not feasible to record time on task.
  3. Record eye movement: this would allow us to look for searching patterns.
  4. Ask subjects what their strategies were for locating places in order to understand how that may have influenced their experience.
  5. Evaluate only one visual design principle instead of multiple at once: this would allow us to dig much deeper into the evaluation of the interface for that given design principle.

CONCLUSION

In this paper, we describe an experiment conducted to evaluate the visual design of the three top-ranked online maps in the US market: Google Maps, Bing Maps, and OpenStreetMap. Through two tests, we assessed how these maps differ from one another by examining their implementation of visual perception theories and which map makes it easier to visually locate destinations at different zoom levels, without using the search function. The results show that, in general, Google Maps had better ratings for its visual design and was perceived as easier for locating destinations. The performance of Bing Maps and OpenStreetMap varied across zoom levels, while the performance of Google Maps remained consistent throughout. Further exploration is needed to validate these findings. Finally, we compared the visual design differences among these maps and categorized design strategies for improving the visual design of digital maps.

ACKNOWLEDGEMENTS

We thank our colleagues from Purdue University whose participation and comments greatly assisted this project. The data and feedback we obtained from them helped us summarize these results. We also thank our instructor, Dr. Chen, who assigned this project to us this year and provided insight into the experimental part of our project.

REFERENCES

  1. Aretz, A. J. 1991. The design of electronic map displays. Human Factors: The Journal of the Human Factors and Ergonomics Society 33, 1, 85-101. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.841.9984&rep=rep1&type=pdf
  2. Bishop, W., & Grubesic, T. H. 2016. Geographic Information, Maps, and GIS. In W. Bishop & T. H. Grubesic (Eds.), Geographic Information: Organization, Access, and Use (pp. 11–25). Cham: Springer International Publishing. Retrieved from http://dx.doi.org/10.1007/978-3-319-22789-4_2
  3. Bossomaier, T. & Hope, B. A. 2015. Online GIS and Spatial Metadata, Second Edition (2nd ed.). CRC Press, Inc., Boca Raton, FL, USA.
  4. Brooks, A., & Green, P. 1998. Map design: A simulator evaluation of the factors affecting the time to read electronic navigation displays. No. UMTRI-98-7. Retrieved from https://deepblue.lib.umich.edu/ bitstream/handle/2027.42/1234/91801.0001.001.pdf?sequence=2
  5. Buckley, A. 2011. Design principles for cartography. Retrieved December 15, 2016 from https://blogs.esri.com/esri/arcgis/2011/10/28/design-principles-for-cartography/
  6. Chen, C., Lin, H., Liu, H., & You, M. 2007. A usability evaluation of web map zoom and pan functions. International Journal of Design 1, no. 1. Retrieved from http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/31/11
  7. Datanyze. 2016. Maps market share in United States. Retrieved December 15, 2016 from https://www.datanyze.com/market-share/maps/United%20States
  8. DiBiase, D., MacEachren, A.M., Krygier, J.B., & Reeves, C. 1992. Animation and the role of map design in scientific visualization. Cartography and geographic information systems 19, no. 4, 201-214. Retrieved from https://www.geovista.psu.edu/publications/1992/DiBiase_C&GIS_92.pdf
  9. Dillemuth, J. 2005. Map Design Evaluation for Mobile Display. Cartography and Geographic Information Science, 32(4), 285–301. https://doi.org/10.1559/152304005775194773.
  10. Dobraszczyk, P. 2012. City reading: the design and use of nineteenth-century London guidebooks. Journal of Design History, eps 015. Retrieved December 15, 2016 from http://s3.amazonaws.com/academia.edu.documents/35561617/Dobraszczyk__City_Reading__JDH__2012.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1481879095&Signature=cUYscSkWJaW2Lt5oQsepwghSxJk%3D&response-content-disposition=inline%3B%20filename%3DCity_Reading_The_Design_and_Use_of_Ninet.pdf
  11. Lynch, Kevin. 1960. The image of the city. Vol. 11. MIT press, 1960. Retrieved from http://italianstudies.nd.edu/assets/68866/lynch.pdf
  12. Mayo, B. 2016. Embedded Apple Map on WWDC site suggests official public MapKit web API coming soon. Retrieved December 15, 2016 from https://9to5mac.com/2016/04/19/embedded-apple-map-on-wwdc-site-suggests-official-public-mapkit-web-api-coming-soon/
  13. O’Beirne, J. 2016. Cartography Comparison: Google Maps & Apple Maps. Retrieved December 15, 2016 from http://www.justinobeirne.com/cartography-comparison
  14. Painter, L. 14 Jul 2015. Apple Maps vs Google Maps comparison review: Apple’s public transport directions in IOS 9 might give it the edge. Retrieved December 15, 2016 from http://www.macworld.co.uk/review/reference-education/apple-maps-vs-google-maps-comparison-review-googles-maps-still-better-apple-maps-3464377/
  15. Wickens, C. D. 2000. Human factors in vector map design: The importance of task-display dependence. Journal of Navigation 53, no. 01, 54-67. Retrieved from https://www.cambridge.org/core/journals/journal-of-navigation/article/div-classtitlehuman-factors-in-vector-map-design-the-importance-of-task-display-dependencediv/42D6053F5512D5CC99EA61D9C2FCC6E4
  16. Wood, M. (1968). Visual Perception and Map Design. The Cartographic Journal, 5(1), 54–64. https://doi.org/10.1179/caj.1968.5.1.54
  17. Zentai, L. 2015. The Evolution of Digital Cartographic Databases (State Topographic Maps) from the Beginnings to Cartography 2.0: The Hungarian Experience. In J. Brus, A. Vondrakova, & V. Vozenilek (Eds.), Modern Trends in Cartography: Selected Papers of CARTOCON 2014 (pp. 81–92). Cham: Springer International Publishing. Retrieved from http://dx.doi.org/10.1007/978-3-319-07926-4_7