\section{Results}
\label{results}
In the baseline survey, nine out of 10 users agreed or strongly agreed that there is untapped value in completed study data: “There’s clearly a treasure trove of stuff but the way it’s entered [in repositories], it’s just a pile. It’s really hard to sift through. That’s what I’ve found when I tried; it takes so long to figure it out and I didn’t even bother.” After using the DataSpace in beta, all 10 agreed there is untapped value. All 10 also agreed (seven) or strongly agreed (three) at baseline that data sharing was personally important to them, with four changing to “strongly agree” after beta.
The DataSpace produced large and statistically significant improvements on our top two metrics: the speed and ease of answering basic factual questions about past work (mean difference of 1.22 on a 5-point scale) and of exploring and interpreting data from other investigators (mean difference of 2). After beta, nine out of 10 agreed or strongly agreed that the DataSpace is faster and easier than previous options for exploring and answering questions, that it is personally valuable, and that they will use it again. The 10th user felt that his role did not benefit from the studies currently included in the DataSpace.
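For concreteness, a mean difference here can be read as the average per-user change between the baseline and post-beta surveys; a minimal formulation, assuming paired Likert scores $x_i^{\mathrm{pre}}$ and $x_i^{\mathrm{post}}$ for each of the $n = 10$ users, is
\begin{equation*}
\bar{d} \;=\; \frac{1}{n} \sum_{i=1}^{n} \left( x_i^{\mathrm{post}} - x_i^{\mathrm{pre}} \right).
\end{equation*}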
Eight out of 10 reported a positive effect on their work, such as improved awareness of activity in the field (six), answering a question or assessing the potential of a hypothesis (five), or helping assess others’ grants, papers, or presentations (four). One investigator needed details of his own past results while writing a progress report for a grant and found the DataSpace easier than other methods even though he had the original files.
We hypothesized that the greatest opportunity lay not in detailed analysis of individual assays from single studies but in cross-assay and cross-study comparison. All but one user reported that these scenarios accounted for most of their self-directed usage. Users also confirmed the importance of contextual metadata: two of 10 reported spending more time learning about the work than exploring the data, and three reported doing both equally. The Learn section has detailed entries on hundreds of studies and products, while only 17 studies currently have subject-level data for exploration.
As we hoped, beta users found several important data errors, which we corrected before launch. Unfortunately, several users reported that these issues gave them a negative impression of the interface: when data were not as expected, they assumed there was an interface problem. Future users, who will not encounter this challenge, may provide a better sense of naïve usability. Two of our users had unusually low screen resolutions that caused unanticipated interface problems. Despite these challenges, the System Usability Scale (SUS) score was 70 out of 100, slightly above average according to the scale’s creator.
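For reference, standard SUS scoring sums contributions from 10 Likert items scored 1--5, with odd-numbered (positively worded) items contributing $s_i - 1$, even-numbered (negatively worded) items contributing $5 - s_i$, and the total scaled by 2.5 to a 0--100 range:
\begin{equation*}
\mathrm{SUS} \;=\; 2.5 \left[ \sum_{i \in \{1,3,5,7,9\}} (s_i - 1) \;+\; \sum_{i \in \{2,4,6,8,10\}} (5 - s_i) \right].
\end{equation*}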
Several investigators reported difficulty finding time for beta participation after the assigned tasks were over, feeling “overcommitted” or temporarily working on a project that was less relevant to the DataSpace. Web analytics confirmed a significant decline in usage after the assigned tasks ended. We believe this is an important indication of future needs and discuss it in Plans below.
We also asked beta investigators how they would describe the DataSpace to another investigator. “It’s a free goldmine for quick access of data and for testing hypotheses with data someone already generated.” “Plotting is really awesome… it’s really easy to navigate to what you want.” “A great source of data and inspiration for potential collaborations.” “It’s giving me the freedom to play with the data. It fills a niche that is totally empty right now.” “You’re investing upfront to allow a lot of downstream work that would never be able to be done just because of the lack of resources.”