The push task performer currently interfaces with Twitter and Facebook and implements an adaptive flow control algorithm to send out messages via dedicated Social Computer bot accounts on those platforms. To control the volume of messages, it maintains a compute stream, a fused and ranked list of selected tasks from all active compute processes. How many tasks from a particular process appear in this stream, and when they are sent out, depends on a) the number of active processes, b) the number of content elements in those processes, c) the number of human participants who provide input in response to published tasks, d) the rate at which input comes in, and e) the kind of input. This built-in instruction interface also pulls instructions applied to those published tasks back from Twitter and Facebook by monitoring how people respond to the task messages.
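To make this flow control concrete, the following minimal Python sketch ranks tasks into a compute stream and releases them at a capped rate. All class and parameter names (ProcessState, ComputeStream, max_msgs_per_minute) are hypothetical illustrations, and the ranking formula only exemplifies how factors a)–d) could be combined; the kind of input (e) is omitted for brevity.

```python
from dataclasses import dataclass, field
import heapq
import time


@dataclass
class ProcessState:
    """Per-process signals driving the flow control (hypothetical fields)."""
    process_id: str
    content_elements: int        # (b) content elements in the process
    participants: int            # (c) humans responding to published tasks
    input_rate_per_hour: float   # (d) rate at which input comes in


@dataclass(order=True)
class RankedTask:
    priority: float
    process_id: str = field(compare=False)
    text: str = field(compare=False)


class ComputeStream:
    """Fused, ranked list of tasks selected from all active compute processes."""

    def __init__(self, max_msgs_per_minute: float = 6.0):
        self.max_msgs_per_minute = max_msgs_per_minute
        self._heap: list[RankedTask] = []
        self._last_sent = 0.0

    def add(self, process: ProcessState, task_text: str, n_active_processes: int) -> None:
        # Illustrative ranking only: richer, more responsive processes rank higher,
        # while many concurrently active processes (a) dilute each one's share.
        score = (process.content_elements
                 * (1 + process.participants)
                 * (1 + process.input_rate_per_hour)) / max(1, n_active_processes)
        heapq.heappush(self._heap, RankedTask(-score, process.process_id, task_text))

    def next_message(self) -> RankedTask | None:
        # Adaptive flow control: release a task only if the global send rate allows it.
        now = time.time()
        if self._heap and now - self._last_sent >= 60.0 / self.max_msgs_per_minute:
            self._last_sent = now
            return heapq.heappop(self._heap)
        return None
```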
This modular and data-centric system design makes it possible to spin up a Social Computer instance that is already connected to a set of remote platforms, and also to expand it easily by means of add-on instruction interfaces that link further mediated platforms in order to reach the participants there. The availability of a Linked Data interface to the raw procedural data generated by the Social Computer allows add-on instruction interfaces to bypass the REST API for read-only access, and prepares the system for settings in which data from multiple independently running Social Computer instances needs to be integrated in a lightweight fashion.
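As a hedged illustration of this read-only path, the following sketch merges the procedural data of two hypothetical Social Computer instances via their Linked Data interfaces using rdflib; the endpoint URLs and the sc: vocabulary are assumptions for the example, not part of the actual system.

```python
from rdflib import Graph

# Hypothetical Linked Data endpoints of two independently running instances.
INSTANCE_ENDPOINTS = [
    "https://sc-instance-a.example.org/data/processes.ttl",
    "https://sc-instance-b.example.org/data/processes.ttl",
]


def load_procedural_data(endpoints: list[str]) -> Graph:
    """Merge the raw procedural data of several instances into one RDF graph."""
    merged = Graph()
    for url in endpoints:
        merged.parse(url, format="turtle")  # read-only access, REST API bypassed
    return merged


def list_processes(graph: Graph):
    """List all compute processes found across the merged instances
    (assumes an illustrative sc: vocabulary)."""
    query = """
        PREFIX sc: <https://example.org/social-computer#>
        SELECT ?process ?state WHERE { ?process a sc:ComputeProcess ; sc:state ?state . }
    """
    return list(graph.query(query))


if __name__ == "__main__":
    graph = load_procedural_data(INSTANCE_ENDPOINTS)
    for process, state in list_processes(graph):
        print(process, state)
```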
Conclusions, Challenges and Future Directions
Running Social Computers at Various Scales
Extensive testing with varying observer configurations, different data sources, and both open and closed context use cases is needed to understand the Social Computer at work and to compare its capabilities to those of other forms of human computation. Depending on the scenario, different dimensions of the Social Computer will be challenged and different insights about its practical application can be gained.
Real-time event response
Responding to events on social media in near real-time is an increasingly important topic. The most well-known use cases are in the area of digital disaster response, where social media streams are mined for meaningful information or used to quickly spread messages related to an ongoing humanitarian crisis. Despite the great prospects that have been identified, there are also great challenges. These are mostly related to the specific character of short social media messages, which strains state-of-the-art natural language processing, and to the general information overload on social media generated by advertisement bots that pollute hashtag streams and by people promoting meaningless messages that then go viral.
In this context, our Social Computer, intended to run as an open audience instantiation, spins up processes automatically when there is a burst of activity around a particular content pattern that is contextualised with a humanitarian crisis. Apart from a seed list of crisis-related terms, such an instantiation is essentially agnostic to contextual information; it will respond to any activity that exceeds the thresholds. The first challenge, therefore, is quick process and task validation, because one has to expect massive amounts of information that are not meaningfully related to crisis relief. PRIO and RESOLVE instructions that come in as early responses need to be analysed to determine whether any messages in the burst are meaningfully related to a real-world event identified by the extracted content pattern. If a relevant share of messages is positively validated in this way, all other contributions can be used to carry out further tasks that help to relieve the crisis situation (e.g. enriching content by providing translations of requests for help).
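The following sketch illustrates one way such threshold-based process spin-up could look; the seed terms, window size, burst threshold and the spawn_process helper are all hypothetical placeholders rather than the system's actual configuration.

```python
from collections import defaultdict, deque
import time

# Hypothetical parameters; actual seed terms and thresholds are configuration choices.
CRISIS_SEED_TERMS = {"earthquake", "flood", "evacuation", "#help"}
WINDOW_SECONDS = 15 * 60
BURST_THRESHOLD = 200          # messages per window before a process is spun up


class BurstDetector:
    """Spins up a compute process when activity around a crisis-related
    content pattern exceeds a threshold within a sliding time window."""

    def __init__(self):
        self.windows: dict[str, deque[float]] = defaultdict(deque)

    def observe(self, message_text: str, timestamp: float | None = None) -> list[str]:
        timestamp = timestamp or time.time()
        spawned = []
        for term in CRISIS_SEED_TERMS:
            if term in message_text.lower():
                window = self.windows[term]
                window.append(timestamp)
                # Drop observations that have fallen out of the sliding window.
                while window and timestamp - window[0] > WINDOW_SECONDS:
                    window.popleft()
                if len(window) >= BURST_THRESHOLD:
                    spawned.append(self.spawn_process(term))
                    window.clear()     # avoid re-triggering on the same burst
        return spawned

    def spawn_process(self, content_pattern: str) -> str:
        # Placeholder: in the real system this would create a compute process whose
        # first tasks ask participants to validate the burst (PRIO/RESOLVE responses).
        return f"process:{content_pattern}:{int(time.time())}"
```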
By far the largest challenge for such a global-scale open audience Social Computer instantiation is bootstrapping a community of contributors that is large enough to let the system evolve autonomously. It needs to be studied what the ideal size and structure of a seed community is that, for example by issuing meaningful SHARE instructions, increases the visibility and credibility of the Social Computer instance for potential participants who have never encountered it before.
Coordinating idea exchange and emergent knowledge work
Organisations easily suffer from the problem that people in different organisational units work on similar things or have similar ideas but do not know of the other units' activities. On top of this sits the interoperability issue of the different tools used by the parties involved in a collaborative project for time planning and scheduling, note taking, information sharing and organisation, and brainstorming.
A closed group instance of our Social Computer seeks to act as a lightweight interface across the information management infrastructure of an organisation. It links up as many information sharing channels as possible to consolidate all non-confidential interactions into one large stream that is examined for bursty patterns. Even though an individual may locally not perceive a current topic as relevant, the Social Computer will flag up macroscopic patterns and raise the topic's importance for the individual, who would otherwise potentially remain inactive on that matter. This approach can be regarded as an activity-based intervention mechanism that stimulates ad-hoc collective action by broadcasting contributions, and it may even have the potential to mitigate issues arising from cross-understanding \citep*{HUBER_2010} and similar constructs.
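A minimal sketch of how such macroscopic flagging could work is given below; the Interaction record, the thresholds and the unit/topic fields are hypothetical stand-ins for whatever the channel adapters would actually deliver.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Interaction:
    unit: str          # organisational unit of the author
    topic: str         # extracted content pattern
    confidential: bool


def flag_macroscopic_topics(interactions: Iterable[Interaction],
                            org_threshold: int = 25,
                            local_threshold: int = 5) -> list[str]:
    """Flag topics that are big organisation-wide but small in every single unit,
    i.e. activity an individual would likely not perceive as relevant locally."""
    org_counts: Counter[str] = Counter()
    unit_counts: Counter[tuple[str, str]] = Counter()
    for it in interactions:
        if it.confidential:          # only non-confidential interactions are consolidated
            continue
        org_counts[it.topic] += 1
        unit_counts[(it.unit, it.topic)] += 1

    flagged = []
    for topic, total in org_counts.items():
        max_local = max(count for (_, t), count in unit_counts.items() if t == topic)
        if total >= org_threshold and max_local < local_threshold:
            flagged.append(topic)
    return flagged
```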
It may appear that such a bot sending messages between people in an organisation increases information overload and hence distracts them from their actual work. This motivates studies of the volume of messages created under a particular configuration and simulated organisational setting (e.g. number of employees, number of emails per day, typical content patterns of emails). To be most informative, such investigations should bring together an analysis of the organisational network structures (people, units, tools) with a quantitative investigation of how information flows both dependently and independently of those networks. Studying the flow through those networks helps to understand the context-dependent roles of people and the importance of certain systems. Looking at information evolution independently of those networks complements this with a macroscopic view of the time-dependent topology of information within an organisation.
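As a starting point for such studies, the following toy simulation estimates how many broadcasts a day of organisational traffic might trigger under one fixed and entirely hypothetical configuration; real studies would vary these parameters systematically and use empirical content patterns.

```python
import random

# Hypothetical simulation parameters (not derived from any real organisation).
N_EMPLOYEES = 250
EMAILS_PER_EMPLOYEE_PER_DAY = 40
N_TOPICS = 60                     # distinct content patterns circulating internally
BURST_THRESHOLD = 30              # topic mentions per day that trigger a broadcast


def simulate_day(rng: random.Random) -> int:
    """Estimate how many Social Computer broadcasts one working day would generate."""
    mentions = [0] * N_TOPICS
    for _ in range(N_EMPLOYEES * EMAILS_PER_EMPLOYEE_PER_DAY):
        # Zipf-like topic choice: a few topics attract most of the traffic.
        topic = min(int(rng.paretovariate(1.2)) - 1, N_TOPICS - 1)
        mentions[topic] += 1
    return sum(1 for count in mentions if count >= BURST_THRESHOLD)


if __name__ == "__main__":
    rng = random.Random(42)
    broadcasts = [simulate_day(rng) for _ in range(20)]
    print(f"broadcasts per day over 20 simulated days: {broadcasts}")
```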
From instructions to a non-positivistic engine of social action?
With the concept of our Social Computer we seek to go beyond the common agent-based approach of formulating fixed interaction protocols and relying on economic principles for coordination and adaptation. We argue that this can at most represent instrumentally rational action according to Weber's theory of social action \citep*{weber1978}. The idea of a Social Computer as described in this article aims to cover the full non-positivistic spectrum of social action, including value-rational, affectual, and traditional action. While we may assume that the human participants embody such a conceptualisation of behaviour anyway, we need to make sure that the Social Computer can preserve it and does not overwrite it with one dominated by economics. The three examples of social media activity in response to different events emphasise the importance of implementing the full range of social action in a Social Computer, since much of the observed messaging can be classified as affectual action mixed with value-rational action (e.g. the cynical comments about the revelation of the Panama Papers, where the reactions are affectually triggered but the content is shaped by the actors' values and experiences).
The “incentivisation” game
The recent, widely recognised and mixed experiences with the Microsoft chatbot Tay (https://www.technologyreview.com/s/601279/how-to-prevent-a-plague-of-dumb-chatbots/) emphasise what first experiments with our Social Computer system have also shown: bootstrapping a digital agent into a credible social media user is the most critical step towards autonomous problem solving by socio-technical collectives. Such attempts face strong tendencies of human users and potential competitors to compromise and deceive them.
The openness of our Social Computer system design deliberately follows the openness of the public Web. As a matter of fact, this (currently) results in an absence of exclusive means to incentivise people (e.g. through micro-payments). Furthermore, research shows that it remains questionable whether such incentives would necessarily lead to contributions of a quality high enough to make post-processing of content almost effortless.
We argue that this gives rise to a novel grand challenge for Computer Science and artificial intelligence that is not about masking the artificiality of the bot, as in the infamous task of Turing's imitation game \citep*{TURING_1950}, but about making the bot intelligent enough to be resilient against continuous spam and deception. This involves not only smart technology to filter out irrelevant, misleading and harmful content but, perhaps even more importantly, strategies to incentivise human and machine peers on the open Web so that they weaken or even give up any potentially adversarial attitude and turn into valuable content contributors instead.