
    Introduction

    The title of this doctorate is Ontological Reasoning for Human-Robot Teamwork. Its main goal is to achieve persistent human-robot teamwork with harmonized task and information distribution, using an ontological approach to a multi-agent system implemented in the GOAL framework. The resulting models and user interfaces will be created and tested in the domain of Urban Search And Rescue (USAR), and will thereby contribute to the EU FP7 project Long-Term Human-Robot Teaming for Robot Assisted Disaster Response (TRADR).

    The TRADR scenario

    Problem description & definition

    The use of AI allows for creating “intelligent” software, which is necessary for building robots that function as fully fledged members of a disaster response rescue team. A multi-agent system consists of a set of agents acting as electronic partners that support both humans and robots, facilitating seamless collaboration in a human-robot team. For such a multi-agent system to be effective, three requirements must be met. First, the agents need to share common knowledge about the domain, so that at any time all agents’ beliefs about the state of the world correspond. Second, besides having shared beliefs, they must be able to communicate in a language that relates to their knowledge and is understandable to all: in other words, a formal knowledge representation language able to capture every situation. Third, the agents have to be able to infer new consequences about the world, using an unambiguous reasoning technique that operates on the established knowledge representation language.

    The use of an ontology is the most promising approach to address the issues described above. An ontology is a conceptualization of a domain, comprising concepts, properties, and relationships between concepts; it is, in effect, a world model. When agents share the same ontology to represent common knowledge, communication becomes much easier: the ontological terms constitute the vocabulary used in exchanging messages. This way both sender and receiver know how to interpret every part of a message, relating it to the common ontology they are both aware of. An ontology also allows a reasoner to infer new relations that follow as consequences of the terminological definitions.
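    To make this concrete, here is a minimal sketch in plain Python of an ontology as a concept hierarchy with a simple reasoner that infers implied class memberships. The concept and individual names (UGV, Robot, rex, etc.) are hypothetical USAR examples, not terms from the TRADR ontology.

    ```python
    # Toy terminology (TBox): subclass relations between concepts.
    # All names are hypothetical USAR examples.
    subclass_of = {
        "UGV": "Robot",          # a ground robot is a Robot
        "UAV": "Robot",          # an aerial robot is a Robot
        "Robot": "TeamMember",
        "Human": "TeamMember",
    }

    def ancestors(concept):
        """All superclasses implied by the subclass hierarchy."""
        result = []
        while concept in subclass_of:
            concept = subclass_of[concept]
            result.append(concept)
        return result

    # Toy assertions (ABox): which individual belongs to which concept.
    instance_of = {"rex": "UGV", "anna": "Human"}

    def types_of(individual):
        """Direct type plus every inferred supertype."""
        direct = instance_of[individual]
        return [direct] + ancestors(direct)

    print(types_of("rex"))   # ['UGV', 'Robot', 'TeamMember']
    ```

    A sender can thus assert only that rex is a UGV, and the receiver, sharing the same hierarchy, still infers that rex is a team member.
    
    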

    In order to achieve successful teamwork between humans and robots, the key issue of establishing an effective communication language between the two parties must be addressed. Ontologies provide a vocabulary that is both human- and machine-readable, ensuring semantic interoperability: the content of the messages exchanged between agents uses an agreed-on set of ontological constructs, and its semantics, or meaning, is understood by both parties. Because concepts and properties are defined axiomatically, a reasoning system is generally applied to an ontology to exploit the underlying structure of the information. Agents will use ontological rules to add knowledge that is not expressible with the basic ontological constructs. These are if-then rules whose bodies are conjunctions of atoms, an atom being an ontological term. Such rules will be the means by which agents implement their intelligence.
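    A minimal sketch of such if-then rules, assuming ground atoms and a naive forward-chaining loop (the facts and rule content are invented USAR examples):

    ```python
    # Facts are ground atoms: (predicate, argument) pairs.
    # All content below is a hypothetical USAR example.
    facts = {("smoke", "room1"), ("high_temp", "room1")}

    # Each rule: (conjunction of body atoms, head atom).
    rules = [
        ([("smoke", "room1"), ("high_temp", "room1")], ("fire", "room1")),
        ([("fire", "room1")], ("no_entry", "room1")),
    ]

    def forward_chain(facts, rules):
        """Apply the rules until no new atom can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if all(atom in derived for atom in body) and head not in derived:
                    derived.add(head)
                    changed = True
        return derived

    print(("no_entry", "room1") in forward_chain(facts, rules))  # True
    ```

    Note that the second rule fires only because the first one has added its head, illustrating how rules chain to produce knowledge not stated explicitly anywhere.
    
    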

    Agents need to reason about many things: the state of the world, meaning what can be inferred from the world model; planning tasks and orchestrating information within a team based on a number of factors; and the information to be provided to the user through a view. In general, deductive reasoning is used to derive the consequences of the world state using the provided rules, but abductive reasoning may also be needed, so that an agent can explain the actions it takes or the decisions it makes.
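    To contrast the two directions of reasoning, a toy sketch of abduction: where deduction runs from causes to effects, abduction walks backwards from an observation to the candidate explanations that would entail it. The rule content is again a hypothetical USAR example.

    ```python
    # Rules read cause -> effect; hypothetical USAR content.
    causes = {
        "robot_stopped": ["low_battery", "path_blocked"],
    }

    def explain(observation):
        """Abduction (one step): candidate causes that would
        deductively entail the observed effect."""
        return causes.get(observation, [])

    print(explain("robot_stopped"))  # ['low_battery', 'path_blocked']
    ```

    An agent asked "why did you stop?" could then report these hypotheses, possibly ranked by further evidence.
    
    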

    Visualization concerns the graphical representation of the state of the world; in our case, a disaster site with all its resources, processes, and goals. A graphical user interface will be developed with the partner universities so that each human user can stay connected and up-to-date with events on the scene, and can provide and edit information in the shared knowledge base. One of the most important aims of the visual interface is to create situation awareness among all team members, which facilitates teamwork at a much more efficient level. Since intelligent agents mediate information processes and the corresponding views of the user interface, they are responsible for visualizing the relevant information at the relevant time to the relevant person. Information about the team (i.e. members, members’ status, each member’s role, commands, and eventual messages or warnings) helps achieve good collaboration at the team level. A possible innovation in this area would be a system that reasons about the way information is presented to the user, achieving situation awareness in the most efficient way. The result would be a smart graphical user interface controlled by the agent that, taking into account the user’s cognitive load and task, the robot’s autonomy level, and other relevant factors, can determine the necessary and sufficient amount and type of data to be displayed.
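    The adaptive-display idea can be sketched as a simple selection function. Everything here is an assumption for illustration: the item names, the numeric priorities, and the particular threshold formula (rising with cognitive load, falling with robot autonomy) are hypothetical, not part of the TRADR design.

    ```python
    # Hypothetical sketch: an agent choosing which items a view shows,
    # given the user's cognitive load and the robot's autonomy level
    # (both normalized to the range 0..1).
    def select_display_items(items, cognitive_load, autonomy_level):
        """Keep only items whose priority exceeds a threshold that
        rises with cognitive load and falls with robot autonomy."""
        threshold = cognitive_load - 0.5 * autonomy_level
        return [name for name, priority in items if priority >= threshold]

    items = [("warning", 0.9), ("team_status", 0.6), ("map_detail", 0.3)]

    # High load, low autonomy: show only the most urgent information.
    print(select_display_items(items, cognitive_load=0.8, autonomy_level=0.2))
    # ['warning']
    ```

    The same interface could expose richer detail when the user is idle and the robot is largely autonomous, since the threshold then drops below every priority.
    
    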

    Research focus topics and research questions

    This research focuses on one key question that can be broken down into three sub-questions, corresponding to the three main components of Figure \ref{fig:SysArch}.