Abstract: Understanding the brain is perhaps the greatest
challenge facing twenty-first century science. Currently, there exists a
tangible gap between the intelligence of computer systems and that of human
beings. While a traditional von Neumann computer excels in precision and
unbiased logic, its pattern recognition abilities lag far behind those of
biological neural systems. However, exciting new technologies have emerged that
show strong potential to bridge this gap. Furthermore, the fields of
neuromorphic and brain-based robotics hold enormous promise for furthering our
own understanding of the brain. Cloud robotics is a new paradigm in which
robots take advantage of the Internet as a resource for massively parallel
computation and real-time sharing of big data. A neuromorphic cloud
infrastructure, with its extensive set of Internet-accessible resources, has the
potential to provide significant benefits in myriad areas. In this paper we
survey several current approaches to these technologies and propose a potential
architecture for neuromorphic cloud robotics.
Keywords: Neuromorphic computing, Cloud robotics, Big data, Brain-based robots
1. Introduction
To understand the human brain, it is essential to describe its different
levels of organization, including gene expression, proteins and their
interactions, cells, synaptic connections, neuronal microcircuits, areas and
systems, and to understand the functional interactions within and between those
different levels [25]. In this paper we look at three major transformations in
Information Technology (IT): Cloud Robotics, Neuromorphic Computing and
Big Data, which together could help decipher the workings of the human brain. We
now introduce each of the three topics.
The rise of cloud computing and cloud data stores has been a precursor and
facilitator to the emergence of big data. Cloud computing is the
commodification of computing time and data storage by means of standardized
technologies. The term “Cloud-enabled Robotics” (CR), first used by James
Kuffner [27], captured the potential of distributed networks combined with
service robots, primarily as a way to extend the robot agents’ limited
capabilities. In 2011, Google and Willow Garage presented their envisioned
applications of CR [28], demonstrating for the first time how the cloud could
make robots smarter and more energy efficient.
Neuromorphic engineering, also known as
Neuromorphic Computing (NC) [1][2][3] is a concept developed by Carver Mead [4]
in the late 1980s, describing the use of very-large-scale integration (VLSI)
systems containing electronic analog circuits to mimic neuro-biological
architectures present in the nervous system. Over the last couple of decades,
NC has evolved into a broader concept that bridges computing systems and
neural systems. Thus, NC is now an
interdisciplinary subject, taking inspiration from biology, physics,
mathematics, computer science, and electronic engineering to design artificial
neural systems, such as vision systems, head-eye systems, auditory processors,
and autonomous robots, whose physical architecture and design principles are
based upon those of biological nervous systems [5].
While early efforts in NC focused on
“brain-centered” techniques such as perceptrons [7] and retinas [8], research
has shifted to a more “hardware-centered” strategy with the advent of
neuromorphic robots. Neuromorphic robotics has the potential to provide the
groundwork for the development of intelligent machines, thereby contributing to
our understanding of the brain and how the nervous system gives rise to complex
behaviour [6]. These robots are physical devices whose control system has been
modelled after some aspect of brain processing. Neuromorphic robots are built
on the notion that the brain is embodied in the body and the body is embedded
in the environment. This embodiment mediates all motion and is critical for
cognitive skills. Some of the open-ended research questions include (a) how our
mind is constructed from physical substrates such as the brain and body, and
(b) how complex systems such as the brain give rise to intelligent behaviour
through interactions with the world [9]. Thus, neuromorphic robots can offer
both empirical and intuitive insight into how the brain works.
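To make the idea of a brain-modelled, embodied controller concrete, the sketch below shows a toy closed-loop controller in Python: two leaky-integrator units turn light-sensor readings into wheel speeds, so behaviour arises from the interaction between body and environment rather than from an explicit plan. It is purely illustrative; the function names, wiring, and parameter values are our own assumptions and do not correspond to any specific robot from the literature.

import numpy as np

def step_controller(state, left_sensor, right_sensor, dt=0.01, tau=0.1):
    # state: activations of two leaky-integrator units (hypothetical internal variables)
    decay = dt / tau
    # Crossed wiring: the left sensor drives the right unit and vice versa,
    # yielding a simple turn-towards-stimulus behaviour.
    inputs = np.array([right_sensor, left_sensor])
    state = state + decay * (inputs - state)             # leaky integration
    left_motor, right_motor = np.clip(state, 0.0, 1.0)   # activations become wheel speeds
    return state, left_motor, right_motor

# Closed sensorimotor loop: the robot's motion changes what its sensors see next
# (the sensor values here are fixed placeholders for illustration).
state = np.zeros(2)
for _ in range(100):
    left_s, right_s = 0.2, 0.8
    state, left_m, right_m = step_controller(state, left_s, right_s)
print(left_m, right_m)

Because the wheels’ motion changes the next sensor readings, even this minimal loop illustrates why embodiment, and not the controller in isolation, shapes the resulting behaviour.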
Big data is being used by enterprises to discover facts that were previously
unknown. Big data is not just about giant data volumes; it’s also about an
extraordinary diversity of data types, delivered at various speeds and
frequencies. It is estimated that about 2 zettabytes (2 × 10^21 bytes) of
digital data is being generated every year by everything from underground
physics experiments to retail transactions to security cameras to global
positioning systems [36]. Advanced data analytic tools including those based on
predictive analytics, data mining, statistics, artificial intelligence and
natural language processing are being used for Big Data Analytics (BDA), which
is one of the main practices in Business Intelligence (BI) today [37].
The remainder of the paper is organized as follows: in section 2 we examine
several instances of neuromorphic engineering; in section 3 we survey current
implementations of cloud robotics and review a few successfully implemented
robots that further our understanding of the brain. In section 4 we discuss
some of the challenges that currently exist in robotics, and in section 5 we
address them. We summarize our conclusions in section 6.
2. Neuromorphic Computing
Traditional von Neumann architectures have several advantages over the human
brain, such as precision, indefatigability, logic, and lack of bias. In the
realms of pattern recognition and power efficiency, however, a biological
system far exceeds the capabilities of any traditional computer framework [34].
Neuromorphic computing’s mimicry of human neural networks aims to bridge the gap
between these two disparate sets of capabilities. Several institutions, namely
the Human Brain Project, Qualcomm, and IBM, have made forays into the realm of
neuromorphic engineering by developing computer chips that utilize completely
new architectures.
The European Union-funded Human Brain Project (HBP) is a collaborative scientific
research project that aims to model the human brain [15]. One neuromorphic
computing system component of the HBP is the SpiNNaker chip, which can simulate
16,000 neurons and eight million synapses. Each chip consists of several
processors acting as “fascicle processors”, each able to model up to 1,000
neurons and to receive and emit spike events [15]. Although the platform can be
used for any application, it was designed to simulate neural systems, with the
facility to evaluate different algorithms and thus different types of neurons
and connectivity patterns.
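To give a sense of what such fascicle processors compute, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the class of spiking model such platforms commonly run, in plain Python. The parameter values are generic textbook defaults rather than SpiNNaker’s, and real SpiNNaker systems are normally programmed through higher-level interfaces such as PyNN rather than code like this.

import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    # Integrate an input-current trace (nA, one sample per dt ms) and
    # return the membrane-potential trace plus the spike times in ms.
    v = v_rest
    v_trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau_m   # leaky integration
        v += dv
        if v >= v_thresh:                 # threshold crossing: emit a spike event
            spike_times.append(step * dt)
            v = v_reset                   # reset after the spike
        v_trace.append(v)
    return np.array(v_trace), spike_times

# A constant 2 nA input for 100 ms produces a regular spike train.
current = np.full(100, 2.0)
voltages, spikes = simulate_lif(current)
print(len(spikes), "spikes at t =", spikes, "ms")

Driving the neuron with a constant input yields a regular train of spike events, the kind of events that a SpiNNaker chip routes between its processors.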
2.1 Neuromorphic Computing in Industry
The market size of neuromorphic computing is expected to reach USD 6.48 billion
by 2024, according to a study by Grand View Research, Inc. [39]. The
increasing demand for cognitive and brain-based robots is projected to impel
growth within the industry, spurred on by the numerous benefits that
neuromorphic chips can provide to users, including cognitive computing, optimum
usage of memory, high-speed performance and low power consumption. The
escalated demand within diverse industry verticals, including consumer
electronics, automotive, and robotics, is instrumental in keeping industry
prospects upbeat. The global neuromorphic computing market is expected to gain
traction, owing to the rising demand for artificial intelligence. Beyond the
HBP, several organizations, including Qualcomm, IBM, and Lawrence Livermore
National Laboratory, have made forays into neuromorphic engineering by
developing computer chips that utilize completely new architectures.