authorea.com/104851

Project notes

*Consider*:

- Spatially selective networks
- place cells
- grid cells
- border cells
- rate remapping / cues

- Alistair's ideas
- Gibson (1950) cognitive representations of objects built on surfaces
- stable support (Haggard 2013)
- slip (Adams 2013)

- Martin's locomotion- / trajectory-based spatial learner MSOM
- MSOMs (Strickert 2005)
- SOMs (Kohonen 1982)
- *Centrality of physical exploration in representing spatial properties*
- Surface reps activated by both vision and touch? (Lacey 2009)
- Inability to visually identify unseen objects learned through touch
- Theories of visual learning mediated via touch exploration?
- Objects in space vs object-independent space representations

"basic concept of surface is due to a primitive sensorimotor concept called the stable contact signal" (Ideas 3.3.1)

"For many SII cells, receptive fields are better described in relation to peripersonal space than to body parts, because they respond to contact on several different body parts. For instance, there are SII cells that respond to touches by several different fingers (see e.g. Fitzgerald et al., 2006), or even to touches by both hands (see e.g. Iwamura et al., 2001). In these cases, the fingers are often aligned within a single plane in the hand (see e.g. Fitzgerald et al., 2006). A conclusion drawn by several researchers, and well summarised by Haggard (2006), is that SII cells compute representations of surfaces in the observer’s peripersonal space. These representations can be thought of as representations of stable contact or stable support in the somatosensory system." (3.3.3)

"The axis parallel with the fingers provides a natural ‘forward/back’ axis, with ‘forward’ being the direction in which the fingers point. The axis perpendicular to the fingers provides a natural ‘left/right’ axis." 'Up' as the direction that breaks stable contact? (3.3.3.1.1)

"The key idea in my learning model is that the stable support signal is intrinsically rewarding, at least in developing infants: in other words that it is hard-wired by evolution to generate a reward signal in the motor system. This means that infants are drawn to learn how to achieve stable support states, and consequently, to learn functions mapping perceptual representations of objects in their peripersonal space onto goal motor states associated with the stable support signal. These are at the origin of our perceptual representations of the surfaces of objects." (3.3.5)

"analogous to the environments in which the observer moves—except objects are environments that are navigated in by the observer’s effector" (Ideas 3.9.1)

"I propose that the shape of an object is represented in exactly the same format as a navigation environment: as an allocentric boundary structure," (3.9.2)

"Hamada and Suzuki (2005) found representations of simultaneous contact by the index finger and thumb which varied when the angle between these digits was changed" (3.9.4.1)

"Object-centred neglect is defined in a frame of reference centred on the object being perceived, rather than on the observer or the environment. Typically, neglect is expressed in relation to one of the object’s major axes (see e.g. Driver et al., 1994). The key behavioural test for this type of neglect is that the area of an object which is ignored remains the same if the object is rotated in relation to the observer." "Chafee et al. (2007) gave monkeys a task in which a block had to be moved into a shape to complete it; they found neurons in parietal cortex (area 7a) which were sensitive to the block’s position in relation to the object, even when the retinal location of the block varied from trial to trial. These findings argue for a medium in the brain, strongly recruiting parietal cortex, that represents a map of ‘locations within a given object’, that is somehow specified in a coordinate system centred on that object" (3.9.5.1)

"Two squares of different sizes are both squares, but the motor actions needed to interact with them haptically are very different; so there should be a size component" (3.9.5.1)

"A common way to identify the shape of a surface is to navigate systematically around its perimeter (see e.g. Lederman and Klatsky, 1993)." (3.9.5.3)

Self Organising Maps (Kohonen 1982) are neural networks with a topological structure imposed on their units, such that the afferents to each unit are coupled to those of its nearest neighbours. As a result, structure in the input data is mapped to topological structure in the SOM network, which involves a reduction from the dimensionality of the data to that of the (usually lower-dimensional) SOM network.

The basic requirements for a SOM are:

- A discriminant function, which determines units' responses to inputs, e.g. a dot-product, or some distance measure
- A neighbourhood function, which determines the coupling of adjacent units' afferent plasticity
- A learning rule, to further enhance the selectivity of the best-matching unit (and its nearest neighbours) to the current input

Implementations generally fall into two types: those based on distance measures for the *discriminant* fn. (more traditional), and those based on similarity measures (more biological). Further, both the learning rate and the coupling strength are reduced over learning, in order to stabilise learning and to ensure wider coverage of the network.
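These decay schedules can be sketched as follows; the exponential form and the constants (`ALPHA_0`, `SIGMA_0`, `TAU`) are illustrative assumptions, not prescribed by the notes:

```python
import math

# Illustrative exponential decay schedules for SOM training.
# All constants here are assumptions for demonstration only.
ALPHA_0 = 0.5    # initial learning rate
SIGMA_0 = 3.0    # initial neighbourhood width (coupling strength)
TAU = 1000.0     # decay time constant (in training steps)

def alpha(t):
    """Decaying learning rate alpha(t)."""
    return ALPHA_0 * math.exp(-t / TAU)

def sigma(t):
    """Decaying neighbourhood width sigma(t)."""
    return SIGMA_0 * math.exp(-t / TAU)
```

Other common choices include linear or piecewise-constant decay; the key property is only that both quantities shrink monotonically as learning proceeds.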

- failure modes:
- weight vector distribution 'collapses', leading to undifferentiated activity among units; can indicate lateral interaction too *large*(?)
- dimensionality of the SOM representation 'pinches' down to a lower order; often due to poor selectivity in the discriminant fn., and/or lateral interaction too *short*
- few units dominate and all input weights become 'focused' onto them (often bad normalisation, or lateral interaction too *weak*)

A similarity-based implementation would proceed as follows. Learning here is 'online', but it can also be batched.

Given units \(i=1,\dots,m\) receiving signals \(\vec{x} = [\xi_1,\dots,\xi_n] \in \mathbb{R}^n\) via input weights \(\vec{w}_i = [\omega_i^1,\dots,\omega_i^n] \in \mathbb{R}^n\), let the units perform the 'discriminant' function

\[ \label{eqn:SOM-discrim} \eta_i = \sum_j \omega_i^j \xi_j = \vec{w}_i^T\vec{x}. \]

For the most active ('best-matching') unit \(k\), where \(\eta_k = \max_i \eta_i\), and its nearest neighbours \(\{l \,|\, \mathrm{d}(k,l) \leq \lambda(t)\}\), adjust the weights at time \(t\) according to the signal \(\vec{x}(t)\) and

\[ \label{eqn:SOM-weights} \vec{w}_i(t+1) = \frac{\vec{w}_i(t) + L(t)\,\alpha(t)\, \vec{x}(t)}{\|\vec{w}_i(t) + L(t)\, \alpha(t)\, \vec{x}(t)\|}.\]

\(\alpha(t)\) is the decaying learning rate, and \(\lambda(t)\) is the decaying neighbourhood threshold for some distance function. In \ref{eqn:SOM-weights} the extra factor \(L(t)\) is simply a Gaussian *pdf* of the Euclidean distance \(\mathrm{d}(i,k)\) between unit \(i\) and the best-matching unit \(k\) in the network topology, with a *std. dev.* \(\sigma(t)\) that decays with time.

If the weights are interpreted as synaptic efficacies, as implied by the dot-product form in \ref{eqn:SOM-discrim}, then learning rule \ref{eqn:SOM-weights} increments by \(L\,\alpha\,\xi_j\) each synapse \(\omega_i^j\) from active inputs to unit \(i\), in a basic Hebbian way. However, this increases the magnitude of weight vectors without bound, leading to loss of selectivity (and instability). The normalisation in \ref{eqn:SOM-weights} ensures that weight vectors stay within the range of input values -- ideally, each will specialise to become selective to a single input -- by decreasing over-sensitive weights. In this way, \ref{eqn:SOM-weights} approximates a covariance plasticity rule.
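The whole similarity-based scheme above can be sketched in a few lines of pure Python; this is a minimal illustrative implementation (1-D line topology, Gaussian neighbourhood, normalised Hebbian update), where the function names, constants, and decay schedules are my own assumptions rather than anything fixed by the notes:

```python
import math
import random

def normalise(v):
    """Scale a vector to unit norm (guarding against the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def train_som(data, m=10, steps=2000, alpha0=0.5, sigma0=3.0, tau=500.0, seed=0):
    """Online similarity-based SOM on a 1-D line of m units.

    Discriminant: eta_i = w_i . x; update: Hebbian increment scaled by a
    Gaussian neighbourhood factor L, followed by weight normalisation.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    # Random unit-norm initial weight vectors w_i.
    w = [normalise([rng.random() for _ in range(dim)]) for _ in range(m)]
    for t in range(steps):
        x = rng.choice(data)
        # Discriminant function: dot product of weights with the input.
        eta = [sum(wi[j] * x[j] for j in range(dim)) for wi in w]
        k = max(range(m), key=lambda i: eta[i])          # best-matching unit
        a = alpha0 * math.exp(-t / tau)                  # decaying learning rate
        s = sigma0 * math.exp(-t / tau)                  # decaying neighbourhood width
        for i in range(m):
            d = abs(i - k)                               # distance in the 1-D topology
            L = math.exp(-d * d / (2 * s * s))           # Gaussian neighbourhood factor
            # Hebbian increment then normalisation, per the weight rule above.
            w[i] = normalise([w[i][j] + L * a * x[j] for j in range(dim)])
    return w
```

For example, `train_som([[1.0, 0.0], [0.0, 1.0]], m=4)` returns four unit-norm weight vectors, with nearby units in the line tending to specialise towards the same input.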

