\section{Vectors of Bits}

\label{Sec:Vectors}

Computers usually have to work with sequences of bits, that is, “ordered sets” of bits. These sequences, called vectors by mathematicians, are commonly called arrays or strings by computer scientists.
Some of these strings are particularly important for applications, so they have a name of their own:

  • nibbles: vectors of \(4\) bits;

  • bytes: vectors of \(8\) bits.

In general, we call a vector whose exact size is not known a priori a stream.
Streams can be visualized as blocks of one-bit bricks:

0 1 0 1 0 1

or they can be represented between “(” and “)”, with the bits separated by “,”. That is,

0 1 1

is \((0,1,1)\). The set of all vectors of a fixed length \(n\) is

\begin{equation} (\mathbb{F}_{2})^{n}.\nonumber \end{equation}

We will use both of these representations, as convenient.
The operations on \(\mathbb{F}_{2}\) can be extended to vectors componentwise; that is, if we sum two vectors bit by bit, position by position, we have:

\begin{equation} \label{eq:vec} (0,1,1)+(1,0,1)=(1,1,0). \end{equation}
\label{VettoreOpposto}

Every vector has an opposite with respect to the sum. Can you find it? Try first with \((0,0,0,0)\), \((0,1,0,1)\), \((1,1,1,1)\), \((1,1,0,0)\), and then derive a general rule.
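A short Python sketch lets you experiment with componentwise sums (the name \texttt{vec\_add} is ours, chosen for illustration):

```python
def vec_add(a, b):
    """Componentwise sum of two bit vectors over F_2:
    each pair of bits is added modulo 2."""
    return tuple((x + y) % 2 for x, y in zip(a, b))

print(vec_add((0, 1, 1), (1, 0, 1)))  # (1, 1, 0), as in the equation above
```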

We can also extend the idea of multiple componentwise: the multiple of a vector \(b\) by an integer \(a\) is obtained by multiplying each component of \(b\) by \(a\). Here is an example:

\begin{equation} 3\cdot(1,0,1)=(3\cdot 1,3\cdot 0,3\cdot 1)=(1,0,1).\nonumber \end{equation}
\label{MultiploVett}

Compute \(5\cdot(1,1,0)\) and \(6\cdot(0,1,0)\).
Can you derive a rule to compute the multiple of a vector? (See Exercise \ref{MaxiSum}.)
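In the same spirit, the componentwise multiple can be sketched in Python (again, \texttt{vec\_scale} is just a name chosen here):

```python
def vec_scale(a, b):
    """Multiple of the bit vector b by the integer a:
    each component is multiplied by a, then reduced modulo 2."""
    return tuple((a * x) % 2 for x in b)

print(vec_scale(3, (1, 0, 1)))  # (1, 0, 1), as in the example above
```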

Another structure useful for managing ordered sets, like vectors, is the set of polynomials described in Section \ref{Sec:Polynomials}. We recall that the sum of the two polynomials \(x+1,x^{2}+1\in\mathbb{F}_{2}[x]\) is

\begin{equation} \label{eq:pol} (x+1)+(x^{2}+1)\,=\,x^{2}+x\,. \end{equation}
\label{Vectors2Poly}

Do you see any connection between (\ref{eq:vec}) and (\ref{eq:pol})?

As you have seen in the previous exercise, nice and meaningful operations on vectors can be interpreted as operations on polynomials. In the rest of the section we give other examples.

One of the most important and fastest operations for a computer (more precisely, for its Central Processing Unit, CPU) consists in shifting a vector of bits (a register) one position to the left or to the right. For example, shifting to the right (or to the left) we have:

\begin{equation} \label{shift} (1,1,0,1,0){\Longleftrightarrow}(0,1,1,0,1). \end{equation}

The polynomials associated to (\ref{shift}) are

\begin{equation} \label{shiftpol} x^{4}+x^{3}+x{\Longleftrightarrow}x^{3}+x^{2}+1. \end{equation}

That is, shifting to the left a vector seems to correspond to the multiplication of its polynomial by \(x\), while shifting to the right seems to correspond to the division of its polynomial by \(x\).
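This correspondence can be checked with Python integers, encoding bit \(i\) of an integer as the coefficient of \(x^{i}\) (a sketch; note that a fixed-width hardware register would instead discard the bit shifted out at the end):

```python
# (1,1,0,1,0) corresponds to x^4 + x^3 + x, i.e. to the integer 0b11010.
v = 0b11010
print(bin(v >> 1))  # 0b1101   = x^3 + x^2 + 1 : division by x
print(bin(v << 1))  # 0b110100 = x^5 + x^4 + x^2: multiplication by x
```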

Suppose you want to double a vector of \(n\) bits. For example, let \(n=3\), and suppose we want \((1,0,1)\) to become \((1,0,1,1,0,1)\). Using the polynomial notation for the vector and multiplying by a suitable polynomial \(p\), you can obtain the expected result. What is \(p\)?

Suppose that the fourth bit (the one marked with the “?” symbol) in a vector \(v\),

\begin{equation} v=(?,1,1,0),\nonumber \end{equation}

is computed as the sum of the first and the third bits. In a language such as C we can write

\begin{equation} v[3]=v[2]+v[0];\nonumber \end{equation}

where \(v[0]\) represents the first entry of the vector (the rightmost one in the display above, since the entries are listed with the highest position on the left) and \(v[2]\) the third one. In polynomial notation this “function” becomes

\begin{equation} x^{3}=x^{2}+x^{0}\nonumber \end{equation}

with the exponents representing the positions of the vector entries.
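In Python the same rule reads as follows (a sketch: we store the bits so that \texttt{v[k]} is the entry at position \(x^{k}\), the display in the text listing the highest position first):

```python
v = [0, 1, 1, 0]           # bits of (?, 1, 1, 0): v[3] is the unknown "?"
v[3] = (v[2] + v[0]) % 2   # sum in F_2, i.e. addition modulo 2
print(v[3])                # 1: the mysterious bit
```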

Let us consider the vector \((0,1,1)\in(\mathbb{F}_{2})^{3}\), associated, as explained above, to the polynomial \(f=x^{3}+x+1\). We observe that this polynomial is irreducible.

We can construct a Linear Feedback Shift Register (LFSR) on three bits, using \(f=x^{3}+x+1\).
First of all, we start with an initial vector (called the initial state), for example \((1,0,1)\), and insert it into the following structure:

? \(\longrightarrow\longrightarrow\) 1 0 1

We can associate a monomial to each position in the structure:

\(x^{3}\) \(\longrightarrow\longrightarrow\) \(x^{2}\) \(x\) \(1\)

We aim to compute the mysterious bit, that is, the element we indicated with ?. This bit corresponds to the monomial \(x^{3}\), and since we want to use the polynomial \(f\), we must consider the portion of \(f\) without its leading term \(x^{3}\), that is, \(f-x^{3}=x+1\).
Therefore, we interpret \(x+1\) as the following operation:

  • we read the bits in positions \(x\) and \(1\), which are \(0\) and \(1\), respectively;

  • we sum these two bits, obtaining \(0+1=\mathbf{1}\);

  • we finally insert this new bit \(\mathbf{1}\) in the place of the mysterious bit ?.

\(\mathbf{1}\) \(\longrightarrow\longrightarrow\) 1 0 1

Now, we shift the vector to the right:

? \(\longrightarrow\longrightarrow\) \(\mathbf{1}\) 1 0

so we have a new three-bit vector, which we again call a state.
We now need to compute the new mysterious value ?, and we proceed with the same algorithm, taking the two bits at positions \(x\) (i.e., \(1\)) and \(1\) (i.e., \(0\)).
We will then get \(1+0=1\) and so

1 \(\longrightarrow\longrightarrow\) 1 1 0

which becomes, after another shift to the right, the following

? \(\longrightarrow\longrightarrow\) 1 1 1

We repeat the mysterious-bit computation and get \(1+1=0\), meaning

0 \(\longrightarrow\longrightarrow\) 1 1 1

and the shift to the right, as follows

? \(\longrightarrow\longrightarrow\) 0 1 1

and again the mysterious bit computation \(1+1=0\), that is,

0 \(\longrightarrow\longrightarrow\) 0 1 1

and again the shift to the right, obtaining

? \(\longrightarrow\longrightarrow\) 0 0 1

and again the mysterious bit computation \(0+1=1\) with

1 \(\longrightarrow\longrightarrow\) 0 0 1

and the shift to the right

? \(\longrightarrow\longrightarrow\) 1 0 0

and again the mysterious bit computation \(0+0=0\)

0 \(\longrightarrow\longrightarrow\) 1 0 0

and again the shift to the right

? \(\longrightarrow\longrightarrow\) 0 1 0

and, finally \(1+0=1\) and

1 \(\longrightarrow\longrightarrow\) 0 1 0

with the last shift

? \(\longrightarrow\longrightarrow\) 1 0 1

We notice the following facts:

  1. After having performed the above algorithm seven times, we get again the initial vector \((1,0,1)\).

  2. While performing the above algorithm, we find all seven nonzero \(3\)-bit strings.

Fact (1) is a property shared by many LFSRs: it is enough to use any polynomial \(f\in\mathbb{F}_{2}[x]\) of the form \(f=x^{n}+\ldots+1\) to construct an LFSR producing \(n\)-bit vectors in which any initial state is reached again after a finite number of iterations.
Fact (2) is more difficult to generalize: if we want an LFSR that can start from any nonzero initial vector and obtain all other nonzero vectors, then we must use very special polynomials, called primitive polynomials.
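The whole procedure can be sketched as a short Python function (the name \texttt{lfsr\_states} and the tap encoding are ours, not standard):

```python
def lfsr_states(taps, state):
    """Run the LFSR built from a feedback polynomial f.
    `taps` lists the exponents of f without its leading term
    (f = x^3 + x + 1 gives taps = [1, 0]); `state` is a bit tuple
    written, as in the text, with position x^(n-1) on the left.
    Returns the list of states visited before the first repetition."""
    n = len(state)
    seen = []
    while state not in seen:
        seen.append(state)
        # the mysterious bit: sum (mod 2) of the bits at the tap positions
        new_bit = sum(state[n - 1 - t] for t in taps) % 2
        state = (new_bit,) + state[:-1]   # shift to the right
    return seen

states = lfsr_states([1, 0], (1, 0, 1))   # f = x^3 + x + 1
print(len(states))                        # 7: seven steps bring (1,0,1) back
```

Inspecting `states` shows exactly the seven nonzero \(3\)-bit tuples, in accordance with Facts (1) and (2).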

\label{Primitive}

We call primitive any polynomial \(f\in\mathbb{F}_{2}[x]\) of degree \(n\) that, used as the feedback polynomial of an LFSR, can generate all the nonzero \(n\)-tuples of bits, i.e., all the \(n\)-tuples except \((0,0,\ldots,0)\), using any of them as the initial state.

An example of a primitive polynomial is the one used in the previous LFSR: \(x^{3}+x+1\).

We now see an example of a non-primitive polynomial. Let us consider \(f=x^{3}+1\), start again from \((1,0,1)\), and perform the algorithm:

1 \(\longrightarrow\longrightarrow\) 1 0 1

getting

? \(\longrightarrow\longrightarrow\) 1 1 0

Then

0 \(\longrightarrow\longrightarrow\) 1 1 0

from which

? \(\longrightarrow\longrightarrow\) 0 1 1

Then

1 \(\longrightarrow\longrightarrow\) 0 1 1

finally getting

? \(\longrightarrow\longrightarrow\) 1 0 1

After only three iterations we are back to the initial state \((1,0,1)\): the other four nonzero states are never reached, so \(f=x^{3}+1\) is not primitive.
\label{Buffo}

Let us now consider a polynomial which is not of the form \(f=x^{n}+\ldots+1\), for example

\begin{equation} f\,=\,x^{3}+x^{2}\,,\nonumber \end{equation}

and let us use it to construct an LFSR. The portion of \(f\) without its leading term consists only of \(x^{2}\), and so no sum is needed to obtain the mysterious bit.
We take the following initial state:

? \(\longrightarrow\longrightarrow\) 0 1 1

and we take the bit in the position represented by \(x^{2}\), i.e., \(0\), as the mysterious bit:

0 \(\longrightarrow\longrightarrow\) 0 1 1

After a shift to the right, we obtain

? \(\longrightarrow\longrightarrow\) 0 0 1

The mysterious bit turns out to be \(0\) again, so we have

0 \(\longrightarrow\longrightarrow\) 0 0 1

and, after a shift to the right, we get

? \(\longrightarrow\longrightarrow\) 0 0 0

From now on we are stuck: indeed, the mysterious bit will always be equal to \(0\), and so, after each shift to the right, we will always obtain \((0,0,0)\).
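A few lines of Python confirm that this LFSR collapses to the zero state (a sketch, following the right-shift convention used in the text):

```python
# LFSR with f = x^3 + x^2: the mysterious bit is simply the bit at
# position x^2, i.e. the leftmost entry of the state.
state = (0, 1, 1)
for _ in range(4):
    state = (state[0],) + state[:-1]   # copy the x^2 bit, shift right
    print(state)
# prints (0, 0, 1) and then (0, 0, 0) forever: stuck at zero
```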

We recall the notion of an irreducible polynomial (Definition \ref{REDIRR}): an irreducible polynomial cannot be written as a product of two non-trivial factors. It turns out that there is a close link between primitive polynomials and irreducible polynomials, as shown in the following theorem.

If \(p\in\mathbb{F}_{2}[x]\) is primitive, then \(p\) is irreducible.

The converse of the previous theorem does not hold: for example, the polynomial \(x^{4}+x^{3}+x^{2}+x+1\) is irreducible but not primitive.

Verify that \(x^{4}+x^{3}+x^{2}+x+1\) is not primitive by means of an LFSR.
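One possible way to carry out the check with a short program (a sketch: here the feedback is the sum mod 2 of all four bits, since \(f-x^{4}=x^{3}+x^{2}+x+1\)):

```python
# LFSR for f = x^4 + x^3 + x^2 + x + 1, starting from (0, 0, 0, 1).
state = (0, 0, 0, 1)
seen = []
while state not in seen:
    seen.append(state)
    new_bit = sum(state) % 2          # taps at x^3, x^2, x, 1
    state = (new_bit,) + state[:-1]   # shift to the right
print(len(seen))  # 5: only 5 of the 15 nonzero states appear, so f is not primitive
```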