\section{Bits: standard operations and logic}\index{LectNo1}
A \emph{bit} (binary digit) is a unit of measure for information, introduced by C. Shannon in 1948.\\
Roughly speaking, it represents the minimal amount of information needed in order to distinguish between two events occurring with the same probability.\\
Bits are used in Information Theory and - in general - in most Computer Science applications.\\
They are represented by the constants $0$ and $1$ (the two events with the same probability). In Mathematics, the set of bits is usually denoted by
\[
\Fb = \{0,1\}.
\]
The first topic we need to introduce, in order to work with bits, is how to perform operations with them, i.e.\ how to equip the set $\Fb$ with two operations, sum and multiplication, in some sense analogous to the usual operations with numbers.\\
In the following table, we can see how to sum and multiply bits.
\begin{table}[!htb]
\centering
\begin{tabular}{c|c|c|c}
$a$ & $b$ & $a+b$ & $a \cdot b$\\
\hline
$0$ & $0$ & $0$ & $0$\\
$0$ & $1$ & $1$ & $0$\\
$1$ & $0$ & $1$ & $0$\\
$1$ & $1$ & $0$ & $1$\\
\end{tabular}

\caption{Sum and product}\label{SumProd}
\end{table}
The first and the second columns represent the values of the two input variables $a$ and $b$, used to represent the two bits we have to sum or to multiply; the third and the fourth ones respectively represent the results of the sum and the product of $a$ and $b$.\\
\medskip
Performing operations with bits is not the same as doing so with ``usual'' (real) numbers (from now on, we denote the set of real numbers by $\RR$).\\
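To make this difference concrete, here is a small worked example, assuming (as Table~\ref{SumProd} suggests) that the sum of bits is addition modulo $2$, so that summing $1$ with itself gives $0$ in $\Fb$:
\[
\underbrace{1 + 1 = 0}_{\text{in } \Fb}
\qquad \text{whereas} \qquad
\underbrace{1 + 1 = 2}_{\text{in } \RR}.
\]
In particular, under this assumption every element of $\Fb$ is its own opposite, i.e.\ $a + a = 0$ for each $a \in \Fb$, a behaviour with no analogue among the nonzero real numbers.\\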