Matrices in Graph Theory

Two of the most important matrices used in graph theory are the adjacency matrix and the transition matrix. The adjacency matrix of a graph has one row and one column for each vertex: the entry in position \(\left(i,j\right)\), where \(i\) is the row and \(j\) is the column, is \(1\) if vertex \(i\) is adjacent to vertex \(j\) and \(0\) otherwise. Adjacency matrices can also be used to count walks between vertices: raising the matrix to the power \(L\), where \(L\) is the length of the walk, gives a matrix whose \(\left(i,j\right)\) entry is the number of walks of length \(L\) from vertex \(i\) to vertex \(j\).
This is an example of an adjacency matrix. Vertex \(1\) is adjacent to vertices \(2\) and \(3\); therefore those entries receive a \(1\), while the entries for every other vertex are \(0\).
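The walk-counting idea above can be sketched in a few lines of NumPy. The graph here is a hypothetical three-vertex example chosen to match the description (vertex \(1\) adjacent to vertices \(2\) and \(3\)); it is an illustration, not the document's own figure.

```python
import numpy as np

# Adjacency matrix for a hypothetical 3-vertex graph in which
# vertex 1 is adjacent to vertices 2 and 3. Rows and columns are
# vertices 1, 2, 3; entry (i, j) is 1 when i and j are adjacent.
A = np.array([
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
])

# Raising A to the power L counts walks: entry (i, j) of A^L is
# the number of walks of length L from vertex i to vertex j.
L = 2
walks = np.linalg.matrix_power(A, L)

# Exactly one walk of length 2 goes from vertex 2 to vertex 3
# (namely 2 -> 1 -> 3); with 0-based indexing that is row 1, column 2.
print(walks[1, 2])  # -> 1
```

Reading off `walks[1, 2]` is the programmatic version of reading the \(\left(i,j\right)\) entry of the matrix power.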
Another important matrix is the transition matrix, denoted \(P\), which describes a random walk by giving the probability of each step. The entry \(P_{ij}\), where \(i\) and \(j\) are the row and column respectively, is the probability of moving from vertex \(i\) to vertex \(j\) in one step. The transition matrix can also describe walks of length \(L\): raising \(P\) to the power \(L\) gives the probabilities after \(L\) steps.
The transition matrix above states that there is a probability of \(\frac{2}{3}\) of moving from vertex \(1\) to vertex \(2\).
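A minimal sketch of this in NumPy, assuming a hypothetical three-vertex walk whose first row matches the \(\frac{2}{3}\) probability from vertex \(1\) to vertex \(2\) mentioned above; the other rows are illustrative choices, not taken from the document.

```python
import numpy as np

# Hypothetical row-stochastic transition matrix P for a 3-vertex
# random walk; P[i, j] is the probability of stepping from vertex
# i+1 to vertex j+1 (0-based indexing). P[0, 1] = 2/3 matches the
# probability of moving from vertex 1 to vertex 2 in the text.
P = np.array([
    [0.0, 2/3, 1/3],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
])

# P^L gives the step probabilities after L steps of the walk.
L = 2
P2 = np.linalg.matrix_power(P, L)

# Two-step probability of returning to vertex 1:
# (2/3)(1/2) + (1/3)(1/2) = 1/2.
print(P2[0, 0])  # approximately 1/2

# Each row of P^L is still a probability distribution.
print(P2.sum(axis=1))  # each row sums to 1 (up to rounding)
```

Note that each row of \(P\) must sum to \(1\), and matrix powers preserve this, which is why the rows of \(P^L\) are still probability distributions.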