This is a hand-in written by **Group 7** (FIRE) for the Algorithms course (**TIN092**) at Chalmers University of Technology.

Group 7 consists of:

**Mazdak Farrokhzad**, 901011-0279

twingoow@gmail.com

Program: IT

**Niclas Alexandersson**, 920203-0111

nicale@student.chalmers.se

Program: IT

This hand-in deals with the complexity analysis of an algorithm \(prime(n)\), which works as follows:

```
prime: array [1..n] of boolean = all true
for p in 2..sqrt(n) loop
  if prime[p] then
    m = p*p
    while m <= n loop
      prime[m] = false
      m = m+p
    end loop
  end if
end loop
```
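To make the pseudocode concrete, here is a direct translation into Python (our own illustration; the function name `prime_sieve` and the use of `math.isqrt` are our choices, not part of the algorithm as given):

```python
from math import isqrt

def prime_sieve(n):
    """Direct translation of the pseudocode above: prime[i] is True
    iff i is prime, for i in 2..n."""
    prime = [True] * (n + 1)  # indices 0..n; entries 0 and 1 are unused
    for p in range(2, isqrt(n) + 1):
        if prime[p]:
            m = p * p
            while m <= n:
                prime[m] = False
                m = m + p
    return prime

# The primes up to 30:
# [i for i in range(2, 31) if prime_sieve(30)[i]]
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```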

To make things clearer, we rewrote the algorithm so that it only uses for-loops.

The rewritten algorithm looks like:

```
#1 prime: array [1..n] of boolean = all true
#2 for p in 2..sqrt(n) loop
#3   if prime[p] then
#4     for m in p..n/p loop
#5       prime[m * p] = false
#6     end loop
#7   end if
#8 end loop
```
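The for-loop-only variant can also be sketched in Python (again our own illustration; note that the inner loop must be capped at \(\lfloor n/p \rfloor\) so that the written index \(m \cdot p\) stays within the array):

```python
from math import isqrt

def prime_sieve_for(n):
    """For-loop-only variant: the inner loop runs m = p..n//p,
    so the marked index m*p ranges over p*p, p*p+p, ..., up to n."""
    prime = [True] * (n + 1)
    for p in range(2, isqrt(n) + 1):
        if prime[p]:
            for m in range(p, n // p + 1):
                prime[m * p] = False
    return prime
```

Both variants mark exactly the same array entries, since \(m \cdot p\) for \(m = p, \dots, \lfloor n/p \rfloor\) is the same sequence as \(p^2, p^2 + p, \dots \le n\).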

First things first, regarding line #1: we could not decide whether the array is an in-parameter, and whether the algorithm in that case begins by setting all elements of the array to true - in which case \(n\) would have to be added to \(T(n)\). In the section “Solving the sum” we have not added it, but doing so would be trivial, and it would not change the complexity class.

Let any constant initial cost of the algorithm be denoted \(a\). In this algorithm, one might for example suspect this to be the initialization of the for-loop at line #2.

Imagine a loop from \(i \to n\) that steps by \(p\) each time. Such a loop runs \(\frac{n}{p} - \frac{i}{p} + 1\) times (assuming the bounds divide evenly). This is exactly what the loop starting at line #4 does, but we have rewritten it to use step size 1 instead, dividing both the starting and the ending point by \(p\). This loop has a constant cost \(c\) per iteration, so the entire loop costs \(\displaystyle\sum_{m=p}^{\frac{n}{p}} c\).
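The iteration-count formula is easy to check empirically (a small sketch of our own; the helper name `iterations` and the test values are hypothetical):

```python
def iterations(i, n, p):
    """Count the iterations of: m = i; while m <= n: m += p."""
    count, m = 0, i
    while m <= n:
        count += 1
        m += p
    return count

# When p divides both i and n, the count matches n/p - i/p + 1:
i, n, p = 9, 99, 3
assert iterations(i, n, p) == n // p - i // p + 1
```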

The inner loop is in turn run inside an outside loop which runs from \(2 \to \sqrt{n}\) and has an additional (excluding the inner loop) constant body cost \(b\) per iteration.

Therefore the total cost of the algorithm can be expressed as: \[T(n) = a + \sum_{p = 2}^{\sqrt{n}}\left[b + \sum_{m = p}^{\frac{n}{p}} c\right]\]
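Note that this expression treats the `if prime[p]` test as always taken, so the double sum over-counts the inner-loop work; it is an upper bound on the iterations actually performed. This can be checked numerically (our own sketch; the function names are hypothetical):

```python
from math import isqrt
from fractions import Fraction

def inner_iterations(n):
    """Total inner-loop iterations the sieve actually performs."""
    prime = [True] * (n + 1)
    total = 0
    for p in range(2, isqrt(n) + 1):
        if prime[p]:
            m = p * p
            while m <= n:
                prime[m] = False
                m += p
                total += 1
    return total

def double_sum(n):
    """sum_{p=2}^{sqrt(n)} (n/p - p + 1), i.e. the inner sum with
    the `if` treated as always true."""
    return sum(Fraction(n, p) - p + 1 for p in range(2, isqrt(n) + 1))

assert inner_iterations(10_000) <= double_sum(10_000)
```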

The following summation laws/formulas are used frequently. They are standard results, so we will not prove them.

\(\displaystyle\sum_{n=s}^t C\cdot f(n) = C\cdot \sum_{n=s}^t f(n)\)

\(\displaystyle\sum_{n=s}^t f(n) + \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) + g(n)\right]\)

\(\displaystyle\sum_{n=s}^t f(n) - \sum_{n=s}^{t} g(n) = \sum_{n=s}^t \left[f(n) - g(n)\right]\)

\(\displaystyle\sum_{i=m}^n 1 = n - m + 1\)

\(\displaystyle\sum_{i=1}^n \frac{1}{i} = H_n\), where \(H_n\) is the nth harmonic number.
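The harmonic numbers grow like the natural logarithm; the well-known bounds \(\ln(n) \le H_n \le \ln(n) + 1\) can be verified numerically (a sketch of our own, with a hypothetical helper name):

```python
from math import log

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

# H_n sits between ln(n) and ln(n) + 1 for all n >= 1:
for n in (1, 10, 100, 10_000):
    assert log(n) <= harmonic(n) <= log(n) + 1
```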

\[\begin{split} T(n) &= a + \sum_{p = 2}^{\sqrt{n}}\left[b + \sum_{m = p}^{\frac{n}{p}} c\right]\\ &= a + \sum_{p = 2}^{\sqrt{n}}b + \sum_{p = 2}^{\sqrt{n}}\left[c\sum_{m = p}^{\frac{n}{p}} 1\right]\\ &= a + b\sum_{p = 2}^{\sqrt{n}}1 + c\sum_{p = 2}^{\sqrt{n}}\left[\sum_{m = p}^{\frac{n}{p}} 1\right]\\ &= a + b(\sqrt{n} - 1) + c\sum_{p=2}^{\sqrt{n}}\left[\frac{n}{p}-p+1\right]\\ &= a + b(\sqrt{n} - 1) + cn\sum_{p=2}^{\sqrt{n}}\frac{1}{p}+c\sum_{p=2}^{\sqrt{n}}\left[1 - p\right]\\ &= a + b(\sqrt{n} - 1) + cn\left(\sum_{p=1}^{\sqrt{n}}\frac{1}{p} - \sum_{p=1}^{1}\frac{1}{p}\right)+c\left(\sum_{p=2}^{\sqrt{n}}1-\sum_{p=1}^{\sqrt{n}}p + \sum_{p=1}^{1}p\right)\\ &= a + b(\sqrt{n} - 1) + cn(H_{\sqrt{n}} - 1) + c(\sqrt{n} - \frac{\sqrt{n}}{2} - \frac{n}{2})\\ &= a + b(\sqrt{n} - 1) + cn(H_{\sqrt{n}} - 1) + c(\frac{\sqrt{n}}{2} - \frac{n}{2})\\ &= a - b + (b + \frac{c}{2})\sqrt{n} + cnH_{\sqrt{n}} - \frac{3c}{2}n \end{split}\]
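As a sanity check on this algebra, the double sum and the closed form can be compared exactly using Python's `fractions` module (our own sketch; `T_sum` and `T_closed` are hypothetical names, and we restrict to perfect squares so that \(\sqrt{n}\) is an integer):

```python
from fractions import Fraction
from math import isqrt

def H(k):
    """The k-th harmonic number, computed exactly."""
    return sum(Fraction(1, i) for i in range(1, k + 1))

def T_sum(n, a, b, c):
    """T(n) as the double sum, with the inner sum evaluated as
    c * (n/p - p + 1), treating the bounds as exact."""
    return a + sum(b + c * (Fraction(n, p) - p + 1)
                   for p in range(2, isqrt(n) + 1))

def T_closed(n, a, b, c):
    """The closed form a - b + (b + c/2)*sqrt(n) + c*n*H_sqrt(n) - (3c/2)*n."""
    s = isqrt(n)  # exact when n is a perfect square
    return (a - b + (b + Fraction(c, 2)) * s
            + c * n * H(s) - Fraction(3 * c, 2) * n)

# For perfect squares the two expressions agree exactly:
for n in (4, 100, 10_000):
    for (a, b, c) in ((1, 1, 1), (2, 3, 5)):
        assert T_sum(n, a, b, c) == T_closed(n, a, b, c)
```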

We know that \(H_n\) is bounded from above by a constant multiple of \(\log(n)\), meaning \(H_n \in \mathcal{O}(\log(n))\).

Thus: \[T(n) \in \mathcal{O}(n\log(\sqrt{n}))\] Using the logarithm identity \(\log_b(x^d) = d\log_b(x)\), we have \(\log(\sqrt{n}) = \frac{1}{2}\log(n)\), which gives us: \[T(n) \in \mathcal{O}(n\log(n))\]