This is a hand-in written by **Group 7** (FIRE) for the Algorithms course (**TIN092**) at Chalmers University of Technology.

Group 7 consists of:

**Mazdak Farrokhzad**, 901011-0279

twingoow@gmail.com

Program: IT

**Niclas Alexandersson**, 920203-0111

nicale@student.chalmers.se

Program: IT

We first try to develop a naïve algorithm that doesn’t use dynamic programming. The algorithm performs an exhaustive search by going through the companies one at a time and, for each amount invested in the current company, comparing the result of investing one more unit in it with the result of stopping and moving on to the next company. For this, we introduce an extra variable \(a\), to keep track of the amount currently invested in company \(i\).

```
function maxReturn(i, x) begin
    return mr(i, x, 0)
end

function mr(i, x, a) begin
    if i = 0 then
        return 0
    end
    if x = 0 then
        return g(i, a)
    end
    return max(mr(i, x - 1, a + 1), g(i, a) + mr(i - 1, x, 0))
end
```

Note that the return \(g(i, a)\) for company \(i\) is credited only once, when we stop investing in it (or run out of money); crediting it again at every step of the first branch would count it several times.
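The pseudocode above can be sketched in runnable Python. The return function `g` is an assumption for illustration only: `g(i, a)` gives the total return from investing `a` units in company `i` (companies numbered from 1, with `g(i, 0) = 0`).

```python
def max_return(g, i, x):
    """Maximum total return from distributing x units over companies 1..i."""
    return mr(g, i, x, 0)

def mr(g, i, x, a):
    # No companies left: nothing more can be earned.
    if i == 0:
        return 0
    # No money left: lock in what is already invested in company i.
    if x == 0:
        return g(i, a)
    # Either invest one more unit in company i, or settle for g(i, a)
    # and move on to company i - 1 with a fresh a = 0.
    return max(mr(g, i, x - 1, a + 1),
               g(i, a) + mr(g, i - 1, x, 0))

# Hypothetical example: two companies with diminishing returns.
returns = {1: [0, 5, 9, 12], 2: [0, 4, 8, 12]}
g = lambda i, a: returns[i][a]
print(max_return(g, 2, 3))  # prints 13 (e.g. 1 unit in company 2, 2 in company 1)
```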

The time complexity function can be expressed as the recursive function:

\[\begin{aligned} T(i, x) = \begin{cases} c_0 & \text{if $i = 0$}\\ c_1 & \text{if $x = 0$}\\ c_3 + T(i, x-1) + T(i-1, x) & \text{if $i \neq 0 \land x \neq 0$} \end{cases}\end{aligned}\]

Since the value of \(a\) only affects the return value and not the time complexity, we do not include it as a parameter to the complexity function.

Since each recursive call decreases \(i + x\) by exactly one, substituting \(p = i + x\) gives us:

\[\begin{aligned} T(p) &= c_3 + 2T(p-1)\\ &= c\,2^p + \sum_{i=0}^{p-1} c_3 2^i\\ &= c\,2^p + c_3\frac{1-2^p}{1-2}\\ &= c\,2^p + c_3\left(2^p - 1\right)\\ &\in \mathcal{O}(2^p)\end{aligned}\]

where \(c\) is the constant of the base case \(T(0)\).
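The exponential growth can be checked empirically by counting the calls the naive recursion makes. The count \(C\) satisfies \(C(i, x) = 1 + C(i, x-1) + C(i-1, x)\), mirroring the recurrence for \(T\); the return function is irrelevant for the count, so it is omitted in this sketch.

```python
def count_calls(i, x):
    """Number of calls made by the naive recursion for inputs (i, x)."""
    calls = 0
    def mr(i, x):
        nonlocal calls
        calls += 1
        if i == 0 or x == 0:
            return
        mr(i, x - 1)  # invest one more unit in company i
        mr(i - 1, x)  # move on to company i - 1
    mr(i, x)
    return calls

for n in range(1, 7):
    print(n, count_calls(n, n))  # the count grows exponentially in p = 2n
```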

With the above algorithm, consider the following paths of execution:

```
mr(i, x, 0)
mr(i, x-1, 1)
...
mr(i-1, x-1, 0)
...
mr(i-2, x-1, 0)
mr(i-1, x, 0)
mr(i-1, x-1, 1)
...
mr(i-2, x-1, 0)
...
```


Looking at this tree, we can see that `mr(i-2, x-1, 0)` is computed twice. The same will be true for many other values in the execution tree. Thus, it is likely that dynamic programming will be of use here.

Another thing to realise is that `mr(i, x, a)` where \(a > 0\) can only be called from `mr(i, x+1, a-1)`. `mr(i, x, 0)`, on the other hand, can be called from `mr(i+1, x, a)` with any value of \(a\). Since each unique `mr(i, x, a)` with \(a > 0\) has only a single possible caller, its cached value would never be looked up again, so it is wasteful to store anything other than values where \(a = 0\) in memory. To avoid having to treat \(a = 0\) and \(a > 0\) as separate cases, we change the algorithm to use a loop instead of the recursive call to `mr(i, x-1, a+1)`, and then add memory to it.

```
function maxReturn(i, x) begin
    initialise mem[0..i, 0..x] to -1
    return mr(i, x)
end

function mr(i, x) begin
    if mem[i, x] = -1 then
        if i = 0 then
            let mem[i, x] = 0
        else
            let m = 0
            for a in 0..x loop
                let m = max(m, g(i, a) + mr(i - 1, x - a))
            end
            let mem[i, x] = m
        end
    end
    return mem[i, x]
end
```
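As before, the memoised pseudocode can be sketched in Python, with the same assumed return function `g`; the table `mem[i][x]` plays the role of `mem[0..i, 0..x]`.

```python
def max_return(g, i, x):
    """Memoised maximum return for companies 1..i and budget x."""
    mem = [[-1] * (x + 1) for _ in range(i + 1)]

    def mr(i, x):
        if mem[i][x] == -1:
            if i == 0:
                mem[i][x] = 0
            else:
                # Try every amount a invested in company i, as in the loop above.
                mem[i][x] = max(g(i, a) + mr(i - 1, x - a)
                                for a in range(x + 1))
        return mem[i][x]

    return mr(i, x)

returns = {1: [0, 5, 9, 12], 2: [0, 4, 8, 12]}
g = lambda i, a: returns[i][a]
print(max_return(g, 2, 3))  # prints 13, matching the naive algorithm
```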

From the outset, we know that the complexity of initialising the memory is \(ix\). No cell is ever filled twice, so apart from the initialisation, the complexity (ignoring constants) is the number of recursive calls made, which is at most \(\displaystyle\sum_{k=0}^{x} k = \frac{x^2 + x}{2}\) for each \(i\) except the last, so the total complexity is:

\[\begin{aligned} T(i, x) &= (i-1)\frac{x^2 + x}{2} + ix\\ &= \frac{ix^2 + 3ix - x^2 - x}{2}\\ &\in \mathcal{O}(ix^2)\end{aligned}\]

*(Figure: a grid over \(i\) and \(x\) showing the order in which the entries of `mem[i, x]` are filled, and the calls made between them.)*

\label{fig:dp-memory-calls}