    Module 4 - Probability models

    Introduction

    This is a hand-in written by Group 60 (FIRE) for the weekly module #4 in the mathematical modelling course (DAT026) at Chalmers University of Technology.

    Group 60 consists of:

    • Mazdak Farrokhzad

      • 901011-0279

      • twingoow@gmail.com

      • Program: IT

      • Time spent: 22 hours, 43 minutes

    • Niclas Alexandersson

      • 920203-0111

      • nicale@student.chalmers.se

      • Program: IT

      • Time spent: 21 hours, 25 minutes

    We hereby declare that we have both actively participated in solving every exercise, and all solutions are entirely our own work.

    Disclaimer: Please note that the numbering of the sections in the TOC and elsewhere is not intended to follow the numbering of the problems. The problem referred to is always explicitly stated in the title itself (e.g. Problem 1...).

    Problem 1 - Monte Carlo simulation

    In the case of a stochastic model, it is impossible to draw any exact conclusions due to the random nature of the model. What can be done is to observe the behaviour of the model and then draw statistical conclusions based on what can be seen. Any predictions made by a stochastic model will be based on probability, and while possibly accurate with high probability, they are not guaranteed to be exact.

    A more realistic choice of probability distribution will result in a more realistic simulation of reality.

    Deterministic models, on the other hand, will give more precise predictions, but may be harder to construct when there are uncertainties in the model and data, especially when parts of the modelled behaviour or phenomena are hard to predict. Human behaviour in general is a good example of this - something which is difficult to predict using deterministic models alone. While human behaviour isn’t entirely rational, the main problem is that there are too many factors involved in human decision making to take them all into account in a deterministic model.

    In the case where discrete entities are involved in the model, a deterministic approach will only be able to make “optimal” choices based on some non-random calculation. While this can often be a fairly good estimate of what will really happen, it is possible that some kind of equilibrium will arise, the balance of which might be broken by using a stochastic model instead. If the equilibrium does not arise in the stochastic model, it is unlikely to arise in reality.
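    To make the contrast concrete, here is a minimal Monte Carlo sketch in Python (our own illustration, not part of the hand-in; the biased random walk and all names are made up for the example). The same stochastic model is run many times, and the conclusion drawn is a statistical estimate with an uncertainty attached, rather than a single exact prediction:

        import random

        def simulate_once(steps=100):
            # One run of a toy stochastic model: a random walk that
            # steps +1 with probability 0.55 and -1 otherwise.
            position = 0
            for _ in range(steps):
                position += 1 if random.random() < 0.55 else -1
            return position

        def monte_carlo(runs=10000):
            # Repeat the stochastic run many times and summarise
            # the outcomes statistically (sample mean and std dev).
            outcomes = [simulate_once() for _ in range(runs)]
            mean = sum(outcomes) / runs
            std = (sum((x - mean) ** 2 for x in outcomes) / (runs - 1)) ** 0.5
            return mean, std

        mean, std = monte_carlo()
        # A deterministic model would output the single number
        # 100 * (0.55 - 0.45) = 10; the Monte Carlo result is instead
        # an estimate of that value together with its spread.
        print("estimated mean: %.2f +/- %.2f (1 std dev)" % (mean, std))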

    Problem 2 - Rental cars

    Let \(P(R_X)\) be the probability that a car rented from site \(X\) is returned to \(X\). In our case we have two sites, \(A\) and \(B\), with the probabilities \(P(R_A) = 0.6\) and \(P(R_B) = 0.7\).

    a) Probability of a single car

    The problem of finding the probability that a specific car rented at \(A\) is returned to \(A\) deals with dependent events. That is, the probability of the car ending up at \(A\) at step \(k + 1\) (the second event) depends on where it was at step \(k\) (the first event).

    Let \(P_A(n)\) be the probability function modelling the probability of the car remaining at \(A\) after \(n\) steps.

    It is quite obvious that the probability of a car stationed at site \(A\) that has not yet been rented being at site \(A\) is \(1\). The probability of a car being at \(A\) for the \(n\)th renting is the probability of it already being there before (previous case \(n - 1\)) times the probability of it staying, plus the probability of it not being there before times the probability of it being returned to \(A\) from site \(B\), or in mathematical terms:

    \[P_A(n) = \begin{cases} 1 & \text{if } n = 0\\ P_A(n-1) \cdot P(R_A) + (1 - P_A(n-1)) \cdot (1 - P(R_B)) & \text{if } n > 0 \end{cases}\]

    Simplified with constants substituted, this becomes: \[P_A(n) = \begin{cases} 1 & \text{if } n = 0\\ 0.3 P_A(n-1) + 0.3 & \text{if } n > 0 \end{cases}\]
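    As a concrete aside (ours, not part of the original hand-in), the recurrence is easy to evaluate numerically in Python by iterating it from the base case:

        def p_a(n):
            # P_A(n): probability that the car is at site A after n
            # rentals, iterating P_A(n) = 0.3 * P_A(n-1) + 0.3, P_A(0) = 1.
            p = 1.0
            for _ in range(n):
                p = 0.3 * p + 0.3
            return p

        # First few values: p_a(0) = 1.0, p_a(1) = 0.6, p_a(2) = 0.48,
        # p_a(3) = 0.444, p_a(4) = 0.4332.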

    Examining the value of this function for some values of \(n\), we get:

    \[\begin{aligned} P_A(0) &= 1 \\ P_A(1) &= 0.3 + 0.3 \cdot 1\\ P_A(2) &= 0.3 + 0.3(0.3 + 0.3 \cdot 1)\\ P_A(3) &= 0.3 + 0.3(0.3 + 0.3(0.3 + 0.3 \cdot 1))\\ P_A(4) &= 0.3 + 0.3(0.3 + 0.3(0.3 + 0.3(0.3 + 0.3 \cdot 1)))\end{aligned}\]

    Expanding:

    \[\begin{aligned} P_A(0) &= 1 \\ P_A(1) &= 0.3 + 0.3\\ P_A(2) &= 0.3 + 0.3^2 + 0.3^2\\ P_A(3) &= 0.3 + 0.3^2 + 0.3^3 + 0.3^3\\ P_A(4) &= 0.3 + 0.3^2 + 0.3^3 + 0.3^4 + 0.3^4\end{aligned}\]

    These values appear to follow the pattern: \[P_A(n) = \sum_{i=1}^n \left[0.3^i\right] + 0.3^n\]

    We prove this by induction.

    We know that this is true for the base cases \(n \in \{1, 2, 3, 4\}\); if we can also show that its truth for \(n\) implies its truth for \(n + 1\), then it is true for all \(n \in {\mathbb{Z}}^+\) (for \(n = 0\), \(P_A(n)\) is trivially \(1\)).

    \[\begin{aligned} P_A(n+1) &= 0.3 + 0.3 P_A(n)\\ &= 0.3 + 0.3 \left(\sum_{i=1}^n \left[0.3^i\right] + 0.3^n\right)\\ &= 0.3 + 0.3 \sum_{i=1}^n \left[0.3^i\right] + 0.3 \cdot 0.3^n\\ &= 0.3 + \sum_{i=1}^n \left[0.3^{i+1}\right] + 0.3^{n+1}\\ &= 0.3 + \sum_{i=2}^{n+1} \left[0.3^i\right] + 0.3^{n+1}\\ &= \sum_{i=1}^{n+1} \left[0.3^i\right] + 0.3^{n+1}\end{aligned}\]

    Q.E.D.

    Solving the geometric sum using \(\sum_{i=1}^n r^i = \frac{r - r^{n+1}}{1 - r}\) with \(r = 0.3\) gives us:

    \[P_A(n) = \begin{cases} 1 & \text{if } n = 0\\ \frac{10}{7} \left(0.3 - 0.3^{n+1}\right) + 0.3^n, & \text{if } n > 0 \end{cases}\]
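    As a final sanity check (again ours, not part of the hand-in), the closed form can be compared against the iterated recurrence in Python. The two agree to floating-point precision, and since the \(0.3^n\) terms vanish as \(n\) grows, the formula shows that \(P_A(n)\) tends towards \(\frac{10}{7} \cdot 0.3 = \frac{3}{7} \approx 0.4286\):

        def p_a_recurrence(n):
            # Iterate P_A(n) = 0.3 * P_A(n-1) + 0.3 with P_A(0) = 1.
            p = 1.0
            for _ in range(n):
                p = 0.3 * p + 0.3
            return p

        def p_a_closed(n):
            # Closed form: (10/7)(0.3 - 0.3^(n+1)) + 0.3^n for n > 0.
            if n == 0:
                return 1.0
            return (10 / 7) * (0.3 - 0.3 ** (n + 1)) + 0.3 ** n

        # Both formulations agree for all tested n.
        for n in range(20):
            assert abs(p_a_recurrence(n) - p_a_closed(n)) < 1e-12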