Virgil Șerbănuță edited untitled.tex  over 8 years ago

Commit id: c0a5fdfad10296754d8eef95e5e1d6533adcf1ef


Then we can say that prediction means predicting the state of the world (maybe at a given point) from the state of the world at a subset of the previous points in time. If we are interested in predicting the state at a given point $P$, this subset should include a full section through $P$'s past (e.g.\ a plane which intersects its past cone), i.e.\ it should separate $P$'s past into two parts, one which is ``before the subset'' and one which is ``after the subset''; every line which lies fully in $P$'s past and connects a point which is before the subset with a point which is after the subset must go through the subset. One could think of similar definitions for predicting the entire state of the world. If needed, this definition could be changed to work for other concepts of space and time.

In a deterministic universe, knowing the laws of the universe and its full state, we could, in theory, fully predict $P$'s state. But a universe does not have to be deterministic and, even if it is, one could have only a statistical model for it. Then a set of axioms which only allows statistical predictions (I'll call this a \definitie{statistical axiom set}) is fine, and for the purpose of this document we do not need to distinguish between a non-deterministic universe and a deterministic one for which we only have a statistical model. [TODO: add the probability to the reasoning about knowing a $0$ fraction of the universe.] Also note that there are cases when one can't have a statistical axiom set, e.g.\ when the perceived laws of the universe change in fully unpredictable ways from day to day (of course, this can happen without any change in the actual axiom set of the universe). [TODO: move this near the discussion about finite models.]

If we define ``prediction'' in some useful way, as suggested above, and restrict the ``possible worlds'' term to the ones where we can make predictions, then it makes sense to use only systems of axioms that allow predictions.
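The separation condition above can be written out a bit more formally. This is only an illustrative sketch; the notation ($C^-(P)$ for $P$'s past cone, $B$ and $A$ for the ``before'' and ``after'' parts, $\ell$ for a line) is introduced here and not taken from the text:
% Sketch of the "section through P's past" condition:
% S is a section if removing it splits the past cone into disjoint
% parts B ("before") and A ("after"), and every line inside the cone
% that joins the two parts must meet S.
\[
  C^-(P) \setminus S = B \sqcup A,
  \qquad
  \forall \ell \subseteq C^-(P):\;
  \bigl(\ell \cap B \neq \emptyset \wedge \ell \cap A \neq \emptyset\bigr)
  \Rightarrow \ell \cap S \neq \emptyset.
\]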
We will always use the best possible system for predictions (statistical or not). This solves the ``too-general problem'', since such a system would describe its possible worlds in a way that does not leave out important things. Still, nothing prevents such a system from going into too much detail.

Given a specific formalism for specifying axioms that uses a finite alphabet, for each possible world we could consider only the smallest set of axioms that allows predictions, ``smallest'' being defined as ``having the smallest length when written on paper''. This is not a well-defined notion, for a few reasons. First, there could be multiple systems with the smallest length (one obvious case is given by reordering the axioms). In such a case, we could define an order on the symbols of our formalism and pick, among the systems of smallest length, the one that comes first in lexicographic order. Second, there could be systems of axioms of infinite length. For these, we will only consider systems which, when written on an infinite paper, use a countable number of symbols. This means that they will all have the same length, but we can still use the lexicographic order to compare them. We will ignore systems which need an uncountable set of symbol places. With an axiom system chosen in this way we would also solve the ``too-specific problem'', since we would remove any axiom that is not absolutely needed.
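For finite systems, the ordering just described can be made concrete with a small sketch. The string encodings and the helper name below are invented for this example (any fixed encoding of axiom systems over an ordered alphabet would do):

```python
# Illustrative sketch: choose the "smallest" finite axiom system, where
# each candidate system is encoded as a string over a fixed, ordered
# alphabet. Shorter strings win; ties are broken lexicographically,
# mirroring the ordering described above. (The encodings are hypothetical.)

def smallest_axiom_system(systems):
    """Return the minimal encoding under the (length, lexicographic) order."""
    return min(systems, key=lambda s: (len(s), s))

# Reorderings of the same axioms produce different encodings of the same
# length; the lexicographic tie-break picks one of them deterministically.
candidates = ["A1;A2;A3", "A2;A1;A3", "A1;A3"]
print(smallest_axiom_system(candidates))  # -> A1;A3
```

Python compares the `(len(s), s)` tuples componentwise, which is exactly the ``length first, then lexicographic'' order defined above.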

In this case one could study each planet and add a specific description of its laws, but at any moment in time the humans in this world would have only a finite part of an infinite set of laws, so we would not be able to say that they cover a non-zero fraction of the laws. If one thought that they cover a non-zero fraction because they cover a non-trivial part of the fundamental forces, then we could also vary all the forces from one planet to the other, or we could add a new set of forces for each planet. The point is that we can have a case in which the fraction of the universe that can be axiomatized at any moment is zero, even if one is able to model meaningful things about the universe and the model is continuously extended.

We should note that in the second and third cases it can also happen that one can't improve their axiom set to cover more even when using a statistical axiom set. One such case would be when the perceived laws of the universe change in fully unpredictable ways from day to day (of course, this can happen without any change in the actual axiom set of the universe). [TODO: Make sure these things work for nondeterministic universes.]

For the given intelligent beings we would say in the first case that their universe has a finite observable description, and in the second and third cases that it has an infinite observable description. Of course, a possible universe $U$ could have multiple types of intelligent beings, each type perceiving the universe in a different way. Because of this difference in perception, $U$ may have a finite observable description for some types and an infinite observable description for others.
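The zero-fraction claim can be phrased as a small limit computation. The symbols $k$ and $N$ are introduced here purely for illustration:
% k = number of per-planet law sets axiomatized so far (finite at any moment),
% N = number of planets, each with its own law set.
\[
  \lim_{N \to \infty} \frac{k}{N} = 0
  \qquad \text{for any fixed } k \in \mathbb{N},
\]
% so at any moment the axiomatized fraction of the infinite set of laws is 0,
% even though k can keep growing over time.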

\item restrict ourselves to universes in which we can measure things with real numbers;
\item ignore things which happen rarely, even if we can measure that they happened with $\epsilon$ precision. [TODO: this is included in the fraction $f$ below]
\end{itemize}

This is a bit hand-wavy, but we could use any reasonable definition of ``measuring'' and ``happening rarely''. Then we could say that the important things are the ones which we can measure with a precision greater than $\epsilon$ and which do not happen rarely. Let us also fix an arbitrary time length $t \ge 0$, an acceptable error $\delta \ge 0$ and a probability $p \ge 0$ for our predictions, and let us denote by $f$, with $0 \le f \le 1$, the fraction of the world where we can make predictions for the given time length $t$, with the acceptable error $\delta$ and with probability $p$ that the prediction is correct.

Then, if the world is not created, for any continuous distribution, the probability of having a finite description with which we can make predictions for a time length of $t$, with an error $\delta$, with a probability $p$, and for a fraction of the world $f$, is $0$. To have a non-zero probability, either $t = 0$ (which means that we are not making any prediction), $\delta = \infty$ (which means that our predictions have no connection to reality), $p = 0$ (which means that our predictions always fail), or $f = 0$. We can discard the first option since then we would have no predictions. We can also discard the second and the third, since such a description would not be useful in any way. The only remaining option is that $f = 0$; as argued above, a description with $f = 0$ can actually make sense. Therefore, with probability $1$, we have $f = 0$ and the world has an infinite model.

There is a distinction that we should make.
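The measure-zero step of the argument can be illustrated numerically. This is a hedged sketch that assumes a uniform distribution as the continuous distribution (any continuous distribution behaves the same way) and an arbitrary target value; it shows that a continuously distributed quantity essentially never lands exactly on any single prescribed value:

```python
# Illustrative sketch: under a continuous distribution, any single exact
# value has probability 0. We draw the "fraction of the world" f uniformly
# from [0, 1) many times and count exact hits on a fixed target value.
# (The target 0.5 is an arbitrary choice for this example.)
import random

random.seed(0)  # fixed seed so the experiment is reproducible
draws = 1_000_000
target = 0.5
hits = sum(1 for _ in range(draws) if random.random() == target)
print(hits)  # 0: exact hits essentially never occur
```

In floating point the hit probability is tiny rather than exactly zero (each draw is one of $2^{53}$ representable values), which only strengthens the point: the idealized continuous case assigns probability exactly $0$.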
When predicting (say) the weather we cannot make precise long-term predictions, and this happens because weather is chaotic: a small difference in the starting state can create large differences over time. This could happen even if the universe were deterministic and we knew its laws perfectly, as long as we did not know its full current state. However, as argued above, with probability $1$, our hypothetical intelligent beings would not be able to make predictions for a significant part of the universe because they would have no idea how their universe works, not because they do not know its state precisely enough.