
\begin{enumerate}
\item As the fraction of the observable space for which the axiom set predicts something with a reasonable error margin.
\item As the fraction of the optimal set of axioms that is implied by the current axiom set.
\item As something between the first two cases, where we use a weighted fraction of the optimal axiom set, each axiom having a weight proportional to the fraction of the world where it applies. As an example, let us say that we have an infinite set of axioms and that for each point in space we can predict everything that happens using only three axioms (of course, two different points may need different axiom triplets). Let us also assume that there is a finite set of axioms $S$ such that each point in space has an axiom in $S$ among its three axioms. Then $S$ would model at least a third of the entire world.
\end{enumerate}
In all of these cases, predictions made only from the artificial constraints imposed by this paper, e.g. that the world can be modelled mathematically or that it contains intelligent beings, should not count towards the fraction of the world that is modelled by an axiom set. In other words, this \definitie{fraction of the world} is actually the fraction of the world that is modelled above what is absolutely needed because of the constraints imposed here. We can use any of these definitions (and many other reasonable ones) for the remainder of this paper. Then we would have three possible cases\footnote{All of these assume that the intelligent beings use a single axiom set for predicting. It could happen that they use multiple axiom sets which can't be merged into one. One could rewrite the paper to also handle this case, but it's easy to see that the finite/infinite distinction below would be similar.}.
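The third definition can be sketched formally. Assuming a measure $\mu$ on the world and writing $A(x)$ for the set of axioms needed at point $x$ ($\mu$, $A(x)$ and $\operatorname{frac}$ are notation introduced here for illustration, not part of the paper's definitions), the weighted fraction modelled by an axiom set $S$ could be written as
\[
\operatorname{frac}(S) = \frac{1}{\mu(\mathrm{world})} \int_{\mathrm{world}} \frac{|A(x) \cap S|}{|A(x)|} \, d\mu(x).
\]
In the triplet example, $|A(x)| = 3$ and $|A(x) \cap S| \ge 1$ at every point, so $\operatorname{frac}(S) \ge \frac{1}{3}$, matching the claim that $S$ models at least a third of the entire world.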

In order to make the first two cases clearer, let us assume that those intelligent beings would study their universe and would try to improve their axiom sets in some essential way forever. Since they have infinite time available to them, they could use strategies like generating possible theories in order (using the previously defined order, which works for finite axiom sets), checking whether they seem to make sense and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that the fraction of the world that can be modelled is (non-strictly) increasing but bounded, so it converges to some value. Also, the prediction error (it's not important to define it precisely here) is (non-strictly) decreasing and bounded, so it also converges\footnote{The prediction error can be different for different kinds of predictions and for different parts of the world. However, even then, it will still be decreasing and bounded, so, in order to avoid unneeded complexity, I have assumed that there is only one. This part of the paper could be rephrased to handle the multiple prediction errors case. Note that even when replacing an old theory with one that covers more but has a higher prediction error, one could still use the old one when it works and only use the new one otherwise, keeping the prediction error non-increasing.}. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point at which the fraction is so close to $1$ and the error is so close to $0$ that one would find them good enough. If the fraction or the error converges to a different value, then we are in the second case. Their convergence shows that the sizes of successive improvements converge to zero, so, after some point, one can't make any meaningful improvement.
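The convergence claims above are instances of the monotone convergence theorem for bounded sequences. Writing $f_n$ for the fraction modelled after the $n$-th improvement and $e_n$ for the prediction error (notation introduced here for illustration):
\[
0 \le f_n \le f_{n+1} \le 1 \implies f_n \to f^\ast \le 1,
\qquad
e_{n+1} \le e_n,\ e_n \ge 0 \implies e_n \to e^\ast \ge 0.
\]
The first case then corresponds to $f^\ast = 1$ and $e^\ast = 0$, and the second to $f^\ast < 1$ or $e^\ast > 0$.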
However, there is another meaning of \ghilimele{meaningful} for which there are some worlds where one can make meaningful improvements forever. I will count this as the third case, although it is actually a subcase of the second case above. Of course, it will still be true that, after some point, these improvements do not really increase the fraction of the world that is covered by the set and do not decrease the prediction error. As an example, imagine a world with an infinite number of earth-like planets that lie on one line and with humans living on the first one. The laws of this hypothetical world, as observed by humans, would be wildly different from one planet to the other. As an example of milder differences, starting at $10$ meters above ground, gravity would be described by a different function on each planet. On some planets it would follow the inverse of a planet-specific polynomial function of the distance, on others it would follow the inverse of an exponential function, on others it would behave in one way if the distance to the center of the planet in meters is even and in another way if the distance is odd, and so on. Let us also assume that humans can travel between these planets freely in some bubble that preserves the laws of the first planet well enough that humans can live, but that also lets them observe a projection of what happens outside. In this case they could study each planet and find a good enough set of axioms that describes how that planet behaves, but at any moment in time the humans in this world would have only a finite part of an infinite set of laws, so they would cover only a zero fraction of the laws and a zero fraction of the world.
If one thinks that they cover a non-zero fraction because (say) they know a non-trivial part of the fundamental forces, even though they don't know the exact functions that describe them, then we could change this example to also vary the type of all forces from one planet to another, or we could add a new set of forces for each planet. The point is that one can build an example where the fraction of the universe that can be axiomatized at any moment is zero and the humans in that example world can't improve this fraction, even if they are able to continuously model new meaningful things about the universe and the part of the world that is covered by the axiom set is continuously extended.
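The zero-fraction claim in the planet example can be made explicit. If $n(t)$ denotes the number of planets whose laws have been axiomatized by time $t$ (a name chosen here for illustration), then under a uniform weighting of the planets the covered fraction at time $t$ is
\[
\lim_{N \to \infty} \frac{n(t)}{N} = 0,
\]
even though $n(t)$ grows without bound: each new planet adds knowledge, but the fraction stays zero.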

When talking about a mathematical description of the universe as one sees it, it is obvious that the description may depend both on time and place, i.e. the laws of the universe as observed at a given time and place can be quite different from the laws at another time and/or place. If these differences are unpredictable, then an intelligent being will never be able to find a full mathematical description of the universe, even if we assume that it could live through all these changes (as time passes and/or as it moves through space). Note that a being living on Earth and ignoring everything outside of it may think that tides happen because gravity changes with time in ways which are more or less predictable, which seems similar to the case above. However, we also have another explanation available, i.e. that gravity works in the same way regardless of time, but the state of the universe, i.e. the relative positions of the Sun, the Moon and the Earth, changes with time.

\section{Description probabilities}

For the purpose of this paper, let us denote by \definitie{finite property} of something any property of that something which can be written using a finite number of words. Of course, all the properties that we will ever use in speech and writing are finite. Since we will use only finite properties here, let us drop \ghilimele{finite} and call any of them simply a \definitie{property}. The observable descriptions of possible worlds are general enough and different enough that it's hard to say something about them, except that they make sense in a mathematical way. Still, given any property $X$, we could try to see what is the chance that it is true for a random observable description. If our universe is not designed, then any possible universe could have existed (and maybe all possible universes actually exist).
Focusing only on universes which have a space-time and in which intelligent beings can exist, if we would want to pick a random one, for a reasonable definition of random, each universe would have a zero probability of being chosen. If we further restrict these universes to ones which allow a predictive set of axioms for the entire universe, then each set of axioms is as likely to be randomly picked as any other, so each has a zero probability. I argue that, even more, the observable descriptions, i.e. the sets of axioms that describe the universes as perceived by the intelligent beings in them, each have a zero probability. In other words, any reasonable probability distribution over these axiom sets is continuous. As a parenthesis, above I required that there is a predictive axiom set for the entire universe. That was done for simplicity, but a parallel construction could be made for the case when only a part of the universe can have a predictive set of axioms. While this setup means that we can't directly say anything about a specific observable description, we can say things about what has a real chance of being true for a random observable description. First, let us note that any property that is true for only one description has a zero probability (i.e. it is false virtually everywhere). Even more, any property which is true for a countable number of descriptions has a zero probability. This means that any property with a non-zero probability is certainly true for an uncountable number of descriptions. Of course, there may be properties which are true for an uncountable number of descriptions and still have a zero probability.
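The step from single descriptions to countable sets of descriptions is just countable additivity. If $P$ is a continuous (atomless) probability distribution over observable descriptions, so that $P(\{d\}) = 0$ for every description $d$, then for any countable set $D = \{d_1, d_2, \ldots\}$ we have
\[
P(D) = \sum_{i=1}^{\infty} P(\{d_i\}) = 0.
\]
Consequently, a property $X$ with $P(\{d : X(d)\ \text{holds}\}) > 0$ must hold for an uncountable set of descriptions; the converse can fail, since an uncountable set can still have measure zero.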