Let us restrict the ``possible worlds'' term to the worlds where we can make predictions and let us use only sets of axioms that allow predictions. As mentioned above, for a given world, a good set of axioms is one which allows us to make all possible correct predictions for that world (statistical or not). Using only good sets of axioms solves the ``too-general problem'', since such a set would describe its possible worlds in a way that does not leave out important things. Still, there is nothing that prevents such a set from going into too much detail. Let us choose any formalism for specifying axioms that uses a finite alphabet. Then, for each possible world, we could say that the best [TODO: Not working with maximal] set of axioms is the smallest good one, ``smallest'' being defined as having the smallest length when written on paper. This is not a well-defined notion for a few reasons. First, there could be multiple sets with the smallest length (one obvious case is given by reordering the axioms). In such a case, we could define an order for the symbols that we are using in our formalism and we could pick the system with the smallest length which is also the smallest in the lexicographic order. Second, there could be systems of axioms of infinite length. [TODO: These can't really be sorted. I can have an infinite sequence of sets of axioms $s_1, s_2, \ldots$ where each element is smaller than its predecessor, but which has no limit. I think I can't reject all systems which include other systems, as I may again have an infinite chain. I can reject systems for which some part is implied by a smaller finite part in another system and the remainder is the same.] For these, we will only consider systems which, when written on an infinite paper, use a countable number of symbols. This means that all such systems will have the same length, but we can still use the lexicographic order to compare them.
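The selection rule above (shortest first, lexicographic order as a tie-breaker) can be sketched concretely. In the toy Python sketch below, the alphabet, the candidate axiom sets, and the goodness predicate are all hypothetical stand-ins; the text only fixes the ordering, not the formalism:

```python
# Toy sketch: picking the "best" axiom set among finitely many candidates,
# comparing written length first, then lexicographic order on symbols.
# The alphabet, candidates and goodness test are made-up placeholders.

ALPHABET_ORDER = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz&|!()->")}

def sort_key(axiom_set: str):
    # (length, symbol ranks) implements "smallest length, then lexicographic".
    return (len(axiom_set), [ALPHABET_ORDER[ch] for ch in axiom_set])

def best_axiom_set(candidates, is_good):
    # Among the "good" candidates, return the minimal one under sort_key.
    good = [c for c in candidates if is_good(c)]
    return min(good, key=sort_key) if good else None

# Hypothetical example: any set containing the symbol "a" counts as good.
candidates = ["ba", "ab", "abc", "b"]
print(best_axiom_set(candidates, lambda s: "a" in s))  # -> ab
```

The tie between `"ab"` and `"ba"` (equal length) is broken by the symbol order, exactly as in the text.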
We will ignore systems which need an uncountable set of symbol places. With an axiom system chosen in this way we would also solve the ``too-specific problem'' [TODO: I only solved it for finite systems. Do I also need to solve it for infinite systems?], since we would remove any axiom that is not absolutely needed. If $U$ is a universe and $A$ is the smallest set of predictive axioms as described above, then we say that $A$ is the \definitie{optimal set of axioms for $U$}. If $A$ is a set of axioms which is optimal for some universe $U$, then we say that $A$ is an \definitie{optimal set of axioms}.

Now let us see if we actually need infinite length systems. We can have infinite systems of axioms, and there is no good reason to reject such systems and to ignore their possible worlds, so we will take them into account. It is less clear that we can't replace these infinite systems with finite ones. Indeed, let us use any binary encoding allowing us to represent these systems as binary strings. The encoding of a set of axioms $A$, $encoding(A)$, would be a function from the natural numbers to the binary set, giving the value of the bit at each position in the encoding, $encoding(A):\naturale\longrightarrow \multime{0, 1}$. Then the following scenario becomes plausible: for any universe $U$ with an infinite system of axioms $A$, we can consider $U+encoding(A)$ to be a universe in itself which has $encoding(A)$ as part of its state at any moment in time. Then it is likely that we can find a finite system of axioms which allows predictions for such a universe. While, strictly speaking, this would be a different universe than the one we had at the beginning, it may seem similar enough that one may be tempted to use only finite systems of axioms. On the other hand, using only finite systems of axioms in this way seems to be some sort of cheating.
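To make the $encoding(A):\naturale\longrightarrow\multime{0, 1}$ idea concrete, here is a minimal Python sketch. The particular infinite axiom family is invented for illustration; the point is that an infinite binary string is represented not as stored data but as a function that can be queried at any position, which is what lets a universe carry $encoding(A)$ as part of its state:

```python
# Sketch: an infinite axiom system's encoding as a total function N -> {0, 1}.
# Hypothetical example: the infinite family "axiom number n is present
# exactly when n is even", so the encoding reads 1 0 1 0 1 0 ...

def encoding(n: int) -> int:
    # Bit at position n of the (infinite) encoded axiom set.
    return 1 if n % 2 == 0 else 0

def extended_state(observables: dict, n: int) -> dict:
    # A state of U + encoding(A): the universe's observables plus
    # oracle access to one bit of the encoding.
    return {**observables, "encoding_bit": encoding(n)}

print([encoding(n) for n in range(8)])  # -> [1, 0, 1, 0, 1, 0, 1, 0]
```

A finite program thus suffices to answer every query about this particular infinite encoding, which is exactly why the construction feels like cheating: the infinity has been hidden inside the state rather than removed.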
In order to get a more ``honest'' system of axioms, we could work with very specific systems of axioms, e.g. we could only talk about worlds which have $\reale^4$ as their space and time, whose objects are things similar to the wave functions used by quantum mechanics, and so on.

\section{Modelling from inside}

We could also completely avoid the axiom system encoding problem by talking only about worlds which could contain intelligent beings and about how those intelligent beings would model their world. Let us note that if those intelligent beings are similar enough to us and the optimal set of axioms for their world is infinite, then they will never have a complete description of how their world works, but they will be able to build better and better models.

First, let us note that having intelligent beings in a universe likely means that their intelligence is needed to allow them to live in that universe, which likely means that they can have a partial model of the universe. That model does not have to be precise (it could be made of simple rules like ``If I pick fruits then I can eat them. If I eat them then I live.'') and it can cover only a small part of their world, but it should predict something. Of course, these predictions do not have to be deterministic. Also, they might not be able to perceive the entire universe.

Note that the previous definition of prediction does not say that it is feasible to actually predict everything, it only means that in theory there is a mathematical way to make that prediction. [TODO: say why this matters: requiring that prediction is actually possible could make the paper stronger. However, I chose to argue about [...] better result.] A related case is the following: it is possible that almost all macroscopic events can be predicted very precisely using quantum physics. Assuming that this is indeed the case, many of these predictions require too many computational resources, making them infeasible. I am requiring even less than this: I am allowing axiom systems where there is no way to infer a prediction from the axiom system, but if one checks all possible models of that system, the prediction turns out to be true.

We can define the \definitie{fraction of the world} that is modelled by an axiom system in at least three ways:

\begin{enumerate}

\item As the fraction of the optimal set of axioms that is implied by the current axiom system.

\item As something between the first two cases, where we use a weighted fraction of the optimal axiom set, each axiom having a weight proportional to the fraction of the world where it applies. As an example, let us say that we have an infinite set of axioms, and that for each point in space we can predict everything that happens using only three axioms (of course, two different points may need different axiom triplets). Let us also assume that there is a finite set of axioms $S$ such that each point in space has an axiom in $S$ among its three axioms. Then $S$ would model at least a third of the entire space.

\end{enumerate}

We can use any of these definitions (and many other reasonable ones) for the remainder of this paper.

Then we would have three possible cases.

First, those intelligent beings could, at some point in time, find an axiom system which gives the best predictions that they could have for their world, i.e. which predicts everything that they can observe. In other words, they wouldn't be able to find anything which is not modelled by their system. We could relax this ``best axiom system'' condition by only requiring an axiom system that is good enough for all practical purposes. As an example, for a universe based on real numbers, knowing the axioms precisely with the exception of some constants, and measuring all constants with a billion digits of precision, might (or might not) be good enough. Only caring about things which occur frequently enough (e.g. more than once in a million years) could also be ``good enough''.

Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it is also impossible to improve it in a meaningful way.
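Before examining this second case, the ``at least a third'' claim in the weighted-fraction definition above can be checked on a small finite sample. In the hypothetical Python sketch below, every point in a toy space is predicted by a triplet of axioms, the finite set `S` is assumed to meet every triplet, and the weighted coverage is computed directly (all the points, axiom names, and triplets are invented for illustration):

```python
# Toy check of the weighted-fraction bound: if every point's triplet of
# axioms intersects the finite set S, then S carries at least 1/3 of the
# total weight.
from fractions import Fraction

# Hypothetical: 6 points, each predicted by its own triplet of axiom ids.
triplets = {
    0: {"a0", "x0", "y0"},
    1: {"a0", "x1", "y1"},
    2: {"a1", "x2", "y2"},
    3: {"a1", "x3", "y3"},
    4: {"a2", "x4", "y4"},
    5: {"a2", "x5", "y5"},
}
S = {"a0", "a1", "a2"}  # finite set meeting every triplet

# Each point contributes equal weight; within a point, each of its three
# axioms carries a third of that point's weight.
point_weight = Fraction(1, len(triplets))
weight_of_S = sum(
    point_weight * Fraction(len(triplet & S), 3)
    for triplet in triplets.values()
)
print(weight_of_S)                    # 1/3 in this example
assert weight_of_S >= Fraction(1, 3)  # the bound from the text
```

Since each triplet contains at least one axiom of `S`, each point contributes at least a third of its weight to `S`, which is the bound claimed in the text.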
This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would get one a better model.

In order to make this split into cases more clear, let us assume that those intelligent beings would study their universe forever and, if needed, would try to improve their axiom systems in some essential way forever. Since they have an infinite time available to them, they could use strategies like generating possible theories in order (using the previously defined order), checking if they seem to make sense, and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point.

Note that the fraction of the world that can be modelled is increasing, but is limited, so it converges to some value. Also, the prediction error (it is not important to define it precisely here) is decreasing and is limited, so it converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point when the fraction is so close to $1$ and the error is so close to $0$ that one would find them ``good enough''. If the fraction or the error converges to different values, then we are in the second case.

There is also a third case, when one can improve the axiom system in ways that seem meaningful, without growing the fraction of the world that is covered by the system and without decreasing the prediction error. As an example, imagine a world with an infinite number of earth-like planets that lie on one line and with humans living on the first one. The laws of this hypothetical world, as observed by humans, would be wildly different from one planet to the other.
As an example of milder differences, starting at $10$ meters above ground, gravity would be described by a different function on each planet. On some planets it would follow the inverse of a planet-specific polynomial function of the distance, on others it would follow the inverse of an exponential function, on others it would behave in one way if the distance to the center of the planet in meters is even and in another way if the distance is odd, and so on. Let us also assume that humans can travel between these planets freely in some bubble that preserves the laws of the first planet well enough that humans can live, but that also lets them observe what happens outside.

In this case one could study each planet and add a specific description of its laws, but at any moment in time the humans in this world would only have a finite part of an infinite set of laws, so we wouldn't be able to say that they cover a non-zero fraction of the laws or a non-zero fraction of the world. If one would think that they cover a non-zero fraction because (say) they cover a non-trivial part of the fundamental forces, then we could also vary the type of all forces from one planet to the other, or we could add a new set of forces for each planet. The point is that we can have a case where the fraction of the universe that can be axiomatized at any moment is zero and one can't improve this fraction, even if one is able to model new meaningful things about the universe and the part of the world that is covered by the axiom system is continuously extended. We should note that in the second and third cases it can also happen that one can't improve their axiom set to cover more even when using a statistical axiom set. One such case would be when the perceived laws of the universe change in fully unpredictable ways from day to day (of course, this can happen without any change in the actual axiom set for the universe).
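The planets scenario can be sketched as code. Everything below is a made-up illustration of the argument, not physics: each planet gets its own gravity law drawn from a different family of functions, and a theory that has learned the first $n$ planets keeps being extended while never covering a non-zero fraction of the infinitely many laws:

```python
# Toy illustration of the third case: infinitely many planets, each with
# its own (invented) gravity law above 10 meters; a theory knowing the
# first n laws keeps growing but covers n / infinity = 0 of the world.
import math

def gravity_law(planet: int, distance: float) -> float:
    # A different family of functions per planet (all hypothetical).
    if planet % 3 == 0:
        return 1.0 / (distance ** (2 + planet))      # inverse polynomial
    if planet % 3 == 1:
        return math.exp(-distance / (planet + 1))    # inverse exponential
    # parity-dependent behaviour on the remaining planets
    return 1.0 / distance if int(distance) % 2 == 0 else 2.0 / distance

def known_fraction(planets_studied: int) -> float:
    # Fraction of the infinitely many per-planet laws covered so far.
    return 0.0  # n / infinity, for any finite n

# The theory is continuously extended, yet the covered fraction stays zero.
theory = [lambda d, p=p: gravity_law(p, d) for p in range(5)]
print(len(theory), known_fraction(len(theory)))  # -> 5 0.0
```

The design point is that `known_fraction` is constant at zero no matter how large the (finite) theory grows, which is exactly the situation the text describes: meaningful progress without any increase in the covered fraction.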
[TODO: Make sure these things work for nondeterministic universes] 

[TODO: Fix ``quotes''.]
[TODO: Fix spaces between math mode and punctuation.]
[TODO: Fix the usage of I and we.]
[TODO: Decide when I use axiom set and when axiom system. Say explicitly that they mean the same thing.]
[TODO: Use can't, won't, isn't and can not, will not, is not consistently.]