If we define "prediction" in some useful way, as suggested above, and restrict the "possible worlds" term to the ones where we can make predictions, then it makes sense to use only systems of axioms that allow predictions. We will always use the best possible system for predictions (statistical or not). This solves the "too-general problem", since such a system would describe its possible worlds in a way that does not leave out important things. Still, there is nothing that prevents such a system from going into too much detail.

Given a specific formalism for specifying axioms that uses a finite alphabet, for each possible world we could consider only the smallest set of axioms that allows predictions, "smallest" being defined as "having the smallest length when written on paper". This is not a well-defined notion, for a few reasons. First, there could be multiple systems with the smallest length (one obvious case is given by reordering the axioms). In such a case, we could define an order for the symbols that we are using in our formalism and pick the system that has the smallest length and is also the smallest in the lexicographic order (a possible formalization is sketched at the end of this discussion). Second, there could be systems of axioms of infinite length. For these, we will only consider systems which, when written on an infinite paper, use a countable number of symbols. All such infinite systems have the same length, greater than any finite length, but we can still use the lexicographic order to compare them. We will ignore systems which would need an uncountable set of symbol places. With an axiom system chosen in this way we would also solve the "too-specific problem", since we would remove any axiom that is not absolutely needed.

Now let us see if we actually need infinite-length systems. We can have infinite systems of axioms, and there is no good reason to reject such systems and to ignore their possible worlds, so we will take them into account. It is less clear that we cannot replace these infinite systems with finite ones. Indeed, let us use any binary encoding allowing us to represent these systems as binary strings, i.e. as binary functions over the set of natural numbers, $f:\naturale\longrightarrow \multime{0, 1}$. Then the following scenario becomes plausible: for any universe $U$ with an infinite system of axioms $A$, we can consider $U+encoding(A)$ to be a universe in itself, and it is likely that we can find a finite system of axioms which describes $U+encoding(A)$. While, strictly speaking, this would be a different universe than the one we had at the beginning, it is also similar enough to it that one may be tempted to use only finite systems of axioms. On the other hand, using only finite systems of axioms in this way seems to be some sort of cheating. In order to get a more "honest" system of axioms, we could work with very specific systems of axioms, e.g. we could only talk about worlds which have $\reale^4$ as their space and time, whose objects are things similar to the wave functions used by quantum mechanics, and so on. We could also completely avoid this problem by talking only about worlds which could contain intelligent beings and asking how those intelligent beings would model their world.
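As promised above, here is one possible way to make the choice of the "smallest" axiom system precise. The notation is introduced only for this sketch and is not taken from the rest of the text: $\Sigma$ is the finite alphabet of the formalism, with some fixed total order on its symbols, and $\mathcal{A}(W)$ is the set of axiom systems over $\Sigma$ that allow predictions for a possible world $W$. The chosen system would then be
\[
A^{*}(W) = \min_{\preceq} \mathcal{A}(W), \qquad
A \preceq B \iff |A| < |B| \ \text{or}\ \bigl(|A| = |B| \ \text{and}\ A \leq_{\mathrm{lex}} B\bigr),
\]
where $|A|$ is the length of $A$ (with all infinite lengths counted as equal and greater than any finite length) and $\leq_{\mathrm{lex}}$ is the lexicographic order induced by the order on $\Sigma$. This is only a sketch; in particular, it ignores the question of whether the minimum is always attained when $\mathcal{A}(W)$ contains infinitely long systems.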
Modelling the world from the point of view of the intelligent beings it could contain may also be a more interesting approach, since we may never have a complete and correct model for our world, but we can build better and better models. As a parenthesis, note that until now we have restricted the "possible world" concept several times. The argument below also works with any larger concept of "possible world", as long as those worlds have a few basic properties (e.g. one can make predictions and they can contain intelligent beings) and it remains plausible that our world is such a "possible world". [TODO: Do I need this paragraph? Can I remove most of it?]

First of all, let us assume that those intelligent beings are continuously trying to find better models for their world and that, at any given time, they are trying to use the simplest model which is reasonable at that time (the model could, of course, change as they find out more things about their world). Let us also assume that they are reasonably efficient at this, i.e. they do not need to find the simplest axiom system, but they are not very far from it, for a reasonable definition of "very far" (e.g. if the optimal axiom system has a finite length $l$, then there is a number $M_l$ such that their axiom system is never longer than $M_l$; this is made slightly more precise below).

Let us note that having intelligent beings in a universe likely means that their intelligence is needed to allow them to live in that universe, which likely means that they can have a partial model of the universe. That model does not have to be precise (it could be made of simple rules like "If I pick fruits then I can eat them. If I eat them then I live.") and it can cover only a small part of their world, but it predicts something. Of course, their predictions do not have to be deterministic. Also, they might not be able to perceive the entire universe.

Then we would have three possible cases.

First, those intelligent beings could, at some point in time, find an axiom system which gives the best predictions that they could have for their world, which means that they would stop searching for better axiom systems, since what they have predicts everything that happens in their world and they cannot find anything which is not modelled by their system. We could also relax this "best axiom system" condition by only requiring them to find an axiom system that is good enough for all practical purposes. As an example, for a universe based on real numbers, knowing the axioms precisely except for some constants, and measuring all those constants with a billion digits of precision, might (or might not) be good enough. Only caring about things which occur frequently enough (e.g. more than once in a million years) could also be "good enough".

Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it is also impossible to improve it. This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would yield a better model.
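Before moving to the remaining case, here is the promised slightly more precise form of the "reasonably efficient" assumption. The symbols are introduced only for this sketch: $S_t$ is the axiom system the beings actually use at time $t$, and $S^{*}_{t}$ is a simplest axiom system that would be reasonable for them at that time. The assumption is that
\[
\text{there is a map } l \mapsto M_l \text{ such that, for every } t,\quad |S^{*}_{t}| = l \ \text{finite} \;\Longrightarrow\; |S_t| \leq M_l .
\]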
[TODO: Maybe say that "predict" means "predict given enough time and other resources" or something similar.]

Let us assume that if those intelligent beings could study their universe forever, they would try to improve their models in some essential way forever. Since they have an infinite amount of time, they could use strategies like generating possible theories in order, checking if they seem to make sense and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that in this case the fraction of the world that can be modelled [TODO: Define separately] (if that notion makes sense) is increasing, but it is bounded, so it converges to some value. Also, the prediction error [TODO: define separately] is decreasing and bounded, so it also converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point where the fraction is so close to $1$ and the error is so close to $0$ that one would find them "good enough". If the fraction or the error converges to a different value, then we are in the second case.

There is also a third case, when there is no reasonable way to define the fraction of the world that can be modelled, except when the fraction is $0$. As an example, imagine a world with an infinite number of earth-like planets that lie on a line, with humans living on the first one. The planets would be close enough, and would have enough resources (food, fuel), that humans would have no issues travelling between them. Light would have to come to them in a different way than in our world, and something else, not gravitation, would keep them in place. The laws of this hypothetical world, as observed by humans, would be close enough to the laws in our world that humans could live on any of the planets, but also different in an easily observable way. Let us say that, starting at 10 meters above the ground, gravity would be described by a different function on each planet. On some planets it would follow the inverse of a planet-specific polynomial function of the distance, on others it would follow the inverse of an exponential function, on others it would behave in one way if the distance to the center of the planet is even and in another way if it is odd, and so on.
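As a purely illustrative sketch of how such planet-by-planet laws might look, the fragment below shows gravity being computed by a different rule on each planet; the function name, the constants and the planet numbering are invented for this example and are not taken from the text.

\begin{verbatim}
# Toy sketch (hypothetical): each planet has its own law for gravity.
# The 10-meter cutoff from the text is ignored here for brevity.

def gravity(planet_index, distance):
    """Acceleration felt at `distance` from the planet's center.

    The constants are arbitrary; they only illustrate that the rule
    changes from planet to planet and must be discovered and recorded
    separately for each planet.
    """
    if planet_index == 0:
        return 1.0 / distance**2               # inverse square, like ours
    if planet_index == 1:
        return 1.0 / (distance**3 + distance)  # inverse of a polynomial
    if planet_index == 2:
        return 2.0 ** (-distance)              # inverse exponential
    if planet_index == 3:
        # different behaviour for even and odd (rounded) distances
        return 0.5 if round(distance) % 2 == 0 else 0.25
    # ...and so on: every new planet needs a new, separately stated rule.
    raise NotImplementedError("law not yet discovered for this planet")
\end{verbatim}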
In this case one could study each planet and add a specific description of its laws, but at any moment in time the humans in this world would only have a finite part of an infinite set of laws, so we would not be able to say that they cover a non-zero fraction of the laws. If one were to think that they cover a non-zero fraction because they cover a non-trivial part of the fundamental forces, then we could also vary the other forces from one planet to the next, or add new forces. The point is that one cannot speak of a fraction of the world that is modelled, even if one is able to model meaningful things (or, at least, the fraction is always $0$).
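To make the parenthetical remark about the fraction being $0$ a bit more concrete, suppose (purely for this sketch) that each planet contributes one equally weighted law. After studying any finite number $n$ of planets, the fraction of the first $N$ planets covered is $n/N$, and
\[
\lim_{N \to \infty} \frac{n}{N} = 0 \qquad \text{for every fixed finite } n,
\]
so at no moment do the humans of that world cover a non-zero fraction of the infinite set of laws.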