On the other hand, using only finite systems of axioms in this way seems to be some sort of cheating. In order to get a more "honest" system of axioms, we could work with very specific systems of axioms, e.g. we could only talk about worlds which have $\reale^4$ as their space and time, whose objects are things similar to the wave functions used by quantum mechanics, and so on.

\section{Intelligent beings / Finite and infinite axiom systems [TODO: but then I would need to move the title above.] / Modelling from inside}

We could also completely avoid the axiom system encoding problem by talking only about worlds which could contain intelligent beings and about how those intelligent beings would model their world. This approach may also be a more interesting one. Let us note that if those intelligent beings are similar enough to us and the optimal set of axioms for their world is infinite, then they will never have a complete and correct description of how their world works, but they will be able to build better and better models. We will assume that those intelligent beings are continuously trying to find better models for their world and that they are reasonably efficient at this.

As a parenthesis, note that until now we have restricted the "possible world" concept several times. The argument below also works with larger "possible world" concepts, as long as those worlds have a few basic properties (e.g. one can make predictions and they can contain intelligent beings) and, at the same time, it is plausible that our world is such a "possible world".

First, let us note that having intelligent beings in a universe likely means that their intelligence is needed to allow them to live in that universe, which likely means that they can have a partial model of the universe. That model does not have to be precise (it could be made of simple rules like "If I pick fruits then I can eat them. If I eat them then I live.") and it can cover only a small part of their world, but it should predict something. Of course, these predictions do not have to be deterministic. Also, the beings might not be able to perceive the entire universe.
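To illustrate why an infinite optimal axiom set forces this behaviour, here is one ad-hoc sketch (the notation $O$ and $A_n$ is introduced only here and is not a fixed formalization): suppose the optimal description of their world is an infinite, non-redundant axiom set $O$. At any moment the beings can only hold a finite system $A_n$, so improving forever would mean something like
\[
A_1 \subseteq A_2 \subseteq A_3 \subseteq \cdots, \qquad \bigcup_{n \ge 1} A_n = O,
\]
where every axiom of $O$ is eventually captured, yet no single $A_n$ is complete, since a finite system cannot imply all of an infinite set of independent axioms.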
In the following we are interested only in how good an axiom set is for describing a given world, so when saying that someone predicts something using an axiom set, this will not mean that a human-like intelligence is able to make that prediction; it means that in theory there is a mathematical way to make that prediction [TODO: rephrase, does not sound well.].

Then we would have three possible cases.

First, those intelligent beings could, at some point in time, find an axiom system which gives the best predictions that they could have for their world, which means that they would stop searching for axiom systems, since what they have predicts everything that they can observe. In other words, they wouldn't be able to find anything which is not modelled by their system. We could relax this "best axiom system" condition by only requiring an axiom system that is good enough for all practical purposes. As an example, for a universe based on real numbers, knowing the axioms precisely except for some constants, and measuring all constants with a precision of a billion digits, might (or might not) be good enough. Only caring about things which occur frequently enough (e.g. more than once in a million years) could also be "good enough".

Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it is also impossible to improve it in a meaningful way. This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would yield a better model.

[I had here: TODO: Maybe say that predict means "predict given enough time and resources"; also see the definition in one of the previous pages.]

We can define the \definitie{fraction of the world} that is modelled by an axiom system in at least three ways:

\begin{enumerate}
\item As the fraction of the observable space for which the axiom system predicts something with a reasonable error margin.
\item As the fraction of the optimal set of axioms that is implied by the current axiom system.
\item As something between the first two cases, where we use a weighted fraction of the optimal axiom set, each axiom having a weight proportional to the fraction of the world where it applies. As an example, let us say that we have an infinite set of axioms, and that for each point in space we can predict everything that happens using only three axioms (of course, two different points may need different axiom triplets). Let us also assume that there is a finite set of axioms $S$ such that each point in space has an axiom in $S$ among its three axioms. Then $S$ would model at least a third of the entire space; a sketch of this computation is given below.
\end{enumerate}

[TODO: put this definition somewhere where it makes more sense.]
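To make the third definition concrete, here is one possible formalization of the triplet example (the notation $T_p$, $\mu$ and $f$ is ad-hoc and used only in this sketch). Assume that every point $p$ of the observable space is fully predicted by a triplet $T_p$ of axioms from the optimal set, and let $\mu$ be a probability measure on the space. One could take the weighted fraction modelled by an axiom system $A$ to be
\[
f(A) = \int \frac{\left| A \cap T_p \right|}{\left| T_p \right|} \, d\mu(p).
\]
If the finite set $S$ intersects every triplet, then $|S \cap T_p| \ge 1$ and $|T_p| = 3$ for all $p$, hence $f(S) \ge 1/3$: $S$ models at least a third of the entire space, as claimed.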
In order to make this split into cases more clear, let us assume that those intelligent beings would study their universe forever and, if needed, they would try to improve their axiom systems in some essential way forever. Since they have an infinite amount of time, they could use strategies like generating possible theories in order (using the previously defined order), checking if they seem to make sense and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that in this case the fraction of the world that can be modelled is increasing, but it is bounded, so it converges to some value. Also, the prediction error [TODO: define separately – after I finish rewriting, the definition should be towards the end of the document and should be moved before this paragraph.] is decreasing and is bounded, so it also converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point when the fraction is so close to $1$ and the error is so close to $0$ that one would find them "good enough". If the fraction or the error converges to a different value, then we are in the second case.

There is also a third case, when there is no reasonable way to define the fraction of the world that can be modelled, except when the fraction is $0$. As an example, imagine a world with an infinite number of earth-like planets that lie on one line and with humans living on the first one. The planets would be close enough to each other and would have enough resources like food and fuel, so that humans would have no issues travelling between them. Light would have to come to them in a different way than in our world, and something else, not gravitation, would keep them in place. The laws of this hypothetical world, as observed by humans, would be close enough to the laws in our world so that humans could live on any of the planets, but also different in an easily observable way. Let us say that, starting at $10$ meters above ground, gravity would be described by a different function on each planet. On some planets it would follow the inverse of a planet-specific polynomial function of the distance, on others it would follow the inverse of an exponential function, on others it would behave in one way if the distance to the center of the planet in meters is even and in another way if the distance is odd, and so on; a possible concrete choice of such functions is sketched below.
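To make the planet example more concrete, one possible (purely illustrative) choice of gravity laws is the following, where $d$ is the distance to the planet's center in meters, and $c_k$, $\lambda_k$, $\alpha_k$, $\beta_k$ and the polynomials $P_k$ are planet-specific:
\[
g_k(d) =
\begin{cases}
c_k / P_k(d) & \text{on some planets,} \\
c_k \, e^{-\lambda_k d} & \text{on other planets,} \\
\alpha_k \text{ or } \beta_k & \text{on yet other planets, depending on the parity of } \lfloor d \rfloor.
\end{cases}
\]
Since each planet's law needs at least one axiom of its own, any finite axiom system describes gravity on at most finitely many of the infinitely many planets, i.e. on a fraction $0$ of this world.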