[I had here: TODO: Maybe say that predict means ``predict given enough time and resources''; also see the definition in one of the previous pages.] We can define the \definitie{fraction of the world} that is modelled by an axiom system in at least three ways:
\begin{enumerate}
  \item As the fraction of the observable space for which the axiom system predicts something with a reasonable error margin.
  \item As the fraction of the optimal set of axioms that is implied by the current axiom system.
  \item As something between the first two cases, where we use a weighted fraction of the optimal axiom set, each axiom having a weight proportional to the fraction of the world where it applies. As an example, let us say that we have an infinite set of axioms and that for each point in space we can predict everything that happens using only three axioms (of course, two different points may need different axiom triplets). Let us also assume that there is a finite set of axioms $S$ such that each point in space has an axiom in $S$ among its three axioms. Then $S$ would model at least a third of the entire space (a possible formalization is sketched at the end of this section).
\end{enumerate}
[TODO: put this definition somewhere where it makes more sense.]

In order to make this split into cases clearer, let us assume that those intelligent beings would study their universe forever and, if needed, would keep trying to improve their axiom systems in some essential way []. Since they have infinite time, they could use strategies like generating possible theories in order (using the previously defined order), checking whether they seem to make sense, and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that in this case the fraction of the world that can be modelled is non-decreasing and bounded, so it converges to some value. Also, the prediction error [TODO: define separately; after I finish rewriting, the definition should be towards the end of the document and should be moved before this paragraph.] is non-increasing and bounded, so it also converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point where the fraction is so close to $1$ and the error is so close to $0$ that one would find them ``good enough''. If the fraction or the error converges to a different value, then we are in the second case.
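
The limit argument in the preceding paragraph can be written out explicitly. As a sketch only (the symbols $f_n$ and $e_n$ are introduced here for illustration and are not used elsewhere in the text), let $f_n \in [0, 1]$ denote the fraction of the world modelled by the best axiom system found up to step $n$, and let $e_n \ge 0$ denote its prediction error. Both sequences are monotone and bounded, so both converge:
\begin{align*}
  f_1 \le f_2 \le \dots \le 1 &\quad\Longrightarrow\quad f_n \to f^\ast \text{ for some } f^\ast \le 1,\\
  e_1 \ge e_2 \ge \dots \ge 0 &\quad\Longrightarrow\quad e_n \to e^\ast \text{ for some } e^\ast \ge 0.
\end{align*}
The first case above corresponds to $f^\ast = 1$ and $e^\ast = 0$; if instead $f^\ast < 1$ or $e^\ast > 0$, we are in the second case.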
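
Similarly, the weighted fraction from the third definition admits one possible formalization along the following lines (the names $A^\ast$, $R_a$ and $\mu$ are chosen here purely for illustration):
\[
  \operatorname{frac}(S) \;=\; \frac{\displaystyle\sum_{\substack{a \in A^\ast \\ S \vdash a}} \mu(R_a)}{\displaystyle\sum_{a \in A^\ast} \mu(R_a)},
\]
where $A^\ast$ is the optimal axiom set, $R_a$ is the part of the world where axiom $a$ applies, and $\mu$ measures the size of such a part. In the triplet example, every point of space lies in exactly three of the regions $R_a$, so the denominator equals three times the size of the whole space, while the regions of the axioms in $S$ together cover the whole space, so the numerator is at least the size of the whole space; hence $\operatorname{frac}(S) \ge 1/3$, matching the claim that $S$ models at least a third of the space.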