Virgil Șerbănuță edited untitled.tex  about 8 years ago

Commit id: 69d72ab2b72a833c6ba009a1629903cefb1b9ab0


We can use any of these definitions (and many other reasonable ones) for the remainder of this paper. Then we would have three possible cases\footnote{All of these assume that the intelligent beings use a single axiom set for predicting. It could happen that they use multiple axiom sets which can't be merged into one. One could rewrite the paper to also handle this case, but it's easy to see that the finite/infinite distinction below would be similar.}. First, those intelligent beings could, at some point in time, find an axiom set which gives the best predictions that they could have for their world, i.e. which predicts everything that they can observe. In other words, they wouldn't be able to observe anything which is not modelled by their axiom set. We could also include here axiom sets that are good enough for all practical purposes. As an example, for a universe based on real numbers, knowing the axioms precisely except for some constants, and measuring all those constants to a billion digits of precision, might (or might not) be good enough. Only caring about things which occur frequently enough, e.g. more than once in a million years, could also be good enough. Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it is also impossible to improve it in a meaningful way. This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would improve the model.