First, those intelligent beings could, at some point in time, find an axiom set which gives the best predictions that they could have for their world, i.e. one which predicts everything that they can observe. In other words, they wouldn't be able to find anything which is not modelled by their axiom set. We could relax this \ghilimele{best axiom set} condition by only requiring an axiom set that is good enough for all practical purposes. As an example, for a universe based on real numbers, knowing the axioms precisely except for some constants, and measuring all constants with a billion digits of precision, might (or might not) be good enough. Restricting attention to things which occur frequently enough, e.g. more than once in a million years, could also be good enough.

Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it's also impossible to improve it in a meaningful way. This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would improve the model.

In order to make this split into cases clearer, let us assume that those intelligent beings would study their universe and would try to improve their axiom sets in some essential way forever. Since they have infinite time available to them, they could use strategies like generating possible theories in order (using the previously defined order), checking whether they seem to make sense, and testing their predictions against their world, as sketched below. So let us assume that if there is a possible improvement to their current theory, they will find it at some point.
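The generate-and-test strategy can be made concrete. What follows is a minimal sketch in Python, under stated assumptions: axiom sets are encoded as finite strings over a toy alphabet, length-lexicographic order stands in for \ghilimele{the previously defined order}, and \texttt{seems\_sensible} and \texttt{predicts} are hypothetical placeholders for \ghilimele{checking whether they seem to make sense} and \ghilimele{testing their predictions against their world}. It is not a definitive implementation of the argument, only an illustration of the enumeration.

\begin{verbatim}
from itertools import count, product

ALPHABET = "01"  # hypothetical finite alphabet encoding axiom sets

def candidate_theories():
    """Enumerate all finite strings over ALPHABET in
    length-lexicographic order; each string is read as the
    encoding of a candidate axiom set."""
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            yield "".join(symbols)

def seems_sensible(theory):
    """Placeholder for 'seems to make sense'; in this toy,
    every encoding is treated as well-formed."""
    return True

def predicts(theory, observations):
    """Placeholder for testing predictions against the world.
    Toy criterion: the encoding must contain each observation.
    A real test need not halt, so a genuine search would
    interleave (dovetail) bounded tests across candidates."""
    return all(obs in theory for obs in observations)

def search(observations):
    """Generate-and-test: with unbounded time, every candidate
    theory in the enumeration is eventually examined."""
    for theory in candidate_theories():
        if seems_sensible(theory) and predicts(theory, observations):
            yield theory

print(next(search(["01", "10"])))  # -> '010'
\end{verbatim}

The point the sketch makes precise is that exhaustiveness comes from the order alone: every finite encoding is eventually visited, so any improvement expressible in the encoding is eventually found, provided the tests themselves terminate; what counts as an improvement is exactly what the placeholders elide, and is where the two cases above diverge.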