Second, those intelligent beings could reach a point where their theory clearly does not fully model the world, but it is also impossible to improve it in a meaningful way. This could be the case if, e.g., they can model a part of their world, but modelling any part of the remainder would require adding an infinite set of axioms, and no finite set of axioms would improve the model.

In order to make this split into cases clearer, let us assume that those intelligent beings would study their universe and try to improve their axiom sets in some essential way forever. Since they have infinite time available, they could use strategies like generating possible theories in order (using the previously defined order, which works for finite axiom sets), checking whether they seem to make sense and testing their predictions against their world; so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that the fraction of the world that can be modelled is non-decreasing and bounded above, so it converges to some value. Likewise, the prediction error (it is not important to define it precisely here) is non-increasing and bounded below, so it also converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because the beings eventually reach a point where the fraction is so close to $1$ and the error is so close to $0$ that one would find them good enough. If the fraction or the error converges to a different value, then we are in the second case.
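
To make the convergence argument explicit, here is a minimal sketch, assuming we write $f_n$ for the fraction of the world modelled after the $n$-th improvement of the theory and $e_n$ for the corresponding prediction error (both symbols are introduced only for this illustration and are not used elsewhere):
\begin{align*}
  0 \le f_n \le f_{n+1} \le 1, \qquad e_n \ge e_{n+1} \ge 0 \qquad \text{for all } n,
\end{align*}
so both sequences are monotone and bounded, and the limits $f = \lim_{n\to\infty} f_n$ and $e = \lim_{n\to\infty} e_n$ exist. The first case corresponds to $f = 1$ and $e = 0$; the second case corresponds to $f < 1$ or $e > 0$.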