Virgil Șerbănuță edited untitled.tex  about 8 years ago

Commit id: bb1e0cc577e4a817bf2879d5b158e1c970b498e0

In order to make the first two cases clearer, let us assume that those intelligent beings would study their universe and would try to improve their axiom sets in some essential way forever. Since they have infinite time available to them, they could use strategies like generating possible theories in order (using the previously defined order, which works for finite axiom sets), checking if they seem to make sense and testing their predictions against their world, so let us assume that if there is a possible improvement to their current theory, they will find it at some point. Note that the fraction of the world that can be modelled is increasing, but is bounded, so it converges to some value. Also, the prediction error (it's not important to define it precisely here) is decreasing and is bounded, so it also converges. If the fraction converges to $1$ and the prediction error converges to $0$, then we are in the first case, because we reach a point when the fraction is so close to $1$ and the error is so close to $0$ that one would find them good enough. If the fraction or the error converges to a different value, then we are in the second case. Their convergence shows that the sizes of successive improvements converge to zero, so, after some point, one can't make any meaningful improvement. However, there is also another meaning of \ghilimele{meaningful} for which there are some worlds where one may make meaningful improvements forever. I will count this as the third case, although it is actually a subcase of the second case above. Of course, it will still be true that, after some point, these improvements do not really grow the fraction of the world that is covered by the axiom set and they do not decrease the prediction error. As an example, imagine a world with an infinite number of earth-like planets that lie on one line and with humans living on the first one.
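The improvement strategy described above can be illustrated with a toy sketch: enumerate finite axiom sets in a fixed order (by size, then lexicographically) and keep whichever one best matches observations. Everything in the sketch below is a hypothetical stand-in invented for illustration: the \ghilimele{world} is a hidden boolean law, the candidate axioms are simple predicates, and the prediction error is the fraction of mismatched observations.

```python
from itertools import combinations

# Hypothetical stand-ins for this example only: candidate axioms are
# simple predicates over natural numbers.
CANDIDATE_AXIOMS = {
    "divisible_by_2": lambda n: n % 2 == 0,
    "divisible_by_3": lambda n: n % 3 == 0,
    "divisible_by_5": lambda n: n % 5 == 0,
}

def world(n):
    # The hidden law the beings are trying to model: n is divisible by 6.
    return n % 6 == 0

def prediction_error(axioms, samples=range(120)):
    # A theory predicts "true" exactly when all of its axioms hold;
    # the error is the fraction of observations it gets wrong.
    def predict(n):
        return all(CANDIDATE_AXIOMS[a](n) for a in axioms)
    return sum(predict(n) != world(n) for n in samples) / len(samples)

def enumerate_theories(names):
    # Enumerate finite axiom sets in a fixed order:
    # by size first, then lexicographically within each size.
    names = sorted(names)
    for k in range(len(names) + 1):
        yield from combinations(names, k)

def best_theory():
    # Keep the current theory and replace it whenever the enumeration
    # yields a strictly better one -- mirroring the beings' strategy.
    best = ()
    best_err = prediction_error(best)
    for theory in enumerate_theories(CANDIDATE_AXIOMS):
        err = prediction_error(theory)
        if err < best_err:
            best, best_err = theory, err
    return best, best_err
```

In this toy setting the enumeration eventually reaches the theory $\{$divisible by $2$, divisible by $3\}$, whose predictions coincide with the hidden law, so its error is exactly zero; over an infinite axiom pool the same loop would simply run forever, which is the situation the text considers.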
The laws of this hypothetical world, as observed by humans, would be wildly different from one planet to the next. As an example of milder differences, starting at $10$ meters above ground, gravity would be described by a different function on each planet. On some planets it would follow the inverse of a planet-specific polynomial function of the distance, on others the inverse of an exponential function, on others it would behave one way if the distance to the center of the planet in meters is even and another way if it is odd, and so on. Let us also assume that humans can travel between these planets freely in some bubble that preserves the laws of the first planet well enough that humans can live, but that also lets them observe a projection of what happens outside. In this case they could study each planet and find a good enough set of axioms that describes how that planet behaves, but at any moment in time the humans in this world would have only a finite part of an infinite set of laws, so they would cover only a zero fraction of the laws and a zero fraction of the world. If one were to argue that they cover a non-zero fraction because (say) they know a non-trivial part of the fundamental forces, even though they don't know the exact functions that describe them, then we could change this example to also vary the type of all forces from one planet to the next, or we could add a new set of forces for each planet. The point is that one can build an example where the fraction of the universe that can be axiomatized at any moment is zero and the humans in that example world can't improve this fraction, even if they are able to continuously model new meaningful things about the universe and the part of the world that is covered by the axiom set is continuously extended.
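The arithmetic behind this example can be made explicit. Writing $c(t)$ for the number of planets fully axiomatized by time $t$ (notation introduced here only for illustration), and taking the world to consist of countably infinitely many planets with independent laws, the covered fraction at time $t$ is
\[
f(t) \;=\; \lim_{N\to\infty} \frac{c(t)}{N} \;=\; 0
\quad\text{for every finite } t,
\qquad\text{hence}\qquad
\lim_{t\to\infty} f(t) \;=\; 0,
\]
even though $c(t)$ is strictly increasing. The covered \emph{part} of the world grows forever, while the covered \emph{fraction} remains zero at every moment.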