Appendix A: Proof of Proposition 1
To reason about the conditional density \(f_{Y_{j+1} | Y_j = y_j}\), it is useful to consider the procedure for drawing a sample of the corresponding random variable. Suppose that at time \(t = j\) the observed record for the minimum is \(y_j\). Then, given that the CDF of \(X_{j+1}\) is \(F_{X_{j+1}} = F_X\), the probability that the record remains at \(y_j\) is \[P(Y_{j+1} \geq y_j | Y_j = y_j) = P(X_{j+1} \geq y_j) = 1 - F_X(y_j),\]and in this case we must have \(y_{j+1} = y_j\), since the attempt does not change the record for the minimum. In the alternative case, where the attempt \(X_{j+1}\) falls below \(y_j\), the new record is a sample drawn from the distribution of \(X\) conditioned on the event \(X \leq y_j\). To describe this conditional draw, let \(Z_j\) denote a random variable that admits the PDF
\[f_{Z_j}(z) = \frac{1}{F_X(y_{j-1})} f_X(z) \chi_{z \leq y_{j-1}}\]where \(\chi\) denotes the indicator function. Then our sampling procedure can be summarized as 
\[\begin{align} u &\sim \text{Bernoulli}(F_X(y_j)), \\ y_{j+1} &\sim \begin{cases} Z_{j+1}, & \text{if } u = 1,\\ \text{Constant}(y_j), & \text{if } u = 0. \end{cases} \end{align}\]
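As a concrete illustration, this two-step procedure can be simulated directly. The sketch below assumes a Weibull attempt distribution with illustrative parameters (the function name and all numeric values are ours, not from the text) and uses inverse-CDF sampling for the conditional draw from \(X \mid X \leq y_j\):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

def sample_records(n, dist, y0):
    """Sample a running-minimum record sequence y_1..y_n via the
    two-step procedure: Bernoulli(F_X(y_j)), then a draw from
    X | X <= y_j using inverse-CDF sampling."""
    records = [y0]
    for _ in range(n - 1):
        y = records[-1]
        p = dist.cdf(y)                      # P(attempt beats the record)
        if rng.random() < p:                 # u = 1: record improves
            # Z ~ X | X <= y  via  F_X^{-1}(v),  v ~ U(0, F_X(y))
            records.append(dist.ppf(rng.uniform(0.0, p)))
        else:                                # u = 0: record unchanged
            records.append(y)
    return np.array(records)

recs = sample_records(50, weibull_min(c=2.0, scale=10.0), y0=12.0)
```

Because the conditional draw always lands at or below the current record, the simulated sequence is non-increasing by construction.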
From this expression it is clear that the random variable \(Y_{j+1} | Y_{j} = y_j\) is a mixed random variable, meaning that it consists of both a discrete and a continuous component. The PDF of such a random variable involves Dirac delta functions and the CDF has jump discontinuities. Omitting the derivation, we can express the likelihood function for a sequence of observed records \(\left\{r_1, r_2, \ldots, r_n \right\}\) as
\[\begin{align} f_{Y_{1:n}}(r_1, \ldots, r_n | \theta) &= \Bigg( \prod_{j \in C} F_X(r_{j-1}) f_{Z_j}(r_j) \Bigg) \Bigg( \prod_{j \in D} (1 - F_X(r_{j-1})) \Bigg) \\ &= \Bigg( \prod_{j \in C} F_X(r_{j-1}) \cdot \frac{1}{F_X(r_{j-1})} f_X(r_j) \chi_{r_j \leq r_{j-1}} \Bigg) \Bigg( \prod_{j \in D} (1 - F_X(r_{j})) \Bigg) \\ &= \Bigg( \prod_{j \in C} f_X(r_j) \Bigg) \Bigg( \prod_{j \in D} (1 - F_X(r_{j})) \Bigg) \end{align}\]where \(C\) denotes the set of time indices at which a record changed the running minimum and \(D\) denotes the set of time indices at which it did not. We can determine the sets \(C\) and \(D\) by successively checking whether each record changed relative to the previous one. In the second line we replace \(r_{j-1}\) with \(r_j\) in the product over \(D\), which is valid because \(r_j = r_{j-1}\) for \(j \in D\). We also drop the factors \(\chi_{r_j \leq r_{j-1}}\) from the likelihood since they are redundant: \(j \in C\) if and only if \(r_j \leq r_{j-1}\). Provided that we can evaluate the CDF and PDF of \(X\) for any \(\theta\), evaluating this form of the likelihood is straightforward. If we seek a model for a running maximum rather than a minimum, a similar argument shows that the likelihood becomes
\[\begin{align} f_{Y_{1:n}}(r_1, \ldots, r_n | \theta) &= \Bigg( \prod_{j \in C} (1 - F_X(r_{j-1})) \cdot \frac{1}{1 - F_X(r_{j-1})} f_X(r_j) \chi_{r_j \geq r_{j-1}} \Bigg) \Bigg( \prod_{j \in D} F_X(r_{j}) \Bigg) \\ &= \Bigg( \prod_{j \in C} f_X(r_j) \Bigg) \Bigg( \prod_{j \in D} F_X(r_{j}) \Bigg) \end{align}\]with the appropriate modifications in notation for switching to the maximum.
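These likelihoods are simple to evaluate once the indices are split into \(C\) and \(D\). A minimal sketch in Python, assuming a Weibull attempt distribution (the function name, parameters, and record sequence are illustrative, not from the text):

```python
import numpy as np
from scipy.stats import weibull_min

def record_log_likelihood(records, dist, maximum=False):
    """Log-likelihood of a record sequence r_1..r_n, conditioning on r_1.
    For a running minimum, j is in C when r_j < r_{j-1} (record improved)
    and in D otherwise; for a running maximum the inequality flips and
    the D-factor F_X(r_j) replaces 1 - F_X(r_j)."""
    r = np.asarray(records, dtype=float)
    prev, curr = r[:-1], r[1:]
    changed = (curr > prev) if maximum else (curr < prev)  # the set C
    ll = dist.logpdf(curr[changed]).sum()                  # sum_{j in C} log f_X(r_j)
    if maximum:
        ll += dist.logcdf(curr[~changed]).sum()            # sum_{j in D} log F_X(r_j)
    else:
        ll += dist.logsf(curr[~changed]).sum()             # sum_{j in D} log(1 - F_X(r_j))
    return ll

dist = weibull_min(c=2.0, scale=10.0)
loglik = record_log_likelihood([12.0, 9.5, 9.5, 8.1, 8.1], dist)
```

Working in log space avoids underflow for long record sequences, and `logsf` evaluates \(\log(1 - F_X)\) more accurately than taking the logarithm of the complement directly.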
Appendix B: Forecasts for all events using a Weibull attempt distribution
The plots show the historical record in green and the forecasted distribution in blue; the mean is shown in purple.
In the tables we include the 5th, 15th, 50th, 85th, and 95th percentiles, together with the mean of the distribution, for each year from 2023 to 2032.
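A percentile table of this kind can be produced by Monte Carlo simulation of the record process. The sketch below is illustrative only: the Weibull parameters, current record, horizon, and attempts per year are hypothetical values we chose, not quantities from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values for illustration only: fitted Weibull shape/scale,
# the current record, the forecast horizon, and attempts per year.
k, lam, current_record = 2.0, 10.0, 8.1
n_years, attempts_per_year, n_sims = 10, 20, 20_000

# Each simulated future draws i.i.d. Weibull attempts; the record is the
# running minimum of the current record and all attempts so far.
attempts = lam * rng.weibull(k, size=(n_sims, n_years, attempts_per_year))
yearly_best = attempts.min(axis=2)
records = np.minimum.accumulate(np.minimum(yearly_best, current_record), axis=1)

# Percentiles and mean of the forecast record for each future year.
percentiles = {q: np.percentile(records, q, axis=0) for q in (5, 15, 50, 85, 95)}
means = records.mean(axis=0)
```

Each row of `percentiles[q]` and `means` then corresponds to one forecast year, which is the structure of the tables described above.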