Figure \ref{fig:NC_schematic} provides an overview of the classic Neyman construction corresponding to the left panel of Fig. \ref{fig:neyman}. The left panel of Fig. \ref{fig:neyman} is taken from the Feldman and Cousins paper \cite{Feldman:1997qc}, where the parameter of the model is denoted \(\mu\) instead of \(\theta\). For each value of the parameter \(\mu\), the acceptance region in \(x\) is illustrated as a horizontal bar. Those regions are the ones that satisfy \(T({{{\mathcal{D}}}})<k_\alpha\), and in the case of Feldman-Cousins the test statistic is that of Eq. \ref{eqn:tmu}. This presentation of the confidence belt works well for a simple model in which the data consists of a single measurement \({{{\mathcal{D}}}}=\{x\}\). Once one has the confidence belt, one can immediately find the confidence interval for a particular measurement of \(x\) simply by drawing a vertical line at the measured value of \(x\) and finding the intersection with the confidence belt.
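
To make the procedure concrete, the following Python snippet is a minimal sketch of the construction for a toy version of the Feldman-Cousins setup: a single Gaussian measurement \(x \sim N(\mu, 1)\) with a physical boundary \(\mu \ge 0\), where the cutoff \(k_\alpha\) is calibrated at each value of \(\mu\) with pseudo-experiments and the interval is read off by ``drawing the vertical line'' at the observed \(x\). The grids, sample sizes, and observed value are illustrative assumptions, not taken from Ref. \cite{Feldman:1997qc}.
\begin{verbatim}
# Minimal sketch of a Neyman construction with likelihood-ratio ordering
# for x ~ N(mu, 1), mu >= 0.  All numbers here are illustrative assumptions.
import numpy as np

alpha   = 0.05                                # for a 95% CL belt
mu_grid = np.linspace(0.0, 6.0, 121)
rng     = np.random.default_rng(0)

def t_mu(x, mu):
    """Likelihood-ratio test statistic for x ~ N(mu, 1) with mu >= 0."""
    mu_hat = np.maximum(x, 0.0)               # ML estimate respecting the boundary
    return (x - mu)**2 - (x - mu_hat)**2

# Build the belt: for each mu, the acceptance region is t_mu(x) < k_alpha(mu),
# with k_alpha(mu) taken as the (1 - alpha) quantile over pseudo-experiments.
k_alpha = np.array([np.quantile(t_mu(rng.normal(mu, 1.0, 50_000), mu), 1 - alpha)
                    for mu in mu_grid])

# Invert the belt: the "vertical line" at the observed x.
x_obs    = 0.4
accepted = t_mu(x_obs, mu_grid) < k_alpha
lo, hi   = mu_grid[accepted].min(), mu_grid[accepted].max()
print(f"95% CL interval for mu: [{lo:.2f}, {hi:.2f}]")
\end{verbatim}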

Unfortunately, this convenient visualization doesn't generalize to complicated models with many channels, or even a single-channel marked Poisson model where \({{{\mathcal{D}}}}=\{x_1,\dots,x_n\}\). In those more complicated cases, the confidence belt can still be visualized by replacing the observable \(x\) with the test statistic \(T\) itself. Thus, the boundary of the belt is given by \(k_\alpha\) vs. \(\mu\), as in the right panel of Fig. \ref{fig:neyman}. The analog of the vertical line in the left panel is now a curve showing how the observed value of the test statistic depends on \(\mu\). The confidence interval still corresponds to the intersection of the observed test statistic curve with the confidence belt, which clearly satisfies \(T({{{\mathcal{D}}}})<k_\alpha\). For more complicated models with many parameters, the confidence belt will have one axis for the test statistic and one axis for each model parameter.
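
The same logic can be sketched numerically in the \((\mu, T)\) plane. The snippet below assumes a hypothetical three-bin counting model \(n_i \sim \mathrm{Pois}(\mu s_i + b_i)\) with invented templates and observed counts: the belt boundary \(k_\alpha(\mu)\) is calibrated with pseudo-experiments at each \(\mu\), the observed test-statistic curve is evaluated on the same grid, and the confidence interval is their intersection, as in the right panel of Fig. \ref{fig:neyman}.
\begin{verbatim}
# Sketch of the belt in the (mu, T) plane for a hypothetical three-bin model
# n_i ~ Pois(mu*s_i + b_i).  Templates, counts, and grids are invented.
import numpy as np

s  = np.array([3.0, 5.0, 2.0])                # assumed signal template
b  = np.array([10.0, 12.0, 8.0])              # assumed background template
mu_grid    = np.linspace(0.0, 3.0, 31)
alpha, rng = 0.05, np.random.default_rng(1)

def nll(n, mu):
    lam = mu * s + b
    return np.sum(lam - n * np.log(lam))      # Poisson NLL up to a constant

def t_mu(n, mu):
    # Profile-likelihood-ratio statistic; mu_hat restricted to the scan range.
    return 2.0 * (nll(n, mu) - min(nll(n, m) for m in mu_grid))

n_obs = np.array([14, 19, 9])                 # hypothetical observed counts

# Belt boundary k_alpha(mu) from pseudo-experiments and the observed curve.
k_alpha, t_obs = [], []
for mu in mu_grid:
    toys = rng.poisson(mu * s + b, size=(1000, len(s)))
    k_alpha.append(np.quantile([t_mu(n, mu) for n in toys], 1 - alpha))
    t_obs.append(t_mu(n_obs, mu))

# The interval is the intersection of the observed curve with the belt.
accepted = np.array(t_obs) < np.array(k_alpha)
lo, hi   = mu_grid[accepted].min(), mu_grid[accepted].max()
print(f"95% CL interval for mu: [{lo:.2f}, {hi:.2f}]")
\end{verbatim}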