Our quantitative toolset comprises the popular probabilistic Latent Dirichlet Allocation topic modeling \cite{Blei2003}, which performs unsupervised soft clustering of text snippets, and the byproducts of supervised text classification that maps text regions to their epoch of origin: for logistic regression on word unigrams, these byproducts are the learned word weights; for modern distributional approaches with task-specific word embeddings, they are the similarities of the derived dense, continuous word representations.
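As a minimal sketch, assuming scikit-learn and a toy corpus of snippets with hypothetical epoch labels, the following illustrates both components: an LDA soft clustering of the snippets and the per-word coefficients of a unigram logistic regression as the classification byproduct (the corpus, labels, and hyperparameters are placeholders; for the embedding-based variant one would instead compare cosine similarities of the learned dense word vectors).

```python
# Hypothetical sketch: LDA soft clustering and logistic-regression word weights
# on a toy corpus of text snippets labeled with their (placeholder) epoch of origin.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

snippets = [
    "thee thou hast spoken verily",       # placeholder "early" snippets
    "the king hath decreed it so",
    "the committee approved the budget",  # placeholder "modern" snippets
    "users can download the report online",
]
epochs = ["early", "early", "modern", "modern"]  # hypothetical epoch labels

# Word-unigram counts shared by both models.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(snippets)
vocab = vectorizer.get_feature_names_out()

# Unsupervised soft clustering: each snippet receives a distribution over topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)  # each row sums to 1 (soft assignment)

# Supervised classification into epochs; the fitted coefficients are the
# per-word weights that serve as an interpretable byproduct.
clf = LogisticRegression(max_iter=1000).fit(X, epochs)
word_weights = dict(zip(vocab, clf.coef_[0]))

print(doc_topic.round(2))                                        # topic mixtures per snippet
print(sorted(word_weights.items(), key=lambda kv: kv[1])[:5])    # words pulling toward one epoch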