Precision Medicine
A form of practice in which therapies are selected (‘personalised’) for patients based on their individual characteristics or the characteristics of a group to which they belong [1]. The aim of personalised therapy is to maximise patient outcomes whilst reducing adverse effects.
An example of precision psychiatry would be selecting a therapy for depression, e.g. cognitive behavioural therapy (CBT) versus a selective serotonin reuptake inhibitor (SSRI), based on the likelihood of success of each treatment given a patient’s clinical and biological characteristics.
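To make the treatment-selection example concrete, the sketch below compares hypothetical per-treatment outcome models and picks the therapy with the higher predicted probability of remission; the features, model form, and coefficients are invented for illustration, not taken from any study.

```python
import numpy as np

def remission_probability(features, weights, bias):
    """Logistic model of the probability of remission under one treatment."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Hypothetical patient features: [symptom severity, anxiety score, a biomarker]
patient = np.array([0.8, 0.3, 1.2])

# Hypothetical models for each treatment (as if fitted to past outcome data)
treatments = {
    "CBT":  (np.array([-0.5, 1.0, 0.2]), 0.1),
    "SSRI": (np.array([0.7, -0.4, 0.6]), -0.2),
}

# Personalised selection: choose the treatment with the highest predicted
# probability of remission for this particular patient
predictions = {name: remission_probability(patient, w, b)
               for name, (w, b) in treatments.items()}
print(predictions, "->", max(predictions, key=predictions.get))
```
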
Biomarkers
“A defined characteristic that is measured as an indicator of normal biological processes, pathogenic processes or responses to an exposure or intervention.” [2]
Biomarkers are substances found in the body that carry information about a disease. These substances are usually component parts or by-products of the disease process itself, and have traditionally been biological substrates such as proteins, e.g. C-reactive protein as a biomarker of inflammation. Because they form part of the disease process, biomarkers have value in indicating the presence or prognosis of a disease.
In psychiatric disease, where traditional biomarkers for prognostication are lacking, increasing attention is being paid to computational parameters that can capture a behavioural process related to a particular disease.

Transdiagnostic Psychiatry
Transdiagnostic psychiatry aims to look across diagnoses to discover new dimensions of disease based on biological and behavioural mechanisms [3].

Nosology
The branch of medical science concerned with the classification of disease.

Factor Analysis
A statistical method that models observed variables as a weighted combination of a smaller number of latent variables (e.g. modelling scores from nine questionnaires as three factors).
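A minimal sketch of this idea using scikit-learn’s FactorAnalysis; the 200 participants, nine questionnaire scores, and factor loadings below are all simulated.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 participants: 3 latent traits generate 9 questionnaire
# scores via a loading matrix, plus noise (all values are synthetic)
latent = rng.normal(size=(200, 3))             # 3 hidden factors
loadings = rng.normal(size=(3, 9))             # how factors map to items
scores = latent @ loadings + 0.5 * rng.normal(size=(200, 9))

# Recover a 3-factor description of the 9 observed variables
fa = FactorAnalysis(n_components=3)
factors = fa.fit_transform(scores)             # shape (200, 3)
print(factors.shape, fa.components_.shape)     # (200, 3) (3, 9)
```
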
Reinforcement Learning
A framework for adaptive decision-making in the context of rewards and punishments.
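As a minimal illustration, the sketch below implements a simple reinforcement learning agent on an invented two-armed bandit task: it learns action values from reward feedback and balances exploration against exploitation.

```python
import random

values = {"A": 0.0, "B": 0.0}        # learned value of each action
reward_prob = {"A": 0.7, "B": 0.3}   # hypothetical environment
alpha, epsilon = 0.1, 0.1            # learning rate, exploration rate

random.seed(0)
for _ in range(1000):
    # Explore occasionally, otherwise exploit the higher-valued action
    if random.random() < epsilon:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    # Shift the chosen action's value toward the received reward
    values[action] += alpha * (reward - values[action])

print(values)   # value estimates track the arms' reward rates
```
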
Computational Model
In neuroscientific terms, a computational model is a mathematical description that can be used to characterise complex cognitive processes, such as learning or decision-making. Parameters of the model (see below) can be estimated, and each quantifies a specific part of the learning process, e.g. the weighting of new information compared to old. These parameters are estimated from an individual’s behavioural responses during a cognitive task.
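For example, a Rescorla-Wagner model (one common computational model of learning) updates a value estimate by a fraction alpha of each prediction error; the reward sequence below is invented simply to show how alpha governs the weighting of new information against old.

```python
# Rescorla-Wagner learning: the learning rate alpha controls how strongly
# new information (the prediction error) is weighted relative to the old
# value estimate.
def rescorla_wagner(rewards, alpha):
    v, values = 0.0, []
    for r in rewards:
        v += alpha * (r - v)      # prediction-error update
        values.append(round(v, 3))
    return values

rewards = [1, 1, 0, 1, 1, 1, 0, 1]          # illustrative outcome sequence
print(rescorla_wagner(rewards, alpha=0.1))  # slow learner, smooth estimates
print(rescorla_wagner(rewards, alpha=0.8))  # fast learner, volatile estimates
```
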
Parameter
Models are composed of parameters, each representing a specific part of the learning process. When models are fitted to data from a task, parameters can be used to describe an individual’s performance mechanistically. For example, two people with major depressive disorder may show the same negative emotional bias on a cognitive task, yet that behaviour may be caused by two different mechanisms, captured as differences in model parameters.
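A minimal sketch of how such a parameter might be estimated: assuming a Rescorla-Wagner learner with a softmax choice rule, the learning rate that maximises the likelihood of an individual’s (here invented) trial-by-trial choices is found numerically.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented trial-by-trial data from a two-option task
choices = np.array([0, 0, 1, 1, 1, 0, 1, 1])   # option chosen per trial
rewards = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # reward received per trial

def neg_log_likelihood(alpha, beta=3.0):
    v = np.zeros(2)                  # value estimate for each option
    nll = 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * v) / np.exp(beta * v).sum()  # softmax choice rule
        nll -= np.log(p[c])          # likelihood of the observed choice
        v[c] += alpha * (r - v[c])   # Rescorla-Wagner value update
    return nll

# Maximum-likelihood estimate of this individual's learning rate
fit = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(fit.x)
```
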
High Dimensionality
Data can be described as high-dimensional when there are more measured features or variables than independent samples. In these scenarios, machine learning algorithms perform poorly, so reducing the number of variables is important to improve performance [4].
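As an illustration, principal component analysis (one common dimensionality-reduction technique, applied here to synthetic data) can shrink a feature set that outnumbers the samples down to a handful of components before an algorithm is applied.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# High-dimensional setting: 50 samples but 500 features (synthetic data)
X = rng.normal(size=(50, 500))

# Reduce to 10 components before feeding a machine learning algorithm
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)   # (50, 500) -> (50, 10)
```
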
Machine Learning
Machine learning involves applying algorithms to input data in order to make predictions or classifications; the data either do (supervised) or do not (unsupervised) have known labels. A machine learning algorithm produces an estimate of pattern or structure in the data.
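A brief sketch of both settings using scikit-learn’s built-in iris data: a supervised classifier trained with known labels, and an unsupervised clustering algorithm that finds structure without them.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: labels y are known, and the algorithm learns to predict them
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels; the algorithm finds structure (clusters) itself
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])
```
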
Dynamic Causal Modelling (DCM)
A method commonly used for the quantification of effective connectivity (i.e. the influence that one neural population exerts over another). DCM allows comparison between models of interconnected networks of neuronal populations in order to explain data acquired from dynamic imaging during cognitive tasks [5].

Overfitting
Overfitting is a process in which a model becomes overly sensitive to noise when it is fitted to a training data set. The model inaccurately treats noise as signal of interest so that it better predicts outcomes for the data it was trained on. High-dimensional data sets can lead to overfitting, which in turn leads to poor predictions in new data (poor generalisability) [6].
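The toy example below shows overfitting directly, assuming nothing beyond NumPy: a high-degree polynomial fitted to 20 noisy training points achieves a lower training error than a straight line, but typically a worse error on held-out test points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy linear data, split into training and test sets (synthetic)
x = rng.uniform(-1, 1, 30)
y = 2 * x + rng.normal(scale=0.5, size=30)
x_train, y_train, x_test, y_test = x[:20], y[:20], x[20:], y[20:]

for degree in (1, 9):
    # The degree-9 polynomial fits the training noise, not the signal
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```
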
Number Needed to Treat (NNT)
The number needed to treat describes the number of patients who need to receive a particular intervention in order for one additional patient to have a positive outcome.
For example, in a computational context, when applying an algorithm that can predict remission in depression, the number needed to treat describes the number of patients the algorithm has to be applied to for the algorithm to predict remission in one additional patient.
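Numerically, the NNT is the reciprocal of the absolute risk reduction; the remission rates below are invented purely to show the arithmetic.

```python
# Hypothetical remission rates: 60% when the algorithm guides treatment
# selection versus 40% under treatment as usual (both figures invented)
p_algorithm = 0.60
p_usual = 0.40

arr = p_algorithm - p_usual   # absolute risk reduction
nnt = 1 / arr                 # number needed to treat
print(round(nnt))             # 5: treat five patients for one extra remission
```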