\section{Introduction}
Mass spectrometry based proteomics (from here on simply proteomics) has greatly improved the coverage of protein analyses compared to older gel-based methods. The mass spectrometer is usually coupled to a liquid chromatograph through an electrospray ionisation (ESI) source in order to achieve better separation of the complex samples. This setup, though powerful, is subject to considerable instability, and sample-to-sample, instrument-to-instrument as well as day-to-day variations are a fact of doing proteomics. Despite this, rigorous quality control has yet to become a standardised part of proteomic pipelines. Sample-to-sample variation is usually assessed after data acquisition has ended and, if not too severe, normalisation is employed to circumvent it. Standard practice is to compare only samples from the same run in order to avoid instrument-to-instrument and day-to-day variation in the data, but while this approach is statistically sound, it removes the possibility of combining data from multiple sources as well as of tracking the performance of the equipment over time.

Some effort has been put into this area, mostly centred around defining metrics that can be extracted from raw data files and used to monitor different aspects of the instrumentation. The first iteration of this effort \cite{19837981} defined 46 different metrics that could be extracted from a standard run and used to trace subtle variations back to different parts of the instrumentation. More recently, QuaMeter \cite{22697456} refined these metrics and used them to compare data from several different laboratories \cite{24494671}.