A number of standard proteomics methods for quantifying proteins by liquid chromatography–mass spectrometry (LC-MS) are currently available to researchers. In addition, a wide variety of software solutions exist for analyzing and processing the data these experimental techniques produce. Researchers today must carefully choose the solutions best suited to their experimental designs, as well as adequate quality controls for the procedures they run. Sandin et al.1 have summarized the available information on data processing methodology and quality control strategies, creating a useful review of the issues important in label-free protein quantification.
The authors present an overview of the workflows involved in both LC-MS/MS (tandem MS) and targeted LC-SRM (selected reaction monitoring) experimental designs, commenting on the data processing steps and the potential errors that can be encountered. They also present quality control strategies for each experimental design. Sandin and co-authors focus on academic or freely available software programs for their review, compiling a useful table for reader reference.
Although they note that data-dependent acquisition is the most common means of acquiring data in LC-MS/MS workflows, Sandin et al. confine their comments to precursor-based quantification methods. Among their reasons is that these data are acquired at higher resolution, allowing both relative and absolute quantification. They also note that this approach requires several computational processing steps.
The authors break down the individual data-processing steps required for each label-free quantification strategy. For LC-MS/MS analysis, they note the different data acquisition and processing software algorithms required, commenting on the stages at which error might be introduced. These include feature detection (scan-based and extracted ion chromatogram-based), alignment (reference-based and reference-free) and normalization procedures. The authors discuss strategies to minimize the risk of errors such as missing peaks or clusters falsely included through inaccurate alignment of MS spectra.
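To make the alignment step concrete, here is a minimal, hypothetical sketch of reference-based retention-time alignment in Python. The linear warp, the anchor features and all names are illustrative assumptions and do not correspond to any specific tool in the authors' comparison.

```python
# Minimal sketch of reference-based retention-time alignment (an illustrative
# assumption, not the software reviewed by Sandin et al.): fit a simple linear
# correction that maps a run's retention times onto a reference run, using
# features confidently matched between the two.
import numpy as np

def align_to_reference(rt_run, rt_reference):
    """rt_run, rt_reference: retention times (min) of the same matched
    features in the target run and the reference run.
    Returns slope and intercept of a least-squares linear warp."""
    slope, intercept = np.polyfit(rt_run, rt_reference, deg=1)
    return slope, intercept

# Matched anchor features: the target run elutes slightly later and stretched
rt_run = np.array([10.2, 22.8, 35.1, 47.9, 60.4])
rt_ref = np.array([10.0, 22.3, 34.4, 46.9, 59.1])

slope, intercept = align_to_reference(rt_run, rt_ref)
corrected = slope * rt_run + intercept  # map retention times onto the reference scale
print(corrected)
```

In practice, tools often use more flexible (non-linear) warping functions, but the idea of correcting each run toward a common retention-time scale is the same.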
Commenting on LC-SRM quantification, the authors note that software for this relatively new technique primarily features tools for assay development—the selection of suitable peptides and fragments for MS analysis—rather than quality control algorithms.
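As a rough illustration of what such assay-development logic involves, the following hypothetical Python sketch filters candidate peptides and picks intense fragment ions as transitions; the rules, thresholds and example values are simplified assumptions, not those of any reviewed tool.

```python
# Illustrative sketch only (not the assay-development tools the review covers):
# a naive filter for candidate SRM peptides, keeping peptides of a practical
# length that avoid easily modified residues, then choosing the most intense
# fragment ions as transitions.
from typing import Dict, List

def select_peptides(peptides: List[str]) -> List[str]:
    """Keep peptides 7-25 residues long with no methionine (oxidation-prone)
    or cysteine (alkylation-dependent)."""
    return [p for p in peptides
            if 7 <= len(p) <= 25 and "M" not in p and "C" not in p]

def select_transitions(fragment_intensities: Dict[str, float], n: int = 3) -> List[str]:
    """Pick the n most intense fragment ions for a peptide."""
    ranked = sorted(fragment_intensities, key=fragment_intensities.get, reverse=True)
    return ranked[:n]

peptides = ["LVNEVTEFAK", "AEFAEVSK", "MPCTEDYLSLILNR", "QTALVELVK"]
fragments = {"y4": 1.0e5, "y5": 3.2e5, "y6": 2.7e5, "y7": 8.0e4, "b3": 4.1e4}

print(select_peptides(peptides))      # drops the Met/Cys-containing peptide
print(select_transitions(fragments))  # ['y5', 'y6', 'y4']
```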
Sandin et al. continue by commenting on data-processing steps common to both LC-MS/MS and LC-SRM methods, including protein inference and quantification, normalization and quality controls. They note that researchers should choose the algorithms used to normalize data, which compensate for slight drift in MS output over time, with the specific data set in mind; an inappropriate strategy can introduce errors. The authors admit that stable “housekeeping” proteins suitable as normalization references are seldom available.
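For readers unfamiliar with this step, below is a minimal sketch of one common normalization approach, run-level median scaling. It assumes a simple features-by-runs intensity matrix and is offered only as an illustration, not as the specific strategy recommended in the review.

```python
# Minimal sketch of run-level median normalization (an assumption for
# illustration, not the authors' prescribed algorithm): scale each LC-MS run
# so that its median log-intensity matches a common reference level,
# compensating for gradual drift in instrument response across runs.
import numpy as np

def median_normalize(intensity_matrix):
    """intensity_matrix: features (rows) x runs (columns), raw intensities.
    Returns the matrix with each run rescaled in log space."""
    log_int = np.log2(intensity_matrix)
    run_medians = np.nanmedian(log_int, axis=0)  # per-run median
    target = np.nanmean(run_medians)             # common reference level
    corrected = log_int - run_medians + target   # shift each run
    return 2.0 ** corrected

# Example: three runs, with a systematic downward drift in the third run
runs = np.array([
    [1200.0, 1150.0,  900.0],
    [ 560.0,  540.0,  420.0],
    [2300.0, 2250.0, 1700.0],
])
print(median_normalize(runs))
```

Whether such a global scaling is appropriate depends on the data set; if a large fraction of proteins genuinely changes between conditions, this assumption breaks down, which is exactly the kind of pitfall the authors warn about.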
Sandin and co-workers conclude their review by noting that integrating software systems is useful but can add extra normalization steps. Too much customization can introduce additional error; researchers must run good quality controls at each step in data processing to ensure accurate and consistent results.
Reference
1. Sandin, M., et al. (2014) “Data processing methods and quality control strategies for label-free LC–MS protein quantification,” Biochimica et Biophysica Acta, 1844, 29–41.