Thankfully, in the science world, or more specifically the small molecule profiling world, we can have our cake.
Every. Single. Day.
Now what am I on about here? Well, one of the big challenges for a lot of scientists is that they have a sample, or many samples, and have to ask the question, “Just what do I actually have here?” It isn’t a challenge unique to metabolomics, forensic toxicology, food & beverage, environmental emerging contaminants or extractables and leachables studies, to name but a few; it is a shared challenge, something that unites scientists.
Thankfully, scientists have been able to access technologies that simplify these challenges: from ensuring adequate sample preparation, through chromatographic separation, to the levels of mass accuracy and isotopic resolution available with High Resolution Mass Spectrometry that are required to assign unambiguous elemental compositions. Oh, and the software needed to aid in the assignment of known knowns and known unknowns, and to help with the identification of unknown unknowns.
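As an aside on that mass-accuracy point: glucose (C6H12O6) and theophylline (C7H8N4O2) differ in monoisotopic mass by only about 1.3 mDa, roughly 7 ppm at m/z 181 for their [M+H]+ ions. Here’s a minimal, self-contained sketch in plain Python (the measured value is made up, and this is not any vendor’s software) showing how a tight mass tolerance separates the two candidate compositions.

```python
# Monoisotopic masses of the relevant elements (standard values).
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221}
PROTON = 1.00727646  # proton mass, for [M+H]+ ions

def mono_mass(formula):
    """Theoretical monoisotopic mass of a neutral formula,
    given as e.g. {"C": 6, "H": 12, "O": 6}."""
    return sum(MONO[element] * count for element, count in formula.items())

def ppm_error(measured, theoretical):
    """Mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

# Hypothetical measurement: an [M+H]+ ion observed at m/z 181.0706.
measured_mz = 181.0706
candidates = {
    "glucose C6H12O6":       {"C": 6, "H": 12, "O": 6},
    "theophylline C7H8N4O2": {"C": 7, "H": 8, "N": 4, "O": 2},
}
for name, formula in candidates.items():
    theoretical_mz = mono_mass(formula) + PROTON
    error = ppm_error(measured_mz, theoretical_mz)
    verdict = "matches" if abs(error) <= 5 else "rejected"
    print(f"{name}: m/z {theoretical_mz:.4f}, {error:+.1f} ppm -> {verdict}")
```

At a 5 ppm tolerance only the glucose composition survives; relax the tolerance to 10 ppm and both formulas remain in play, and the assignment is no longer unambiguous.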
Now where is this cake he was talking about?
Good point. Well, even with these tools, it is still a challenge to capture everything that could be of importance within a sample. It can still be a challenge to get the depth and quality of data that you need, and it is certainly a challenge to ensure that you can understand all of this valuable data should you manage to get this far.
So, onto the cake. Thankfully, when you really need to find out what everything is in a sample you can have your cake, and eat it, every day.
Through some significant advancements in hardware, the software that controls the hardware, and the software that processes all of the data, scientists really can now afford to focus on what is important to them – obtaining knowledge and using it to make a real difference.
I’m going to take a little bit of a walk through the life of a sample (more specifically, the sample’s contents), from acquisition all the way through to sharing the knowledge gained. This may take a few blog posts, but throughout our little journey we will look at the challenges and their solutions.
Let’s capture everything
For those who need to profile everything of importance within a sample, there’s no longer any need to manually create extensive exclusion lists, or to manually process data to work out which components of the sample need to be added to an inclusion list. Then there is the challenge of acquiring high-quality data with sufficient fragmentation information to allow confident identification.
These challenges are no more, thanks to a clever relationship between hardware and software, where capturing extensive MSn fragmentation information on all important compounds within a sample can not only be fully automated, but set up with a couple of clicks of the mouse.
As the video shows, a blank sample is analysed to create an exclusion list of unimportant compounds, followed by analysis of the sample that needs to be exhaustively profiled. With the ability to capture MSn fragmentation information on all target compounds comes the automatic determination of the optimal collision energy, ensuring broad MSn fragmentation and increasing the depth of coverage of each component – something that will be important when we take a look at how to identify what everything was. A toy sketch of how such a selection might work follows.
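Here is one way an “optimal” collision energy could be chosen automatically: step the precursor through several trial energies and keep the one that yields the richest fragment spectrum while leaving a confirmable trace of the precursor. The trial spectra and the scoring rule below are invented for illustration; they are not the actual instrument logic.

```python
# For each trial collision energy: (fragment ion count, fraction of
# the precursor ion surviving). All values are made up.
TRIALS = {
    15: (4, 0.80),   # too gentle: precursor survives, few fragments
    30: (12, 0.35),  # good balance of fragments and precursor
    45: (14, 0.05),  # harsh: rich fragments, precursor nearly gone
    60: (9, 0.01),   # over-fragmented into uninformative low masses
}

def pick_energy(trials, min_precursor=0.10):
    """Pick the energy with the most fragments while keeping enough
    precursor for confirmation (an illustrative criterion only)."""
    viable = {energy: fragments
              for energy, (fragments, precursor) in trials.items()
              if precursor >= min_precursor}
    return max(viable, key=viable.get)

print(pick_energy(TRIALS))  # -> 30
```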
Once a compound has been profiled, it is added to the exclusion list, with the inclusion list being updated accordingly. The sample is re-injected as many times as necessary to ensure that every component has been profiled and sufficient data captured.
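To make the loop concrete, here is a self-contained sketch of the cycle just described: the blank builds the exclusion list, each sample injection builds an inclusion list from what remains, and profiled components move to the exclusion list before the next injection. Everything here – the Ion class, the simulated ion lists, the per-injection capacity – is hypothetical stand-in code, not the real acquisition software.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ion:
    mz: float          # measured m/z
    rt: float          # retention time (minutes)
    intensity: float

# Simulated data: background ions seen in the blank, plus the
# sample's own components (values invented for illustration).
BLANK_IONS = {Ion(149.0233, 5.1, 2e6), Ion(391.2843, 8.7, 1e6)}
SAMPLE_IONS = BLANK_IONS | {
    Ion(181.0707, 3.2, 8e6),
    Ion(285.0764, 4.5, 5e6),
    Ion(609.2812, 6.9, 3e6),
}

MSN_PER_INJECTION = 2  # how many ions can be deeply profiled per run

def profile_everything(max_injections=5):
    # The blank injection builds the exclusion list of background ions.
    exclusion = set(BLANK_IONS)
    profiled = set()

    for injection in range(1, max_injections + 1):
        # Inclusion list: detected ions not excluded and not yet
        # profiled, most intense first.
        inclusion = sorted(SAMPLE_IONS - exclusion - profiled,
                           key=lambda ion: ion.intensity, reverse=True)
        if not inclusion:
            print(f"Done after {injection - 1} sample injection(s).")
            break
        # Acquire MSn on as many inclusion-list ions as the run allows;
        # each one then joins the exclusion list for later injections.
        targets = inclusion[:MSN_PER_INJECTION]
        print(f"Injection {injection}: MSn on m/z {[t.mz for t in targets]}")
        profiled.update(targets)
        exclusion.update(targets)
    return profiled

if __name__ == "__main__":
    profile_everything()
```

With two MSn targets per run and three sample components, this toy version finishes in two sample injections; the real workflow simply keeps re-injecting until the inclusion list is empty.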
So, imagine being able to place your samples into the autosampler, set up the acquisition method with a few clicks to capture the level of data you require, and then leave everything to run to completion!
Going from days to hours to capture all of the high-quality data you need. That’s quite something.
Don’t take my word for it though; have a read of these to really understand how you can capture more meaningful data, not just more data. Oh, and join me next time to take a look at how we actually process all of this data and deal with known knowns, known unknowns and unknown unknowns!