Two general areas of interest arise when discussing next-generation sequencing in food testing: whole genome sequencing and metagenomic sequencing. Let's briefly discuss each and how it could benefit food producers.
Whole genome sequencing is a discovery tool that allows us to accumulate sequence information about the genetic makeup of a microbiological target. The information is useful for understanding the composition of the genome, for comparative genomics, and for identifying the regions of the genome that have the highest informational value (this depends on knowing what you are trying to ask, as we've discussed previously). As an example, we have successfully developed a bioinformatics pipeline that can identify sequence fingerprints unique to an inclusionary group of organisms but lacking in a given exclusionary set. The power of this pipeline comes from our ability to feed it comprehensive sets of inclusionary and exclusionary genomes derived from whole genome sequencing.
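To make the idea concrete, a fingerprint search of this kind can be sketched with simple k-mer set operations. This is only an illustrative sketch, not a description of the actual pipeline; the function names and toy sequences below are hypothetical.

```python
def kmers(seq, k=4):
    """All overlapping k-mers (substrings of length k) in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def unique_fingerprints(inclusion, exclusion, k=4):
    """k-mers shared by every inclusion genome but absent from all
    exclusion genomes -- candidate sequence fingerprints."""
    shared = set.intersection(*(kmers(g, k) for g in inclusion))
    excluded = set.union(*(kmers(g, k) for g in exclusion))
    return shared - excluded

# Toy sequences for illustration only
inclusion = ["ATATGGGG", "CCATATGG"]
exclusion = ["GGGGTTTT"]
print(sorted(unique_fingerprints(inclusion, exclusion)))  # → ['ATAT', 'ATGG', 'TATG']
```

Real pipelines work on full genomes with much longer k-mers and tolerate sequencing error, but the core logic is the same set difference: keep what is common to the inclusion group and subtract everything seen in the exclusion group.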
Metagenomics is a technique for taking a snapshot of all of the organisms present in a complex sample to understand the composition and relative abundance of each target genome. Think of it as an enormous bucket of Lego™ bricks that you computationally sort according to some criterion, such as brick color or size. Let me give a few examples of how this could benefit a food producer's operation:
Scenario 1: As a ground meat producer, I want a new QC method to monitor the biological signature of my product. I take samples at time intervals and perform metagenomics to measure a) the composition of the target matrix, b) the level of non-target matrix, and c) the composition of the non-target matrix. By gathering this information as a function of time, I have a whole new level of understanding of the biological content of my food with regard to authenticity and microflora.
Scenario 2: As a producer of finished goods, I want to understand the biological content of incoming raw materials, so I begin metagenomics testing at regular intervals. I find that one of my raw material suppliers has significantly higher off-target content than the others. This might drive me either to work with that supplier on an improvement process or to penalize them, improving my product at the raw materials stage and saving me money later.
Scenario 3: Having measured the biological content of my product, I perform controlled spoilage experiments to understand how the microbiological flora changes over time and which organisms are involved in the spoilage process. This allows me to design assays to detect the spoilage organisms, or to develop processes that eliminate or reduce the offending organisms from my product as part of a program to extend the shelf life of a finished good.
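The common thread in these scenarios is tracking relative abundance across timepoints. Assuming an upstream classifier has already assigned each sequencing read to a taxon (that step is not shown here, and the taxon names and threshold below are purely illustrative), the trend analysis itself can be sketched in a few lines:

```python
from collections import Counter

def relative_abundance(assignments):
    """Fraction of reads assigned to each taxon in one sample."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

def rising_taxa(timepoints, factor=2.0):
    """Taxa whose relative abundance grew by at least `factor` between the
    first and last timepoint -- candidate spoilage or off-target organisms.
    Taxa absent at the first timepoint get a tiny pseudo-fraction so new
    arrivals are flagged rather than divided by zero."""
    first = relative_abundance(timepoints[0])
    last = relative_abundance(timepoints[-1])
    return {t for t, frac in last.items() if frac >= factor * first.get(t, 1e-6)}

# Hypothetical per-read taxon assignments at two timepoints
day0 = ["Bos taurus"] * 95 + ["Pseudomonas"] * 5
day5 = ["Bos taurus"] * 70 + ["Pseudomonas"] * 30
print(rising_taxa([day0, day5]))  # → {'Pseudomonas'}
```

In practice the read counts come from millions of reads and many more taxa, but the same tally-and-compare logic flags which organisms are gaining ground between QC samples.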
To conclude, the volume and speed at which sequence data accumulate will continue on an exponential trajectory. Success in mining these data for useful information will depend on our ability to identify the information with the most value and to develop techniques that gather it as effectively as possible. There is no doubt that this knowledge will change the way we test, the way we study, and even the ways we define microorganisms.
Dan Kephart, PhD, is R&D Leader, Food Safety Testing at Thermo Fisher Scientific.
Learn more about the possibilities of microbial sequencing technologies at Ion Torrent™ Next-Generation Sequencing