“By the turn of the century, we will live in a paperless society.” – Roger Smith, Chairman of General Motors (1986).
So far, we can definitively say that the above prediction has not come true. Neither has the complete move from coal- and oil-fired power generation to atomic energy, from landline telephones to mobile phones, or from capillary electrophoresis to next-generation sequencing. Champions of innovation often fail to appreciate the practicalities of these transformations in everyday life, and the time, energy and resources needed to change the status quo.
In genetic analysis, the champions of innovation tend to speak loudly about how their technologies solve problems more easily and faster, at lower cost and with greater sensitivity. Yet considerable resistance to adopting these new technologies persists in research laboratories worldwide, even where they are demonstrably and reproducibly superior to current ones. The practicalities of change include the cost of equipment acquisition; the time, cost and resources needed to validate a test on a new workflow; the time it takes for education to pervade research communities; and per-sample run costs, which can be high relative to slower, more labour-intensive approaches whose cost benefits are only realised through dramatic changes in laboratory staffing and resource allocation.
Finding genetic mutations in the average research laboratory
So how does the average research laboratory approach human genetic disease research, given these restrictions to technology transformation?
We might not always have the research funding for the latest technology, but a combination of technologies, many of which have been around for years, can be applied to find mutations that are often difficult to detect.
At ESHG this year, Steve Jackson, Associate Director of Product Applications at Thermo Fisher Scientific, gave a presentation spanning simple single-gene disorders through more complex, genetically and phenotypically heterogeneous disorders, asking “what’s the right tool for the job?” and showing how to combine the old and the new to efficiently discover novel gene variants underlying human genetic disorders.
The example of Hereditary Hearing Loss (HHL) – a single-gene disorder
Genome-Wide Association Studies (GWAS) have been useful in identifying loci responsible for HHL, but much research is performed in small families where genetic and family-history information is limited.
In the example of Sakuma et al. (2015), 220 DNA samples from subjects with sensorineural hearing loss from unrelated, non-consanguineous families were sequenced using an Ion AmpliSeq gene panel, identifying four putative causative mutations in the PTPRQ gene, which were subsequently verified by Sanger sequencing. Whilst in one family the disorder was caused by homozygous recessive mutations inherited from each parent, in a second pedigree it appeared to be due to two different mutations in the same gene appearing in the affected offspring, potentially through trans-heterozygous transmission or a congenital de novo mutation.
This same approach was applied by Kitano et al. (2017), who uncovered causative variants in POU4F3 in 15 probands (2.5%) among 602 families exhibiting an autosomal dominant form of HHL.
In these types of studies, where candidate genes are already well known and characterised, a sensible approach is to assemble a ready-made gene panel (Ion AmpliSeq on Demand); Sanger sequencing is also appropriate here, for lower-throughput needs or for verification of NGS results.
When a combination of multiple contributing loci leads to disease – the example of Autistic Spectrum Disorder (ASD)
When studying more genetically complex non-Mendelian disorders, not only do you need to identify causative loci, but you also need to understand the contribution of these loci to phenotypes, and find ways to use them for screening or treatment guidance.
As in HHL, GWAS have identified candidate loci for autism. The difficulty arises when each locus has only a small individual effect, with risk emerging from their combined contribution.
Today, the innovation champions call for whole-genome studies, yet this approach comes with a heavy bioinformatics burden, long wait times for the data, and data storage challenges. A far quicker route is genotyping by microarray, with verification of putative genetic loci using an orthogonal technology such as NGS or Sanger sequencing. Take the example of Ronald A et al. (2010), which identified three candidate SNPs associated with autism:
- Identified autistic individuals
- Pooled DNA to obtain mixtures (20 pools of ~74 individuals each)
- Hybridized to Affymetrix GeneChip genotyping array
- Selected candidate SNPs that show difference between groups
- Designed an Applied Biosystems SNPlex panel of 47 candidate SNPs
- Verified in 3494 individuals using microarray genotyping
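The candidate-selection step in this pooled design can be sketched computationally: with pooled samples, only group-level allele frequencies are available, so candidate SNPs are those whose estimated frequencies differ most between case and control pools. The sketch below is a minimal illustration of that idea; the SNP IDs, frequencies and the 0.05 cutoff are hypothetical, not values from Ronald et al. (2010).

```python
# Sketch of candidate-SNP selection from pooled genotyping data.
# All SNP IDs and allele frequencies here are hypothetical illustrations.

def candidate_snps(case_freqs, control_freqs, threshold=0.05):
    """Return SNPs whose pooled allele-frequency difference between
    case and control groups exceeds `threshold`, strongest first."""
    candidates = []
    for snp, case_f in case_freqs.items():
        diff = abs(case_f - control_freqs[snp])
        if diff > threshold:
            candidates.append((snp, round(diff, 3)))
    # Rank by effect size so a fixed-size follow-up panel
    # (e.g. 47 SNPs, as in the study design above) can be
    # assembled from the strongest signals.
    return sorted(candidates, key=lambda x: x[1], reverse=True)

case = {"rs0001": 0.42, "rs0002": 0.31, "rs0003": 0.55}
ctrl = {"rs0001": 0.30, "rs0002": 0.29, "rs0003": 0.47}
print(candidate_snps(case, ctrl))  # rs0001 and rs0003 pass the cutoff
```

In a real study the frequency comparison would use a formal association test rather than a raw difference cutoff, but the ranking-and-shortlisting logic is the same.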
This approach identified three SNPs that appeared to be associated with autism. Homozygotes for the candidate SNPs had a low social index.
In a similar study, Tammimies K et al. (2015) examined 258 unrelated children with ASD using whole exome sequencing (WES; Ion AmpliSeq Exome with Ion Proton sequencing) and chromosomal microarray analysis (CMA). Of the 258 probands, 24 received a molecular diagnosis (known pathogenic alleles) from CMA and 8 of 95 from WES, and a further 96 de novo variants that could be contributory were also identified.
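The diagnostic yields quoted above are straightforward proportions of probands diagnosed per technology; a quick sanity check of the arithmetic, using only the figures given in the study summary:

```python
# Diagnostic-yield arithmetic from the figures quoted above
# (Tammimies et al. 2015): diagnoses / probands tested.
cma_diagnosed, cma_tested = 24, 258
wes_diagnosed, wes_tested = 8, 95

cma_yield = cma_diagnosed / cma_tested  # ~9.3% of probands via CMA
wes_yield = wes_diagnosed / wes_tested  # ~8.4% of those with WES data

print(f"CMA yield: {cma_yield:.1%}, WES yield: {wes_yield:.1%}")
```

Note the denominators differ (258 probands had CMA, 95 had WES), so the two yields are not directly additive.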
So what’s the right tool for the job?
Well, it depends on your scientific question – and what you have available to you, of course. But here’s a guide to the fastest route to an answer.
Technologies for gene variant detection in human genetic disorders
For more information and ideas for novel approaches to human genetics variant detection, go to thermofisher.com/humangenetics
1. Sakuma N et al. (2015) Ann Otol Rhinol Laryngol 124 Suppl 1:184S.
2. Kitano T et al. (2017) PLoS One 12(5):e0177636.
3. Ronald A et al. (2010) Behav Genet of Psychopathology.
4. Tammimies K et al. (2015) JAMA.