We’re pleased to see the steady stream of publications citing the use of Pippin Prep and BluePippin, our automated DNA size selection tools, which help more and more users get accurate and reproducible results. The instruments have been cited in some truly impressive publications, so our blog team took a tour of the literature to put together this roundup of recent activity. Enjoy!
SASI-Seq: sample assurance spike-ins, and highly differentiating 384 barcoding for Illumina sequencing
Michael Quail et al., BMC Genomics 2014, 15:110
In this publication, scientists from the Wellcome Trust Sanger Institute and Frederick National Laboratory for Cancer Research describe Sample Assurance Spike-In sequencing, or SASI-Seq. With this method, scientists add uniquely barcoded amplicons to samples prior to library prep and sequencing; these barcodes allow them to deconvolute samples that get mixed up or to spot cross-contamination issues. The team used Pippin Prep to test barcode robustness through the size selection process, finding that barcodes were not affected.
Semiconductor-based DNA sequencing of histone modification states
Christine Cheng et al., Nature Communications 4:2672
Broad Institute scientists, along with collaborators at other institutions, provide optimized sample preparation protocols for generating ChIP-seq libraries on the Ion Torrent PGM. They show that Pippin size selection was required to generate usable libraries, even down to sub-nanogram input. Results were comparable to those from an Illumina ChIP-seq workflow.
Epigenetic Regulation of the DLK1-MEG3 MicroRNA Cluster in Human Type 2 Diabetic Islets
Vasumathi Kameswaran et al., Cell Metabolism (2014)
In this publication, researchers from the University of Pennsylvania and the Children’s Hospital of Philadelphia compared small RNAs from tissue samples gathered from people with and without type 2 diabetes. The scientists found evidence that a specific cluster of microRNAs was downregulated in diabetes patients, with the promoter of that locus hypermethylated. Pippin Prep was used to size-select microRNAs for sequencing.
Variant calling in low-coverage whole genome sequencing of a Native American population sample
Chris Bizon et al., BMC Genomics 2014, 15:85
Researchers from the University of North Carolina at Chapel Hill tested Thunder, a linkage disequilibrium-aware variant caller, on a community sample of Native Americans and determined that low-coverage whole genome sequencing is better at finding novel variants and associations than fixed-content genotyping arrays. They used Pippin Prep to select fragments of 300 base pairs for sequencing on the HiSeq 2000.
A recent article in Nature News delves into the trends behind the rise of ancient genomics and finds that improvements in sample preparation have been essential to scientists’ success in this area.
As a team that focuses every day on new ways to make sample prep more robust, it was gratifying to see that advances in this section of the genomics workflow are making a real difference in what scientists can accomplish.
The article, “Human evolution: The Neanderthal in the family” by Ewen Callaway, describes the big challenge facing researchers looking to sequence genomes using DNA from fossils: obtaining enough sample that isn’t too degraded to use. A decade ago, this was a major stumbling block. Today, the community has seen a series of papers describing genomic or mitochondrial sequences from a number of hominin fossils, from mammoths and mastodons, and recently from a horse that may have lived 700,000 years ago.
“Enabling this rush are technological improvements in isolating, sequencing and interpreting the time-ravaged DNA strands in ancient remains such as bones, teeth and hair,” Callaway writes. “Pioneers are obtaining DNA from ever older and more degraded remains, and gleaning insight about long-dead humans and other creatures.”
In the ancient horse project, for instance, scientists found new ways to improve DNA yield from the fossil. By lowering extraction temperature and making other small changes, the team boosted recovery 10-fold.
These sample prep advances are not only making it possible to sequence DNA from ancient remains, but they are also enabling virtually any lab to perform this kind of work, the article says. “New procedures mean that researchers can now reliably obtain DNA from all but the most degraded samples, and then sequence only the portions of a genome that they are interested in,” Callaway reports.
This is exactly the kind of impact that companies like ours hope to have in the genomics field — streamlining and improving the sample prep process to the point that scientists are no longer limited by these protocols. Congratulations to all the researchers and organizations who contributed to these tremendous advances for ancient genome studies!
From the opening session to the time our team broke down the Sage Science booth to go home, the ABRF conference was a terrific event.
We really enjoyed the big-data keynote from Phil Bourne and the moderated session that followed. With all the challenges facing people in the data storage and management realm, let alone data analysis, we’re glad that we get to focus on sample prep — it seems a lot more straightforward!
We also had a great time showing off SageELF, our new instrument for whole-sample fractionation of DNA or proteins. If you didn’t get a chance to swing by our booth for a chat, you can check out this video for a quick glimpse of how SageELF works and what it can be used for:
Next time, ABRF, we’ll meet you in St. Louis for the 2015 conference!
It’s day two of the ABRF meeting, and a session comparing next-gen sequencing platforms had attendees riveted. The room was packed for “ABRF Next Generation Sequencing Study,” which featured speakers including George Grills, Don Baldwin, Scott Tighe, Christopher Mason, and Marc Salit.
Members of this team helped lead the cross-platform RNA-seq study, which is expected to publish soon in Nature Biotechnology. Chris Mason presented findings from the study, offering reassurance that degraded RNA works just fine (phew!) and underscoring the conventional wisdom that while it is possible to compare RNA-seq studies from different sequencing platforms, it is not wise to do so. The study, which used most commercially available sequencers, was not intended as a bake-off between instruments but rather as a way to establish best practices for running each platform and to determine the advantages and disadvantages of each.
Don Baldwin spoke about the next cross-platform study, which will focus on DNA sequencing. (The team started with RNA-seq because external RNA standards were readily available, thanks to the External RNA Controls Consortium spearheaded by the National Institute of Standards and Technology.) The effort will be formalized in June, so there’s still time for interested core labs to participate. The study is expected to progress in three phases: the first will compare whole-genome sequencing on the Illumina, Ion Torrent, and PacBio platforms; the second will incorporate FFPE samples; and the third will deploy broadly available and emerging sequencing platforms for small genomes.
NIST’s Marc Salit was on hand to talk about the DNA controls to be used in this upcoming study, for which ABRF has partnered with Salit’s Genome in a Bottle consortium. Salit said that existing consortium materials have been developed with the NA12878 human genome, but long-term work will use eight trios from the Personal Genome Project.
These studies are precisely the kind of high-quality, devil-in-the-detail work for which we look to ABRF. The participating core labs offer tremendous attention to detail, highly robust and reproducible results, and clear protocol guidelines — invaluable attributes that make experiments more reliable across the whole scientific community. ABRF, many thanks for this diligent work!
The annual ABRF meeting kicked off today, and some of the earliest talks focused on proteomics. The Sage team was glad about that, as our newest product, the SageELF, works with proteins as well as DNA.
The conference’s opening keynote came from Albert Heck, director of the Netherlands Proteomics Center and of the Bijvoet Center for Biomolecular Research at Utrecht University. The presentation offered a fascinating view of protein analysis, integrated ’omics data sets, and the need for more enzymes in measurement studies. Heck also spoke about protein separation technologies, which is where our SageELF fits in. It’s designed to perform whole-sample fractionation, making sure every bit of protein gets scooped up for your experiment. We have the instrument in our booth, so if you’re here at ABRF please stop by to check it out.
Another talk came from proteomics pioneer Leigh Anderson, CEO of SISCAPA Assay Technologies. His talk covered high-throughput quantification of biomarkers, noting that mass spec offers better specificity than immunoassays. Anderson said that SISCAPA technology can target protein biomarkers with greater sensitivity than other methods, allowing it to detect important changes that other tools may miss in as many as 40 percent of patients.
In other sessions today, the DNA Sequencing Research Group presented results of its global reproducibility survey, finding that quantitative methods are more consistent than qualitative ones. We were also glad to see presentations on next-gen sequencing for clinical applications; ABRF has made a real push into the clinical realm in recent years, and it’s great to get this kind of information here.
Thanks to the speakers and organizers for an excellent kick-off to the conference. We look forward to more in the days to come!