At the University of Delaware, Early Adopter Genomics Core Spots New Science Trends

The genomics core facility at the University of Delaware has set itself apart from other service providers by being among the first to adopt new sequencing technologies. The strategy has been a success: today, the facility serves customers around the world, hailing from research and nonprofit institutes, federal agencies, and even foreign governments. While projects range from microbial to human and everything in between, agrigenomic studies are especially popular for users looking to improve growth and disease resistance among crops and livestock.

Bruce Kingham, who runs the genomics core lab, has also focused on adopting state-of-the-art tools to keep the sequencers running happily. Size selection has been essential for delivering optimal results to his user base, from the lab’s first Illumina NGS platform in 2007 to its PacBio single-molecule sequencing system. It was the acquisition of the Illumina GA that spurred his team to offer library prep as a service, for which they invested in the Pippin Prep for automated DNA size selection. “That allowed us to not only get a very focused size for the libraries that we were preparing, but more importantly it allowed us to start with a much smaller quantity of DNA,” he says. Prior techniques relied on inefficient fragmentation procedures and gel extraction to isolate the desired fragment size, resulting in substantial sample loss.

Today, Pippin sizing — now with BluePippin — continues to be important for Kingham’s Illumina workflow, including PCR-free projects. “Size selection has been critical because the PCR-free library preparation process can be prone to generating libraries that have a broader size range,” he says. “Illumina technology for a number of reasons does not like libraries that are broad in size.” From clustering efficiency to optical analysis, these sequencers perform best when fed libraries with tightly sized DNA fragments. For Illumina sequencing in general, Kingham says, “downstream analysis, including mapping or de novo assembly, is going to be more efficient and have more statistical significance if the size range of individual libraries is focused.”

For PacBio sequencing, Kingham’s team uses both BluePippin and SageELF for size selection. By eliminating small fragments, the BluePippin keeps the PacBio platform focused on generating the longest reads possible, dramatically improving the quality of results. “With the volume of sequencing that we do, the BluePippin paid for itself in a couple of months,” Kingham says. By increasing average read length and N50 read length, BluePippin “lowers the cost of the data that needs to be generated to achieve a certain sequencing goal, such as the lowest number of contigs,” he adds. The lab uses SageELF for Iso-Seq protocols, where it significantly reduces the amount of input DNA required.

Looking ahead, Kingham sees increased demand from scientists for pairing genomics and proteomics data. It’s a trend that fits nicely at his home institute, which has a mission of promoting interdisciplinary research. To that end, his team has already begun evaluating the SageELF for use in protein fractionation. “That could be a welcome service, and I’m always looking for new services to provide,” Kingham says. “I want to see my instruments running as much as possible.”
