Sage Blog

PacBio User Meeting: Size Selection for Best Results

Last month we got to attend PacBio’s user group meeting in Palo Alto, Calif. Sage Science co-sponsored the event, as we have in the past, because the PacBio community is doing some extraordinary work with size selection and we’re always eager to learn more about what they’ve accomplished.

This year, one of the speakers gave a presentation on how optimizing size selection and library prep can improve PacBio sequencing results. David Moraga Amador, scientific director of the NextGen DNA Sequencing core at the University of Florida in Gainesville, gave a terrific view of life in the core lab, where top-notch scientists are always trying to get that extra performance out of a less-than-perfect sample. (He got sympathetic laughs from the audience when he showed examples of some of the most challenging samples his users have handed off to him — we could tell there were plenty of core lab folks at the event!)

Moraga Amador’s talk, “The impact of optimum library size-selection on PacBio sequencing results,” covered a few sample prep methods, including the use of our BluePippin to maximize read length and improve sequencing yield. He emphasized the up-front protocols because that’s how his core facility manages to turn suboptimal samples into libraries that will generate solid sequencing results with PacBio.

He offered one example that was really interesting to us, based on a plant DNA sample where his team prepared half the library with BluePippin and the other half with AMPure beads to generate a 20 kb library for PacBio sequencing. QC analysis of the two libraries prior to sequencing showed they were virtually indistinguishable, Moraga Amador said. But when the sequencing results were compared, the BluePippin-selected library had yielded significantly longer reads. The final tally: nearly 60 percent of reads from the BluePippin-selected library were longer than 4,500 bases, while only about 25 percent of reads from the AMPure library cleared that mark.
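
If you’d like to run the same kind of comparison on your own data, the arithmetic is simple. Below is a minimal sketch, not anything from Moraga Amador’s pipeline: it reads two FASTA files (the file names and the 4,500-base cutoff are placeholders) and reports the fraction of reads above the cutoff in each library.

```python
# Minimal sketch: compare the fraction of long reads between two libraries.
# Assumes plain FASTA input; file names and the 4,500 bp cutoff are placeholders.

def read_lengths(fasta_path):
    """Yield the length of each read in a FASTA file."""
    length = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):
                if length:
                    yield length
                length = 0
            else:
                length += len(line.strip())
    if length:
        yield length

def fraction_longer_than(fasta_path, cutoff=4500):
    """Return the fraction of reads longer than `cutoff` bases."""
    lengths = list(read_lengths(fasta_path))
    if not lengths:
        return 0.0
    return sum(1 for n in lengths if n > cutoff) / len(lengths)

for label, path in [("BluePippin", "bluepippin_reads.fasta"),
                    ("AMPure", "ampure_reads.fasta")]:
    print(f"{label}: {fraction_longer_than(path):.1%} of reads > 4,500 bases")
```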

Moraga Amador also noted that BluePippin is handy for the Iso-Seq method with PacBio because it’s important to feed sequences of fairly uniform length into the instrument. He size-selects each library into three or four groups, each containing fragments of similar size, to boost the efficiency and effectiveness of isoform sequencing.
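
The physical sorting happens on the instrument, of course, but the grouping logic is easy to picture in code. Here’s a hypothetical sketch: fragment lengths are partitioned into a few contiguous size bins so that each bin holds inserts of similar length. The bin boundaries below are invented for illustration, not a recommended Iso-Seq protocol.

```python
# Hypothetical illustration of Iso-Seq-style size binning: partition fragment
# lengths into contiguous bins so each bin holds fragments of similar size.
# The boundaries below are invented for the example, not a recommended protocol.
BIN_EDGES = [(0, 1000), (1000, 2000), (2000, 3000), (3000, float("inf"))]

def assign_bin(length, edges=BIN_EDGES):
    """Return the index of the size bin a fragment of `length` bp falls into."""
    for i, (lo, hi) in enumerate(edges):
        if lo <= length < hi:
            return i
    raise ValueError(f"no bin for length {length}")

fragments = [850, 1500, 1499, 2750, 3200, 4100]
bins = {}
for frag in fragments:
    bins.setdefault(assign_bin(frag), []).append(frag)
print(bins)  # {0: [850], 1: [1500, 1499], 2: [2750], 3: [3200, 4100]}
```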

We were glad to hear the instrument is performing so well at the Florida core lab, and it was a real treat to attend the user group meeting. We look forward to learning more at the next PacBio user event!

You Asked for It: Pippin Goes High-Throughput!

Welcome to the family, PippinHT. Today we’re proud to be launching the high-throughput version of our automated DNA sizing platform for NGS workflows. PippinHT can run as many as 24 samples at a time, with half the run time of the Pippin Prep while maintaining our strict standards for accuracy and reproducibility. Our customers have been asking for this advance and we’re pleased to be able to meet the needs of high-capacity labs.

PippinHT comes at a critical time in the NGS world, as the community increasingly shifts toward whole-genome sequencing and demand for robust, accurate sample prep methods has never been greater. As Sage Science customers know, precise size selection is necessary for optimizing sequencing efficiency, improving genome assemblies, and reducing project costs.

Click here to check out product specs and how PippinHT stacks up to Pippin Prep, or click here to request a quote.

BluePippin Optimized for Illumina’s Moleculo Kit

As Sage Science blog readers know, BluePippin is used throughout the genomics community for size-selecting larger fragments — we’ve gotten lots of attention for how well the tool performs with extremely long reads from the Pacific Biosciences sequencer, for example.

Now we can report that BluePippin has also been validated and optimized for use with the new Illumina TruSeq Synthetic Long-Read DNA Library Prep Kit (TSSLR), based on the Moleculo technology.

The workflow was validated with a few different experiments. In one key project, we examined various size selection settings with the kit’s recommended DNA input (600 ng per lane) and shearing conditions. Working with the TSSLR team at Illumina, we agreed that range mode with BPstart = 7,000 bp and BPend = 11,000 bp gives the best results.
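
To make those settings concrete, here’s a small, hypothetical sketch that records the agreed range-mode window as a plain settings dictionary and flags fragment sizes that fall outside it. This is just an illustration of the collection window, not BluePippin software or its API.

```python
# Hypothetical sketch of the validated range-mode window for the TSSLR workflow.
# A plain dictionary standing in for the instrument settings; not a real API.
RANGE_MODE = {"bp_start": 7_000, "bp_end": 11_000}

def in_window(fragment_bp, settings=RANGE_MODE):
    """True if a fragment of `fragment_bp` bases falls inside the collection window."""
    return settings["bp_start"] <= fragment_bp <= settings["bp_end"]

for size in (5_000, 8_500, 12_000):
    status = "collected" if in_window(size) else "excluded"
    print(f"{size:>6} bp -> {status}")
```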

In another project, we evaluated performance with different DNA input amounts. Starting with the recommended 600 ng of DNA and decreasing all the way to 100 ng, we found minimal changes in average fraction size. The TSSLR team determined that no adjustment to BluePippin settings is required for inputs as low as 100 ng.

This chart shows the results of using BluePippin with the TSSLR protocol for a low-concentration sample:

[Chart: BluePippin sizing results with the TSSLR protocol for a low-concentration sample]

You can check out the official recommendations for using BluePippin sizing with the TSSLR kit in this prep guide from Illumina (starts on page 95).

Already working with the TSSLR kit? We’d love to hear about your experience and discuss how BluePippin sizing can improve your results. Ping us at info@sagescience.com.

7,000 Handshakes Later, We Bid Adieu to ASHG

The Sage Science crew is back in Boston, ready for some much-needed rest after a fun-filled ASHG 2014. We didn’t meet all 7,000+ attendees at the event, but we certainly gave it our best shot! Many thanks to all of the scientists who stopped by our booth to find out how something as simple as precise DNA sizing can make a real difference in data quality for next-gen sequencing results. People who did stop by got a sneak preview of a new instrument we’ll be launching soon — for the rest of you, stay tuned!

Several of the final talks of the conference focused on cancer studies, which had us particularly interested since so many of our customers use fractionation tools for this application. Whether cancer genomes are interrogated with whole-genome sequencing, targeted sequencing, RNA-seq, miRNA isolation, structural studies, or ChIP-seq, our users have published on it. (You can check out a sampling of those papers here.) The talks on cancer epigenetics, such as using methylation markers to detect tumor DNA in blood, were fascinating.

Of course, there was the usual debate about exome sequencing versus gene panels for a range of uses. We understand the need to choose one of these options now, but as ardent DNA sequencing champions, we’ve got high hopes for a time when whole-genome sequencing is affordable enough to be the no-brainer solution for every application.

Already looking forward to ASHG 2015 in Baltimore!

At ASHG, the Big Picture Is Big Studies

Here at ASHG 2014, we’ve taken a moment to step back and look for the bigger narrative tying together the top-notch presentations, stellar posters, and frenetic exhibit hall breaks. What we’re seeing is that, several years after the first wave of genome-wide association studies hit, scientists have learned the value of sample size in the study power equation.

You can’t swing a free T-shirt in the exhibit hall without hitting a scientist actively working on a massive-scale study — and by that, we mean any study enrolling tens of thousands of people or more. (The Million Veteran Program was the topic of one talk we enjoyed.) Whether they’re analyzing exomes, whole genomes, or just targeted genomic regions, the projects are bigger than ever. It’s a trend we’re thrilled to see, and one that leaders in the field have spent years calling for.

Clearly, this is due to the precipitous drop in the cost of DNA sequencing. (And from what we’re witnessing here, it’s safe to say that cost is only going to keep dropping.) The fact that regular scientists in regular labs — not just the best-funded, world-class genome institutes — are embarking on studies like this is a real testament to the democratization of this technology. We’re proud that our own sample prep instruments play an important role in making these NGS workflows reliable, robust, and inexpensive.

Of course, all of this new data puts tremendous pressure on analysis. We’ve heard about lots of open-source tools and commercial solutions for data interpretation, but there have been some creative approaches presented as well. In one talk, Andrew Su from the Scripps Research Institute spoke about a crowdsourcing method using citizen scientists — people interested in the field but who lack the usual PhD credentials — to annotate and curate scientific literature. Su told attendees that by gathering enough annotations from non-experts, he was able to get results at least as good as if he’d hired a PhD scientist to perform the annotation.
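
The intuition behind that result is simple majority voting: even when each individual annotator is only moderately reliable, agreement across several independent annotations tends to converge on the right answer. Here’s a toy sketch of the idea; the gene-term pairs and vote counts are invented, not data from Su’s talk.

```python
# Toy illustration of crowdsourced annotation by majority vote: each gene-term
# pair collects labels from several non-expert annotators, and the consensus
# is whichever label the majority chose. Example data is invented.
from collections import Counter

annotations = {
    "BRCA1 / DNA repair": ["yes", "yes", "no", "yes", "yes"],
    "TP53 / cell cycle":  ["yes", "no", "yes"],
}

for item, votes in annotations.items():
    label, count = Counter(votes).most_common(1)[0]
    print(f"{item}: consensus '{label}' ({count}/{len(votes)} votes)")
```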

We can’t believe ASHG is already half over. If you haven’t heard about the Sage Science products yet and how they can simplify your NGS library prep, please swing by our booth (#935). We’d love to meet you.
