
Pacific Biosciences of California, Inc.


Q4 2012 · Earnings Call Transcript

Feb 5, 2013

Executives

Trevin Rard - IR
Mike Hunkapiller - Chairman, President & CEO
Susan Barnes - EVP & CFO
Ben Gong - VP, Finance & Treasurer

Analysts

Bryan Brokmeier - Maxim Group
Dave Clair - Piper Jaffray
Ramesh Donthamsetty - JPMorgan
Sylvia Chao - William Blair
Dan Brennan - Morgan Stanley

Operator

Good day, ladies and gentlemen, and welcome to the Pacific Biosciences of California, Incorporated Fourth Quarter 2012 Earnings Conference Call. At this time, all participants are in a listen-only mode.

Later, we will conduct the question-and-answer session and instructions will be given at that time. (Operator Instructions) I would now like to introduce our host for today, Ms.

Trevin Rard. Ma’am, please go ahead.

Trevin Rard

Good afternoon and welcome to the Pacific Biosciences fourth quarter 2012 conference call. With me today are Mike Hunkapiller, our Chairman and CEO; Susan Barnes, our Chief Financial Officer and Ben Gong, our Vice President of Finance and Treasurer.

Before we begin, I would like to inform you that comments made on today's call may be deemed to contain forward-looking statements. Forward-looking statements may contain such words as believe, may, estimate, anticipate, continue, intend, expect, plan, the negative of these terms or other similar expressions and include the assumptions that underlie such statements.

Such statements may include, but are not limited to, bookings, revenue, margin, costs and earnings forecast, future revenue implied by the company’s backlog, expectations of future cash usage and other further cash balances, our expectations about the performance of our products and benefits that our customers will realize from using our products and other statements regarding future events and results. Actual results may differ materially from those expressed or implied as a result of certain risks and uncertainties.

These risks and uncertainties are described in detail in the company’s Securities and Exchange Commission filings, including the company’s most recently filed quarterly report on Form 10-Q. The company undertakes no obligation to update and prospective investors are cautioned not to place undue reliance on such forward-looking statements.

Please note that today’s press release announcing our financial results for the fourth quarter and the year 2012 is available on the Investors section of the company’s website at www.pacb.com and has been included on the Form 8-K, which is available on the Securities and Exchange Commission’s website at www.sec.gov. In addition, please note that today's call is being recorded and will be available for audio replay on the Investors’ section of the company's website shortly after the call.

Investors electing to use the audio replay are cautioned that forward-looking statements made on today's call may differ or change materially after the completion of the live call and that Pacific Biosciences undertakes no obligation to update such forward-looking statements. At this time, I would like to turn the call over to Mike.

Mike Hunkapiller

Thanks, Trevin. Good afternoon and thank you for joining us today.

We are pleased with the continued progress we are making in driving the adoption of our products. Highlights of our most recent quarter’s achievements are as follows: We booked orders for five new PacBio RS systems, up one from the four orders we booked during the third quarter.

We installed five systems, bringing our installed base up to 71. We launched our XL chemistry, which enables our customers to achieve read lengths of 5,000 bases on average, with 5% of those reads above 13,000 bases and the longest reads about 20,000 bases.

We launched a software upgrade which included automated tools for detecting and characterizing methylated bases in bacteria. We recently announced a partnership with UC Davis to participate in the 100K pathogen genome project.

Using PacBio sequencing, UC Davis plans to completely finish 1,000 genomes from strains of pathogenic bacteria. Last week, we launched another software release, which we expect will have a significant impact on the adoption rate of PacBio technology.

This software release contains a bioinformatics tool set which takes advantage of the very long sequence reads from the PacBio system to generate highly accurate genome assemblies. And finally, earlier today, we announced that we have completed a $20.5 million debt financing with Deerfield, one of our largest stockholders.

Deerfield has been a long-term stockholder of Pacific Biosciences, dating back to years prior to our initial public offering. They are well respected in the financial and healthcare communities, and we are pleased to expand our relationship with them.

This cash provides us with greater flexibility and time to further develop our products and grow the business. Now I would like to take this time to briefly review our 2012 accomplishments.

We began the year dealing with a host of reliability issues with the PacBio RS systems in the field, and many of our customers were having difficulties getting projects done with their systems. As a result, prospective customers were not receiving enough positive references to encourage them to buy, and our sales pipeline stalled.

Nevertheless, the PacBio RS was delivering some data that could not be generated with any other platform, and we felt we were close to solving some of our biggest problems. So during the first quarter of last year, we set out to shift the momentum by focusing on four simple priorities, which were: one, improving system reliability and performance.

Two, delivering a series of product enhancements. Three, providing full customer solutions, including front-end sample prep and back-end bioinformatics.

And four, focusing on the key applications where we have significant value, which were de novo assembly, targeted sequencing and base modification analysis. Each quarter of last year, we made incremental progress on these goals, which we have highlighted in our quarterly earnings calls.

When looking back on the year as a whole, I am proud of how far we have come. For example, at the beginning of the year many customers felt their system uptime was only around 50%.

By the end of the year, measured uptime reached 93% across our installed base. While we are working to make additional gains in system reliability, we now feel that the PacBio RS is at least as reliable as most of the more mature second-generation sequencing systems. Every quarter last year, we delivered product enhancements which have greatly increased the performance of the PacBio RS.

Compared to last year at this time, average read lengths and throughputs have quadrupled, DNA sample input requirements have decreased by 80% to 90%, and consensus base accuracy went from approximately Q40, which is 99.99%, to Q50, which is 99.999%.
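For context, the Q scores quoted here follow the standard Phred convention, Q = -10 * log10(per-base error probability). A quick sketch (an illustration, not part of the call) confirming that Q40 and Q50 correspond to the quoted percentages:

```python
def q_to_accuracy(q):
    """Convert a Phred quality score to the per-base accuracy it implies."""
    return 1.0 - 10 ** (-q / 10.0)

# Q40 and Q50, the two figures quoted on the call
print(f"Q40 accuracy: {q_to_accuracy(40):.6f}")  # 0.999900 -> 99.99%
print(f"Q50 accuracy: {q_to_accuracy(50):.6f}")  # 0.999990 -> 99.999%
```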

We made sample prep easier and more robust with the automated MagBead Station we introduced in Q3. We developed and integrated a series of bioinformatics tools into our SMRT Analysis software tool kit that have enabled our customers to achieve highly accurate, completely finished genomes.

We made base modification analysis a reality by introducing software tools that enable customers to generate full epigenetic profiles of bacterial strains. Our customers demonstrated the power of our technology in targeted sequencing to study complex human genomic regions associated with diseases such as Fragile X and acute myeloid leukemia.

SMRT sequencing has started to become the gold standard for sequencing bacteria, as it can deliver completely finished genomes, including epigenetic profiles, very quickly and at low cost. And finally, over two dozen scientific publications and numerous presentations at scientific meetings demonstrated the utility of the PacBio RS in a variety of applications.

As a result of these accomplishments, we started to see a turn in our business around the middle of last year which was evidenced by an uptick in our new system bookings. After booking just three new systems in the first half of 2012, we booked nine new systems in the second half.

We still have a way to go to get to cash flow breakeven, but we are trending in the right direction. Now I would like to elaborate on some of the more recent developments.

A few weeks ago, we announced the partnership with UC Davis to completely sequence 1,000 strains of bacteria in the first phase of the much broader 100K food-borne pathogen genome project that UC Davis, the FDA, the CDC, the USDA and others are working on.

As background, regulators have been beset by a series of food-borne pathogen outbreaks that have caused illness and deaths in recent years. With each outbreak, federal agencies work furiously to try to identify the source of the pathogen in order to contain the spread of the disease.

The problem lies in the very subtle differences between the pathogenic strains of bacteria and the multitude of other related strains that are benign. In the past, false positives have led the federal agencies to unnecessarily shut down food supplies, causing millions of dollars of damage to food suppliers; weeks and even months have passed before the true sources of the outbreaks have been isolated.

The idea behind the 100K pathogen genome project is to create a genetic catalog of the most important outbreak organisms that impact human health. With this catalog as a reference, rapid complete genome sequencing of new outbreak strains can provide the unique sequence regions of those new strains.

Investigators can then target these regions with diagnostic probes to track down the source of the outbreak with much more specificity and hit fewer false leads than they can now using generic species reagents. This past week, we announced an upgrade to our SMRT Analysis software tool kit, which includes powerful new algorithms for assembling PacBio data.

This new set of tools enables users to achieve the highest consensus accuracy available while simply using a single PacBio library for long-read sequencing runs. Previous techniques for obtaining high consensus accuracy on long pieces of DNA included the use of multiple sequencing protocols or platforms, such as Illumina and PacBio, and performing a hybrid assembly using a combination of short- and long-read data.

While this approach works, it is complex because it requires multiple libraries, multiple sample prep techniques and multiple sequencing runs, sometimes on two types of sequencers. Users often shy away from adopting these techniques simply because they involve extra work and often require a sophisticated bioinformatics approach that few sites are comfortable using.

Our new software kit makes it much easier and more straightforward to obtain a complete and accurate genome assembly with an ordinary sample prep and sequencing run on the PacBio RS. It’s still early, and users need some time to work with these new tools to see how they can incorporate them into their workflows, but we believe this will be instrumental in driving broader adoption of our products.

Many reports about the performance of the PacBio RS have referred to the relatively high error rate of individual single pass reads on single DNA molecules. This has led to the erroneous conclusion that it provides low accuracy finished data.

With any sequencing system, finished sequence data requires a consensus combination of multiple individual reads over the same sequence region. This is necessary both for correct assembly or mapping of individual reads relative to a larger sequence region and for correct identification of individual bases.

Obtaining high finished accuracy is dependent on several factors. First, errors in individual reads should be random; any systematic or reproducible errors will not wash out with consensus.

Second, sequence context bias should be absent; high-AT regions, high-GC regions, homopolymers, simple repeat structures and (inaudible), for example, should not cause sequence coverage to be minimal or absent in the data. Third, assembly and mapping should not be confused by closely related sequences being present in different regions of a genome.

Long repeats and gene paralogs, for example, can confound both assembly and mapping and give erroneous finished answers. While short-read sequencing technologies have significant problems with each of these factors, the PacBio RS actually excels on all of them.

Its individual read errors are random, it exhibits little if any sequence context bias, its long-read capability is unmatched, and it can correctly span long repetitive regions and properly map related sequences. As a result, it can provide the highest quality results both for de novo assembly and resequencing applications.
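The point that random errors wash out with consensus can be illustrated with a toy majority-vote simulation (a sketch of the general principle only, not PacBio's actual consensus algorithm; the 15% single-pass error rate and 15x coverage are assumed figures for illustration):

```python
import random

random.seed(0)

def simulate_consensus(seq_len=10_000, per_read_error=0.15, coverage=15):
    """Majority-vote consensus over independent reads with random (non-systematic) errors."""
    bases = "ACGT"
    truth = [random.choice(bases) for _ in range(seq_len)]
    correct = 0
    for t in truth:
        votes = {}
        for _ in range(coverage):
            if random.random() < per_read_error:
                # a random error: any of the three wrong bases, uniformly
                b = random.choice([x for x in bases if x != t])
            else:
                b = t
            votes[b] = votes.get(b, 0) + 1
        if max(votes, key=votes.get) == t:
            correct += 1
    return correct / seq_len

acc = simulate_consensus()
print(f"single-pass accuracy: 0.85, consensus accuracy: {acc:.4f}")
```

Because each read's errors land on different positions, a wrong base almost never outvotes the true base at 15x coverage; a systematic error, by contrast, would recur in every read and survive the vote.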

With the tools now available in our latest software release, we and our customers involved in early evaluations of them have demonstrated finished accuracies even better than the generally regarded gold standard for accuracy, the capillary-based Sanger systems.

While the product enhancements we have produced over the last year were impressive, we have only begun to realize the full potential of the SMRT sequencing platform.

We are the clear leader in generating long read lengths, and we plan to continue pushing this advantage during 2013. As I mentioned earlier, our most recent release enables (inaudible) to generate read lengths of 5,000 bases on average.

The longest read lengths are about 20,000 bases. We are working on enhanced (inaudible) for this year that can double those read lengths again.

We expect this to not only double read lengths but also double the throughput of the RS. Another way to increase the performance of the RS is to increase the number of zero-mode waveguides, or wells, that can be read simultaneously.

The RS has the ability now to monitor about 70,000 wells at the same time. Our plan is to double that to 150,000.

Therefore, just as we (inaudible) the throughput of the PacBio RS this past year, we are planning to do so again this coming year. We expect these enhancements, and several others scheduled for this year, combined with our recent (inaudible) release, will help us drive the adoption of SMRT sequencing across a larger application space and continue the improvement in sales momentum we saw in the second half of 2012.

Finally, I wanted to share that Dr. Lucy Shapiro from Stanford was awarded the National Medal of Science at a White House ceremony last week.

We are fortunate and honored that Dr. Shapiro is a member of our board.

Her expertise in the area of microbiology makes her a great advisor for us as we continue to drive our business in this area. And with that, I will turn the call over to Susan.

Susan Barnes

Thank you, Mike and good afternoon everyone. I will begin my remarks today with a financial overview of our fourth quarter that ended December 31, 2012.

I will then provide details on our operating results for the quarter with a comparison to the third quarter of 2012. I will provide full year results for 2012.

However, I will not be providing historical comparisons to the 2011 results. We believe that this comparison would be meaningless due to the fact that prior to the launch of our commercial operations in late 2011, the different accounting treatments required under GAAP created a significant variance for the classification of our expenses between 2012 and 2011.

Finally, I will conclude my remarks with a brief discussion of our balance sheet. Starting with our fourth quarter financial highlights.

Consistent with the outlook we gave earlier in the year, our bookings picked up in the second half of 2012, as we booked five instrument orders in Q4, following the four instruments we booked in Q3. Consequently, our instrument backlog has risen in the second half of the year, from one at the end of Q2 to a current backlog of five instruments.

During the fourth quarter, we recognized revenue of $5.9 million and incurred a net loss of $21.7 million, while cash used during the quarter totaled $18.8 million.

For the year of 2012, we recognized revenue of $26 million, incurred a net loss of $94.5 million and used $76.9 million of cash. Breaking down our revenue:

Total revenues for the quarter were $5.9 million, an increase of $3.1 million from the $2.8 million of revenue realized in Q3. For the full year 2012, we recognized $26 million in revenue.

For instrument revenue, in Q4 we recognized $3 million on five instruments compared to no instruments recognized in Q3. For the full year, we recognized $15.5 million on 23 instruments.

With regard to our recurring revenue, consumable revenue in Q4 totaled $1.3 million, increasing approximately 5% from Q3. For the full year, consumables revenue totaled $4.6 million.

Service revenue increased slightly in Q4 from Q3 to $1.3 million, as our installed base on which we recognize service revenue was consistent throughout most of Q4. For the full year, service revenue totaled $4.7 million.

Gross profit in the quarter was $600,000, representing a gross margin of 11%. This is greater than the $200,000, or 7%, gross margin recognized in Q3.

The increase in profit and margin is largely due to the increase in the number of instruments recognized and the lower service costs realized in Q4. For the full year, gross profit was $900,000, representing a gross margin of 4%.

Moving to operating expenses, operating expenses in the fourth quarter totaled $22.3 million, including $2.4 million of non-cash, stock-based compensation expense. Q4 spending was $500,000 below our third quarter operating expenses of $22.8 million.

For the full year, operating expenses were $95.3 million, including $9.2 million in non-cash, stock-based compensation expense. Breaking down our operating expenses: R&D expenses decreased $1 million during the quarter to $11.7 million. This decrease was a result of an increase in manufacturing overhead expense recognized during Q4, in accordance with GAAP, as manufacturing resources were increasingly used to build product for sale in Q4 rather than for internal R&D activities.

In addition, we continue to identify and realize cost efficiencies in the area of R&D. R&D expense for the quarter included $1.2 million of non-cash, stock-based compensation.

R&D expenses for the year totaled $47.6 million, including $4.6 million of non-cash, stock-based compensation. Sales, general and administrative expenses increased to $10.7 million in Q4 from $10.1 million in Q3, primarily as a result of an increase in litigation expenses in the quarter.

We do not expect this higher level of litigation expense to persist. Ben will provide further guidance on our ongoing expense rate later in the call.

SG&A expense for the fourth quarter includes $1.2 million of non-cash, stock-based compensation expense. For the year SG&A expenses totaled $47.7 million including $4.6 million of non-cash, stock-based compensation expense.

Now turning to our balance sheet, cash and investments totaled $100.6 million at the end of the fourth quarter, down $18.8 million from the previous quarter, and down approximately $77 million from the end of last year. Cash used during the quarter reflects our fourth quarter net loss of $21.7 million with $4.2 million in non-cash expense, primarily comprised of $2.5 million of stock compensation expense and $1.6 million in depreciation.

Cash used also reflects working capital changes stemming primarily from the $2.3 million increase in accounts receivables offset in part by $600,000 reduction of inventory. Accounts receivables increased from $500,000 at the end of Q3 to $2.8 million at the end of Q4.

Average days sales outstanding remain well below the industry average of approximately 90 days. And lastly, inventory balances decreased this quarter, down $600,000 sequentially to $9.6 million at December 31, 2012.
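For reference, the conventional days-sales-outstanding calculation behind that comment is DSO = receivables / period revenue * days in the period. Applying it to the Q4 figures above (the 91-day quarter is an assumption on our part; this number is not stated on the call):

```python
def days_sales_outstanding(receivables, period_revenue, days_in_period=91):
    """Classic DSO formula: how many days of revenue are tied up in receivables."""
    return receivables / period_revenue * days_in_period

# Q4 2012 figures from the call: $2.8M ending receivables, $5.9M quarterly revenue
dso = days_sales_outstanding(2.8, 5.9, 91)
print(round(dso, 1))  # ~43 days, consistent with "well below" the ~90-day industry average
```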

This concludes my remarks on the financial results for the quarter. I would now like to turn the call over to Ben.

Ben Gong

Thank you Susan. I will be providing guidance on our near term and 2013 financial performance.

In a brief review of our previous guidance on revenue: we ended up installing five systems in Q4, which is a little more than we had anticipated, and that resulted in higher revenues than we expected this past quarter. With the five new bookings from the fourth quarter, we started Q1 with five systems in backlog.

As we have mentioned in the past, we expect to convert systems in backlog to revenue in either the first or second quarter after they are booked, depending on the readiness of our customers. We anticipate that one or more of the five customers in backlog will not be ready to have their systems fully installed by the end of the first quarter, and therefore we expect to install fewer systems in Q1 than we did in Q4.

On the other hand, we are continuing to see an uptick in system utilization and expect sequential growth in consumable sales. For Q1, our growth in consumable sales is not likely to be large enough to compensate for fewer systems installed, and as a result total revenues will likely decrease sequentially from Q4 2012 to Q1 2013.

For the full year 2013, we are targeting growth in total revenue compared with the full year 2012. However, I would like to remind you that we had 16 systems in backlog at the beginning of 2012, compared with the five we have in backlog at the beginning of 2013.

As a result, we expect our first half revenue in 2013 to be lower than the first half revenue we recorded in 2012. However, as we make progress on instrument sales and continue to drive consumable revenue growth each quarter, we are targeting to make up the difference in the second half of the year.

This means that for the full year 2013, we expect the number of new system bookings to be significantly higher than the 12 units we booked in 2012. Please keep in mind that our quarterly bookings can fluctuate, and the timing of orders is often difficult to predict.

With regard to gross margin, we expect to record positive gross margin throughout the year. At our current revenue levels, small changes in gross profit dollars can cause fluctuations in our quarterly gross margin percentage.

For the year, we expect gross margin to be in the mid single digits. Our operating expenses decreased from roughly $23 million in Q3 to a little over $22 million in Q4, as we worked on controlling expenses.

Our quarterly expenses can vary due to the timing of certain R&D expenses. However in general, we're targeting our 2013 quarterly operating expenses to be at or below $22 million.

Please note that our operating expenses include non-cash stock compensation expense and depreciation expense that together amount to approximately $4 million per quarter. With regard to cash, we continue to work on reducing our cash usage.

In 2012, we consumed approximately $77 million, ending the year with a little over $100 million in cash and investments. For 2013, we're targeting to consume approximately $70 million from operations.

Finally, we announced earlier today that we have obtained a $20.5 million loan from Deerfield, one of our largest stockholders. With this additional infusion of cash, we're targeting to maintain at least $50 million in cash through the end of the year.

And with that we would like to open the call to your questions.

Operator

(Operator Instructions) Our first question comes from the line of Bryan Brokmeier from the Maxim Group.

Bryan Brokmeier - Maxim Group

My first question: of the bookings that you had in the quarter, were any of them directly related, or can you attribute any of them directly, to the 100K genome project?

Mike Hunkapiller

No.

Bryan Brokmeier - Maxim Group

Do you expect some instruments to be booked during 2013 due to the 100K genome project?

Mike Hunkapiller

Several of the participants in that group already have systems and we have some expectation that they may need additional capacity in order to carry out the programs. So depending on government budgets the answer is yes.

Bryan Brokmeier - Maxim Group

Any color you can provide on the consumable revenue related to the pilot program that you have right now, in terms of do you have any expectations of what that revenue would be due to the project?

Ben Gong

This is Ben. Most certainly, we expect to generate additional consumable revenues. We have yet to see any significant increase in consumable revenues from that project. One thing that we will point out is, even though Q4 rounded to a similar number as Q3, our consumable revenues (inaudible) about a 5% sequential increase from quarter to quarter, and that was even though we had some holidays to deal with in Q4 and also some impact from Hurricane Sandy.

So we are definitely seeing a nice uptick in consumable revenues. But to your question, it’s hard to put an exact number on what we expect to come out of that particular program; so far we have not seen much, because they are still working on, let's call it, preparing for the sequencing that's supposed to come out of that program.

Bryan Brokmeier - Maxim Group

Okay. You had a nice increase in your bookings and also in your installations. Do you have to make any changes to your sales force, or do you need to add capacity to meet demand?

Mike Hunkapiller

I don't think we have to make any large-scale changes. One of the things that we have done is make a decision to focus some of our sales effort particularly on the consumable usage side, so we brought on capability within the sales group to focus on consumable sales alongside our instrument salespeople.

Operator

And our next question comes from the line of Bill Quirk from Piper Jaffray.

Dave Clair - Piper Jaffray

Good afternoon, everybody; it’s actually Dave Clair in for Bill. Just a couple of quick ones from me. The read length improvement that you discussed on the call, is that something that we should be looking for in the first half of the year or the back half? When do you expect to roll that out?

Mike Hunkapiller

I would expect that we will roll that out in the middle of the year.

Dave Clair - Piper Jaffray

Okay, fair enough, that's good.

Mike Hunkapiller

The sense is that we’ve got to try to target the release of the chemistries no more than once every six months or so, because otherwise there is a point at which it confuses customers. So that…..

Dave Clair - Piper Jaffray

So is that kind of a cadence that we should expect going forward, like continued improvement every six months or so?

Mike Hunkapiller

Well, as I try to emphasize, we actually have improvements one way or another at least once a quarter, if not more, but sometimes that's chemistry, sometimes that's the SMRT chips, sometimes that's the software.

Dave Clair - Piper Jaffray

Okay. And can you give us any kind of color on the pipeline of orders; I mean how many accounts are actively evaluating the system at this point?

Mike Hunkapiller

It’s a substantial number, but I won't go into too much detail on that. I mean, you can kind of calculate from the guidance that Ben gave you that we have to have a substantial number of orders that are new this year in order to get the increase in sales, given the large backlog that we started 2012 with compared to this time.

So we do have a substantial pipeline and it seems to be growing. So that's a good sign.

Dave Clair - Piper Jaffray

Okay. And then the price of the system, is that stable or has that come down a little bit?

Ben Gong

David, if you calculate the average selling price in Q4, it’s probably around $600,000, and that's lower than what we had in, say, Q1 or Q2 of last year, primarily due to geographic mix. So, not a whole lot has happened to our pricing, but depending on where we are selling our systems (we do sell them at different prices in different geographies), the average selling price will move around a bit.

Operator

Thank you. And our next question comes from the line of Tycho Peterson from JPMorgan.

Ramesh Donthamsetty - JPMorgan

Hi guys, this is Ramesh Donthamsetty in for Tycho. Thanks for taking my question.

Maybe just on the gross margin line, if we could dig in a little bit there. Where are the substantial improvements occurring on the COGS line in 2013?

And then longer term, what is the runway for improvement? What are you assuming for topline growth in the out years to get to gross margins closer to the industry averages, say, for the tools industry?

Ben Gong

Yeah, I'll try to give you as much insight into that as I can. At the current revenue levels that we are at, the gross margins on a GAAP basis are not high, but the incremental margins that we generate on incremental sales are definitely higher than what you are seeing on the GAAP gross margin line.

It’s just a matter of how we account for the fixed costs and how much of those fixed costs go into cost of sales. One subtle point: when Susan went through the script, she talked about the change in R&D expense in Q3 versus Q4, and she mentioned how more of our efforts were directed toward manufacturing in Q4 versus Q3; that gives you a sense of how fixed costs get absorbed and how that is reflected in gross margin.

So, one point I want to make is that our variable contribution from sales is definitely significantly higher than what you are seeing in reported gross margins today. Improvement in volumes should, as a principle, lead to improvements in gross margin, because a large portion of the cost is material, especially on the instruments, and the more volume you can generate, the better buying (inaudible) you have on the cost of materials.

You know, the last part of your question is kind of a tough one to isolate: what revenues do you have to get to in order to get to gross margins like you see in the 60s or something higher, as you might have seen in other places. What I will say is that in terms of our incremental contributions, especially on consumables, we're already there.

So, basically the point is as we drive more revenues, we should be getting those benefits of that incremental contribution.

Ramesh Donthamsetty - JPMorgan

Okay, and then maybe just a follow-up on the 100K genome project. I think that’s a great project, and your technology definitely fits in that area.

Should we expect or are you working on other similar types of projects or collaborations within the industry that we should be mindful of for this year?

Mike Hunkapiller

The answer is yes, but I'd prefer not to tell you, and some of our competitors, exactly what those are. I will say that even in the pathogen space, this program is a US-based effort, but other similar agencies around the world have to worry about (inaudible) outbreaks, and so we have seen a lot of interest from other countries who have similar problems and are looking very carefully at the US initiative as a model for how to deal with the problem. So, one of the things that you might expect is that we would like to expand our program in the US to other parts of the world.

And given the relatively unique capabilities we have in the bacterial sequencing space, not only to get complete sequences quickly but also to get the epigenetic profiles of those bacteria, we have some reason to believe that we will be successful there.

Operator

Thank you. And our next question comes from the line of Sylvia Chao from William Blair.

Sylvia Chao - William Blair

Hi, good afternoon. I am asking questions for Amanda today. First, obviously a follow-on to the previous question: for the backlog of sites that you booked this quarter, were there any repeat orders?

Ben Gong

No, Sylvia, those are five new sites.

Sylvia Chao - William Blair

Okay, got it. And then I am curious, given your recent improvements, whether you can give us more details around how your customers are using the RS right now. If you can (inaudible) what percentage of users are using it specifically for base modification versus other applications, any data points around that would be helpful.

Mike Hunkapiller

We do, although it fluctuates from one quarter to the next depending on the projects they are doing, and people are doing multiple projects with various types of applications on the platform. The largest group up to now has been focused on the bacterial world, which is the area that we really pushed right from the beginning of the year, and that's paid off.

We are seeing an increasing amount of usage in organisms with larger genomes, where there has traditionally been a great deal of difficulty in doing anything other than very sort of scanning-type overviews of the sequence, because those genomes contain really miserable repeat regions, and the sequence you get from short reads can only be pieced together in a very short context. And so in several of the agriculturally and food important organisms, both plants and animals, we are seeing a substantial pickup in usage in that arena.

We've got people now focused on looking at the same sort of thing in genomes like humans', to try to get at some of the areas that are impossible to sequence by any of the short read or even the old Sanger methodologies. So as the throughput goes up, and as the capabilities of the software and the base modification analysis have gotten better, we are seeing an increasing amount of usage in that area, and they are sometimes combined; in the case of bacteria, they are doing the genome sequencing itself as well, so it's not an either-or thing, they are doing both.

Sylvia Chao - William Blair

Okay, and then last, a more longer-term question. So, given the improvements you've made to the chemistry, bioinformatics and instrument reliability, how should we think about the RS going forward in terms of overall throughput and maybe on a cost per data point basis, over the next 12-month horizon?

Should we expect more of a two-times improvement every time, or every quarter, when you guys make an announcement, or is it more of an incremental increase in throughput, and therefore decrease in cost per data point, I would say?

Mike Hunkapiller

Well, I think we prefer not to look at it so much as cost per data point as the value of the data points that we provide, all right? You can generate a lot more data cheaply on the short read technologies, but you wind up missing a lot of data, and so we try to focus, given the throughput that we currently have, on those areas where there is a need for a much higher-quality set of data points.

And that changes the equation quite a bit. So we are not trying to go head-to-head with the guys who can just produce massive tons of data, when sometimes that data is not as useful as a smaller amount of data in solving a particular problem.

What I tried to convey in my remarks was that we expect at least to quadruple the throughput this year on the same platform. And we got at least that amount of increase last year.

So if you kind of combine the two, we are making steady progress on throughput; that's why we've got an increasing number of customers, both existing and prospective, who are interested in larger-genome organisms than just bacteria and simple fungi.

Operator

And our next question comes from the line of Dan Brennan from Morgan Stanley.

Dan Brennan - Morgan Stanley

A few questions. First, Mike, on the SMRT Analysis software toolkit, which you highlighted, I think, a few times as maybe a significant driver: maybe give a little bit more color on why that's going to be such an important product for you.

Mike Hunkapiller

Well, what I tried to explain is that the way people thought about using the PacBio long reads early on, given the error profile that they perceived and the software tools that they had, was to take some of the long reads from the PacBio data and overlay short read data on top, either from PacBio CCS sequencing or from Illumina data in particular, and combine those two or three sets of data together to try to piece together an accurate, complete sequence, using the long reads from the PacBio data as a scaffold and the more massive amounts of data from the short read technologies to fill it in.

The problem with that is that you are making completely different sequencing libraries, which costs you material, costs you time, costs you money, and you are trying to come up with a software package that merges those data sets together. While some of our early customers came up with very clever ways of doing that, it was still complicated. So what we did was step back and say there's enough information inherent in the long reads, and as we increase the number of long reads that we get by increasing the average read length, we've actually got enough data in a single sample (inaudible), a single library generation and a single run on the machine, to have both long read data and short read data from PacBio in the same run, and to do the assembly just with that set of data.

So the software tool uses a new assembly scheme based on the PacBio data alone, as well as a set of tools that, once you've got the assembly done, go back and reprocess the data, taking advantage of software (inaudible) we call Quiver, which is based on the peculiarities of the quality values associated with the base calls that we see for every single read. It goes back to a methodology that was actually used, in one sense conceptually, in the original Sanger sequencing efforts, where you trained your software to go in and recognize that certain calls were actually more reliable than others.

You can assign a quality value to those calls and use that to outweigh lower-quality data that would otherwise give you an error in the finished sequence. That sort of software training we've done, along with the algorithm, has boosted the finished accuracy substantially, so that in trials we've done, along with some of our early users, we've actually been able to correct a lot of the mistakes in the original (inaudible) work that was done. That's why we think we can make the claim that we can produce data quality at least as good as what's generally recognized as the gold standard, and in a lot of cases, even higher quality data.

So it's the combination of a couple of software tools, with the front end piecing the data together and the back end kind of proofreading that data, to get you really high-quality complete sequences.

Dan Brennan - Morgan Stanley

Okay, great. How about in terms of the pricing that came up earlier, the $600,000 average price: is there any plan in the future to potentially develop more of a desktop version of the RS?

Mike Hunkapiller

Well, over the long term, sure. I'm not going to give you any dates associated with that, but it's a first-generation tool, and like any first-generation tool, you do what you know to implement a new technology at that point in time.

But if you have clever people, and we have some really clever people here, you learn from that, and you learn what it takes to put it into a form that's not only more robust but has more capability in it, by whatever measure you want to look at.

Dan Brennan - Morgan Stanley

Okay. And then in terms of the growth that you guys are expecting, Mike and team, it certainly looks like you have put a lot of things in place and you have improved visibility.

What are the biggest swing factors toward whatever growth you are internally estimating you are going to realize? How much of it do you feel is the funding environment, versus publications, versus just the sales force knocking on doors right now, given the improvements that you have made?

Mike Hunkapiller

Well, I think there are several factors. Obviously, funding is an issue; it always is, because it's a big-ticket item, and we certainly have people in the US who are anxiously awaiting what happens with sequestration, and government agencies are particularly subject to that. It's one thing for grants: what sounds like a huge budget gets spread around across a large number of grant sites, whereas a government agency that might use the technology is much more quickly impacted by any decision to have its budget cut.

So that's an issue, and how big an issue, who knows; it's been hanging over people's backs to some degree for the last year and a half in this space, and hopefully it will resolve itself eventually. I think that publications per se actually progressed very well this year; we kind of exceeded what we expected from our customers in terms of the number of peer-reviewed publications, and blew away our expectations for the presentations they are showing at meetings in this space. That has enormous impact long term, because customers reference their peers in order to validate their interest in a technology.

I think the final thing, and the key issue, is what I referred to as this misperception of our accuracy, where from a single-molecule, single-pass read perspective, we seem to have a high error rate. No one else reports that, right, so there is no corresponding measure of that particular parameter from anyone else.

What matters in reality is not that at all; it's whether or not, when you build a consensus by reading through the same region multiple times, which everyone has to do for all kinds of reasons, you come out on top. And we actually do come out on top there, in terms of finished accuracy.

But it takes a while to get that message across, and people have to have tools they can easily use to take advantage of the long read capability that we have, because most software effort in the field up until the last 6 to 9 months has been to improve on, or make up for, the deficiencies of short read data in stitching together bits of sequence. And as both we and some of our customers have worked on taking advantage of some of the older assembly techniques, which were present at the time of the original human genome project, that set of tools is becoming much more available.

And given the improvements that we've made, we think we are poised to completely put that issue to rest. But it has clearly been an overhang from a perception perspective.

Those people didn't really understand what the capabilities were, and what the actual accuracy at the end of an analysis was in a lot of cases, unless they were working with the data directly themselves.

Trevin Rard

I think that was the last question for the day.

Mike Hunkapiller

Okay. So in closing, we remain steadfast in our commitment to bring the unique advantages of SMRT technology and products to our customers and the scientific community in general.

We continue to make progress, and we are starting to see some momentum building in system utilization. With further product and software releases on the horizon, it's exciting to see how this will impact both our consumable sales and new system bookings in the coming quarters.

Thank you for listening in, and we will talk again in three months' time.

Operator

Ladies and gentlemen thank you for your participation in today's conference. This does conclude the program and you may now disconnect.

Everyone have a good day.
