
Confronting the Problem of Knowledge Mismanagement in Life Science R&D

Today, more resources than ever before are being channeled into life sciences R&D. In recent years, annual worldwide industrial life science R&D spending has exceeded $160 billion and is expected to continue to grow, with government and nonprofit spending on life sciences adding another $60-70 billion annually to this total. But contrary to expectations, the influx of support doesn't mean that researchers' jobs are getting any easier; in some ways, they are getting much harder.

That's because more life science funding often fails to yield more results. Indeed, even as their cash flow increases, many laboratories in industry and academia still struggle to transfer the fruits of their research into a market-ready product. On top of that, as the industry grows, companies are forced to turn out products at a faster rate and a lower price in order to remain competitive, all while keeping up with increasingly complex regulatory and product requirements.

Thus, even in the golden age of biology, it would seem that ongoing problems with technology transfer continue to plague pharmaceutical, agricultural, and synthetic biology laboratories, large and small, around the world. And it's a challenge to which even the biggest names in the industry are not immune.

We spoke to Chris Stevens, Senior Director of Strategy, Operations and Planning within GlaxoSmithKline’s Biopharm division, about challenges that his company has encountered in the realm of technology transfer.

“The nature of the work that we do to take a candidate molecule from its inception to its commercialization has a lot of inherent complexity to it, and that complexity materializes in many different forms,” Stevens said. “One of the biggest challenges to reining in complexity is in the area of knowledge management. There are many different people who touch a molecule as it goes from just being a molecule out of our discovery labs to being a medicine. Those hands all have their different touch points and their knowledge that they acquire [about] that molecule.”

Most players in industry would agree with him that coordinating the movement of a product from its discovery phase to pilot phase to commercial phase can be extraordinarily difficult, and unsolvable problems with these step-wise transfers have been the undoing of many promising startups and product lines. These problems often owe to what can be considered the “unknown unknowns” of research and development: the variables and factors that, for one reason or another, are not known to the experimenter as being impactful to the research. The way the data is collected, analyzed, and communicated obscures them from the researcher's perception.

Then there are the “known unknowns”—sources of variation that researchers can identify, but cannot predict how they will impact the final product. Chris Stevens discussed, for example, how the partial loss of a medicine’s batch volume at GSK might have unpredictable impacts on the ability to meet product specifications. “Quickly being able to assess that can be quite challenging and also quite impactful monetarily,” he said. In addition, “this is an area that the FDA and other organizations are focused on: drug shortages. We want to make sure that everything that we make that is approved is getting to the patients that desperately need that medicine.”

Yet another source of problem-causing unknowns lies in the imperfect transfer of data from one researcher or department to the next. “The challenge when you transfer that technology from node to node is having the relevant and salient knowledge available to the person who is receiving that handoff,” said Stevens. As he noted, every scientist views his or her object of study through a different lens, and therefore “the language that we choose to use and the context in which the data is collected isn't consistent over the life cycle.” When critical information is lost in translation, this can lead to even more hiccups in the technology transfer process.

But while we can probably all agree that these three things—known unknowns, unknown unknowns, and ineffective communication—are an enormous funding and efficiency sink for life science research, we’re still left asking the question: what’s to be done about it?

While these challenges might seem intractable, some software companies (such as Riffyn) are now rising to meet them. Their solution comes in the form of a collaborative system for R&D process design and data analytics, which allows researchers to visualize an entire experimental process for maximum transparency. This shared visibility quickly turns the “unknowns” of R&D into well-understood cause-and-effect relationships, allowing organizations to correct and control the issues that might otherwise result in failed development and tech transfer.
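The core idea of treating an experimental process as a connected design can be illustrated with a minimal sketch. This is not Riffyn's actual data model; the step names and tracked variables below are invented for illustration. The point is that when each process step declares the variables it introduces, every factor that could influence a downstream result becomes visible rather than remaining an "unknown unknown."

```python
# Hypothetical sketch: an R&D process modeled as a graph of steps, each
# declaring the variables it introduces. Step names and variables are
# invented for illustration; this is not any vendor's real schema.
process = {
    "culture": {"inputs": [], "tracked": ["media_lot", "incubation_temp"]},
    "harvest": {"inputs": ["culture"], "tracked": ["harvest_time", "cell_density"]},
    "purify":  {"inputs": ["harvest"], "tracked": ["column_lot", "buffer_ph"]},
    "assay":   {"inputs": ["purify"], "tracked": ["titer"]},
}

def upstream_variables(step):
    """Collect every tracked variable from steps feeding into this one."""
    seen = []
    for parent in process[step]["inputs"]:
        seen += upstream_variables(parent) + process[parent]["tracked"]
    return seen

# Every factor that could affect the final assay result:
print(upstream_variables("assay"))
```

In a flat spreadsheet or notebook, a factor like `media_lot` recorded three steps upstream is easy to lose; walking the process graph surfaces it automatically for whoever receives the handoff.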

Timothy Gardner, Founder and CEO of Riffyn, stated, “The challenge for R&D data systems has been to simultaneously achieve data structure, global integration and workflow flexibility. Riffyn has introduced process design as a novel organizing principle to address the otherwise intractable data integration and communication challenges. We have also harnessed the latest developments in cloud, database, and statistical data mining technologies to deliver the flexibility and analytical insights demanded by R&D organizations.”

For example, by visualizing every step of the process across the R&D life cycle, a larger number of previously hidden variables can be tracked and accounted for.  By drawing out invisible sources of random and systematic error in those variables, the confounding impact they have can be corrected for—in a sense, taking the noise out of the system and clarifying the signal. Moreover, by correcting for variation, a wider range of experiments become comparable. All of this makes for a more streamlined R&D process wherein researchers can harness the full value of their organization’s aggregate experimental data and process knowledge. What’s more, standardizing intra- and inter-laboratory communication with such tools can also greatly increase a researcher’s day-to-day efficiency.
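The "correcting for variation" idea above can be sketched in a few lines. The example below is a generic, hypothetical illustration (not Riffyn's method, and the run names and numbers are invented): two instrument runs carry a systematic offset, and centering each run on the grand mean removes that run-to-run bias so the measurements become directly comparable.

```python
# Illustrative sketch of removing a systematic batch (run-to-run) offset
# so measurements from different runs become comparable. All data and
# run names are hypothetical.
from statistics import mean

# Yield measurements from two instrument runs; run_B reads ~2 units high.
runs = {
    "run_A": [10.1, 9.8, 10.3, 9.9],
    "run_B": [12.0, 12.2, 11.9, 12.1],
}

grand_mean = mean(v for vals in runs.values() for v in vals)

# Center each run on the grand mean, removing the systematic offset
# while preserving the within-run variation (the real signal).
corrected = {
    run: [v - mean(vals) + grand_mean for v in vals]
    for run, vals in runs.items()
}
```

After correction, both runs share the same mean, so a treatment difference measured in one run can be compared against the other without the instrument offset masquerading as a real effect.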

The truth is that error will always occur and unknowns will always be present. But laboratories and companies can ensure that their resources and manpower are not squandered on these things by taking advantage of tools that help make accessible the valuable knowledge they are producing. And in an increasingly cut-throat industry, that can be a make-or-break difference.

Christine Stevenson

Christine Stevenson is a freelance science writer and adjunct professor of biology at the Maricopa Community Colleges in the Phoenix metropolitan area. She holds an M.S. in Biology from Arizona State University and has a background in both wet lab research and venture capital consulting. She lives in Tempe, AZ with her dogs, cats, chickens, and goat.


1 comment

  • This is actually a much more tractable problem than everyone believes; I would rephrase the issue not as miscommunication (although this is certainly the manifested problem), but as an organizational deficiency, and the misapplication of communication technology.

    Small groups of people working in open-lab facilities with minimum reports and meetings, and as flat a management system as possible; that would overcome much – but not all – of the communication problem mentioned in this post. I can’t count the number of reports and sage commentaries that have made this point over the past few decades, at least.

    A few other details are also required; management has to actually manage, not just call the meetings – and that means that management, not the scientists, should be writing the bulk of the reports that get passed upwards. And the organization has to appreciate and encourage those scientists – working in the open-plan labs – to take an interest in what the person next to them is doing. That leads to collaboration and efficiency. Again – no news here.

    As I said, that is not the complete solution. I agree that the regulatory burden is increasing, but it is also increasingly “known”, in the sense that the regulations can be read and the information they require can be anticipated. The importance of this can be understood by the scientists – if the explanation is made available to them, and I further admit that not all scientists will be interested (although they should be). Keeping it all siloed over in Reg Affairs and not encouraging the scientists to go ask them what is important won’t help, nor will having a bunch of other folks in your organization that have never spent an hour with a scientist at the bench or a shift with an operator in clinical production. Unfortunately, these exercises do not give immediate benefit, so existing organizational structures will not use them.

    And finally, the disconnect between monetization and problem-solving/discovery continues as it always has. Not every cost input is reflected in “the numbers”. I will return to the original premise of the post, that there is a problem in communication. The best way to find out the information you need is to go and ask. In real time. Physically. In person. Otherwise, there will just be continued mis-use of time and money in meetings, reports, presentations, webinars, virtual meetings, emails, and all other activities that are not actually solving the problem that needs to be solved, nor making the product that the market desires.
