A Regulatory Helping Hand

Regulatory agencies, under increasing scrutiny, are imposing stricter guidelines for approving new therapies. This is largely due to a number of high-profile drugs that were commercialized and then withdrawn after findings of adverse effects in patients. For decades, agencies have sought ways to bring better-qualified drugs to market while minimizing risks in clinical trials and reducing the amount of animal testing.

Regulation also requires a degree of data security and data provenance. These topics are of keen interest to the computer industry and have been addressed during the data science boom of the past two decades.

Advances in bioinformatics—including traceability, deep learning, predictive analytics and collaborative decision-making—have enabled agencies to bring recent drugs to market with greater confidence in successful therapy and reduced risk. “Bioinformatics platforms are arising that provide decision and process traceability across all variations and changes at both the product and country level,” Tim Moran, Director of Life Science Research Marketing at BIOVIA, told R&D Magazine. “Integrated bioinformatics platforms deliver connected and comprehensive regulatory and quality capabilities that accelerate therapeutic approval, production and patient adoption in a global landscape.”

Current bioinformatics tools incorporate new data management techniques as they are developed, making it easier for users to collect, store and secure data according to regulatory requirements. Solutions range from software that organizations implement within their own firewalls to software-as-a-service (SaaS) offerings that run in environments that have passed regulatory compliance review.

Improving drug discovery/development
Pharmaceutical development has looked to next-generation sequencing for decades to mine new insights for target discovery, validation and companion analysis, all of which require high-throughput methods to process, analyze and share large amounts of genomic information within an organization and between organizations. “The field of bioinformatics marries multiple disciplines, such as computer engineering, statistics and life sciences, to address this need,” Narges Bani Asadi, PhD, Founder and CEO of Bina Technologies, told R&D Magazine. “Yet, it goes beyond crunching through massive amounts of data to get to the answer faster. It’s also about how to build a robust information technology (IT) infrastructure that employs accurate analysis pipelines that can be maintained easily, fit-for-purpose interfaces for bench scientists to extract information, and data management and security protocols in place to share data.”

“It’s quite possible the easy drugs have already been discovered and developed: the compounds that are most easily synthesized, for diseases with obvious targets, and those that are effective for most people,” Antoni Wandycz, Director, Bioinformatics Solutions, Software & Informatics Div., Agilent Technologies, told R&D Magazine. However, as drug discovery progresses and more complicated models are considered, bioinformatics helps separate signal from noise. As drug trials become more expensive, bioinformatics predictions can reject candidates early, decreasing costs. “And, as drugs become more personalized and precise, bioinformatics tracks the increasing amounts of data, cross-references reliably and eliminates errors,” says Wandycz.

Overall, bioinformatics is important at every stage of pharmaceutical development—from discovery to delivery—advancing the process by enabling the capture, archiving and mining of data throughout the therapeutic product lifecycle.

“Traditional biologic research methods are no longer a viable option as advances in technology have led to an ever-increasing stream of data volume and complexity,” says Moran. Deep learning algorithms are commonplace in applications such as Facebook’s and Google’s image recognition, and have demonstrated the power of large data sets in predicting and prescribing. Just as deep learning algorithms have enhanced the IT industry, bioinformatics applications have enhanced data capture and data mining, while also providing effective modeling and simulation tools for biologists and those working in the pharmaceutical and biopharmaceutical industries.

“As the field has grown, more sophisticated models and algorithms have led to more accurate simulations, enabling research organizations to expend fewer cycles optimizing bioactivity and pharmacological profiles,” says Moran. Predictive analytics can accelerate therapeutics to market at lower costs, with fewer resources.

Bioinformatics has evolved and entwined itself with traditional methods, adopting the data capture, transparency and traceability practices found in other industries such as finance and manufacturing. “Electronic information captured in laboratory notebooks and other electronic recording systems is critical to rapid regulatory approval,” says Moran. “These records contain repeatable outcomes and, more importantly, the methods and entities used to produce them.”

The vast amounts of data required to maintain transparency and support repeatability can only be managed and mined correctly with proper bioinformatics infrastructure. “By enabling faster and less error-prone information exchange, bioinformatics applications have made it possible for pharmaceutical organizations to satisfy regulatory demands while minimizing the increasing costs created by these demands,” says Moran.

However, bioinformatics alone doesn’t help if pipelines aren’t built with proper tracing, logging, process management and data management. “Most home-grown environments have been optimized for accuracy of results,” says Bani Asadi. “However, they tend not to build in the infrastructure to handle constant updates of source tools and databases and growing data volumes.”

“That means the 100 samples you ran an analysis for today probably didn’t use the same pipeline as the 100 samples you ran six months ago,” continues Bani Asadi. Repeatability is difficult if you don’t automatically track what was done previously and cannot roll back to the exact environment.
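The version-tracking problem described above can be made concrete with a small sketch: if every analysis records a fingerprint of the exact tool and database versions it used, two sample batches can be checked for pipeline-level comparability. This is a minimal illustration, not any vendor's implementation; the tool names and versions are invented for the example.

```python
import hashlib
import json


def pipeline_fingerprint(tools, databases):
    """Return a stable hash identifying an exact pipeline configuration.

    `tools` and `databases` map component names to version strings,
    e.g. {"aligner": "2.1.0"} -- the names here are illustrative only.
    """
    # Serialize with sorted keys so the same configuration always
    # produces the same fingerprint, regardless of insertion order.
    payload = json.dumps({"tools": tools, "databases": databases},
                         sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# A batch analyzed today versus one analyzed six months ago, after a
# tool upgrade in between (hypothetical version numbers).
run_today = pipeline_fingerprint({"aligner": "2.1.0"},
                                 {"reference": "hg38"})
run_six_months_ago = pipeline_fingerprint({"aligner": "2.0.3"},
                                          {"reference": "hg38"})

# Differing fingerprints flag that the two batches were not processed
# by the same pipeline, so their results may not be directly comparable.
print(run_today == run_six_months_ago)  # False
```

Storing this fingerprint alongside each batch of results is what makes the "roll back to the exact environment" step possible: the fingerprint identifies which archived configuration to restore.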

Data overload help
When data requirements reach a painful threshold, three major challenges become apparent: massive data storage, repeatable data processing and reliable data availability. “The computer industry at large is dealing with these issues, and bioinformatics is reaping the rewards,” says Wandycz. Databases, whether part of the SQL old guard or the newer NoSQL movement, have evolved to handle larger and more diverse data sets.

Along with this, data scientists are building data analysis tools in various programming languages too quickly to enumerate. Cloud service providers are experimenting with virtualization and new deployment mechanisms that improve consistency and availability. “All these tools can be used by bioinformatics to store, process and share even larger data sets in a more consistent and repeatable manner,” says Wandycz.

Bina is merging bioinformatics tooling and scientific improvements in pipeline analysis with industry best practices in data and process management. “For example, we track the versions of all the tools and databases that our customers use for a particular pipeline analysis,” says Bani Asadi. “We store the results of that analysis in a database, not in separate files.” This, in turn, makes the results easy to find at a later date.

Additionally, Bina stores all the metadata about how the results were generated, along with pointers back to the source genomic file used as input to that analysis. “This makes it very easy to reproduce the results of an earlier analysis,” says Bani Asadi.
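The approach described above—results and metadata in a database, with pointers back to the source genomic file—can be sketched with a small provenance table. The schema and field names here are illustrative assumptions, not Bina's actual design.

```python
import datetime
import json
import sqlite3

# A minimal in-memory provenance store: each analysis run records its
# input file, the pipeline versions used, and the results themselves.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE analysis_runs (
        run_id      INTEGER PRIMARY KEY,
        source_file TEXT NOT NULL,  -- pointer back to the input genomic file
        pipeline    TEXT NOT NULL,  -- JSON of tool/database versions used
        results     TEXT NOT NULL,  -- JSON of analysis output, queryable later
        created_at  TEXT NOT NULL
    )
""")


def record_run(source_file, pipeline_versions, results):
    """Insert one analysis run with full provenance metadata."""
    conn.execute(
        "INSERT INTO analysis_runs (source_file, pipeline, results, created_at)"
        " VALUES (?, ?, ?, ?)",
        (source_file,
         json.dumps(pipeline_versions, sort_keys=True),
         json.dumps(results),
         datetime.datetime.now(datetime.timezone.utc).isoformat()),
    )


# Hypothetical file name, versions and result counts for illustration.
record_run("sample_001.fastq", {"aligner": "2.1.0"}, {"variants_called": 512})

# Reproducing an earlier analysis starts by looking up its input file
# and the exact pipeline versions that produced it.
row = conn.execute(
    "SELECT source_file, pipeline FROM analysis_runs WHERE run_id = 1"
).fetchone()
print(row)
```

Because every result row carries its own input pointer and version metadata, rerunning an old analysis reduces to a single lookup rather than an archaeology exercise across scattered files.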

Continued regulatory approval
Regulatory compliance is often seen as a millstone around the neck of the pharmaceutical industry, “the fleet sprinter who wants to move quickly,” says Wandycz. However, regulatory bodies aren’t interested in slowness for its own sake; they are interested in safety, which is enabled by transparency and predictability. Bioinformatics will continue to improve predictions, and the tools will become better at storing and displaying both the data and the process used to get there. “The right bioinformatics tools can help pharma move more quickly, while still providing the same, if not better, comprehensive transparency required by regulations,” says Wandycz.

The explosion of genomics research and an overall increase in available data have brought bioinformatics back into the spotlight. The truth remains that today’s piles of data can only be efficiently managed, mined and interpreted through bioinformatics tools and methodologies.

“Advanced tools have led to enhanced predictive algorithms and simulation capabilities,” says Moran. “The pairing of current experimental evidence with available historical data enhances predictive capabilities and improves the efficiency and effectiveness of the drug discovery and development process.” The more effectively the industry captures data and maintains traceability and integrity, the more confident regulatory agencies are in the predicted outcomes of emerging drug candidates.

With faster computational power, cheaper storage and newer algorithms being developed, more data will be analyzed and more results generated. “In the genomic space, scientists are looking to examine genomes at greater coverage and depth,” says Bani Asadi. “Advancements in the computational and IT infrastructure arms of bioinformatics will help address the growing need for a genomic management system that not only provides data interpretation, but also report generation, data storage, and provenance and protection.” These are all important regulatory considerations for pharmaceutical companies that are adopting next-generation sequencing technologies to advance personalized medicine.

