Back to Basics: Developing a Common Language to Automate Regulatory Data
Duncan van Rijsbergen
Duncan van Rijsbergen is associate director of regulatory affairs at Iperion, a globally operating life sciences consultancy firm.
Digital transformation in complex regulated sectors can be a challenge. Many life sciences companies are struggling with how to ensure that high-quality, consistent data can be shared across systems. One of the main issues is the lack of a common vocabulary to describe the data. Here are some practical action points to get companies started on their data quality journey.
Life sciences companies are increasingly focused on the need for digital transformation. Yet they face basic issues, such as getting up-to-date, consistent data to flow across functions and systems.
Regulatory systems contain data on products and licenses; procedural data records interactions with authorities about a license, from the initial application through to later changes to the license. Elsewhere in life sciences companies, expert functions from manufacturing to clinical teams collect their own data on devices or drugs. Typically, there is no communication between regulatory systems and the systems those expert functions use. Manufacturing and clinical teams collate their data into a summary document and send it to the regulatory team, which then assembles that data into a submission dossier, ready to send to external authorities for approval.
In clinical development, data records clinical studies. In manufacturing and the supply chain, the enterprise resource planning system typically holds data about products and materials. Meanwhile, in the regulatory function, there is a regulatory information management system, which also contains data about the same products, but from the perspective of regulatory approval. Those systems are most often in completely separate worlds, with little or no interoperability. And yet, a change made in the manufacturing world must be reflected in the license. Currently, sharing that information is done using a large number of forms and perhaps even through an intermediate system that stores those forms.
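To make the disconnect concrete, here is a minimal sketch in Python of how one and the same product might appear in an ERP system and in a regulatory information management system. All field names and values are invented for illustration; real systems will differ.

```python
# Hypothetical records for one and the same product, as two
# disconnected systems might hold them.
erp_record = {
    "material_no": "MAT-00417",           # internal material number
    "descr": "Paracetamol 500 mg tab",    # free-text short description
    "plant": "NL01",                      # manufacturing plant code
}

rims_record = {
    "product_name": "Paracetamol 500 mg film-coated tablets",
    "licence_no": "EU/1/23/1234/001",     # illustrative license number
    "site": "Site NL01, Utrecht",         # site at a different granularity
}

# Matching these records means answering the questions a shared
# vocabulary settles up front: which identifier links them, and at
# what granularity are "product" and "site" described?
shared_fields = set(erp_record) & set(rims_record)
print(shared_fields)  # set(): not a single field is even named the same
```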
It is particularly important to get back to basics when it comes to structured regulatory chemistry, manufacturing, and controls (CMC) data. Getting specification testing data from the laboratory information management system to the regulator and back can easily take a year or more: extracting the data, entering it into regulatory documents, sending those to regulatory bodies, and then reversing the process for implementation. If this process were automated, the timeline could be reduced to mere weeks, enabling products to be brought to market and made available for patient treatment much more rapidly.
Data quality issues
A data-first starting point is key. If companies store clean and consistent data, rather than documents, they will be in a much better position to automate processes and share this data efficiently with regulatory bodies. Yet companies continue to struggle with basic data quality issues.
First, there is the compliance issue: licenses must accurately reflect activity relating to clinical trials or manufacturing. In a regulated environment, compliance failure could lead to product recall, license suspension, or fines. Datasets in operational settings may not align with datasets shared with the authorities. While the data is essentially the same, the way it is presented may not match exactly across the two systems. The granularity of the data (how it is worded or linked) might be slightly different.
Second, there are issues tracking changes in data over time. Drugs that are produced over many years will undergo changes in composition or manufacture, and these must be reflected both in regulatory systems and in the company's operational systems. There is a need not only to change the data but also to keep it in sync. That synchronization becomes much more difficult in a long-winded, multi-step process in which the data changes form repeatedly, going from structured data to document and back to structured data, with manual copying along the way.
Ideally, the syncing process should be integrated with the regulatory process. That way, when the company introduces improvements to the product, testing data can be shared with the regulator much more quickly, accelerating the time it takes to get product enhancements to market. Reducing manual steps also removes opportunities for human error and cuts costs.
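As a minimal sketch of what keeping the data in sync could look like, the following hypothetical change event is captured once as structured data and applied to both an operational record and a regulatory record, instead of being retyped into documents along the way. Class and field names are illustrative, not taken from any real system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeEvent:
    """One structured record of a change, captured once at the source."""
    field_name: str
    new_value: str
    effective: date

@dataclass
class ProductRecord:
    name: str
    attributes: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

def apply_change(record: ProductRecord, change: ChangeEvent) -> None:
    # Keep an audit trail of the old value rather than overwriting silently.
    old_value = record.attributes.get(change.field_name)
    record.history.append((change.effective, change.field_name, old_value))
    record.attributes[change.field_name] = change.new_value

# The same event drives both systems, so the two records cannot drift apart.
operational = ProductRecord("Paracetamol 500 mg")
regulatory = ProductRecord("Paracetamol 500 mg")
event = ChangeEvent("shelf_life_months", "36", date(2024, 1, 1))
for record in (operational, regulatory):
    apply_change(record, event)
```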
Effortless compliance
Commonly, compliance itself is the goal. Ideally, though, compliance should be effortless: a by-product of a company's activities, not the focus of them. When data is kept consistent and in sync automatically through a well-designed process, compliance becomes secondary. It will simply happen by itself.
Here are five practical action points to help get companies started on their data quality journey:
- Communicate with all the stakeholders involved in the process. Together, identify use cases for data flow continuity, and agree on how best to measure the benefits of automating data integration. Getting everyone’s buy-in and developing solutions collaboratively drives transparency and improves trust among functions. This approach enables people within a fairly long process chain to be confident that predecessors have done things correctly and given them data they can work with.
- Develop a shared vocabulary to talk about data held in common across functions. Presenting product data across the organization in a way that everybody understands, with a common language, builds trust as well as driving operational excellence and innovation. The sketch after this list shows one way such a shared model can work in practice.
- Standardize data descriptions. Once use cases have been identified and a common vocabulary agreed upon, consider how best to standardize data relating to complex products. The ISO IDMP (Identification of Medicinal Products) standards are a concerted effort to establish a common way to describe this data. The quality and consistency of individual data elements are also key to standardization initiatives such as the US FDA's drive to standardize Pharmaceutical Quality/CMC (PQ/CMC) data elements for electronic submission. The more widely accepted a product model is, the easier it is to share with external parties: not only regulators, but also partners such as labs, manufacturers, and research organizations.
- Ensure processes are properly aligned. There needs to be a robust process for capturing and sharing changes over time, and for making sure that systems stay in sync with as little time lag as possible. Focus on bottlenecks. There may be one process in an operational setting and another in the regulatory function. Where do they meet? Where does the data get exchanged, and how could that be improved?
- Identify suitable technological solutions. The initial focus should not be on finding the right software, but on the system architecture and on how and where to connect systems. One approach is to build a bridge between two systems: a point-to-point connection. The issue then is maintaining the link and upgrading functionality in two discrete systems that talk to each other. A better option is to develop a looser coupling, and this is where the common language model comes in, as sketched below. It is important not to take a static approach ("how do I solve the problem now?") but to consider how the solution will be maintained, and innovated on, over time. This is not about individual systems, but about a system of systems.
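As an illustration of the looser coupling described in the last action point, here is a minimal sketch of a system of systems in which each system translates to and from one canonical product model rather than talking to every other system directly. With N systems this means N adapters instead of a point-to-point link for every pair. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CanonicalProduct:
    """The shared 'common language' model every system maps to and from."""
    product_id: str
    name: str
    strength: str

class ErpAdapter:
    """Translates between the ERP's own fields and the canonical model."""
    def to_canonical(self, rec: dict) -> CanonicalProduct:
        return CanonicalProduct(rec["material_no"], rec["descr"], rec["dose"])

class RimsAdapter:
    """Translates between the RIM system's fields and the canonical model."""
    def from_canonical(self, p: CanonicalProduct) -> dict:
        return {"licence_no": p.product_id,
                "product_name": p.name,
                "strength": p.strength}

# A change entered once in the ERP flows to the regulatory system via
# the canonical model; neither system needs to know the other's fields,
# so either can be replaced without rewiring the whole landscape.
erp_change = {"material_no": "MAT-00417",
              "descr": "Paracetamol 500 mg tablets",
              "dose": "500 mg"}
canonical = ErpAdapter().to_canonical(erp_change)
print(RimsAdapter().from_canonical(canonical))
```

The same pattern extends to any number of systems: adding a laboratory or clinical system means writing one more adapter against the shared model, not one more interface per existing system.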
The core business of a pharmaceutical company is to get the best medicines to patients. Data processing should be a hygiene factor. Ensuring data quality and integration won't in itself generate innovation, but it will provide a platform on which to innovate. A consistent vocabulary is key to supporting effective data communications.
The idea that technology, systems, and software can resolve data quality issues is appealing. In fact, knowing your data is key to getting this right. The technology is secondary to a good understanding of the data and data flows within the business. Life sciences companies are experts in their own data. Once they have mapped and standardized it, they will be ready to specify the technology needed to create automated, interoperable data flows, saving time and money, ticking compliance boxes, and providing a platform for innovation.