Artificial Intelligence (AI): Coming to a Policymaker Near You
By Nancy Bradish Myers
It’s as if Artificial Intelligence has just reached the DC area, and policymakers need to think about it. Software companies, academic institutions and innovative companies hoping to develop and harness the potential of these new AI tools seem to be realizing that the technology has finally reached a tipping point, and that some education and environmental softening needs to occur in DC circles to avoid or overcome potential political and regulatory hurdles.
A few powerful, relevant trade groups that tend to be policy leaders are trying to get their heads around what AI means for health care broadly. So, to launch a branch of the conversation, this blog focuses on AI and its potential uses in biopharmaceutical product development and, as a follow-on, how AI might be considered in the regulatory review process.
To start on the same page, I am defining AI as the capability of a machine to imitate intelligent human behavior. AI involves teaching a computer to recognize patterns through exposure to data and building an algorithm to understand that data, such that the algorithm can “learn” and improve over time. AI is an umbrella term that encompasses a range of sub-types, such as machine learning and natural language processing.
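For readers who like to see the “learn and improve over time” idea made concrete, here is a minimal sketch of a toy learning algorithm in plain Python. The data, features, and labels are entirely hypothetical; this is meant only to show a model reducing its errors as it sees the data repeatedly, not any real drug-development use.

```python
# Toy perceptron: a simple linear classifier that adjusts its weights
# each time it misclassifies an example. The data below is invented
# purely for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit a linear classifier; also return the error count after each
    pass over the data, so improvement over time is visible."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    errors_per_epoch = []
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation >= 0 else 0
            if prediction != y:
                errors += 1
                update = lr * (y - prediction)
                weights = [w + update * xi for w, xi in zip(weights, x)]
                bias += update
        errors_per_epoch.append(errors)
    return weights, bias, errors_per_epoch

# Hypothetical, linearly separable data: two numeric "features" per sample.
samples = [(1.0, 1.0), (2.0, 1.5), (0.0, 0.2), (0.5, 0.0)]
labels = [1, 1, 0, 0]

weights, bias, history = train_perceptron(samples, labels)
print(history)  # error count per epoch drops to zero as the model "learns"
```

Real machine-learning systems use far richer models and vastly more data, but the loop is the same: expose the algorithm to examples, measure its mistakes, and adjust.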
Looking at AI to accelerate drug discovery and overall medical product development:
In the current environment, with high drug discovery costs and drug pricing that is burdening the health care system, the concept of harnessing a learning algorithm to increase efficiency of drug discovery is very attractive. But the question for R&D teams is where to deploy AI to realize the greatest bang for the buck.
Here are some of the best use cases I am seeing:
- Improving drug discovery
- AI can be, and is being, used to help improve target selection. Rather than relying on the traditional trial-and-error approach to drug discovery, which is costly and time-consuming, AI can predict how drug candidates will behave in the body, allowing for more precise candidate selection.
- AI can also be used to predict which combinations of drugs may be optimal for development.
- The hope is that with this increased precision at the front end, AI will improve the probability of trial success at the back end.
- Super-charging the clinical trial process to improve data collection and decision making
- AI can comb through records to identify and select optimal clinical trial participants.
- AI can identify appropriate patient sub-populations by finding correlations between a patient’s genetic profile and drug candidates.
- AI can also identify potential new biomarkers, and help to optimize dose selection.
- Improving drug safety
- Detecting post-marketing safety signals is time-consuming and prone to error; AI can quickly comb through vast and heterogeneous data sources to improve signal detection.
- Real-world evidence (RWE) development
- AI can help open up hard-to-extract data from electronic health records (EHRs), for example, and turn it into usable RWE.
- AI can also be used to crunch biosensor data; with digital sensors and wearables relaying data 24/7 to providers and caregivers, AI is a tool that can help make sense of this continuous data stream.
- Improving manufacturing
- AI can be used to help predict what impact various manufacturing changes may have, potentially eliminating the need to conduct studies prior to implementing such changes.
- Decision support/diagnostic software products
- With a number of AI software products already cleared by FDA for decision support (as I discuss later in this blog), this is an area that is growing and that may vastly improve the physician’s ability to diagnose disease.
A traditionally cautious, conservative regulated industry needs FDA signals to understand the boundaries of regulator comfort levels.
OK, I hate to say it, but many leaders on industry’s drug development teams have been trained to color within the FDA regulatory lines. Taking risks to include unproven technologies in trial designs or protocols is often NOT rewarded. So, to overcome this trained behavior, it is up to FDA to signal where and when it is comfortable with the use of AI tools. Those signals could come in the form of statements from the FDA Commissioner or Office Directors, FDA-industry workshops, or other avenues of formal or informal communication. Overall, FDA telegraphing its views would help R&D teams harness AI.
FDA is the final arbiter as to how AI can be used and how much it actually can streamline the drug development and review process.
I am often asked where FDA is now with regard to its comfort level with the use of AI. At this point, I don’t see a consistent policy or approach from the agency. However, there are clear pockets of enlightenment and interest. From conversations with experts, FDAers, and former FDAers, it’s clear that several agency staff and Divisions are developing their understanding of how AI can be used across various stages of drug and device development. Pilots and partnerships often serve as ways to learn about an issue or a new technology, and they are often precursors to policy development. Though not on the front page of FDA’s blogs or laid out in Administration budget documents, FDA’s interest in AI has been mentioned by the Commissioner in his speeches.
However, there have not been any indications that the agency has a comfort level yet in terms of allowing use of AI to determine causality in drug development studies. Causality is a high hurdle, and it could take major advances in use of the technology to convince regulators that AI would be appropriate here.
FDA is testing the AI waters via the following projects:
- The Oncology Center of Excellence (OCE) is exploring the use of machine learning/AI through its INFORMED (Information Exchange and Data Transformation) initiative.
- INFORMED is designed as an “incubator” to spur collaborative regulatory science research in oncology.
- FDA is integrating its in-house clinical data with RWD sources, and expanding its capabilities around big data analytics to support regulatory decision making.
- Participating organizations include IBM Watson, Flatiron, NCI, and ASCO’s CancerLinQ.
- For the AI component, the goal is to design algorithms in oncology that may be used by sponsors in clinical trials.
- CDER’s Office of Surveillance and Epidemiology (OSE) has been studying AI as a way of helping the agency identify and prioritize medication-related adverse event reports.
- In collaboration with Stanford researchers, OSE recently reported that it created AI models by combining text mining with machine learning.
- According to the agency, the models “produced prioritized report orderings that enable FDA safety evaluators to focus on reports that are more likely to contain valuable medication-related adverse event information.”
- FDA concluded that “applying our models to all FDA adverse event reports has the potential to streamline the manual review process and greatly reduce reviewer workload.”
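The OSE/Stanford models themselves are not public, so as a purely illustrative sketch, here is the general shape of combining text mining with a learned scoring step to put likely-valuable adverse event reports at the top of a reviewer’s queue. Every report, keyword, and weight below is hypothetical, and the scoring rule is a deliberately simple stand-in for the real models.

```python
# Illustrative only: tokenize free-text report narratives (the "text
# mining" step), learn per-token weights from reviewer-labeled examples
# (the "machine learning" step), then rank new reports by score.
import math
from collections import Counter

def tokenize(text):
    return [t.strip(".,;").lower() for t in text.split()]

def train_keyword_weights(reports, labels):
    """Tokens that appear more often in 'valuable' reports than in
    others get positive weight (log-odds with add-one smoothing)."""
    valuable, other = Counter(), Counter()
    for text, label in zip(reports, labels):
        (valuable if label else other).update(tokenize(text))
    vocab = set(valuable) | set(other)
    return {t: math.log((valuable[t] + 1) / (other[t] + 1)) for t in vocab}

def score(text, weights):
    return sum(weights.get(t, 0.0) for t in tokenize(text))

# Hypothetical training reports, labeled by whether a safety reviewer
# found them to contain valuable adverse event information.
train = [
    ("patient hospitalized after severe liver injury", 1),
    ("report of rash resolving without treatment", 0),
    ("severe anaphylaxis requiring hospitalization", 1),
    ("duplicate report no new information", 0),
]
weights = train_keyword_weights([t for t, _ in train], [l for _, l in train])

# Rank unseen reports so the likeliest-to-matter ones surface first.
incoming = [
    "minor headache no action taken",
    "severe liver failure patient hospitalized",
]
ranked = sorted(incoming, key=lambda r: score(r, weights), reverse=True)
print(ranked[0])
```

The point is not the particular scoring rule but the workflow: the model does not replace the safety evaluator; it reorders the queue so human attention goes first to the reports most likely to matter.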
And although not much is public at this point, FDA Commissioner Scott Gottlieb does appear to be interested in leveraging AI further in the area of safety signal detection. For example, he said recently that FDA plans to look at natural language processing (NLP), a type of AI that’s useful for codifying unstructured data, to “markedly speed recognition and remediation of emerging safety concerns” across drugs, biologics and medical devices.
The Center within FDA that has had the most experience with AI to date is CDRH, which has already cleared products that feature AI. For example, in early 2017, CDRH granted 510(k) clearance to Arterys for its Cardio DL, a software product that uses deep learning techniques for cardiovascular image analysis, to help physicians with diagnostic decision making. The label does not include an actual diagnosis claim; rather, it is a support tool to provide relevant clinical data to the physician.
On the policy front, CDRH has not yet issued any guidance or other documents outlining its thinking or approach to AI software. However, it’s possible that CDRH’s Software Pre-Certification pilot program may provide regulators with an avenue for exploring issues around development and validation of AI software.
The bottom line is that at this point, FDA seems to still be learning about how AI will be used and what its boundaries may be. Currently there is not an agency-wide policy on AI – most likely because AI has so many potential applications across the range of FDA responsibilities.
It’s also important to note that, on another policy front, interested parties are making sure AI is on Congress’ radar. For example, the House Oversight & Government Reform Committee’s Subcommittee on Information Technology, chaired by Rep. William Hurd (R-TX), recently kicked off a series of hearings on the topic. Subcommittee members will explore how AI may transform a range of sectors, including health care; alongside the talk of self-driving cars, for example, there is strong interest in health applications of AI, particularly in diagnostics and decision support tools. Expect to hear more over the next few months on this front, as the subcommittee looks into the uses of AI across the federal government, along with relevant regulatory issues.
I could go on and on. But I will close with this parting thought: As the biopharma and device industries collaborate with tech companies developing AI algorithms, there is a very real opportunity to partner and learn alongside regulators. FDA has demonstrated over and over again that it embraces pilot programs as a way to test the waters and gain experience before having to regulate a technology or tool. It is only with experience and tangible examples that the agency will be able to clarify regulatory pathways moving forward.
If you are interested in developing and harnessing this space, now is a great time to begin a conversation with the agency and to think through ways of giving the reviewers involved the experience they need to further embrace this technology and allow innovation to proliferate.