Global regulatory agencies are not ready for artificial intelligence (AI) that doesn’t have a human in the loop. Validating AI solutions will require particular emphasis on how they are built and how they are intended to be used.
An Assessment of the AI Situation in Life Sciences
Exploring the capabilities of generative AI (GAI) for life sciences use cases is exciting, but as an industry, we must be mindful about what comes out of it. Is the information secure? Is it compliant? How are the algorithms validated? What are the risks for providers and patients?
Currently, companies are learning to use GAI for practical applications like:
- Content creation: producing an email or blog draft and summarizing articles and webpages to develop an outline
- Data gathering: getting answers to questions through prompts and discovering more resources for specific data
- Software coding: generating and verifying code and translating code from one programming language to another
- Transcription: converting doctor-patient conversations into medical notes that can be entered into an electronic health record system
In the (not far off) future, GAI will make great strides in activities like:
- Aiding scientific discovery
- Contributing to economic growth
- Driving innovation and using GAI to discover more opportunities for GAI
- Predicting and prescribing treatments based on interactions, medical history, health trends and risk-reduction algorithms
When linked to individual or large-scale historical patient data, large language models (LLMs) can analyze existing symptoms and potential causes, identify trends, and recommend the most effective treatments based on a patient’s genetic makeup.
Technology that provides insights through data analytics is great, but domain knowledge is imperative. After all, data has to be interpreted by someone who is well informed about how it will be used: the “human in the loop” of AI and GAI, someone who knows whether the data is behaving correctly in the context of GAI.
For example, pharmacists can use AI to analyze a patient’s medical records, lab results, and medication profiles to discover possible multi-drug interactions. With that information, pharmacists are better able to assess the safety and efficacy of medications and to tailor their recommendations to each patient’s specific requirements.
AI Opportunities for Non-Regulated Activities
In the life sciences community and within USDM, we are seeing more frequent use of GAI. It’s a powerful tool that opens up opportunities to improve performance and reduce administrative labor.
For example, running a business intelligence (BI) report through an LLM service such as ChatGPT or Amazon Bedrock gives you insight into business performance. Your prompt might ask about revenue performance or gross profit variances. The AI analyzes monthly or quarterly data, synthesizes it, and provides recommendations for improvement.
Expanding on that example, you can think through annual revenue planning based on historical data, then identify who is responsible for what and determine objectives and key results (OKRs).
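As a rough sketch of that kind of prompt, the example below sends a quarterly revenue extract to an LLM through the OpenAI Python SDK and asks for variance analysis and recommendations. The model name, file path, and prompt wording are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: ask an LLM to analyze a quarterly BI extract.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and CSV path are placeholders.
from openai import OpenAI
import pandas as pd

client = OpenAI()

# A quarterly revenue extract exported from your BI tool (hypothetical file).
report = pd.read_csv("quarterly_revenue.csv")

prompt = (
    "You are a financial analyst. Review this quarterly revenue data, "
    "explain the gross profit variances, and recommend three improvements.\n\n"
    + report.to_csv(index=False)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In a life sciences setting, the same call would typically go through an approved, access-controlled AI service rather than a public endpoint, for the data-control reasons discussed later in this article.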
Circling back to an example of reducing administrative labor, it’s possible to create a job description using generative AI. Start with a template, then ask the LLM chatbot to add specifics like years of experience and areas of expertise. Continue to iterate on the results until the job description accurately defines your company’s needs.
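A similarly hedged sketch of that iteration loop: each refinement request is appended to the conversation history so the model builds on its previous draft. The prompts and model name are illustrative only.

```python
# Minimal sketch: iteratively refine a job description by keeping the
# conversation history. Prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()
messages = []
draft = ""

refinements = [
    "Draft a job description from this template: Senior QA Engineer, life sciences, remote.",
    "Require 8+ years of experience in GxP-regulated software validation.",
    "Add expertise with cloud compliance and AI/ML governance.",
]

for request in refinements:
    messages.append({"role": "user", "content": request})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    draft = response.choices[0].message.content
    messages.append({"role": "assistant", "content": draft})

print(draft)  # the latest iteration of the job description
```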
Three Phases of Using GAI
A solid understanding of GAI doesn’t happen quickly. GAI is continuously evolving and revealing more ways it can be used in any company, in any location, for any purpose. This big, nay, colossal picture is hard to simplify, but we’ve got to start somewhere. Here we go:
- Phase 1: Learn. In this stage of experimentation, technical and non-technical users learn what is possible through prompts that help them discover, create, and automate.
- Phase 2: Control. Governance comes into play as citizen developers create applications that will be used by others. A platform and processes are put in place to support responsible AI. Impact and risk assessments are performed and precautions are taken.
- Phase 3: Expand. Foundational models are maturing and using the wealth of data that’s provided to them. Companies can drive innovation by sharing knowledge, solving problems, and standardizing processes. Continuous improvement is at the forefront.
Of course, privacy and security are big concerns. Cloud technology helps run the show, but regulations have the final say, and life sciences organizations must balance those requirements with the need to maintain control of their data and intellectual property (IP). Employees must be trained on responsible use of AI and GAI, because all it takes is one mistake for your IP and data to be sucked into the LLM universe. Then it’s fair game for anyone.
It’s important to choose AI and GAI services carefully. Risks to patient information are critical considerations for regulators, whereas risks to IP for system development are critical considerations for providers.
To understand how an AI system works in practice, it’s important to know how it was trained. Datasets, algorithms, and code help to train the model, then evaluation datasets validate choices made during development.
So then, how do you validate machine learning? First, you need to understand the dataset that was used to build the model, the algorithm, and the code. Then, separate datasets are used to confirm that the results are secure, accurate, and of high quality (a simplified sketch follows the list below). Considerations include:
- Do you use only internal data, which is more secure but may limit the perceived benefit?
- Do you use open AI sources, which draw on a wide range of data that benefits users but could expose your own?
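As a simplified sketch of the held-out evaluation idea, the example below trains a model on one portion of a dataset and measures accuracy only on data the model never saw during training. The dataset and model are stand-ins for whatever your system actually uses.

```python
# Minimal sketch: validate a model against data it was not trained on.
# The dataset and model are placeholders; a GxP validation would also
# document data lineage, the algorithm, and predefined acceptance criteria.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Separate datasets: one to build the model, one reserved for evaluation.
X_train, X_eval, y_train, y_eval = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Evaluate only on held-out data to check that the model generalizes.
accuracy = accuracy_score(y_eval, model.predict(X_eval))
print(f"Held-out accuracy: {accuracy:.3f}")
```

In practice, the evaluation set would be a documented, version-controlled dataset with acceptance criteria defined before testing, not an ad hoc split.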
Compliance Impact and GxP Risk Assessment
There are systems in the life sciences industry that have been used for decades. They’ve been validated, they’re compliant, and they’re trusted to perform consistently.
But what about a system that’s constantly learning and suggesting new ways of working? How can you be sure it’s trustworthy? The European Union AI Act passed a plenary vote by the European Parliament in June 2023, but compliance is still years away.
The current stopgap is a voluntary code of conduct. For example, companies hire ethics, risk, and compliance officers. They develop guidelines for the use of AI and GAI for professional and citizen developers. Based on critical workflow risks, they have a human in the loop who applies domain knowledge to ensure systems are functioning with accurate data and that safe products are hitting the market. They restrict data access to only the internal AI service, thereby making their data inaccessible to the greater life sciences community.
Now, what about regulatory considerations? What’s the likelihood of a regulatory body accepting drugs and medical devices that were developed using AI but had no human in the loop? At the moment, it’s probably very low. The industry needs the checks and balances that people with domain expertise provide.
Do you have a governance process in place for AI and GAI? Contact USDM today to discuss your next steps—or first steps.
Contributing subject matter expert: David Blewitt, Vice President of Cloud Compliance