Managing Generative AI Risk for Life Sciences Use Cases

Managing Generative AI Risks

Fear of the unknown keeps many life sciences organizations from adopting artificial intelligence (AI). Yet applying generative AI to life sciences use cases has the potential to advance scientific understanding and improve patient outcomes.

What is generative AI (GAI)? It’s a type of AI that takes a prompt (for example, “analyze <topic> patterns among health care providers in <region>”) and uses large language models (LLMs) to interpret pre-existing data and generate the requested content.
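
As a minimal illustration of that prompt-in, content-out pattern, the sketch below assumes the OpenAI Python SDK (v1+) and a placeholder model name; the topic and region in the prompt are also placeholders. Any LLM provider with a comparable chat-completion interface follows the same basic flow.

```python
# Minimal prompt -> LLM -> generated content sketch.
# Assumes the OpenAI Python SDK (v1+) and an API key in the environment;
# the model name and the prompt's topic/region are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Analyze prescribing patterns among health care providers "
    "in the Pacific Northwest and summarize the top three trends."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute your organization's approved model
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,      # lower temperature for more reproducible output
)

print(response.choices[0].message.content)
```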

The risks and benefits of AI systems depend on how they are used, how they interact with other AI systems, and who’s in charge of them.

Risks in traditional software or information systems are evaluated as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact. Then they’re quantified from a dollar, safety, and/or regulatory perspective.

However, this approach overlooks what makes AI systems unique: they are trained on data that changes over time, which can have significant and unexpected effects on how a system behaves and whether it can be trusted.
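
To make that concrete, here is a minimal sketch of how an organization might record a GAI risk using the traditional dimensions above plus an AI-specific data-drift flag. The field names, scoring, and dollar figure are hypothetical illustrations, not a prescribed framework.

```python
# A hypothetical risk-register entry combining the traditional dimensions
# described above with an AI-specific concern: training-data drift over time.
# Field names, values, and the dollar estimate are illustrative only.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    name: str
    time_horizon: str           # "short-term" or "long-term"
    probability: str            # "high" or "low"
    scope: str                  # "systemic" or "localized"
    impact: str                 # "high" or "low"
    dollar_exposure: float      # estimated financial exposure (USD)
    safety_relevant: bool       # could the risk affect patient safety?
    regulatory_relevant: bool   # could the risk trigger regulatory findings?
    data_drift_monitored: bool  # is training/input data drift being tracked?

biased_training_data = AIRiskEntry(
    name="Biased or unrepresentative training data",
    time_horizon="long-term",
    probability="high",
    scope="systemic",
    impact="high",
    dollar_exposure=5_000_000.0,  # placeholder estimate
    safety_relevant=True,
    regulatory_relevant=True,
    data_drift_monitored=False,   # flags a gap: drift is not yet tracked
)
```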

GAI Risks, Good Governance, and Senior Leadership

Like other software, generative AI is exposed to cybersecurity risks such as phishing, social engineering, and zero-day vulnerabilities. Other risks are specific to GAI:

  • Biased results. Training data used in a GAI model may not represent the entire chemical space and may produce biased or limited results. This could lead to drug candidates that are ineffective or unsafe when tested in real-world scenarios. 
  • Ethical considerations. GAI models can propose novel chemical compounds, but there are ethical considerations related to risks and benefits, especially if AI-generated compounds are untested or have uncertain safety profiles. 
  • Inadequate model performance. GAI models may not accurately predict interactions between molecules and proteins, which could result in selecting unsuitable drug candidates or misinterpreting their potential effects. 
  • Model interpretability. The rationale behind AI-generated molecular structures may be difficult to understand, which hinders the ability to explain the model’s decisions and raises concerns about the transparency and trustworthiness of generative AI models. 
  • Output validation. GAI creates content autonomously based on patterns learned from data, so there may be no ground truth to compare the output against, making it difficult to determine whether the output is correct or valid (see the validation sketch after this list).
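
As one example of output validation in the drug discovery context, the sketch below checks that AI-generated molecules at least parse as valid structures and pass simple Lipinski-style drug-likeness filters. It assumes RDKit is installed; the thresholds and example SMILES strings are illustrative, and passing these checks is only a screening step, not evidence that a candidate is safe or effective.

```python
# One way to sanity-check GAI-generated molecules when no ground truth exists:
# confirm each SMILES string parses and passes simple Lipinski-style filters.
# Assumes RDKit is installed; thresholds are illustrative, not validation criteria.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_basic_checks(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                       # not a chemically valid structure
        return False
    return (
        Descriptors.MolWt(mol) <= 500
        and Descriptors.MolLogP(mol) <= 5
        and Lipinski.NumHDonors(mol) <= 5
        and Lipinski.NumHAcceptors(mol) <= 10
    )

# Hypothetical model outputs: aspirin, caffeine, and an invalid string.
generated = ["CC(=O)Oc1ccccc1C(=O)O", "Cn1cnc2c1c(=O)n(C)c(=O)n2C", "not-a-molecule"]
for s in generated:
    print(s, "->", "keep for review" if passes_basic_checks(s) else "reject")
```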

As mentioned earlier, the risks and benefits of AI systems depend on who’s in charge of them, so good governance needs to be in place. Once an organization decides to use GAI, it needs to establish processes, develop standard operating procedures (SOPs), and implement guidelines for each type of user (e.g., citizen developers, super users, and prompt engineers).

Of course, senior leaders in the organization must commit to demonstrating the benefits of GAI and managing risk by establishing responsibility and maintaining accountability. A GAI policy is a good place to start, followed by compliance training and risk management courses.

Generative AI Use Cases for Life Sciences

GAI tools such as ChatGPT are now widely available, putting the technology within reach of most organizations. For life sciences use cases, GAI has the potential to advance scientific understanding, improve patient outcomes, and enhance quality in care delivery. It may also save billions of dollars globally by streamlining R&D processes, enabling informed decision-making, and optimizing resource allocation.

There are countless use cases, but here are three for the life sciences industry.

Drug Discovery and Development

Quality: 

  • Reducing the time between research and clinical trials means faster access to potential life-saving treatments for patients. 
  • Using GAI to help predict drug side effects can minimize unforeseen complications in later stages.

Financial saving: 

  • Enabling AI-driven molecule prediction can substantially reduce the number of drug candidates that must be synthesized and tested in the lab. Potential savings can run into the hundreds of millions of dollars in a drug development process that often costs billions. 
  • Reducing the time from discovery to market means that pharmaceutical companies can realize revenues faster and improve their ROI.

Example: Atomwise, a startup that uses AI for drug discovery, claims that its technology can predict how well molecules may treat certain diseases and can compress the initial stages of drug discovery from years to days. Such efficiency could lead to massive cost savings in the drug discovery process.

Medical Image Analysis and Diagnosis

Quality: 

  • Improving the accuracy of detecting anomalies in medical images can lead to earlier disease diagnosis and more lives saved. 
  • Generating synthetic images using GAI can aid in training medical professionals and increase their diagnostic accuracy. 

Financial saving: 

  • Diagnosing and treating disease earlier is often less intensive and less expensive than treating it in later stages. 
  • Reducing the number of misdiagnoses or overlooked issues means less follow-up testing and fewer malpractice cases.

Example: PathAI, a company focusing on pathology, claimed that its AI-driven system could help pathologists improve diagnosis accuracy. Accurate diagnoses can lead to the correct treatment faster, potentially saving thousands of dollars per patient by avoiding ineffective treatments and associated complications.

Patient Risk Stratification and Personalized Medicine

Quality: 

  • Using personalized plans and tailored treatments leads to better patient outcomes and fewer side effects. 
  • Improving the response time to treatment for high-risk patients helps to reduce morbidity and mortality. 

Financial saving: 

  • Implementing personalized or tailored treatment plans can prevent hospital readmissions and long-term complications, which results in savings related to associated costs. 
  • Providing proactive care for high-risk patients can mitigate the need for emergency care and intensive treatments.

Example: Insitro, a drug discovery company, combined machine learning with LLMs to analyze vast amounts of scientific literature and identify new drug targets and potential treatments for diseases. The result was improved accuracy and relevance of natural language processing (NLP) tasks, faster time to market for NLP models, and improved scalability from building on pre-trained models.

Conclusion

Generative AI holds great potential in the life sciences industry, particularly in clinical work, where it can augment medical research, improve patient care, and accelerate drug development. Establishing robust AI governance helps organizations manage AI-related risks without overly restrictive controls that stifle innovation.

Let USDM assist you in understanding global AI laws and regulations—contact us today.
