The EU AI Act intends to regulate artificial intelligence (AI) technology in the European Union, requiring AI systems to meet strict standards for transparency, accountability and human supervision.

It sets guidelines and requirements for AI use, with a focus on promoting trustworthy AI that respects fundamental rights.

The pharmaceutical industry is already strictly regulated because of its impact on patients and clinical trial participants, with oversight extending to post-market surveillance and enforcement.

Compliance is vital. In contrast to this highly regulated approach, there has been criticism that the EU AI Act is too loose in nature and therefore open to interpretation. So, will the Act be a help or a hindrance to the pharma industry?

Due to come into force in the summer of 2024, the EU AI Act has far-reaching consequences for pharma companies that are using or plan to use AI, as it aims to standardise the rules for AI's use, development, placement on the market and adoption.

The wide scope of the Act has the potential to impact developers and deployers of AI systems based in the EU, as well as those based elsewhere whose systems might be used within the EU.

This is important for pharma companies, which increasingly leverage AI to improve the efficiency of drug discovery, clinical trial recruitment and the identification of new biomarkers.

AI can enable new, unique models that make medications more affordable while also improving quality.

New balance

A major challenge for those drafting the Act is the definition of AI itself, which has already been amended several times across the Act's drafts.

The definition of AI is broad and potentially far-reaching, leading to uncertainty and ambiguity about what AI actually encompasses. To ensure the Act’s effectiveness, it is crucial to first address the challenge of defining AI clearly.

It is also fundamental that the use of AI strikes an appropriate balance between what is acceptable to society and what is not.

For example, AI systems should not deliberately exploit vulnerabilities or classify people by their age, social status or other demographic characteristics. It is also critical that the Act still enables innovation and progress, allowing AI to add value to business activities.

The challenge is to strike the right balance between these two ends of the spectrum: what drives innovation and improvement versus what is exploitative or discriminatory. Within clinical trials, AI can be used to predict which groups or demographics will have the best outcomes and success rates.

The role of AI is fast becoming fundamental to the successful and efficient execution of clinical trials. As part of this, digitalisation of data is crucial to the successful use of AI.

However, it is important for pharma companies to consider where this falls within the confines of the Act, because AI's use in clinical trials often deals with demographics, backgrounds and ethnicity.

Ethically, concerns around AI bias have been a key discussion point. It is here that any ambiguity within the EU AI Act, if not properly addressed, has the potential to hinder innovation and progress within the pharmaceutical industry.

Arguably, the Act exists to create an obligation for organisations to state how AI will be used; the challenge for pharma will be the burden of extensive reporting rather than deciding where AI could best be used.

Categories within the Act

The Act categorises AI into several broad areas from a risk perspective. ‘Unacceptable risk’ systems are those that will be prohibited under the Act.

This includes systems that deploy subliminal techniques beyond a person’s consciousness to influence behaviour, as well as systems that exploit the vulnerabilities of a specific group of people, potentially discriminating on grounds such as age, disability or economic and social background.

‘High risk’ use cases are systems used in areas, such as critical infrastructure, where they could put the life and health of individuals at risk.

These high-risk cases will now require a conformity assessment, which outlines how the requirements set out in the Act will be met. Examples could include AI systems used for the clinical management of patients, such as diagnosing patients and informing therapeutic decisions, or AI used in precision-medicine applications.

‘Lower risk’ use cases are systems subject to a number of transparency requirements. An example would be using AI in a screening session, where disclosure of the AI’s use will be required. ‘Minimal’ or ‘no risk’ systems will be subject to voluntary codes of conduct.

Examples could include AI or machine learning used in early-stage clinical trials to analyse data and model future studies, or AI systems that support effective inventory management, efficient supply chains and the reduction of medicine waste.
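To make the tiering concrete, the sketch below shows, in Python, how a compliance team might maintain a simple inventory mapping AI use cases to the Act’s broad risk tiers. The tier names follow the Act’s categories, but the specific use cases, their assignments and the cautious ‘default to high risk’ behaviour are illustrative assumptions, not classifications taken from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Broad risk tiers described in the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited under the Act"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations apply"
    MINIMAL = "voluntary codes of conduct"

# Hypothetical inventory: these use cases and tier assignments are
# illustrative assumptions only; real classification requires legal
# review against the final text of the Act.
USE_CASE_TIERS = {
    "subliminal behavioural manipulation": RiskTier.UNACCEPTABLE,
    "clinical diagnosis and therapeutic decision support": RiskTier.HIGH,
    "precision-medicine applications": RiskTier.HIGH,
    "AI-assisted participant screening": RiskTier.LIMITED,
    "early-stage trial data modelling": RiskTier.MINIMAL,
    "supply-chain and inventory optimisation": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Default unclassified systems to HIGH so anything unknown receives
    # the most cautious treatment until it has been formally assessed.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case, tier in USE_CASE_TIERS.items():
        print(f"{case}: {tier.name} ({tier.value})")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative design choice for the sketch, not a requirement of the Act.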

Problem with self-regulation

While self-regulation has been proposed, it has the potential to be problematic for many sectors, including pharma. The dangers of self-regulation include a lack of uniformity in implementation, increased uncertainty and a risk of fines and penalties, all of which may have implications for the uptake of AI within the EU.

Moreover, shifting regulation onto individual pharma companies, who may interpret guidelines differently, can cause additional confusion and conflicts of business interest.

To mitigate this, stronger legal certainty is required. While there is merit in creating a separate regulatory body to oversee AI, this would require a coordinated effort across the EU, along with strong harmonisation with UK and US regulators.

This is important not only to ensure that the general principles around what constitutes unacceptable, high, minimal and no risk are understood, but also that the regulatory burden on individual downstream providers of AI systems is not so onerous as to prevent them from innovating and developing more effective AI solutions for the marketplace.

The UK government has introduced the Medicines and Healthcare products Regulatory Agency (MHRA) AI-Airlock, a regulatory sandbox for AI as a medical device (AIaMD), expected to launch in pilot form before the end of this year.

The purpose of this is to identify the regulatory challenges posed by standalone AIaMD. While this is a start, there is still more to do. The US Food and Drug Administration has also released plans around its regulatory approach to cybersecurity and AI, and other nations are following suit.

There are a lot of gaps in the proposed Act, putting the obligation of interpretation onto providers. The Act in its current form also classifies those who use large language models (LLMs) as providers if they modify the models they use.

This could prove problematic, with pharma companies subjected to the same level of regulatory burden as AI providers.

Ultimate preparation

There is a very clear need to establish a formal AI governance structure that includes ownership of the comprehensive risk management framework that will be required to comply with the Act.

Within this, it is important to raise awareness and communicate more effectively, not just throughout individual Pharma companies, but also across the ecosystem of their suppliers and clients.

There is a lot of confusion and uncertainty, which will impact adoption of and adherence to the Act, hindering its overall efficacy; this uncertainty needs to be addressed before final plans and appropriate preparations can be made.

To mitigate this uncertainty and ensure appropriate compliance, I would expect to see Chief AI Officer roles emerging within pharma businesses.

Pharmaceutical companies need to begin assessing their technical landscape and planning their future AI, technology and data roadmap to better understand how they will be impacted by the introduction of the EU AI Act.

It is essential that they consider which areas within their organisational model will require prioritisation and focus, as well as the remedial actions and reporting that will be required to comply with the Act. The role of a Chief AI Officer will help streamline this process and ensure compliance.

The pharma industry is already heavily regulated; as such, for the EU AI Act to have impact, there will need to be a clear allocation of responsibilities along the value chain, including an understanding of each stakeholder's role within the process.

Vikas Krishan is Chief Digital Business Officer at Altimetrik
