As AI becomes embedded in the healthcare sector, it is imperative that the debate on ethics, standards and security keeps pace

Andy Kinnear, former NHS CIO and now Independent Consultant, reflects on the dramatic advances in AI in the healthcare sector which he has witnessed at recent conferences and meetings. The possibilities the technology offers are undoubtedly exciting, but are the discussions on ethics, standards and security falling behind?

AI brings amazing opportunities

The Artificial Intelligence revolution is phenomenal. AI has enormous power to enhance all our lives, particularly in healthcare, but the debate about where the lines get drawn between what we could do and what we should do is only just beginning. The discussion on ethics is running behind the headlong rush to harness, and profit from, the technological developments.

In recent months I’ve attended health tech conferences in the UK, Germany and The Netherlands which have been totally dominated by the latest advances in AI and the new AI solutions being showcased by vendors. I’ve seen some truly amazing and exciting developments which can definitely be categorised as for ‘the good of humankind’. It’s also evident that these developments, in the hands of bad actors, could be used inappropriately. Discussions on ethics, who can access AI systems and data, and the emerging cybersecurity issues, seem to be lagging behind the drive to harness opportunities.

From ‘Creepiness to Convenience’

In May 2024 at the North West Connect Conference in Blackpool, author Sina Kahen spoke on the need to broaden the debate on AI and ethics to involve non-technical people such as philosophers, futurologists, politicians, religious leaders and academics. Such groups could challenge the boundaries of debate and bring new perspectives to help plot routes through AI’s ethical conundrums.

Especially interesting was Sina’s description of the ‘Creepiness vs. Convenience’ journey which most technology goes through to gain adoption and acceptance. Smart speakers, for example, have rapidly moved in perception from ‘creepy spies in the home’ to widespread acceptance because of the benefits they provide. The ethical issues are still there but are now largely ignored because of the convenience delivered. Can we afford to allow AI in healthcare to jump from creepiness to convenience before we’ve identified and debated the serious ethical questions?

Finding the balance between rapid benefits and enduring ethics

In June I visited a health tech conference in Munich and then HLTH Europe in Amsterdam. Both events were dominated by AI offerings. With AI as the latest marketing buzzword, every vendor had an angle: everything was AI-driven or AI-powered.

For once the possibilities opened up do seem to live up to the hype. Generative AI and the ability to process vast volumes of data at unbelievable speeds are truly remarkable. I attended one presentation where two large pharma companies explained that their ability to do the analysis needed in new drug development is currently limited by the number of scientists available. In future, AI will be able to go far beyond human capacity, enabling research to move to a whole new level and speeding up the delivery of new drugs.

If AI expedites the development of new cancer treatments or speeds up discoveries that could deal with dementia or diabetes, it could transform the healthcare sector, benefit the whole of humankind, and make good business sense… and profits. Such transformations would relieve some of the current crushing pressures on healthcare services around the world too.

Ethical debates need to be wide ranging and out in the open

AI is likely to deliver tremendous benefits, and relatively soon. However, the more I saw and heard, the more I thought about past ethical debates over healthcare innovations, and how the current introduction of AI is creeping up on us without the discussion of ethics becoming far-reaching and high-profile. Previous debates, for example on advances in genetics that might have resulted in designer babies, made front pages and TV news reports. The debate about genetically recoding a fetus in the womb to prevent potentially life-limiting conditions versus the ability to design the ‘perfect look’ by changing hair and eye colour took place in the glare of publicity. Who is asking the pertinent questions around the introduction of AI into pharma and healthcare?

At the same time, I reflected that over the last 200 years the human race has managed to continually progress in the development of medication, and by and large this has been done in a safe, secure and morally sound way. Why should the take-up of AI be any different? Am I panicking unnecessarily? Probably not, based on the technology focus of the events I’ve attended.

The 4th Industrial Revolution

The two European conferences were dominated by the technical aspects of AI, with no discussion of the ethics. The presentations were all by people who were really excited about what they were developing and how we’ve reached a revolutionary moment. There was plenty of evidence given to back up these claims.

Russ Branzell, President and CEO of CHIME, proposed that we are now in the 4th industrial revolution. Most of us, it seems, are still struggling with the implications and challenges of the 3rd industrial revolution, which saw digital innovations usher in the Information Age. Technological advances are coming so quickly with AI that the 4th industrial revolution is already upon us. Is it any wonder that we’ve little time for in-depth ethical debates when we’re pushed forward by rapid technological advances on one side and, on the other, the very real pressures in the healthcare sector post-COVID?

Some groups are grappling with the ethical questions. At an event held by the British Computer Society (BCS), where I welcomed a group of clinicians who have become members, I spoke to the BCS CEO, Rashik Parmar, about the challenges of AI. The BCS has already done a lot of good work on the ethics of AI and has produced a white paper on how to create professional standards for this area, based on a survey it conducted on the topic in 2023. It has also developed an education programme for leaders in healthcare who want to be qualified in AI decision making.

Vendors have a role to play

Solution vendors, be they new AI players or existing suppliers building AI into their products, have a key part to play. Responsible vendors must stand by their ethics and morals and help drive industry-wide adoption of standards. If not, the nascent sector could be plagued by irresponsible organisations, just as we saw at the birth of the internet and more recently with the explosion of cryptocurrencies.

The same was true in the field of Big Data: in the 2010s we saw Cambridge Analytica apply powerful new technologies in less than ethical ways, harvesting the personal data of up to 87 million Facebook profiles for use in political advertising. There needs to be a call to arms across the vendor community so that all vendors behave ethically: making a profit, by all means, but doing so responsibly.

Imprivata, for example, has a key role to play as its applications help control who has access to what systems and data, and can provide the audit trail of who did what to whom, when. This becomes increasingly important as AI could open the door to new cyber threats.

AI brings increasing cyber threats

As AI gets ever smarter and more ubiquitous, the abilities and opportunities of bad actors to threaten and do actual damage in the healthcare sector grow exponentially. At the time of writing, the effects of the recent cyberattack on the Synnovis pathology service, which had major impacts on the NHS in South London, rumble on. The issues were largely around the provision of test results and the scheduling of appointments. Imagine the impact of a future attack on AI applications much more deeply embedded in healthcare service provision, such as diagnosis and the selection of treatments.

The global IT meltdown caused by the CrowdStrike software bug, though not a cyberattack by third parties, has shown the whole world how much we rely on computer systems which we take for granted and which are largely invisible to us. Hopefully this provides a wake-up call just as AI becomes more deeply embedded into systems which are in turn built into critical healthcare provision. Up to now cyberattacks have largely been about disruption and financial gain. In the future the issues could truly be about life and death. We must grasp the nettle and rapidly focus on ethics, standards and security as AI becomes embedded at the heart of the healthcare sector.