The Ethical Adoption Of AI In Healthcare Requires A Global Effort, Now More Than Ever

Dr Tim Guilliams, co-founder and CEO of Healx, is an advocate for harnessing the power of AI to accelerate treatments for rare diseases.

Earlier this spring, the U.K. government announced a new plan to eradicate algorithmic biases in healthcare.

A world first, the project’s purpose is to prevent health inequalities from being exacerbated by the AI-based systems that will underpin the future of the U.K.’s National Health Service (NHS) by assessing the possible risks and biases of algorithms before they can access NHS data.

At first glance, this is fantastic. Artificial intelligence offers huge promise to detect illness, discover new treatments and improve health systems, but the development of these technologies is underpinned by data, and data isn’t always neutral or representative.

Indeed, it’s sadly all too clear that women and minority groups are underrepresented in studies and, as a result, are more likely to experience overlooked side effects or negative outcomes. If we feed even the most advanced AI programs these same lopsided datasets, these health inequalities will persist.

Worse still, research from the Harvard School of Public Health suggests that AI programs can actually exacerbate social inequities, amplifying the biases already present in healthcare data if not handled carefully.

So the U.K.’s move is a fantastic start, but the ethical use of AI in healthcare shouldn’t be left to individual states to apply ad hoc. Major healthcare challenges, such as the safe, legal and effective application of AI, need to be looked at through a global lens by a collaborative, international community—for the simple reason that those challenges can impact us all.

More Eyes, More Transparency

Bias can be introduced at any stage in the process of gathering data and determining insights or actions from it.

All too often, we only discover this in hindsight. Around 80% of all genetics data, for example, has been gathered from people of white European ancestry, with the very real result that the insights it has helped to provide on the risk of heart disease do not apply to people of other ancestries.

Avoiding these future scenarios in programs operating at a vast scale requires more people with a seat at the table, more checks, more input and more perspective.

The NHS pilot is a good example of how this can be introduced. The algorithmic impact assessments (AIAs) it proposes for the planned National Medical Imaging Platform are designed to challenge those developing AI programs to keep these potential threats in mind from the early stages of development.

The Ada Lovelace Institute report on which the scheme is based highlights the lone practical example of how AIAs have been adopted so far: by the Canadian government for use in AI decision-making processes in the civil service.

This provides an interesting benchmark, but there’s a problem. In four years, only four Canadian AIAs have been published, so the scope for debate about how effective these 60-question forms are has been severely limited for a simple reason: almost nobody is using them.

So the potential is there for the U.K. to set a new gold standard here, but it will have to adopt AIAs more uniformly (Canada, for instance, does not require them in the private sector) and seriously consider incentives and enforcement mechanisms.

No Man Is An Island, And No Country Is Either

The Ada Lovelace Institute report ends on a crucial question: “How will trials [of AIAs] be resourced, evaluated and iterated?” Frustratingly, no clear recommendations are laid out as to how the learnings from this scheme will be shared more widely, let alone internationally.

This, ultimately, is what concerns me. The events of the past two years have made very clear that in an interconnected world, a health crisis in one corner of the globe can soon spell disaster for the rest of it.

Healthcare research is a pan-global operation, one that spans the private and public sectors and crosses borders. But right now, safeguarding the use of AI in the industry is not. Over the past couple of years, it’s become clear that different powers are taking wildly different approaches to AI regulation, from hands-off (the U.S.) to hands-on (the European Union). Unless we are careful, this inconsistency will continue to affect the data that we will all eventually come to rely on when we fall sick.

What the U.K. government is doing is progressive and in many ways the right thing to do. But the greatest patient impact won’t be had by a single government’s efforts. A multinational organization is needed to provide the framework and dialogue in this space. Perhaps it’s the World Health Organization, which has come up with six guiding principles for the use of AI in health. But as it is only one of many WHO initiatives, I think a more specific organization is needed to stop this from being overlooked.

The nonprofit Institute of Electrical and Electronics Engineers (IEEE) once ran its own Internet Initiative to bring global technical and policy communities together to address internet governance, privacy and security. I can’t help but feel we need a parallel here. When we acknowledge the insignificance of borders in any healthcare issue, only a multinational, pan-global approach combining companies, authorities and communities will suffice.
