Explainer: Ethical Issues Proliferate Amid the Use of Artificial Intelligence in COVID-19 Healthcare

During the pandemic, the scientific community took a remarkable step forward by sharing large sets of public data on COVID-19 with researchers and healthcare professionals engaged in treatment and vaccine development. Many saw great potential for this data in training Artificial Intelligence (AI) systems to help manage the pandemic. A year later, as countries rush to meet vaccination targets whilst simultaneously relapsing into second and third waves, the demand for integrating AI into healthcare is only increasing.

In this explainer, JURIST will discuss what AI entails in the context of pandemic healthcare, as well as some of the surrounding ethical considerations—particularly those related to non-discrimination.

What is ethical AI?

Though the popular imagination of AI tends to begin when the term was first coined in the 1950s, its underlying ideas, attributable to Charles Babbage, Alan Turing, and Ada Lovelace, date back much further. For decades, AI was considered by many to be a technological epoch capable of becoming humanity’s final invention. Today, AI systems are often treated as a panacea, promising an ‘ultraintelligent’ scale of computational efficiency for virtually all spheres of our lives, with both their accuracy and fairness going unchallenged.

Like AI, the normative considerations of scientific research and innovation are not new; scientific work has always been tempered by internal regulations reflecting the prevailing norms of the time. In the modern era, these include the human rights obligations of countries, as well as the accountability of private players for the socio-economic aspirations of people within democratic states. With technology rapidly replacing the human touch, researchers advocating for AI ethics have sounded the alarm over such issues as fairness, privacy, non-discrimination, transparency, and security. Many have specifically highlighted AI’s tendency toward discriminatory results, a phenomenon that has emerged from the inadequacies and implicit bias of the data that the machine has learned from.

Principled development of AI looks beyond technosolutionism and asks governments, private companies and civil society to incorporate normative values into AI systems. Mostly, its advocates seek to regulate AI within a socially beneficial, rights-based policy framework.

How is AI used in pandemic healthcare?

AI helps to prioritize healthcare resources while treating a resource-intensive disease. Resources for COVID-19 include sedation, associated medication, nutrition, ongoing attention while on respiratory support, durable supplies, space and various types of personnel. If these are not managed in a timely and efficient manner, costly bottlenecks in supply chains can follow, with disastrous consequences. India’s second wave is instructive here: despite ramped-up domestic production and a generous influx of foreign aid, the country is struggling because of poor supply chain management. AI can provide early warnings of risk and disease progression, both at the individual and community level. This can help healthcare systems better organise limited resources into a targeted and centralised response.

AI also has the ability to collate large databases of living-area characteristics such as housing type, population, and movement of people with the dynamics of a disease’s outbreak, thereby allowing a more accurate and incisive prediction of disease spread at city, district and neighbourhood levels, according to a recent paper published in the journal Healthcare. Countries such as South Korea, Singapore, Israel, the United States and some parts of the European Union have already employed COVID-19 contact tracing with varying rates of success, though most of them presently rely on Bluetooth or cellular technology.

Over time, AI can be further integrated into healthcare systems to automate processes of diagnosis and reduce the burden on health workers of repetitive as well as challenging decisions during the pandemic.

However, apart from issues of individual privacy, professional liability, and regulatory compliance, a proposal for multi-layered incorporation of AI also raises concerns about its impact on equity within healthcare, given that our existing systems are riddled with systemic biases that are likely to pervade AI systems and certify human bias as a “scientific default.” This bias can take many forms in pandemic healthcare management.

What issues of non-discrimination surround the use of AI in pandemic healthcare?

AI systems are built from historically biased data, with the potential to produce discriminatory behaviour that is difficult to detect without access to the source code. If AI is employed for decisions on the optimal allocation of limited resources such as ventilators and ICU beds, the implicit encoding of false or undesirable criteria arising out of prevailing societal discrimination can have devastating impacts on those in greatest need. An inscrutable ‘black box’ further hinders the accountability required for meaningful discussion of civil policy and prevents marginalized communities from exercising legal protections against discrimination and exploitation.

For instance, a model that correlates associated comorbidities with worse outcomes for COVID-19 may perpetuate structural biases that have caused historically disadvantaged groups to suffer those comorbidities disproportionately. If resources are then allocated on purely utilitarian principles, the allocation may deepen existing inequalities for those disadvantaged groups, as the sketch below illustrates.
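
To make the mechanism concrete, here is a minimal sketch in Python. The patient data, group labels and scoring rule are entirely hypothetical; the point is only that a score which penalises comorbidity burden will, without ever referring to group membership, route scarce resources away from a group that carries more comorbidities for historical reasons.

```python
import random

random.seed(0)

def make_patient(group):
    # Hypothetical assumption: group "B" carries, on average, one extra
    # comorbidity because of historical disparities in care.
    comorbidities = random.randint(0, 2) + (1 if group == "B" else 0)
    return {"group": group, "comorbidities": comorbidities}

patients = [make_patient("A") for _ in range(50)] + \
           [make_patient("B") for _ in range(50)]

def utilitarian_score(patient):
    # Higher predicted benefit for patients with fewer comorbidities.
    return 1.0 / (1 + patient["comorbidities"])

# Allocate 30 scarce ventilators strictly by predicted benefit.
allocated = sorted(patients, key=utilitarian_score, reverse=True)[:30]
share_b = sum(p["group"] == "B" for p in allocated) / 30
print(f"Share of ventilators reaching group B: {share_b:.0%}")
```

Although the score never sees the group label, group B receives far less than its population share of the resources.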

Further, electronic health records themselves reflect disparities in access to and quality of healthcare. Usable electronic health records tend to over-represent well-off population groups who have access to better healthcare. As David Leslie explained in a recent article for The BMJ: “resources needed to ensure satisfactory dataset quality and integrity might be limited to digitally mature hospitals that disproportionately serve a privileged segment of a population to the exclusion of others.” AI systems trained on such unrepresentative or incomplete data are thus likely to reflect and compound pre-existing structural discrimination.

Sometimes, it may be a matter of representation: demographics that are under-represented in training data find AI systems working against them because the machine was not developed to address their specific needs. This may include biological and socioenvironmental factors that need to be accounted for across different population groups. For instance, if data on skin colour is not collected together with pulse oximetry data, AI will not be able to account for the effect of skin tone on the oximetry readings used to guide COVID-19 treatment. Similarly, disparities in living and working conditions, such as overcrowding, premature aging and stress-related health deterioration, and compromised immunity, also need to be accounted for in AI systems for diagnosis, resource allocation and treatment.
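
The oximetry example can be sketched directly. The threshold and correction offset below are illustrative assumptions, not clinical values; the sketch simply shows that a triage rule can only correct for skin tone if skin tone was recorded in the first place.

```python
# Illustrative only: the threshold and offset are assumptions, not
# clinical values. Pulse oximeters have been reported to overestimate
# oxygen saturation for darker skin tones.
HYPOXEMIA_THRESHOLD = 92  # SpO2 (%) below which care is escalated

def needs_escalation(spo2_reading, skin_tone=None):
    # A correction can only be applied if skin tone was recorded.
    offset = 2 if skin_tone == "dark" else 0
    return (spo2_reading - offset) < HYPOXEMIA_THRESHOLD

# Same raw reading of 93%, two very different triage decisions:
print(needs_escalation(93))                    # tone not recorded -> False
print(needs_escalation(93, skin_tone="dark"))  # corrected -> True
```

When the covariate is simply never collected, the first branch is the only one available, and the affected group is silently under-triaged.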

Further, datasets based on contact tracing from mobile technologies or social media apps can under-represent or exclude those without digital access. For instance, in India, while internet penetration remains just above 58% and mobile penetration is nearly 87%, these figures tumble to 34% internet penetration and just 59% mobile penetration in rural areas. Even among those with internet and mobile access, using AI for mass surveillance can lead to certain communities being subject to heavier enforcement because they must leave home to meet basic needs more often than others who can afford to quarantine for long durations.

In April 2020, early testing of proposed AI models for COVID-19 diagnosis reported a high risk of bias due to non-representative control samples and a high risk of overfitting. The study’s findings have since been affirmed by more recent studies undertaken this year. As we begin to upgrade our healthcare systems, it is vital to take steps to alleviate and overcome these issues of equity and non-discrimination in AI.

What measures can we take?

Since the accuracy and reliability of AI models can vary for different subpopulations, we need to first acknowledge that they do not reflect an absolute scientific standard, but rather have differential impacts and can even cause harmful consequences that may be difficult to predict in advance. Exclusion of sensitive attributes is not a complete solution: even when a system is forbidden to use identity categories such as race or gender as variables, it may use other data, such as consumption patterns or residential location, to adjust results and mirror pre-existing patterns of discrimination, as the sketch below shows.
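
Here is a minimal sketch of this ‘proxy’ problem, with entirely synthetic data. The decision rule below stands in for a model trained on biased historical outcomes: the protected group label is never an input, yet because a postcode feature correlates with group membership, predictions still split along group lines.

```python
import random

random.seed(1)

def sample_record():
    group = random.choice(["A", "B"])
    # Residential segregation: postcode correlates strongly with group.
    same = random.random() < 0.9
    postcode = 1 if (group == "B") == same else 0
    # Historically biased outcome: group B was under-served, so training
    # on these labels would teach a model to key on the postcode.
    favourable = 0 if group == "B" and random.random() < 0.7 else 1
    return {"group": group, "postcode": postcode, "favourable": favourable}

data = [sample_record() for _ in range(10_000)]

def predict(record):
    # Group is never an input; the rule uses only the proxy feature.
    return 0 if record["postcode"] == 1 else 1

for g in ("A", "B"):
    rows = [r for r in data if r["group"] == g]
    rate = sum(predict(r) for r in rows) / len(rows)
    print(f"Group {g}: favourable prediction rate = {rate:.0%}")
```

Dropping the sensitive attribute changed nothing: the proxy carries the discrimination through intact.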

Since the accuracy of AI prediction depends on how well its pre-training aligns with actual events, our measures should focus on developing its predictive and analytical reliability. This can be done by training AI on large datasets that are representative of broad populations; updating prediction models when they are applied to new populations or localities; collecting participant data from multiple regions and healthcare systems, so as to better understand the generalizability of prediction models across different settings and populations; and including more diverse and under-represented groups in the development, assessment and scaling of AI models. A simple starting point, sketched below, is to audit a model’s performance separately for each subgroup rather than relying on a single aggregate figure.
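
The audit sketch below is hypothetical and minimal: the subgroup labels and records are invented, and a real audit would use clinically validated metrics, but the structure, reporting per group rather than in aggregate, is the point.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for subgroup, truth, prediction in records:
        total[subgroup] += 1
        correct[subgroup] += int(truth == prediction)
    return {g: correct[g] / total[g] for g in total}

# Invented records for illustration: (subgroup, true outcome, prediction).
records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 1),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 1, 0), ("rural", 1, 1),
]

for group, accuracy in sorted(subgroup_accuracy(records).items()):
    print(f"{group}: accuracy = {accuracy:.0%}")
```

An aggregate accuracy figure would average away the gap this audit exposes between the urban and rural subgroups, which is exactly the failure mode that per-group reporting is meant to catch.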

What challenges still remain?

Bias in AI is alarming because of its tendency to replicate the very inequities it was meant to address. Many corporations claim that AI ethics are vague and non-universal. In his book Tools and Weapons: The Promise and the Peril of the Digital Age, Brad Smith, the Chief Legal Officer of Microsoft, asks: “How can the world converge on a singular approach to ethics for computers when it cannot agree on philosophical issues for people?” However, although principled AI is broad in scope and must be contextualised for Global South populations and marginalised groups in the Global North, international consensus on core values and human rights standards cannot be denied.

In times of crisis, there will be a trade-off between the urgent deployment of technology and ensuring comprehensive oversight through engagement with diverse stakeholder groups. Immediate decisions by institutional hierarchies will also conflict with the need for community consensus. One solution is the notion of “ethics by design,” which involves building ethical considerations into the development of AI applications from the outset rather than as an afterthought. It would involve working with experts in AI ethics and community members from the beginning, and formulating clearer guidelines for developers on these issues. A wider range of longer-term and systemic impacts would also have to be considered.

“Ethics by design” would also involve fixing accountability in advance on the developers of AI, the healthcare systems employing it, or both. This can be challenging because neither has complete information to ensure ethical models. AI developers have limited healthcare experience and may not know of existing inequalities in healthcare systems, and so may fail to recognize when their models perpetuate them. Healthcare systems, on the other hand, may not understand the functioning of AI tools that are increasingly developed as “black boxes,” which provide an output but do not allow the process to be scrutinized. Hence, a clear regulatory delimitation of responsibilities and due diligence between these two entities needs to be provided. Additionally, partnerships for filling information gaps should also be encouraged.

Even through this pandemic, it is important that we constantly deliberate on the systemic biases pervading both our AI systems and the real world, so that we may frame principled policy that is cognizant of these failings. Simultaneously, we must educate the citizenry and empower marginalized groups to take part in the discourse on ethical AI, so that our real world can become more representative of the accuracy and fairness we demand from AI.