Background Note
Artificial intelligence versus COVID-19 in developing countries

Priorities and trade-offs

In this note, I review current efforts to harness artificial intelligence (AI) in the push back against COVID-19, note its promises and limitations, flag potential pitfalls, and identify priorities for developing countries. Artificial intelligence is the use of algorithms, data, and statistics to teach computers to recognize patterns and predict outcomes. Pattern recognition and prediction are what underlie machine learning (ML), natural language processing (NLP), and computer vision, the main applications of modern AI.
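
For readers unfamiliar with the mechanics, the following minimal sketch shows the basic pattern-recognition-and-prediction loop. It uses the open-source scikit-learn library and purely synthetic data, and is not tied to any of the applications discussed in this note:

    # Hypothetical sketch: an algorithm learns patterns from labelled examples
    # and then predicts outcomes for cases it has not seen before.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic data standing in for any labelled examples (images, text, records).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # recognize patterns
    predictions = model.predict(X_test)                              # predict outcomes
    print("accuracy on unseen cases:", accuracy_score(y_test, predictions))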

The promise of AI versus COVID-19

Since the outbreak of the disease in December 2019, there has been a rush to harness AI in the fight against it. AI can help track and predict the spread of the infection; it can assist with diagnoses and prognoses; and it can be used to search for treatments and a vaccine. It can also be used for social control—for instance, to help isolate those that are infected and to monitor and enforce compliance with lockdown measures. I document these efforts in an IZA Discussion Paper.

Limitations

Unfortunately, AI is currently not up to the task of tracking and predicting the spread of the infection. It cannot yet provide reliable assistance in diagnosis. And while its most promising use is in the search for a vaccine and treatments, these will take a long time to materialize. The main reason for this somewhat pessimistic conclusion is inadequate data. On the one hand, there is too little suitable (that is, unbiased and sufficiently large) data with which to train AI models to predict and diagnose COVID-19. Most of the studies that have trained AI models to diagnose COVID-19 from CT scans or X-rays have relied on small, biased, and unrepresentative samples from China, and many of these studies have not (yet) been published in peer-reviewed journals.
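
The following toy sketch, using entirely synthetic data, illustrates why such biased samples are a problem: a model can 'learn' an artefact of the sample, for example which hospital or scanner an image came from, score well on that sample, and then fail on representative data. Every feature and number below is invented for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_sample(n, confounded):
        """Synthetic 'scans': one weak genuine signal plus a 'site' artefact."""
        y = rng.integers(0, 2, n)                      # true disease status
        signal = y + rng.normal(0, 2.0, n)             # weak genuine feature
        if confounded:
            site = y + rng.normal(0, 0.1, n)           # artefact tracks the label almost perfectly
        else:
            site = rng.integers(0, 2, n) + rng.normal(0, 0.1, n)  # artefact unrelated to disease
        return np.column_stack([signal, site]), y

    X_small, y_small = make_sample(200, confounded=True)   # small, biased training sample
    X_ext, y_ext = make_sample(2000, confounded=False)     # broader, representative population

    model = LogisticRegression(max_iter=1000).fit(X_small, y_small)
    print("accuracy on the biased sample:   ", accuracy_score(y_small, model.predict(X_small)))
    print("accuracy on representative data: ", accuracy_score(y_ext, model.predict(X_ext)))

In this contrived example the model looks near-perfect on the sample it was trained on, yet performs little better than chance on representative data, because it has latched onto the artefact rather than the disease.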

On the other hand, the pandemic’s global impact, and the intense focus on it, have also generated too much data of the wrong kind. Social media data associated with COVID-19 is noisy and unreliable, a problem the failure of Google Flu Trends illustrated more than five years ago. That failure was dissected by Lazer and colleagues in a 2014 paper in Science, in which they attributed it to ‘big data hubris’ and ‘algorithm dynamics’. The same factors currently bedevil efforts to track COVID-19 using big data from social media.

Furthermore, and perhaps more importantly, the systemic shock caused by the outbreak has led to a deluge of outlier data. In essence, COVID-19 is a massive, unique event, and the sudden flood of new data it has generated is invalidating almost all prediction models in economics, finance, and business. The consequence is that ‘many industries are going to be pulling the humans back into the forecasting chair that had been taken from them by the models’.
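
A stylized example of how a structural break of this kind wrecks a forecasting model. The data below are entirely synthetic; the ‘sales’ series and the size of the shock are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic 'stable' history: a gentle trend with small noise.
    t = np.arange(100)
    sales = 100 + 0.5 * t + rng.normal(0, 2, 100)

    # Fit a simple linear trend on the pre-shock period (first 90 observations).
    coef = np.polyfit(t[:90], sales[:90], deg=1)
    forecast = np.polyval(coef, t[90:])

    # A COVID-19-style shock: demand collapses by 60% in the last 10 periods.
    sales[90:] *= 0.4

    print("mean absolute error, pre-shock fit:     ",
          round(np.mean(np.abs(np.polyval(coef, t[:90]) - sales[:90])), 1))
    print("mean absolute error, post-shock forecast:",
          round(np.mean(np.abs(forecast - sales[90:])), 1))

A model that fitted the stable past almost perfectly produces forecast errors dozens of times larger once the shock hits, which is why human judgement is being pulled back into the forecasting chair.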

Surveillance

So, while we are unlikely to see AI contribute much to prediction and diagnosis during the current pandemic, we are likely to see its growing use for social control. In contrast to AI’s data-constrained limitations in prediction and diagnosis, no such problems hamper surveillance technology. The use of mass surveillance to enforce lockdown and isolation measures in China, including infra-red cameras to identify potentially infected persons in public, has been well documented. Such measures are not limited to China but are being adopted by many democracies, including Australia, Germany, South Korea, Spain, the UK, and the USA. Here it is not so much public infra-red cameras that are used but, rather, contact-tracing apps drawing on personal mobile phone data.

Many developing economies are following suit. OneZero has compiled a list of at least 25 countries that, by mid-April 2020, had resorted to surveillance technologies to track compliance with, and enforce, social distancing measures. Many of these technologies violate data privacy norms. The list includes developing countries such as Argentina, Brazil, Ecuador, India, Indonesia, Iran, Kenya, Pakistan, Peru, Russia, South Africa, and Thailand. South Africa, for instance, is reported to have contracted a Singapore-based AI company to implement a ‘real-time contact tracing and communication system’. Singapore itself uses ‘TraceTogether’, an app which sends out warnings if social distancing limits are breached.
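
The underlying logic of such contact-tracing apps can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration and is not the actual TraceTogether protocol (which exchanges anonymized Bluetooth tokens between phones); all names and thresholds are invented:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Contact:
        user_a: str
        user_b: str
        when: datetime
        minutes: float       # duration of proximity

    contact_log = [
        Contact("alice", "bob", datetime(2020, 4, 10, 9, 0), 20.0),
        Contact("alice", "carol", datetime(2020, 3, 1, 9, 0), 45.0),   # too long ago
        Contact("bob", "dave", datetime(2020, 4, 12, 14, 0), 3.0),     # too brief
    ]

    def exposed_contacts(log, case, now, window_days=14, min_minutes=15.0):
        """Return users who spent >= min_minutes near a confirmed case within the window."""
        cutoff = now - timedelta(days=window_days)
        exposed = set()
        for c in log:
            if case in (c.user_a, c.user_b) and c.when >= cutoff and c.minutes >= min_minutes:
                exposed.add(c.user_b if c.user_a == case else c.user_a)
        return exposed

    print(exposed_contacts(contact_log, case="alice", now=datetime(2020, 4, 15)))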

In addition to social control and compliance monitoring, AI systems delivered via apps and mobile devices can also help health authorities manage the provision of care. According to Petropoulos, these can ‘Enable patients to receive real-time waiting-time information from their medical providers, to provide people with advice and updates about their medical condition without them having to visit a hospital in person, and to notify individuals of potential infection hotspots in real-time so those areas can be avoided.’

Social control, and the public information that can be disseminated via mobile devices, can be beneficial for as long as we do not have a vaccine against the virus that causes COVID-19. Without a vaccine, governments are left to ‘flatten’ the epidemiological curve in order to prevent healthcare systems from being overwhelmed by a sudden surge in patients. And while lockdowns and social distancing measures can be effective in reducing the speed at which the virus spreads, they come at an exorbitant economic cost and must therefore, at some point, be phased out in favour of smarter, less blunt policy instruments.
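
To make the ‘flattening’ logic concrete, here is a minimal sketch of a textbook SIR (susceptible–infected–recovered) model; the parameter values are illustrative only and are not calibrated to COVID-19:

    import numpy as np

    def sir_peak(beta, gamma=0.1, days=300, n=1_000_000, i0=100):
        """Peak share of the population infected at once in a basic SIR model (daily Euler steps)."""
        s, i, r = n - i0, i0, 0
        peak = i
        for _ in range(days):
            new_inf = beta * s * i / n
            new_rec = gamma * i
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            peak = max(peak, i)
        return peak / n

    # Illustrative parameters only: gamma = 0.1 (10-day infectious period),
    # beta = 0.3 without distancing versus beta = 0.15 with distancing.
    print("peak infected share, no distancing:  ", round(sir_peak(beta=0.3), 3))
    print("peak infected share, with distancing:", round(sir_peak(beta=0.15), 3))

Halving the transmission rate in this toy model sharply lowers the peak share of the population that is infected at any one time, which is exactly the relief that keeps health systems within capacity.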

To limit the danger of a rebound in infections once restrictions are lifted, large-scale diagnostic testing may be necessary to identify those still infected and keep them in quarantine. In this approach, AI surveillance tools can be valuable. Large-scale diagnostic testing is also necessary to fill the data gap in our knowledge of the extent and fatality of the coronavirus. At the time of writing, it is not known with any accuracy how many people are in fact infected or how many are asymptomatic. A study in Science suggested that up to 86 per cent of all infections may be undocumented. If this is accurate, there are two important implications—one being bad news and the other being good news. One, the pandemic may easily rebound once lockdowns are lifted. Two, the virus may not be as lethal as is thought. In this regard, The Economist points out, ‘If millions of people were infected weeks ago without dying, the virus must be less deadly than official data suggest.’
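
To see why widespread undocumented infection implies a lower fatality rate, consider the following back-of-the-envelope calculation. Only the 86 per cent figure is taken from the study cited above; the case and death counts are purely hypothetical:

    # Why undocumented infections lower the implied fatality rate.
    confirmed_cases = 100_000           # hypothetical number of confirmed cases
    deaths = 3_000                      # hypothetical number of deaths
    undocumented_share = 0.86           # share of all infections never documented (per the study cited)

    naive_fatality = deaths / confirmed_cases
    total_infections = confirmed_cases / (1 - undocumented_share)
    implied_fatality = deaths / total_infections

    print(f"case fatality rate (confirmed cases only): {naive_fatality:.1%}")
    print(f"implied infection fatality rate:           {implied_fatality:.2%}")

With these hypothetical numbers, a 3 per cent fatality rate among confirmed cases falls to well under half a per cent once all infections are counted.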

The contribution of surveillance technology comes with one substantial risk, namely mission creep: that once the outbreak is over, the erosion of data privacy will not be reversed and governments will continue to keep intrusive tabs on their populations. They could even use the data obtained in the fight against COVID-19 for other, nefarious, purposes.

Pitfalls

This risk of using AI in the fight against COVID-19 is perhaps reflective of the general risk of using AI. AI has both positive and negative impacts, and there will always be trade-offs. For instance, if we consider the Sustainable Development Goals (SDGs) broadly, a recent survey published in Nature Communications emphasized that ‘AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets’. AI can do good, but it can also do bad.

Take two more examples of how AI can do both good and bad at the same time. While NLP algorithms may warn of the possible outbreak of an epidemic by mining written reports on social media and online news, a recent study found that training a standard NLP model to do this on graphics processing unit (GPU) hardware emits 626,155 pounds of CO2, around five times more than an average car emits in its lifetime (120,000 lbs). Another example: AI-driven automation may raise productivity and firm efficiency, but it may also increase unemployment and poor-quality jobs (‘gigs’), with higher poverty and inequality as outcomes.
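
As a quick check of the energy comparison above, using the figures as quoted in the text:

    nlp_training_lbs_co2 = 626_155
    car_lifetime_lbs_co2 = 120_000
    ratio = nlp_training_lbs_co2 / car_lifetime_lbs_co2
    print(f"training emissions are about {ratio:.1f} times a car's lifetime emissions")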

Hence, the authors of the Nature Communications survey recommend that ‘the fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards’.

The key point is that we need to limit the potential adverse consequences of AI, and we need to do so through adequate governance of AI.

Having identified current efforts to harness AI against COVID-19, and having noted their promises, limitations, and potential pitfalls, it remains to identify the priorities for developing countries.

Priorities

Developing countries are already having to deal with the economic fallout of the pandemic. As Hausmann argues, with revenues, trade, and investment dropping, developing countries will need to increase their indebtedness massively if they are to implement basic healthcare support and social distancing measures against the disease. They are losing policy space precisely when they need it most. Prioritization of resources is therefore vital.

Developing countries should concentrate their scarce resources on propping up their health sectors and providing social security to their citizens. In essence, they should not be investing those resources in AI in the hope of improving hospital efficiency or of finding a vaccine.

Although AI can be helpful in finding a vaccine, developing countries, and particularly those in Africa, are largely lagging in terms of AI research and development capability. As I document elsewhere, around 30 companies in three regions—North America, the EU, and China—perform the vast bulk of research, patenting, and receipt of venture capital funding for AI.

This is not to say that developing countries have no interest in harnessing AI to find a vaccine—they do, because such a vaccine can be argued to be a global public good. Scott Barrett has put forward the concept of a ‘single-best effort public good’, which applies to the search for a COVID-19 vaccine: such a good can be produced by one or a few countries for the benefit of all countries, provided the resulting vaccine is available in a non-excludable and non-rival manner. Thus, while developing countries should not be spending resources on finding pharmaceutical solutions to the crisis through AI, they should be part of a global coalition to harness the AI capabilities of the high-income economies and China in this respect. What should be avoided is an unco-ordinated response, an ‘AI arms race’ between countries and regions, if you will, and uncertainty about the distribution of and access to an eventual vaccine. Given the possibility that rich countries may hoard vaccines, clear and fair protocols for distribution and access are needed. How to fund and incentivize such an outcome is another urgent challenge facing economists, not least in overcoming production and logistical constraints.

Developing countries should also partake in the gathering and building of large public databases on which to train AI. The costs of doing so are small and, given the need for unbiased and representative data on the pandemic, the potential benefits are high. The construction of such databases should be seen as an investment against future pandemics.

Finally, the combination of surveillance, AI, and testing may help developing countries ease restrictions and lockdowns earlier. But, as was discussed, this will come at the risk of compromised data privacy—a risk that may have to be taken in the interest of public health and the re-opening of economies. The risk can be managed through appropriate governance.

Conclusion

How developing countries go about their AI-based surveillance and testing will be crucial. Developing-country governments and the global community need to ensure adherence to the highest ethical standards and transparency. If they do not, they may find that people lose what little trust they had in government, which will, as Ienca and Vayena pointed out, ‘make people less likely to follow public-health advice or recommendations and more likely to have poorer health outcomes.’ For the developing countries of Africa, this makes it imperative that they ratify the African Union’s Convention on Cyber Security and Personal Data Protection—the Malabo Convention—as soon as possible. Only two countries have so far done so. They should also, consistent with the convention, stop limiting internet access and restricting digital information flows.

Developing countries still face a substantial digital divide, and the world’s poorest region, sub-Saharan Africa, faces a particularly daunting challenge. The COVID-19 crisis and its likely long-term consequences—in terms of accelerating automation, online trade, reshoring, and the growing market power of large incumbent digital platforms—should spur these countries to treat the current crisis as an opportunity to speed up their digitalization and to leverage funding for the long-term upgrading of data infrastructures and skills.