An interview with Feng-Yuan Liu: Fighting a Pandemic with Trusted Technology

28 Apr 2020 · by BasisAI

The modern, connected world has not seen a global public health challenge on the scale of COVID-19 in decades. It poses a grave threat not only to health and economies, but also to the social fabric of societies. Up against this primordial biological threat, it becomes tempting to deploy the latest technological innovations of this century: big data and artificial intelligence (AI).

Global authorities and organisations have taken very different approaches to using technology to fight the pandemic, from mass surveillance to more privacy-preserving contact tracing, but the efficacy of these solutions appears to be deeply rooted in trust and collective action. What concerns does the use of data and AI raise, and how do they affect governments' ability to engender trust with citizens? How far should the data privacy vs public good trade-off go? What other applications of AI are proving effective in the detection, prevention and forecasting of the virus? BasisAI’s CEO and Co-Founder, Feng-Yuan Liu, shares his views on using trusted technology to fight pandemics with CNA938’s Daniel Martin.

This interview originally aired on April 13th 2020 on CNA938 during Daniel Martin’s Life&Style program which covers the latest developments in technology, entertainment and health.

 

Using Responsible Tech to Fight Pandemics

 

Daniel Martin: As the Coronavirus outbreak continues to spread across the world, researchers and companies are looking to AI to help address some of its challenges: to help recognise, predict and explain COVID-19 infections, as well as to help manage the socioeconomic impact. I want to learn more about how AI can be leveraged to fight this pandemic and how it could be used responsibly in a public health emergency. 

I was talking to my listeners for most of last year about AI ramping up and how 2020 was going to be a big year for AI, with a lot of implications for the business world and for the private user as well. But now, obviously, that pivot has to happen. Give me some background first. When it comes to BasisAI, what exactly was your company founded to do?

Feng-Yuan Liu: BasisAI was founded 18 months ago and we're in the business of building augmented intelligence software for data-driven enterprises. I think one of the key differentiators about our philosophy and our approach to AI is that we believe in building responsible and accountable systems. We believe that AI shouldn't be a 'black box' (whereby it's almost impossible to determine how or why an AI makes certain decisions) and that it should be able to be held to account. You should be able to trust these systems and understand the decisions or predictions they make. We believe that AI can be very powerful in helping individuals and companies make better decisions with data, but we also feel that it's very important to take a responsible approach to using the technology.

I used to work at the Smart Nation office looking at data for public good. The other co-founders are also Singaporean, but have spent much of their careers in Silicon Valley at companies such as Uber and Twitter, and we share a vision of helping enterprises supercharge their businesses with AI products that are trusted by end users. And so our mission is about the responsible use of these systems.

Daniel Martin: How do you think that now, in light of our global situation, AI can help in the COVID-19 pandemic, especially here in Asia?

Feng-Yuan Liu: I think COVID-19 has really seized the imagination. Governments and societies all over the world are trying to find innovative ways to help resolve the situation. No one solution is going to be a panacea but I think digital technology, big data and AI can certainly be very powerful. But one thing to keep in mind is that a lot of the solutions to COVID-related problems have to rely on trust.

This is a global problem, but it requires a lot of collective action. And so whatever technology and whatever means are being used by governments need to be trusted. A key use of data-driven technology we've already seen is in the area of contact tracing. We know that the virus spreads through close contact with others and therefore contact tracing has been a key strategy in preventing disease spread. The use of data from mobile phones is helping authorities do this contact tracing.

What's very interesting is we see AI being used in a variety of ways. On one extreme, you have AI being used for contact tracing through mass surveillance. Some countries, for example China and Israel, have been prepared to take the step to requisition vast amounts of data. So, if you want to travel you have to open up your app and you'll be given a QR code that's coloured either green, yellow or red and that determines whether or not you are allowed to move around cities freely. In China, this technology is enabled using mobile data, Alipay and WeChat apps to track individuals. That may be in some ways scary, but it allows governments to take a very tiered approach in restricting movement and enabling contact tracing. 

We have also seen a much more thoughtful, privacy-preserving approach to contact tracing, such as what the Singapore government has done with the TraceTogether app. Similarly, Google and Apple recently announced that they're going to create a system that, by design, is quite similar to what Singapore has done.

Daniel Martin: What else can be done besides contact tracing? In terms of diagnosis or treatment, is that something that AI can help with as well?

Feng-Yuan Liu: I believe so, but it's early days yet and this situation is going to be with us for months. There are lots of healthcare professionals looking at whether AI can be used in the discovery of a vaccine. DeepMind has been looking at ways you can use AI to map the 3D structure of the virus proteins. AI is very good with needle-in-a-haystack type problems: handling large datasets and looking for patterns in the search for a vaccine. So that's one really exciting way of applying AI.

Before actually doing a swab test, there may be other very early detection signals that hint at the possibility of having the virus prior to showing symptoms. So some companies are researching whether wearable technologies can help in early detection, for example, by tracking increased body temperature or symptoms such as a lost sense of smell. Piecing together this information may help to detect patients who are at high risk long before a swab test can be done. So I think there are some interesting possibilities in the fight against COVID and, as time goes on, we'll be able to see how effective these kinds of technologies can be.

Daniel Martin: What about the possibility of forecasting? Because we’re obviously worried about the next pandemic. I've talked to the World Health Organisation (WHO) and they are working on an early warning or intervention system that leans heavily on data as well. Is forecasting an area where AI could help?

Feng-Yuan Liu: Authorities have relied on researchers to do projections about potential spread: where it's going to spread next based on travel patterns, as well as the severity of the disease. But one caveat, and one caution, is that when developing predictive models, you always want a lot of past data in order to project a forecast into the future. One of the challenges is that because this is a novel coronavirus, we don't have the luxury of historical data. We can take references from other coronaviruses or flus in the past, but we can't be sure what this virus is going to be like until we have enough data to make powerful predictions. So there is reason to be slightly cautious about how far we can forecast and predict how the virus will evolve, because there are so many social aspects involved. A lot of the spread of the virus depends on how people are changing their behaviour and their interactions with other human beings. This is going to be difficult to forecast, even with the best technology.

Daniel Martin: Absolutely... Personal behaviour, social responsibility - that is a big part of the fight against COVID-19. You're not going to find a panacea in AI, but it could help accelerate a lot of the processes that will hopefully help tackle the situation. Right now, in these first three to four months of 2020, have we seen any positive examples of how AI has helped?

Feng-Yuan Liu: I think one really interesting example that I've seen recently is what Google has done with Community Mobility Reports. A lot of countries around the world are trying to do various forms of quarantining and lockdowns, in Singapore's case the 'circuit breaker', and one question authorities will have is: how effective have these measures been? Do I need to dial down? Do I need to dial up? Are people still going out as they have done in the past? 

If you think about when you Google search your favourite restaurant, sometimes the results will tell you that it's really popular at, say, 5pm, or it's less busy at other times. Using a similar sort of technology, which draws on aggregated location data, they're able to assess whether there have been any significant changes in mobility before and after the circuit breaker, for example in traffic at retail parks or workplace transit areas. We can say, for example, that from one week to the next we saw a 20% decrease in the amount of activity in workplace transit areas. And I think this is a really powerful way of helping authorities, especially in the larger countries, to really assess the effectiveness of their measures, allowing them to then adjust whether they need to do more or if they can ease restrictions. So that's one example of AI being very effective which I'm pretty encouraged by.
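To make that week-over-week comparison concrete, here is a minimal, purely illustrative sketch. Google has not published the pipeline behind its Community Mobility Reports; the place categories and visit counts below are hypothetical, and the point is simply how aggregated, anonymised counts might be compared against a pre-lockdown baseline.

```python
# Illustrative sketch only: not Google's actual Community Mobility Reports pipeline.
# The categories and visit counts below are hypothetical.

baseline_daily_visits = {          # average daily visits before the circuit breaker
    "retail_and_recreation": 12000,
    "workplaces": 45000,
    "transit_stations": 30000,
}

current_daily_visits = {           # average daily visits after the circuit breaker
    "retail_and_recreation": 4800,
    "workplaces": 36000,
    "transit_stations": 15000,
}

def percent_change(baseline: float, current: float) -> float:
    """Percentage change in visits relative to the baseline period."""
    return (current - baseline) / baseline * 100

for category, baseline in baseline_daily_visits.items():
    change = percent_change(baseline, current_daily_visits[category])
    # e.g. "workplaces: -20% vs baseline" - the kind of figure an authority
    # could use to judge whether distancing measures are having an effect
    print(f"{category}: {change:+.0f}% vs baseline")
```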

 

The Data Privacy vs Public Good Trade-off

 

Daniel Martin: One of the first things you mentioned was about making sure that people have trust in AI. Understanding how your data is being used, and how it is being used for public good, is incredibly important. When it comes to the legitimate concerns that people might have over how far to go with the use of AI and surveillance, maybe in this context we're thinking - ok - we see the overriding need for public health control and we're perhaps willing to accept some degree of monitoring and things like that. But going forward, how do you think about trust and preserving privacy?

Feng-Yuan Liu: Yes, that is a legitimate concern. In extraordinary times, the privacy trade-off becomes something that some people may be more willing to bear. But I also think that if we understand a lot about how these technologies are designed, then we can assuage some of the concerns. With the TraceTogether app, one of the key paradigm shifts is that the app doesn't collect your location data history. The whole point of contact tracing is to trace contacts - whether you've encountered somebody else who is infected. What it doesn't capture is all the information about where you've been throughout your day. So it actually collects less information and only captures what it really needs. We therefore need to think very carefully about what data is being collected and how to reduce that as much as possible.

The other thing you see, both in the TraceTogether app as well as the new tool that Google and Apple are releasing, is that they're trying to keep most of the data that is being collected locally on your phone, instead of sending and amassing it in the cloud. Private information is stored on your phone until you have been infected, or have been in contact with someone infected, and give consent for data to be shared for the greater good of letting other people know that they are at a higher risk. Another key aspect is that all of these tech companies have used open source code to develop their software, so it is possible for the design of their apps and tech to be scrutinised by the software development community. The openness and transparency of the design is very important in order to hold all of these companies and governments to account.
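To give a feel for what "keeping data on the device and sharing only with consent" looks like, here is a minimal, purely illustrative sketch of a decentralised exchange of random proximity tokens. This is not the actual BlueTrace (TraceTogether) or Apple/Google Exposure Notification protocol, and every class and method name here is hypothetical; it only shows that contact tracing can work without identities or location histories leaving the phone unless the user consents.

```python
# Illustrative sketch only: not the BlueTrace or Exposure Notification protocol.
# All names here are hypothetical.

import secrets
import time

class ContactTracingPhone:
    def __init__(self):
        self.my_tokens = []    # random tokens this phone broadcasts (no identity, no location)
        self.seen_tokens = []  # tokens heard from nearby phones, stored only on this device

    def new_broadcast_token(self) -> str:
        """Generate a fresh random token; rotating tokens prevents long-term tracking."""
        token = secrets.token_hex(16)
        self.my_tokens.append((token, time.time()))
        return token

    def record_encounter(self, other_token: str) -> None:
        """Store a nearby phone's token locally; nothing leaves the device here."""
        self.seen_tokens.append((other_token, time.time()))

    def share_on_diagnosis(self, consent: bool):
        """Only if the user is diagnosed AND gives consent are their own tokens
        uploaded, so other phones can check locally whether they were exposed."""
        if not consent:
            return None
        return [token for token, _ in self.my_tokens]
```

The design choice worth noting is that a phone only ever uploads its own random tokens, and only after diagnosis and consent; other phones then check locally whether they have recorded any of those tokens, so raw encounter logs never need to be centralised.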

In Singapore you’ve seen a real balance and respect for privacy, because I think we realise that trust is a major component of success. If you use technology widely but people don't trust what the technology is doing, then it's much harder to get collective action. What's been interesting to observe is how different countries have been able to gain trust from citizens and how they've used technology to engender this trust.

Daniel Martin: Thank you so much for talking to us today and helping my listeners understand how data and AI can play an important role in the current global situation and, most importantly, what levels of trust we are willing to put into the various problem-solving technologies.