Where can we derive the best socio-economic value from artificial intelligence?
23 May 2018
Artificial Intelligence (AI) is nothing new. It’s been around for at least 60 years. What is new is that it’s now accessible to a whole range of people and organisations. What was once the preserve of NASA is now used routinely by businesses and citizens every day. Progressive miniaturisation and integration of computing systems mean that we all walk around with enough computing power in our pockets to guide 120 million Apollo rockets simultaneously.
Despite this, autonomous vehicles are not yet ubiquitous, drones are not operating at long range beyond visual line of sight, and we still have millions of people working in hazardous environments every day. So what’s going wrong?
As is often the case with disruptive innovation, we now have a technology push that is perhaps stronger than the consumer pull. We may be able to do smart things, but who actually wants them? And here’s where it gets complex, for a number of reasons.
People are scared of robots
We’ve all seen RoboCop, right? What do we do when a robot goes bad? What is the social backlash when a driverless car has an accident? Over 1,000 people die on the world’s roads every day, but how many of these fatalities make the mainstream media? Yet when an autonomous car hurts someone in the USA, it makes global TV news, and when a driver goes to sleep in the passenger seat of a Tesla on a UK motorway, he is vilified in the press. So we need to accept that the world’s population is nervous about AI, make sure that we deploy it in ways that don’t put the public at risk, and embrace appropriate regulation that keeps people safe.
Service and technology providers need to work with policymakers and regulators to create and adapt regulatory frameworks to provide appropriate control and governance, whilst enabling advancement. Current areas of focus are around the deployment of autonomous vehicles on the highway and the use of drones in urban areas and controlled airspace, such as around airports.
Deployment of AI is definitely an area where the adage ‘two steps forward, one step back’ is not something to embrace. People will remember the failures far more than the successes, so it is essential to deploy carefully, in a measured and managed way, to minimise the risk of failure and harm.
People value their privacy
The news is full of data breaches (how can we forget the latest one, Facebook?); they seemingly occur every day. The only reason there is not more of an adverse reaction is that people generally recognise a net positive value from deploying AI to manage and learn from their data. It’s a trade-off: people will continue to share their data provided that the perceived benefits outweigh the risks of data loss.
Whilst lawmakers and regulators need to continue to protect us, we need to make sure that our uses of data are honourable and value-adding. As a service provider, it’s an easier equation to tackle. We can use data and AI to make life better for citizens (directly and indirectly) where they live, work and travel. Improving energy efficiency, journey reliability and passenger experience, and designing and implementing citizen-centric services, are all practical, real-life examples of making things better through AI, with negligible data risk.
If AI is an accelerator for human behaviours, clearly we want to accelerate the good, rather than promoting the bad.
People want to protect their jobs
There are lots of things we can do with AI; the key is to prioritise the things we should do. If the untargeted use of AI results in reduced employment in a community where adaptation of roles is difficult, is that a social advance?
There are some clear uses of AI that deliver both economic and social benefit. Firstly, we need to use AI to remove people from hazardous environments. This has untold value for everyone. We have the capability, so why do we still have people working in high-risk roles in, for example, oil and gas, road and railway maintenance, and waste processing? Our focus is on taking people out of these environments by harnessing AI to manage the repetitive work that can realistically be automated.
The other area where there is no doubt about the value of AI is inspection and testing. In a world of scarce skills, why would we use our most experienced people to inspect products and assets that are in good condition? This is where AI comes into its own and is most reliable: continually inspecting and monitoring condition to ensure it stays within pre-determined limits, and alerting people when it moves outside those norms. That is the point at which high-value human resources should be deployed, because it is typically more efficient to use a ‘human processor’ there, whether maintaining infrastructure or diagnosing healthcare patients. A minimal sketch of this threshold-based monitoring pattern is shown below.
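To make the pattern concrete, here is a minimal sketch of threshold-based condition monitoring in Python. The sensor names, limits and escalation step are hypothetical illustrations under assumed operating envelopes, not a description of any particular deployed system.

```python
# Minimal sketch: readings within pre-determined limits are treated as routine;
# anything outside the limits is escalated to a human for inspection.
# Sensor names and limits are hypothetical examples, not real asset data.

from dataclasses import dataclass


@dataclass
class Limit:
    low: float
    high: float


# Hypothetical operating envelopes for an asset's monitored conditions.
LIMITS = {
    "bearing_temp_c": Limit(low=10.0, high=85.0),
    "vibration_mm_s": Limit(low=0.0, high=7.1),
    "track_gauge_mm": Limit(low=1430.0, high=1442.0),
}


def check_reading(sensor: str, value: float) -> str:
    """Classify a reading as 'ok' or 'alert' against its pre-determined limits."""
    limit = LIMITS[sensor]
    return "ok" if limit.low <= value <= limit.high else "alert"


def triage(readings: dict) -> list:
    """Return the sensors whose readings need human attention."""
    return [s for s, v in readings.items() if check_reading(s, v) == "alert"]


if __name__ == "__main__":
    sample = {"bearing_temp_c": 92.4, "vibration_mm_s": 3.2, "track_gauge_mm": 1436.0}
    for sensor in triage(sample):
        # In practice this would raise a work order or notify an engineer;
        # here we simply print the escalation.
        print(f"Escalate to human inspector: {sensor} = {sample[sensor]}")
```

The point of the sketch is the division of labour: the automated check handles the routine, in-limits monitoring, and people are only pulled in for the exceptions.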
This is where our focus on recruiting the brightest and best in maths and computer science needs to be balanced with developing talent in consulting skills. Consider the demographic spread of relevant skills: our technical AI workforce comprises mainly Millennials and Gen Z, whereas our consultants sit closer to the baby-boomer end of the spectrum. The key is to design effective knowledge-management processes so that the AI wizards of the future know where to deploy their skills for maximum economic impact and social value.