Artificial intelligence (AI) is touted as the next big disruptive force in technology that will transform public services and business. Olly Buston assesses the benefits and risks.
We are in the early stages of a revolution, in which the development of artificial intelligence (AI) may well have a more profound impact than the industrial revolution—and over a shorter time frame.
There is the potential for businesses to benefit enormously. But there are also important risks that those at the top need to be informed about, and for which they will need to help find solutions.
Defining artificial intelligence is a complicated task, mainly because the concept of intelligence itself is hard to pin down. A simple definition of intelligence would be “problem solving”. Many people would also include an ability to learn and adapt as another key feature of intelligence.
The recent explosion of interest in AI has largely been driven by advances in “machine learning”: computer programs that automatically learn and improve with experience. Perhaps the most famous examples are Google DeepMind’s systems, which have learned to be very good at a wide range of classic Atari video games, and at the ferociously complicated board game Go.
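To make “improving with experience” concrete, here is a minimal illustrative sketch (not from DeepMind or any system named in this article): a program that starts knowing nothing and gradually learns the rule behind its example data by nudging two numbers to reduce its prediction error. The target rule and learning rate are arbitrary choices for illustration.

```python
# A toy "machine learning" loop: learn to predict y = 2x + 1 from examples.
# The program is never told the rule; it only sees (x, y) pairs and
# repeatedly adjusts its guesses (w, b) to shrink its prediction error.

def train(examples, epochs=2000, lr=0.01):
    w, b = 0.0, 0.0  # start with no knowledge of the rule
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + b
            err = pred - y     # how wrong was this prediction?
            w -= lr * err * x  # nudge parameters to reduce the error
            b -= lr * err
    return w, b

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # the "experience"
w, b = train(data)
print(round(w, 2), round(b, 2))  # converges close to 2 and 1
```

The key point for a non-specialist reader is that nobody programmed the answer in: the quality of the program’s behaviour comes from the data it has seen, which is also why the quantity and quality of data matter so much in the discussion that follows.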
AI is already enabling a wave of innovation across every sector of the economy. It helps business use resources more efficiently, allows new approaches to old problems, and enables entirely new business models to be developed, often built around AI’s powerful ability to interrogate large data sets.
In the future, increasingly powerful and flexible AI will drive advances in productivity in almost every business area imaginable. As Kevin Kelly, founding editor of Wired, says: “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI.”
AI will also drive great improvements in the diagnosis and treatment of diseases; in the delivery of public services; and in the safety and convenience of transport.
Right now, most people may not feel that AI is having a big impact on their lives. Yet artificial intelligence systems already trade shares on the stock market, filter our email spam, recommend things for us to buy, navigate driverless cars, and in some places can even determine whether you are paid a visit by the police.
Over the coming decades, the impact will be felt much more profoundly. IBM CEO Ginni Rometty says that her organisation is betting the company on AI. And according to Google CEO Sundar Pichai, the tech giant is “thoughtfully applying it across all our products, be it search, ads, YouTube, or Play”.
Huge investment in AI development by these and other companies, governments and militaries, combined with the proliferation of data for AI to crunch and the ongoing increase in available computing power, all point to an intelligence revolution that will have an impact on a scale that we can only begin to imagine.
This revolution will also bring important challenges. Economists argue furiously about whether or not AI will lead to mass unemployment. The pessimists suggest that as machines take on more and more routine intellectual tasks on top of the physical tasks that they already excel at, little work will be left for humans to do.
The optimists think fears of mass unemployment stem from limitations of our imagination. Historically, technological advances have created jobs in previously unimaginable sectors. Enter the words “social media jobs” into any recruitment website, for example, and you will find hundreds of jobs that simply did not exist ten years ago. One estimate predicts that 65% of children in primary school today will be working in a job that doesn’t exist yet.
What pretty much everyone does agree on is that the rate and scope of change in employment patterns will be unprecedented. Change will affect different geographies and demographic groups differently.
The future is very hard to predict, but the displacement of many professional drivers by driverless vehicles is surely a matter of “when”, not “if”. It also seems quite likely that most jobs in call centres will simply disappear once AI-based call-centre systems cross a certain quality threshold. In the UK, around one million people work in call centres, often in areas that have already been hit by industrial decline.
In our report, An Intelligent Future?, Future Advocacy calls on the government to commission detailed research to assess which jobs are most at risk by sector, geography, age group, and gender. We think the government should then implement a smart strategy to address future job losses through retraining, job creation, financial support, and psychological support.
Jobs that are likely to resist automation are those that require creativity, lateral thinking, interpersonal skills, caring, and adaptability. Our education system should focus on fostering these skills, as well as the critical STEM skills of science, technology, engineering, and mathematics.
Given the likely rate and scope of change in job markets over the coming years, a focus on self-directed, lifelong learning techniques will be essential to creating a flexible and dynamic workforce.
A second important challenge relates to the opaque nature of AI. AI systems are often deployed as a background process, unknown and unseen by the people they impact. In any case, it is often hard even for the computer scientists who write the code to understand exactly how AI arrives at the decisions it makes.
This is particularly true of some complicated machine-learning algorithms that evolve over time. This “black box” issue is exacerbated by the fact that significant stores of data are not in the public domain, so it is impossible to test or challenge results.
Recidivism software, widely used by American courts to assess the likelihood of individuals re-offending, was found to falsely flag black people at almost twice the rate of white people. The example is especially concerning because the algorithm is protected under intellectual property law and is not open to scrutiny.
Opening up the “black box” is already a focus of some governments. In the future it is likely that businesses and other actors will need to be able to provide an explanation that people can understand as to why decisions have been made. The European Parliament recently adopted new regulations that include such a “right to explanation” over algorithmic decisions that “significantly affect” individuals’ lives.
A final challenge relates to privacy and access to data. Most of us are clueless about what data is collected about us, by whom, and for what purpose. Both parties tacitly acknowledge this when the acquisition of consent is reduced, literally and figuratively, to a farcical box-ticking exercise.
What a strange contract it is when the customer signing doesn’t read it, the company knows the customer doesn’t read it, and the customer knows that the company knows that the customer doesn’t read it.
People need greater clarity on who collects what, and for what purpose. We need to agree on the rights of the various parties, and to understand how to access information about how our personal data is stored and used. Public debate should also focus on the uncertainties around how data might be used in the future.
We need a “New Deal On Data” between citizens, business and governments. This will be good for the public. But it is in the interests of business too, because it will build trust. If we do not do this we risk undermining public confidence in AI technology, sparking opposition to its uptake.
So, what should boards be focusing on right now when it comes to AI? As with much technological change, we tend to overestimate the short-term impact and underestimate the longer-term impact: we draw a straight line from the present day, when an exponential curve is the more likely shape of the future. But there are significant immediate opportunities for companies to seize. For now, as James Hodson—CEO of the AI for Good Foundation—argues, companies should focus on using AI to address simple problems.
Currently AI works well where the problem is routine, well defined and well understood, and where ample data is available to make good decisions.
Other problems are best solved by humans for now, or by a combination of humans and machines. But it would be a mistake to think of AI applications too narrowly. As we’ve seen from the internet, disruptors can emerge from nowhere with new business models to threaten established players. Company boards also need to think about how they will address the important challenges that AI will bring.
As AI takes on more routine intellectual tasks, how will companies attract the talent needed to do the remaining highly skilled tasks? How will they attract those people who are able to work best alongside AI? How can employees whose work is taken over by AI, and whose time is freed up, be up-skilled to add greater value to companies?
Boards should also assume that issues of transparency in AI decision-making and of privacy in the use of personal data will become increasingly important and that customers will value businesses that take these issues very seriously.
The future is not at all clear, but one thing is certain: great changes are coming, and board members will have a critical role to play in maximising the opportunities and minimising the risks, both for the bottom line and for the good of society.
Olly Buston is the founder of Future Advocacy (www.futureadvocacy.org) and author of its new report, “An Intelligent Future?”
Follow him on Twitter: @ollybuston
This article was first published on Board Agenda.