It’s almost a truism now that the advance of technology outpaces that of society. From the legal limbo of self-driving cars to the 19th-century Luddites and their opposition to mechanical looms, history is rife with examples of how the spread of particular technologies has been slowed by the mulishness of governments and people. Yet whereas most such examples concern more or less fully developed technologies that unprepared societies simply block, there’s one area where unprepared societies threaten to prevent a technology from fully developing in the first place.
This is artificial intelligence, which, despite the exponential progress it has recently made in restricted pockets, is confronted with a society and an economy ill-equipped to harness its full potential. The point was underlined recently by a survey of 1,600 global IT executives by Infosys, which found that “only 10% of respondents’ organizations are fully maximizing the current available benefits of AI.” Delving further, the survey also revealed that 54% of executives “feel that a lack of in-house skills to manage AI … is a concern,” and that 49% “report a lack of knowledge in their organizations about exactly where AI can assist.” While these aren’t truly terrible results, they show that even in an industry — IT — that’s supposed to be tech-savvy, there’s a noticeable shortage of personnel trained and skilled enough to make full use of AI.
And this shortage isn’t unfortunate simply because it means missed opportunities to do more business, but also because it means missed opportunities to develop AI and make it work better. Indeed, it has longer-term implications than merely failing to make the best use of the AI already out there, because AI isn’t a technology that’s developed solely in labs or factories and then released fully formed for general commercial use. Instead, apps and devices with artificial intelligence are improved precisely by being used in context, by adapting and being reprogrammed in accordance with the particular ends assigned to them. And they’re like this by definition, since they enhance their ability to perform certain tasks by doing just those tasks for themselves and learning from their mistakes.
As IBM’s John Kelly said of the Jeopardy-winning Watson, the typical AI “has no inherent intelligence as it starts […] But as it’s given data and given outcomes, it learns, which is dramatically different than all computing systems in the past.” In other words, its programming isn’t complete until it’s planted in a particular working environment, which then acts as an informal engineer and coder, selecting the outputs and behaviors the system will produce in order to achieve its particular goals. That most AI systems work like this can be seen, for example, in applications as familiar as Google’s image recognition software, which comes to recognize human faces not by being directly coded to recognize them, but by being put in a situation where it’s forced to recognize them.
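Kelly’s point — that such a system starts with “no inherent intelligence” and acquires its behavior only from data paired with outcomes — can be made concrete with a toy sketch. The example below is purely illustrative and not any vendor’s actual code: a minimal perceptron that is never told the rule for logical OR, only shown inputs and desired outcomes, and that adjusts its weights until its behavior matches them.

```python
# Illustrative sketch only: a minimal perceptron that learns OR from
# examples. It starts with zeroed weights ("no inherent intelligence")
# and acquires the behavior solely from (input, outcome) pairs.

def train_perceptron(examples, epochs=10, lr=0.1):
    """Learn weights from (input, outcome) pairs instead of hand-coded rules."""
    w = [0.0, 0.0]  # input weights, initially zero
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred      # the "outcome" feedback signal
            w[0] += lr * err * x1    # nudge weights toward the desired behavior
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The training data *is* the specification: OR, expressed as examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # prints [0, 1, 1, 1]
```

Nowhere does the code state what OR means; change the outcome column in `data` and the same program learns a different behavior. That, in miniature, is why the deployment environment acts as an “informal engineer.”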
It’s because of this that reports of the unpreparedness of firms and professionals in most AI-relevant industries are worrying. For instance, Forrester Research released a survey in June 2016 showing that only 39% of companies were prepared for AI on a technological level. This is a troubling statistic insofar as Forrester has also predicted elsewhere that there’ll be a 300% increase in investment in AI in 2017, and insofar as research from Narrative Science found that 64% of companies expect to be employing AI in one way or another by 2018. Given that they aren’t as ready to use artificial intelligence as they’d like, these companies could soon find that their introduction of AI doesn’t boost productivity as much as they might’ve imagined.
But once again, their lack of competence and familiarity with AI isn’t just a threat to them. It’s also a threat to AI itself, since all available indicators suggest that AI technologies will need input from more than Silicon Valley alone to come anywhere near matching their hype. Perhaps the biggest indicator of this is that those most heavily involved in pioneering the development of AI have opened their wares to the public, with Google sharing its DeepMind AI training platform with would-be programmers, and with Elon Musk’s OpenAI doing something similar for its own platform. Such firms are doing this not for the fun of it, but because making AI software generally available for public use is essential to its progress and maturation. Without such sharing, they couldn’t hope to make significant progress.
And they won’t make significant progress if businesses and organisations fail to staff themselves with adequately trained employees. Even with the AI machine that famously beat the world’s best players at Texas Hold ’Em, its programmers reportedly made alterations between sessions to keep it adjusted to changes in how the human competitors were playing. This goes to show that, even with the most current examples of the technology, AI will have to be repeatedly modified on the job by its own users if those users want it to do that job properly.
This need for users to take an active role in helping with the development of AI shouldn’t seem all that surprising when we remember that, apart from a few exceptions, most AI applications and devices won’t come off the shelf with specific functions. They won’t come pre-programmed or loaded with specific tasks to perform, but will have to be set tasks by the companies and organisations using them. As such, they’ll be extensions of employees more than separate tools in the traditional sense, which is ultimately why employers — and education systems — need to begin giving employees the training and skills needed to get the most out of them. Otherwise we might spend many more years sitting around waiting for the next industrial revolution to arrive.