I can remember a time when people laughed at the concept of artificial intelligence (AI) and machine learning. Over the years, we have seen ambitious projects such as Google Glass fall through the cracks, and we have seen AI-driven motor vehicles crash into people because the decision-making platform simply wasn't right.
This has all changed. We now live in a world where AI sits at the top of the technology industry's agenda and machine learning is improving by leaps and bounds.
I recently read an article on enterpriseinnovation.net in which the journalist paints a vivid picture of how AI will influence the world as a pervasive technological force impacting individuals, businesses, and society.
The article points out that while another AI winter seems unlikely, thanks to advances in deep learning this decade, it’s important to separate fact from fiction so that governments can regulate AI in a way that doesn’t stifle its potential, play up to public fears, or create a climate of overhype.
Edinburgh University’s Professor of Epistemics Jon Oberlander spoke to enterpriseinnovation.net and gave his thoughts on the current state of play of this game-changing technology.
Probably a better driver than you are
According to Prof. Oberlander, the answer to whether AI is overhyped is a “very firm yes and no”: the tech is viable, but tangential obstacles exist. He uses driverless vehicles as an example: “I think [they] are not quite as close as we might imagine…The reasons aren’t technical, they’re regulatory.”
The article adds that the first issue with regulating driverless cars is ethical. Imagine a child running into the road after a ball, where avoiding the child would force the car either to swerve into an elderly couple or to injure its own passenger – the AI would have to make that choice in a split second. And where would insurance and the law sit in this type of scenario?
A linked second issue is accountability: Who’s responsible if a driverless car crashes? The manufacturer, tech vendor, or passenger-driver? In the blurry worlds of semi-autonomous vehicles and the impending mix of autonomous and human-driven vehicles, the liability issue gets even more complex. According to Oberlander, “It’s the designers or the owners…of the machines, the self-driving cars, who should be responsible for all of the actions of their tools.”
The article points out that manufacturers are divided: Volvo, for example, made the news in 2015 as the first car maker to say it would accept full liability for its vehicles when driving autonomously, whereas Tesla CEO Elon Musk believes the occupant’s insurance should take the hit for non-design-related faults.
Distrust of AI
The article points out that when assessing the perception of driverless vehicles, surveys in both 2016 and 2017 by the motoring organization AAA reveal that “Three-quarters of U.S. drivers report feeling afraid to ride in a self-driving car.” Research by MIT in 2016 showed similar results: “The trust to adopt these technologies is not yet here for many potential users and may need to be built-up over time,” while another MIT survey found that 48% of respondents wouldn’t buy a fully autonomous car.
Oberlander believes that this mix of public trepidation and unclear regulation is why there’s “a whole lot of arguments that the AIs being developed now are not quite ready to be socially acceptable.”
It’s not just cars
The article adds that a 2016 survey by the British Science Association found that people are reluctant to trust AI in other scenarios: 53% would not trust it with surgical procedures and 62% would not trust it to fly commercial aircraft.
However, this hides the fact that AI is alive and kicking in both cases. In healthcare, the teleoperated Da Vinci system has to date performed more than 3 million operations, and AI is already helping radiologists check scans for tumours.
The article points out that concerning aircraft, Wired addresses the public perception issue in the title of the article, “Don’t freak out over Boeing’s self-flying plane – robots already run the skies.” Reporting on Boeing’s plan to take pilots out of the equation completely by extending more decisions to AI, the writer points out that this isn’t really that far from what’s happening now.
According to Oberlander, though, many AIs are “not doing quite the things that you might think of as being really ‘AI-ish’ just yet.” This is a key point. While narrow AI abounds in fields where a system performs one very specific task outstandingly well, the public’s perception of what AI does is murky because it’s hard to define. Many people therefore have mixed feelings towards it, although few believe in the movie trope of robot overlords.
Nevertheless, we might be going in the wrong direction if regulations are influenced by a collective misunderstanding of AI.
AI’s tech enablers
The article points out that for those in the industry, the technological side of AI is less overhyped than the anticipation of the sci-fi-esque ways it’ll be applied. Its major technology enablers are beginning to fall into place, including broadband connectivity, data centres, cloud, big data and analytics, and IoT.
How do they slot together? Broadband connects the data centres that provide cloud services like computing, storage, and XaaS, including AI-as-a-Service. In large part thanks to the cloud, computer processing and GPU power recently became cheap enough to support massively parallel processing fast enough for deep learning. IoT and its potentially billions of sensors yield the big data that AI’s algorithms need to perform deep learning and analytics.
However, Oberlander points out a current issue with AI’s dependence on big data: “On the one hand, we have a surfeit of data… But, a lot of data is not labelled, and so to use some of the most powerful techniques, supervised learning techniques, you need to label that data.”
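To make that point about labels concrete, here is a minimal, purely illustrative sketch (using scikit-learn and invented toy data, not anything from the article): a supervised model can only be trained on the small slice of data that has been labelled, however large the unlabelled pool is.

```python
# Minimal sketch: supervised learning needs labelled examples.
# Hypothetical toy data; in practice the labels are the expensive part.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A surfeit of raw data: 10,000 feature vectors with no labels attached.
unlabelled_X = rng.normal(size=(10_000, 20))

# Only a small fraction has been hand-labelled (0 or 1).
labelled_X = unlabelled_X[:500]
labels = (labelled_X[:, 0] + labelled_X[:, 1] > 0).astype(int)  # stand-in annotation

# A supervised model can only be fitted on the labelled slice...
clf = LogisticRegression().fit(labelled_X, labels)

# ...even though predictions can then be made over the whole unlabelled pool.
predictions = clf.predict(unlabelled_X)
print(predictions[:10])
```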
Going deep
The article points out that in deep learning applied to computer vision, big data and improved processing power helped Google’s Andrew Ng make a breakthrough in 2012 by bombarding a vast neural network with 10 million video thumbnails from YouTube over three days. In this unsupervised learning scenario the data were unlabelled: the system was given a list of 20,000 items with no instruction on how to distinguish between them.
Over the course of the experiment, it began to detect human faces, human body parts, and cats with 81.7%, 76.7%, and 74.8% accuracy, respectively. “There’s genuine excitement particularly in areas around neural networks and deep learning, where there’s been dramatic progress,” says Oberlander.
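As a rough illustration of that kind of unsupervised learning – and emphatically not a reconstruction of Google’s actual system – the sketch below trains a tiny autoencoder (scikit-learn’s MLPRegressor reconstructing its own input) on invented, unlabelled data. Because the network only ever learns to reproduce the input, whatever structure appears in its hidden layer emerges without labels, loosely analogous to the “face” and “cat” detectors that emerged in the 2012 experiment.

```python
# Minimal sketch of unsupervised feature learning: an autoencoder trained to
# reconstruct unlabelled inputs discovers structure without ever seeing a label.
# Toy synthetic data, not YouTube frames.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Unlabelled "images": 2,000 flattened 8x8 patches with some latent structure.
latent = rng.normal(size=(2000, 4))
mixing = rng.normal(size=(4, 64))
X = latent @ mixing + 0.1 * rng.normal(size=(2000, 64))

# Train the network to reproduce its own input (no labels involved).
autoencoder = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
autoencoder.fit(X, X)

# The learned hidden-layer weights act as feature detectors over the raw data.
hidden_features = np.maximum(0, X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0])
print(hidden_features.shape)  # (2000, 16)
```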
Another exciting field is probabilistic machine learning in natural language processing, which, according to Oberlander, “uses Bayesian Inference for unsupervised language acquisition; basically, just throwing the machine in the deep end.”
The article adds that with Bayesian Inference, statistical learning does not rest on examples of target predictions. Oberlander explains how his colleague from the University of Edinburgh’s School of Informatics, Dr. Sharon Goldwater, used Bayesian Inference “to explain how you can build automatic speech recognition from first principles.”
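As a toy illustration of the underlying idea – not of Dr. Goldwater’s model – the following sketch performs a simple Bayesian update: a prior belief about an unknown probability is revised from raw observations alone, with no target examples to predict.

```python
# Toy Bayesian inference: update a prior belief about an unknown quantity from
# raw observations alone, with no labels or target predictions involved.
# Deliberately simple beta-binomial example, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Observations: a stream of binary events (e.g. "did sound X occur in this frame?").
observations = rng.binomial(1, 0.3, size=50)

# Prior belief about the event's probability: Beta(1, 1), i.e. "anything is plausible".
alpha, beta = 1.0, 1.0

# Bayesian updating: each observation shifts the posterior.
alpha += observations.sum()
beta += len(observations) - observations.sum()

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean of event probability: {posterior_mean:.3f}")
```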
Oberlander also mentions deep reinforcement learning, a crossover point between cognitive science and deep learning that takes a reward-punishment approach to AI learning.
Speaking of Google DeepMind’s success at learning several Atari games by retaining past experience rather than following separate programming for each game, Oberlander says: “There’s a very clear reward function… The numbers that constitute the reward, I think, are what the systems themselves discover.”
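The sketch below illustrates that reward-driven loop with tabular Q-learning on an invented one-dimensional corridor – far simpler than DeepMind’s deep Q-network, but the same principle: an explicit reward function is the only teaching signal the agent receives.

```python
# Minimal sketch of the reward-punishment idea behind reinforcement learning:
# tabular Q-learning on an invented one-dimensional corridor, not DeepMind's
# deep Q-network, but the same principle of learning from a reward function alone.
import numpy as np

n_states, n_actions = 5, 2               # corridor cells; actions: 0 = left, 1 = right
q = np.zeros((n_states, n_actions))      # value estimates, learned only from rewards
alpha, gamma = 0.1, 0.9                  # learning rate and discount factor
rng = np.random.default_rng(0)

def step(state, action):
    """Reward function: +1 for reaching the right end of the corridor, 0 otherwise."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(1000):                    # episodes of trial and error
    state = 0
    for _ in range(20):                  # cap on episode length
        action = int(rng.integers(n_actions))   # behaviour policy: explore at random
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        # Because the update is off-policy, greedy values emerge even from random behaviour.
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state
        if done:
            break

print(q)   # the "move right" column dominates: behaviour learned purely from the reward signal
```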
Artificial General Intelligence (AGI)
The article points out that while there’s clearly a lot of excitement about the cutting edge of AI research, Oberlander isn’t particularly bullish about AGI, believing we’re still “a long way off” from the theoretical singularity at which artificial intelligence equals human intelligence across the whole spectrum of human intellect.
Despite DeepMind’s skill at Atari games, which ostensibly implies some sort of general intelligence, aka AGI, Oberlander believes that “pulling together the narrow intelligence we have now isn’t necessarily the route to that destination.”
He takes a pragmatic view of what’s going to happen over the coming decade: “My feeling is that there’ll be a lot more AI there, but you won’t necessarily notice it.”
The article adds that AI ubiquity, therefore, may pass without much fanfare as far as the reality goes, while regulations could well push back against how fast exciting applications like driverless vehicles and robot assistants become socially acceptable.
In July 2017, The Guardian reported on researchers’ calls for robots to be fitted with an “ethical black box” to explain an AI’s decisions if accidents happen in scenarios like healthcare, security, customer assistance, and driverless vehicles.
The enterpriseinnovation.net article concludes by pointing out that the excitement in the industry is thus tempered by a lack of clear regulations, not just on liability should an accident occur, but also on transparency in AI research and on releasing open-source code, which some companies already do.