A popular cartoon when I was a kid was The Jetsons. Set in a futuristic world, it showed humans living in space and flying cars around like spaceships.
A feature of these vehicles was that they were autonomous. While we are not yet flying our cars around, we are making strides towards autonomy. However, is this beneficial for everyone?
Highly visible
We will gradually see more vehicles becoming autonomous in the future.
The article points out that the migration toward fully autonomous vehicles (AVs) is being enabled by the gradual introduction of multiple, interrelated safety technologies.
Feature by feature, these technologies add new capabilities that assist the safe and efficient operation of new vehicles and move the market toward an increasingly autonomous future.
Looking into the future
The article adds that, according to a new forecast from International Data Corporation (IDC), the number of vehicles capable of at least Level 1 autonomy will increase from 31,4-million units in 2019 to 54,2-million units in 2024, representing a five-year compound annual growth rate (CAGR) of 11,5%.
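As a quick sanity check, the implied growth rate can be reproduced from the unit figures above with the standard CAGR formula. The sketch below uses only the IDC figures quoted in this article:

```python
# Reproduce the compound annual growth rate (CAGR) implied by the IDC forecast.
units_2019 = 31.4e6   # vehicles with at least Level 1 autonomy, 2019
units_2024 = 54.2e6   # forecast for 2024
years = 2024 - 2019   # five-year horizon

# CAGR = (end / start) ** (1 / years) - 1
cagr = (units_2024 / units_2019) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # -> CAGR: 11.5%
```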
Every light-duty vehicle shipped or retrofitted during the 2020-2024 forecast period is assigned a specific level of vehicle autonomy. To determine the autonomy level for each of these vehicles, IDC utilises the industry definitions established by the Society of Automotive Engineers’ (SAE) J3016 “Levels of Driving Automation” standard.
Classification
The article points out that these definitions can be summarised as follows (a rough code sketch appears after the list):
· SAE Level 0 — No Automation: All dynamic driving tasks are performed by the human driver;
· SAE Level 1 — Driver Assistance: Autonomous driving functions may situationally assist by providing either active steering or braking/acceleration support for certain dynamic driving tasks. The driver remains responsible for, and in control of, the vehicle and all other dynamic functions;
· SAE Level 2 — Partial Driving Automation: Autonomous driving functions may situationally assist by providing both active steering and braking/acceleration support for specific dynamic driving tasks. The driver is expected to remain attentive, stay in control, and remain responsible for all other dynamic functions;
· SAE Level 3 — Conditional Automation: The automated driving system (ADS) will perform all dynamic driving tasks on behalf of the driver under certain environmental and roadway conditions. The driver must always remain prepared to intervene and take active control of the vehicle, should the ADS deem such a handover necessary;
· SAE Level 4 — High Automation: When enabled, the ADS can perform all dynamic driving tasks under certain environmental and roadway conditions (often referred to as the system’s operational design domain [ODD]). There is no requirement for a driver to be present or attentive;
· SAE Level 5 — Full Driving Automation: When enabled, the ADS can autonomously perform all dynamic driving tasks across all combinations of roadway and environmental conditions. There is no requirement for a driver to be present or attentive.
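For readers who think in code, the J3016 taxonomy maps naturally onto a simple enumeration. The sketch below is illustrative only; the class and function names are mine, not part of the SAE standard:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarised above."""
    NO_AUTOMATION = 0           # human performs all dynamic driving tasks
    DRIVER_ASSISTANCE = 1       # steering OR braking/acceleration support
    PARTIAL_AUTOMATION = 2      # steering AND braking/acceleration support
    CONDITIONAL_AUTOMATION = 3  # ADS drives; human must be ready to intervene
    HIGH_AUTOMATION = 4         # ADS drives within its ODD; no driver needed
    FULL_AUTOMATION = 5         # ADS drives everywhere; no driver needed

def driver_must_be_attentive(level: SAELevel) -> bool:
    """Levels 0-3 still rely on a human; Levels 4 and 5 do not."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```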
Commitment to change
The article points out that, while fully manual, driver-operated vehicles (SAE Level 0) currently represent the majority of units manufactured worldwide, this is the only autonomy level that will decline over the forecast period.
This decline is a result of the automotive industry’s commitment to advanced driver-assistance systems (ADAS), as well as advancements across the areas of technology, cost efficiency and scale, consumer choice, and government regulation.
As a result, vehicles with some degree of automation (SAE Levels 1-5) will represent more than 50% of all vehicles produced by 2024.
The article adds that vehicles capable of SAE Levels 1 and 2 autonomy represent the two largest areas of autonomous growth and will receive the largest share of both investment and advancement during the forecast period.
Because drivers must remain vigilant and in full operational control when these technologies are enabled, their introduction represents a more acceptable level of risk and liability for both vehicle manufacturers and government regulators.
Manufacturer sentiment toward SAE Level 3 technology has recently improved, including support for limited domains, such as high-speed and low-speed highway driving. IDC expects these systems will be deployed in increasing numbers in the outer years of the forecast.
Fully autonomous, SAE Level 4- and 5-capable vehicles will remain the aspirational, revolutionary goal that has fuelled billions of dollars in investment by new and traditional automotive ecosystem companies.
The article points out that the development and deployment of these vehicles will require significant advances in technology, customer readiness and trust, and government regulation.
Consequently, IDC does not expect any SAE Level 5 vehicles to be available worldwide during the forecast period.
Overall, when reviewing the impact of the consolidated category of highly automated vehicles (SAE Levels 3-5) on global shipments, IDC forecasts that the industry will grow from a modest unit base in 2019 to more than 850 000 units by 2024.
“The pathway to increased vehicle autonomy will be largely built on gradual feature and capability advancements,” Matt Arcaro, research manager: next-generation automotive and transportation strategies at IDC, told it-online.co.za. “Although SAE Level 4, full self-driving vehicles will capture media headlines and will deliver tremendous value to society, the impact of SAE Levels 1 and 2 vehicle growth over the forecast period remains too large to be ignored.”
AI is a potential problem
The major problem with these vehicles is that they are run by artificial intelligence (AI), and AI is highly susceptible to cyber-crime. Will this kill the AV dream? Hope for the best, plan for the worst, Guidehouse Insights’ Sam Abuelsamid told Motor World, adding that the cyber threat to AVs is real, but the industry can take steps today to ensure resilience.
The article points out that there was a time when the only real security concern for vehicle owners was that someone would pop their lock and either steal the stereo or hotwire the engine and drive off. However, as we add increasing connectivity and the electronic controls that will eventually lead to full automation, the risks become exponentially greater. Cyber security is a very real concern that all automakers and suppliers deal with daily.
There was never much cause for concern around cyber security until the late 1990s; even then, it was closer to 2010 before most people really started paying attention. In the early days, most electronic control units (ECUs) in vehicles were not even reprogrammable. The algorithms that ran on those relatively primitive microcontrollers, which powered systems like antilock brakes, were actually encoded right on the silicon dies.
The article adds that in some cases, a chip could be replaced with modified calibrations for the engine management or transmission. Even when reprogrammable flash memory became available, someone would need physical access to the vehicle and a proprietary diagnostic tool to make changes. At that point, you were more likely to break—or ‘brick’—the ECU than accomplish a malicious hack.
The article points out that, fast-forward to 2020, the majority of new vehicles have an embedded LTE data modem, Wi-Fi and Bluetooth, and many reprogrammable safety-critical ECUs. Within the next few years, nearly all new vehicles will be connected in some way, with 5G and vehicle-to-everything (V2X) joining the communication suite. At the same time, more sophisticated, partially automated systems are becoming commonplace.
As we deploy highly automated vehicles (AVs) that can operate without any human intervention, connectivity becomes essential. After all, how can you tell a car to go park itself, or return from the parking garage, or summon a robotaxi if you cannot communicate with it? AVs will also need to download map updates and traffic and road conditions, enable teleassist capability, and more, all in real time.
Why hack a car?
Who is likely to attempt a hack on a car, and why? There are those who will attack a system just to see if they can do it, and what they can accomplish. Similarly, the vandal may simply be out to cause some seemingly minor trouble, like disabling a friend’s car. The more troubling cases could involve active attempts to steal data or otherwise commit financial crimes, and those involving state actors.
The article points out that the first confirmed hacks shared with the public came out in 2015, and both were executed by security researchers. A team from the University of Washington managed to get into GM’s OnStar telematics system and showed how they could manipulate steering, braking, the engine, and other systems remotely. GM was notified of the vulnerability and corrected it before it was made public. A similar attack was famously executed by Charlie Miller and Chris Valasek on a Jeep Cherokee using vulnerabilities in the Chrysler Uconnect system and wireless provider Sprint. That incident led to the recall of more than one million vehicles to have their telematics systems updated.
Imagine a scenario in the not-too-distant future where thousands of AVs roam around a large city, and millions exist worldwide. Each is continuously connected to the others, as well as to data centres. What if those vehicles suddenly came to a stop, and a message appeared on infotainment screens demanding payment of one million bitcoins to release the cars? There would be instant gridlock across countless cities.
The article adds that this is an example of a ransomware attack, which in truth is probably the least of the industry’s worries. What if someone found a way to infiltrate a data centre and send a command to the entire fleet to accelerate as quickly as possible? Or to tell every AV to turn left immediately? The potential casualties in cities around the world could be enormous. This would be an unacceptable outcome of the move to take human drivers out of the loop.
What is the solution?
The article points out that the first step to a solution is admitting there is a problem. When the first demonstrations of security vulnerabilities in vehicles occurred around 2009 and 2010, automakers publicly denied a problem existed. By 2015, that had changed. GM appointed its first chief product cyber security officer, Jeff Massimilla, and began creating a team entirely focused on security within its product development organisation.
Several automakers, including Tesla, FCA and GM, established responsible disclosure or bug bounty programmes, while others had less formalised processes. Responsible disclosure programmes have proven essential in many industries, such as technology, financial services, and aviation. These programmes provide security researchers like Miller and Valasek a pathway to report any vulnerabilities they discover to the manufacturer before they are disclosed publicly. This gives the manufacturer an opportunity to correct the problem, hopefully before bad actors can exploit it. Increasingly, security researchers who have demonstrated an ability to find vulnerabilities receive job offers from the very companies whose products they infiltrate. Miller and Valasek are now responsible for security engineering at Cruise, the GM subsidiary developing its automated driving system.
The article adds that, like many other industries, the auto industry formed an information sharing and analysis centre (Auto-ISAC). ISACs provide member companies with an organisation where they can share information about security threats and best practices in a non-competitive environment. In the auto industry, the challenge with cyber security is the long value chain along which attacks can happen or vulnerabilities can be introduced. Any given vehicle programme has thousands of engineers working on it, with an ever-increasing number of them focused on software and electronics development.
One of the changes within the industry is the implementation of new development, review, and test processes. Rather than treating security as an afterthought, it must be designed in from the ground up for both software and hardware. New verification tools continuously test the software for flaws that could be exploited to inject malicious instructions. Access to code repositories must be controlled and changes must be documented, maintaining a chain of trust. That documentation is important for engineers working on the software and for regulatory purposes. In Europe, software is included in the type approval process before vehicles can be sold, as well as for after-sales service. Once a vehicle has received its type approval, any software changes that affect regulated systems must go through an amended type approval process.
Notably, this has affected Tesla, which pushes out regular and frequent updates to its customers for many features including its Autopilot driver assistance system. Some features distributed to Tesla owners in North America are not available in Europe because Tesla has not submitted them for amended approval. New development tools are becoming available to automate this process of documenting what has changed.
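One way to picture a chain of trust over documented changes is a hash-chained log, in which each entry commits to everything before it, so a single tampered record invalidates the rest. The sketch below is hypothetical and vendor-neutral, not any particular tool:

```python
import hashlib
import json

def append_change(log: list, author: str, description: str) -> None:
    """Append a change record whose hash covers the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"author": author, "description": description, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"author": entry["author"],
                              "description": entry["description"],
                              "prev": entry["prev"]}, sort_keys=True).encode()
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

changes = []
append_change(changes, "engineer-a", "retune brake controller gain")
print(verify_chain(changes))   # -> True
```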
The article points out that systems are needed in vehicles to maintain security. With most ECUs now being reprogrammable, it is crucial to establish that only verified updates are ever applied. A number of suppliers now offer systems for encrypting and digitally signing software update packages. In the vehicle, the digital signatures must be verified before the updates are applied. Another solution is to continuously check the software against known cryptographic hashes to make sure it has not been tampered with.
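A verify-before-apply step might look like the following sketch, which uses Ed25519 signatures from the widely used Python cryptography package. The flash_to_ecu routine is a hypothetical stand-in; real ECU update flows are vendor-specific:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def flash_to_ecu(package: bytes) -> None:
    """Hypothetical stand-in for the actual ECU write routine."""
    print(f"flashing {len(package)} bytes")

def apply_update(package: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Refuse to flash an update unless its signature verifies against
    the manufacturer's public key provisioned in the vehicle."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, package)   # raises on any mismatch
    except InvalidSignature:
        return False                            # reject tampered package
    flash_to_ecu(package)
    return True
```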
Monitoring systems embedded in the vehicle can continuously watch all of the message traffic across the vehicle network, looking for anomalies that might indicate either an attack or even just an error. When these anomalous messages are detected, they can be blocked, the system can go into a fail-safe mode, and the driver or control centre can be alerted.
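In its simplest form, such a monitor compares observed traffic against a whitelist of expected message IDs and per-ID rate ceilings. The sketch below is a toy model; production intrusion-detection systems are far more sophisticated, and the IDs and rates shown are invented:

```python
import time
from collections import defaultdict

# Expected CAN arbitration IDs and crude per-ID rate ceilings (msgs/second).
EXPECTED_RATES = {0x100: 100, 0x200: 50, 0x300: 10}   # illustrative values

class BusMonitor:
    def __init__(self) -> None:
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def on_message(self, can_id: int) -> bool:
        """Return True if the message looks normal, False if anomalous."""
        now = time.monotonic()
        if now - self.window_start >= 1.0:   # reset each one-second window
            self.counts.clear()
            self.window_start = now
        if can_id not in EXPECTED_RATES:     # unknown sender ID on the bus
            return False
        self.counts[can_id] += 1
        return self.counts[can_id] <= EXPECTED_RATES[can_id]
```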
AVs will feature levels of redundancy and diversity in their actuation, electronic, and software systems never before used in the automotive industry. With no human driver in place to take over if something fails, backup compute platforms are required. AVs will likely use backups with distinct hardware architectures and software algorithms that execute similar functionality. These can be used to verify that the primary compute platform is functioning properly, and to bring the vehicle to a safe, minimum-risk condition if a serious problem is detected.
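The cross-check between diverse compute platforms can be sketched as follows: two independently implemented planners each produce a command, and any disagreement beyond a tolerance triggers the minimal-risk fallback. The function names, the command representation, and the threshold are all invented for illustration:

```python
def cross_check(primary_cmd: float, backup_cmd: float,
                tolerance: float = 0.05) -> float:
    """Compare steering commands from two diverse compute platforms:
    agreement -> trust the primary; disagreement -> fall back."""
    if abs(primary_cmd - backup_cmd) <= tolerance:
        return primary_cmd
    return enter_minimal_risk_condition()

def enter_minimal_risk_condition() -> float:
    """Hypothetical fallback: command a gentle, straight-line stop."""
    print("Platforms disagree - bringing vehicle to a safe stop")
    return 0.0
```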
The article adds that it is not just the developers and the vehicles that need to be secured: the network infrastructure that manages AVs must be too. Control centres will most likely be the primary attack surface for bad actors. Many networks have been breached over the past decade, from banks and manufacturing to retail and movie studios. If attackers found a vulnerability in a remote-operation system, a dispatch platform, or a map-update service, an attack could spread to the entire fleet.
Best practices need to be implemented at every level of the chain when deploying AVs. This includes designing data and control centres for security from the ground up.
Resilience
The article points out that, ultimately, every honest security expert will admit that it is impossible to guarantee that any complex system is completely secure. Anyone who says otherwise is lying or deluded. That means that AVs must also be designed to be resilient to attacks. Systems need to be put in place to mitigate the risks if anything goes wrong, because sooner or later it will. Redundant and diverse systems are an important piece of the puzzle. So are constant monitoring and rapid response when issues are detected.
If the industry fails on any of these many fronts, from development to validation to dispatch to updates, it will quickly dampen any enthusiasm that the public and regulators have for AVs. However flawed humans are as drivers, malicious actors rarely take control of them remotely. A hack of a social network, department store, or even a bank is annoying and can be costly, but it is rarely deadly. The same cannot be said of AVs.
As AVs are deployed in the coming years, everyone involved must hope for the best and plan for the worst.
The limitations of AI
I recently read an article which pointed out that there is a tendency to believe that artificial intelligence can solve all problems associated with enterprise security programmes. While there is a lot to gain from AI, there is a danger that companies are overly optimistic about exactly what the technology can deliver. When leveraged for the right use cases, AI has the power to move security teams away from the never-ending cycle of ‘detect – respond – remediate – reprogramme’, towards an approach to security that is more proactive, effective, and less like a game of ‘whack-a-mole’. But if companies invest in AI with the belief that it can fill the resource gap left unfilled by the ongoing cyber security skills crisis, then they are sorely mistaken.
The level of human involvement required by AI tools is significant. AI is not able to stop zero-days or other advanced threats, it is known to give false positives, and it cannot yet learn rapidly enough to keep pace with the break-neck speed at which malware evolves. If a technology promises machine learning capabilities, it is wise to investigate whether the solution actually relies on rule-based programming rather than genuine machine learning algorithms.
The article points out that if AI is deployed without a prescriptive process and resourcing plan, it is plausible that threats will slip through the cracks undetected. And the resourcing plan will require a lot of heavy lifting to keep the tools running properly – they will likely end up consuming more resources than you are willing or able to spare. Once deployed, it is also possible that an AI cyber security tool will be programmed inaccurately. In some cases, this could result in algorithms failing to spot malicious activity, which could end up disrupting the entire company. If the AI misses a certain type of cyber-attack because certain parameters have not been properly accounted for, there will be inevitable problems further down the line.
Automation generates efficiencies, AI creates resource drain
The article adds that AI should not be used to paper over the cracks. It will struggle to solve existing cyber security issues if the organisation deploying the technology does not have solid foundational security in place. At a time when budgets are under great scrutiny as companies fight to ride out the recession, AI is a technology that can squarely be considered a ‘luxury’. Many of the problems that AI claims to fix persist, and experienced security analysts who can make impactful decisions with proper context and insight into their attack surface cannot yet be displaced by AI. Organisations are still challenged by a multitude of complexities. For example, there is an influx of new vulnerabilities, with another 20 000 new known flaws predicted by the year’s end. As vulnerabilities continue to add up, future exploits are inevitable but, in many cases, avoidable with the correct protocols in place. To address this, firms need to introduce more effective remediation strategies and gain more insight into their fragmented environments so that they can build security programmes that give them a competitive edge.
While AI can be time- and resource-intensive, there are more efficient means of tackling these pressing issues. Automating processes – change management, for example – will lighten the load on already-stretched teams. Introducing automation will free up resources and enable the development of a more considered cyber security function.
Context-aware automation tools can be used in a host of useful ways. They can clean up and optimise firewalls, spot policy violations, assess vulnerabilities without a scan, match vulnerabilities to threats, simulate end-to-end access and attacks, proactively assess rule changes, and more. With the right tools in place, all of these processes can be automated, and organisations can develop more efficient and informed ways of working. And while the great leaps forward promised by AI may be enticing, it is advancements like these offered by analytics-driven automation that will deliver the greatest tangible benefits.
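As one concrete example of what cleaning up a firewall can mean, a script can flag shadowed rules: rules that can never match because an earlier rule already covers their traffic. This is a simplified sketch using only Python's standard ipaddress module; real policies match on far more than source network and port:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    source: str   # CIDR block, e.g. "10.0.0.0/8"
    port: int
    action: str   # "allow" or "deny"

def find_shadowed(rules: list[Rule]) -> list[int]:
    """Return indices of rules fully covered by an earlier rule
    (first-match semantics: the later rule can never fire)."""
    shadowed = []
    for i, later in enumerate(rules):
        later_net = ipaddress.ip_network(later.source)
        for earlier in rules[:i]:
            if (earlier.port == later.port and
                    later_net.subnet_of(ipaddress.ip_network(earlier.source))):
                shadowed.append(i)
                break
    return shadowed

rules = [Rule("10.0.0.0/8", 443, "allow"),
         Rule("10.1.0.0/16", 443, "deny")]   # shadowed by the rule above
print(find_shadowed(rules))                  # -> [1]
```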