
The Google Self-Driving Car Becomes a Reality

Tue, 09/01/2015 - 17:00

Imagine if you could jump in a car, press a button, and then relax while it takes you to your destination. Imagine if you could sit in the front seat playing video games or watching TV without worrying about the traffic. Imagine if you could spend your commute home from work talking and engaging with your loved ones, just as you would at home. Now imagine a world where the roads are filled with other vehicles just like yours, all communicating with each other to ensure safe journeys for everyone. This may have once sounded like the setup for a science fiction novel, but the reality of automated vehicles is now within our reach.

The Google Self-Driving Car Project began life in 2009, when the company first started testing the technology with the Toyota Prius, later moving on to the Lexus RX450h. After lobbying for the creation of robotic car laws in Nevada, Florida, and California, Google went on to complete over 480,000 km of self-driving freeway tests by the end of 2012. At that time, the company was still adapting existing cars, fitting them with all the technology necessary to function without a driver.

Thanks to the large number of electronic systems in today’s vehicles, getting the cars to move on their own was not difficult. Nevertheless, in order to drive with the same level of awareness as a human being, or better, a car needs enough sensors to see everything happening around it, and it must process that information and store it for similar situations in the future. This alone would suffice if road conditions were always predictable, but, as any experienced driver knows, innumerable variables can change in a fraction of a second, turning a pleasant drive into a stressful experience. With that in mind, Google came up with four main questions that the car should be able to answer.

Firstly, the vehicle needs to know exactly where it is. Thanks to GPS, Google Maps, and Waze, global positioning is not a problem, but while these technologies are incredibly useful, they only give an approximate location of the car. That is not sufficient for an autonomous vehicle, which needs its exact position to within a margin of centimeters. Naturally, this poses a greater challenge, but, with the right LIDAR sensor and a proper 3D mapping solution, it is entirely possible. A LIDAR (Light Detection and Ranging) sensor is a system capable of determining the distance between the vehicle and any object around it. By placing a rotary LIDAR on the car’s roof, Google’s self-driving car was able to build a fragmented image of its surroundings. After driving several times on the same road, the system can construct an accurate model of the street, picking up on any alterations or anomalies each time the car traverses it. This may sound like a simple process, but the amount of computation involved is massive, and it must be quick enough to determine the car’s position as it moves along the road.
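The core of the ranging idea is simple to sketch. The snippet below is an illustrative toy, not Google's actual pipeline: it converts the round-trip time of a laser pulse into a distance, and projects one beam of a rotary sensor into the car's local coordinate frame. Function names and the simulated sweep data are hypothetical.

```python
# Toy sketch of rotary LIDAR ranging (illustrative only, not Google's code).
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def polar_to_cartesian(distance_m: float, bearing_deg: float) -> tuple:
    """Project one beam of the rotating sensor into the car's x/y frame."""
    theta = math.radians(bearing_deg)
    return (distance_m * math.cos(theta), distance_m * math.sin(theta))

# One simulated sweep: (round-trip time in seconds, beam angle in degrees).
sweep = [(100e-9, 0.0), (200e-9, 90.0), (133e-9, 180.0)]
points = [polar_to_cartesian(range_from_time_of_flight(t), a) for t, a in sweep]
```

A pulse returning after 100 nanoseconds corresponds to an object roughly 15 m away; accumulating many such points over repeated drives is what lets the mapping system refine its model of the street.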

The second question to answer is what objects surround the car, which the LIDAR partially addresses as it continuously detects everything around it. Combined with a complete array of sensors and cameras, the computer is able to detect sizes, shapes, and movement patterns in order to determine the nature of nearby obstacles. After recognizing its surroundings, the vehicle needs to predict what these objects are going to do next. The combination of external sensors and some basic electronics can help with this to some extent, but the real work is done by the central computer. First of all, Google’s engineers need to teach the car what to expect from each object in basic situations. If there is a person by the side of the road, the car needs to know they might want to cross. Similarly, if there is a traffic jam and the path suddenly clears, the vehicle has to understand that it can accelerate. This sort of reasoning comes naturally to humans, but it is a much more complex process for a computer. Nevertheless, there are ways to teach this behavior to a machine, which leads to the idea of artificial intelligence.
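To give a flavor of the "sizes, shapes, and movement patterns" idea, here is a deliberately simplified, rule-based labeler. The thresholds and categories are assumptions for illustration; the real system fuses LIDAR, radar, and camera data with learned models rather than hand-written rules.

```python
# Toy rule-based object labeler (illustrative assumption, not production logic):
# guess what a tracked object is from its rough dimensions and speed.
def classify_track(length_m: float, height_m: float, speed_mps: float) -> str:
    if length_m > 3.0 and speed_mps > 2.0:
        return "vehicle"       # long and fast: almost certainly another car
    if height_m > 1.0 and length_m < 1.0 and speed_mps < 3.0:
        return "pedestrian"    # tall, short footprint, walking pace
    if 1.0 <= length_m <= 3.0 and speed_mps > 2.0:
        return "cyclist"       # bicycle-sized and moving briskly
    return "unknown"           # anything else needs more evidence
```

Even this toy version shows why prediction is the hard part: the label only tells the car what the object *is*, not what it will do next.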

The car’s computer is designed to make decisions based on the complexity of the road conditions. If the vehicle is driving at a constant speed on a highway, then the computer only needs to process basic control algorithms to stay in its lane. For example, if it detects that the car is too close to the right-hand edge of its lane, it steers slightly to the left. The computer needs to handle many of these simple tasks at the same time, while also regulating speed, acceleration, and direction. However, there are much harder challenges once the car starts to face other drivers or objects that interfere with its path. If another vehicle signals that it wants to turn, the computer will act accordingly, but sometimes people forget to use their indicators, so the autonomous car needs to be aware of these possibilities. For these types of situations, the car uses advanced learning techniques such as neural networks and fuzzy logic. These tools allow programmers to teach basic rules of behavior to the computer, training it on what to do in specific situations. After a process of trial and error, the computer can logically determine what to expect from certain stimuli. As a result, the vehicle is able to predict what other drivers will do, when pedestrians are going to cross the road, and even detect cyclists’ signals by measuring the distance from a person’s hand to their head.
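The lane-keeping example above is essentially a feedback controller. The sketch below shows the simplest possible version, a proportional controller that steers back toward the lane center; the gain and clamp values are made-up illustrations, not Google's tuning.

```python
# Minimal proportional lane-keeping sketch (assumed values, for illustration).
def steering_correction(lateral_offset_m: float, gain: float = 0.5) -> float:
    """Return a steering command in radians.

    Positive offset means the car has drifted toward the right-hand edge
    of its lane, so the command is negative (steer left), and vice versa.
    The result is clamped, as a real controller would limit steering rate.
    """
    command = -gain * lateral_offset_m
    return max(-0.3, min(0.3, command))
```

A small drift produces a small correction, while a large drift saturates at the clamp; the real system runs many such loops concurrently for speed, acceleration, and direction, with the learned components handling the unpredictable cases the text describes.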

Finally, after this intricate reasoning process, the car has to determine the best way to act depending on the situation. The vehicle needs to slow down to let pedestrians or other drivers pass, as well as understand when to move aside for cyclists and motorcyclists who want to advance between cars. These commands are easily passed on to the vehicle, thanks to the computer’s control over the car’s electronics. Additionally, the central directives controlling the vehicle dictate the safest possible action in case of any uncertainty. This means that when someone wants to cross the street, the vehicle is able to determine exactly when to stop, and exactly when it is safe to continue.
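The "safest possible action under uncertainty" principle can be boiled down to a conservative decision rule. The function below is a hypothetical simplification, including its threshold, meant only to show the bias toward stopping rather than the production logic.

```python
# Toy "safest action" rule (hypothetical threshold, illustrative only):
# any meaningful chance that a detected pedestrian will cross stops the car.
def drive_command(pedestrian_detected: bool, crossing_probability: float) -> str:
    """Return "stop" or "proceed", erring heavily on the side of caution."""
    if pedestrian_detected and crossing_probability > 0.2:
        return "stop"
    return "proceed"
```

The asymmetry is deliberate: a false stop costs a few seconds, while a false proceed could cost a life, so the threshold sits far below fifty-fifty.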

All these features were previously built into the Toyota and Lexus models, but given that those were already completed vehicles, retrofitting them was a complicated and inefficient process. In light of this, Google decided to build its own car to its own specific requirements. Obviously, this required another kind of expertise, considering that it was something completely outside of Google’s core business, which is why the company decided to consult with as many automotive partners as possible. Alongside Google’s own research, companies like Bosch, Continental, and LG Electronics brought all their expertise to the project, helping to build a prototype with all the personality you would expect from the technology giant.

Google’s car is powered by an electric motor, which provides an approximate range of 160 km. The car’s computer system can detect objects, people, road signs, traffic lights, and other vehicles, with enough power to see up to 180 m in any direction from its 360° rotary sensor. Additionally, the computer in charge of managing the autonomous system was developed specifically for automotive applications. The car’s design was planned to maximize the sensors’ coverage as much as possible, and to make the user experience as stress-free as it could be. While it still remains fairly similar to a regular vehicle in terms of seating, there is no steering wheel and no pedals, and, given that the routes are currently fixed, the system is activated with just one button. However, Google did make some changes to add more safety to the vehicle, including redundancy in the braking and steering systems, a fixed speed limit of 40 km/h, and an emergency stop button, as well as a foam bumper and a flexible windscreen designed to absorb as much energy as possible in case of an impact with a pedestrian.

The project is still in its prototype phase, and the vehicle still has some limitations in terms of speed, especially considering all the possible risks that could arise if the system fails. Furthermore, the car still needs to learn how to behave in situations where previous knowledge is unavailable, and where visibility and sensor capacity are impaired. Nevertheless, Google has continuously pushed for new regulations around self-driving cars, and it has already accumulated more than 1.6 million kilometers of autonomous driving experience, combining highway and city road tests in California. Even though the project is currently based in the US, it has huge implications for what mobility could mean for the entire planet. While there is still some road to cover, the future of mobility is right at our feet, and Google is taking historic steps toward leading the world into a new era of transportation.