Update on driverless cars

What is LiDAR?

LiDAR uses electromagnetic radiation outside the visible spectrum (e.g., infrared) to detect objects and measure their distance. LiDAR is an acronym for Light Detection And Ranging. This article grew out of a discussion about the safety of emergency braking and self-driving cars.

LiDAR Applications

LiDAR is used everywhere: from agriculture to meteorology, from biology to robotics, and from law enforcement to the deployment of solar photovoltaics. You might see LiDAR referenced in reports on astronomy and space flight, or encounter its use in mining operations.

There are even LiDAR systems that image moving objects, or that can move themselves. These non-static systems may be the most common form of LiDAR, since they are used for machine vision in military systems, aircraft detection, and autonomous car prototypes.

However, other forms of LiDAR are not used to image solid surfaces at all: NASA uses LiDAR in atmospheric research, while still other LiDAR systems are designed to operate underwater and image submerged surfaces.

Clearly, this is a flexible technology.

Similarities with SONAR and RADAR

So, if LiDAR is about detection and ranging, is it similar to SONAR (SOund Navigation And Ranging) and RADAR (RAdio Detection And Ranging)? Yes, somewhat. To understand how the three are similar, let's start with how SONAR (echolocation) and RADAR work.

SONAR emits a powerful pulsed sound wave of known frequency. Then, by timing how long the pulse takes to return, it is possible to measure a distance. Doing this repeatedly builds up a good picture of your surroundings.
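As a minimal sketch of that echo-ranging arithmetic (assuming sound travels at roughly 343 m/s in air, a value not given in the text):

```python
# Echo ranging: the pulse travels to the target and back,
# so the one-way distance is half of (wave speed x round-trip time).
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def echo_distance(round_trip_s: float, wave_speed: float = SPEED_OF_SOUND) -> float:
    """Distance to the target given the round-trip time of a pulse."""
    return wave_speed * round_trip_s / 2.0

# A pulse that returns after 0.1 s indicates a target ~17 m away.
print(echo_distance(0.1))  # 17.15
```

The same formula works for RADAR and LiDAR; only the wave speed changes (to the speed of light).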

The key element of SONAR is the sound wave, but there are many different types of waves. If we apply the same principle and change the wave from a sound wave to an electromagnetic (radio) wave, we get RADAR (RAdio Detection And Ranging).

There are many types of RADAR using different frequencies of the spectrum. Visible light is just the particular range of wavelengths detectable by the human eye, and wavelength is one of the things that differentiates LiDAR from RADAR. There are other differences between the two, but I will focus on spatial resolution as the main one.

RADAR uses a wide wavefront and long wavelengths, giving poor resolution. By comparison, LiDAR uses a laser (a narrow wavefront) and much shorter wavelengths. The wavelength directly determines the resolving power of an imaging system: shorter wavelengths (corresponding to higher frequencies) increase the resolving power.

Below is an image obtained with a weather radar.

Measuring with pixels

LiDAR uses a laser to measure distance, no different in principle from a laser rangefinder: point it at something and measure how far away it is. But what if that distance measurement were treated as a single pixel? You could take many distance measurements and arrange them in a grid; the result would be an image that conveys depth, similar to a black-and-white photograph in which each pixel encodes the reflected light intensity. That sounds interesting, and it could be quite useful, right?
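To make the "depth pixels" idea concrete, here is a toy sketch. The `measure_distance` function is a made-up stand-in for real rangefinder hardware; everything about the fake scene is my own illustration:

```python
# Sketch: build a "depth image" by arranging distance readings in a grid.
# measure_distance() is a hypothetical stand-in for a real rangefinder.
def measure_distance(row: int, col: int) -> float:
    # Fake scene: a flat wall 10 m away with a closer bump in the middle.
    return 10.0 - (2.0 if (2 <= row <= 5 and 2 <= col <= 5) else 0.0)

def scan_depth_image(rows: int, cols: int) -> list[list[float]]:
    """Take one distance reading per grid cell, like pixels in a photo."""
    return [[measure_distance(r, c) for c in range(cols)] for r in range(rows)]

image = scan_depth_image(8, 8)
print(image[0][0], image[3][3])  # 10.0 8.0  (far wall vs. closer bump)
```

Each cell of `image` is one "depth pixel"; the full grid is the depth photograph the text describes.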

But we still have many questions to answer before we can fully understand how this system works.

How many pixels do we need?

If we compare early digital cameras with LiDAR, we would need one or two megapixels (one to two million pixels). So let's say we need two million lasers; then we would need to measure the distance returned by each of those lasers, so another two million sensors, plus the circuitry to perform the calculations.

Maybe using a lot of lasers together is not the best method. What if, instead of trying to take the whole "photo" at once, we did what scanners do? That is, we could capture one pixel, then move on to the next, until the whole image is captured. That sounds like a much simpler device, one that could be implemented with a single laser and a single detector. However, it also means that we cannot take an instant "photo" the way we can with a camera.

How do we shift our pixels?

In a scanner, the physical pixel is moved along the image. But that would not be practical in many situations, so we probably need a different method.

With LiDAR, we are trying to make a "depth image"; you could say we are trying to build a "3D model".

Recalling math lessons from your youth: besides the Cartesian coordinate system (X, Y, Z) there is also the spherical coordinate system (r, theta, phi). I mention this because, if we treat the LiDAR as the origin of a spherical system, then we only need to know the horizontal angle (theta), the vertical angle (phi) and the distance (r) to build our 3D model.

To obtain a complete 3D model, then, we need only one laser and one sensor, rendering our image in the spherical coordinate system.
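A quick sketch of the coordinate bookkeeping: each reading (r, theta, phi) converts to a Cartesian point with the standard spherical-coordinate formulas. Conventions vary; here I assume theta is the horizontal azimuth and phi the elevation above the horizontal plane:

```python
import math

def reading_to_xyz(r: float, theta: float, phi: float) -> tuple[float, float, float]:
    """Convert one LiDAR reading (range, azimuth, elevation) to X, Y, Z.
    theta: horizontal angle; phi: vertical angle above the horizontal."""
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return (x, y, z)

# A 10 m reading straight ahead and level lands at (10, 0, 0).
print(reading_to_xyz(10.0, 0.0, 0.0))
```

Sweep theta and phi over the scene, convert every reading, and the accumulated (x, y, z) points are the 3D model.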

Some LiDAR systems are mobile and use GPS or other positioning systems to stitch all the readings together into one image.

How do we change the laser angles so quickly?

We have two million "pixels" to measure. How can we steer our laser to measure them all?

Assuming 5 µs per pixel between emission and processing, a single image would take 10 seconds, which is definitely too long for a moving vehicle.
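The back-of-the-envelope arithmetic behind that 10-second figure:

```python
# Frame time for a sequential scan: number of pixels x time per pixel.
PIXELS = 2_000_000     # target "image" size from the text
TIME_PER_PIXEL = 5e-6  # 5 microseconds per measurement

frame_time_s = PIXELS * TIME_PER_PIXEL
print(frame_time_s)  # 10.0 seconds per full image -- far too slow for a car
```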

So how do we step between pixels so fast? The key is to never stop adjusting, by using a rotating mirror or prism spinning at a very precise, well-known speed. The laser reflects off the mirror or prism so that its position changes constantly, but at a known rate. This makes it very quick and easy to set one of our angles, say theta, to scan across the image. For the other angle, we could use a much slower mechanism, such as a precision stepper motor.

But there is still one last problem to solve.

How do we measure the distance?

There are several ways to measure distance with a laser, depending on what the system is trying to achieve. A single system may even use multiple methods simultaneously to increase accuracy. All of them require very precise equipment.

The easiest to understand is time of flight (ToF). This is also the method most often cited for measuring distance with a laser. If you calculate the time taken by light to travel 2 mm, the round trip needed for a resolution of 1 mm, you get 6.67 picoseconds. Measuring that requires some very specialized equipment, but it can be done.
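The 6.67 ps figure falls straight out of the time-of-flight relation; a minimal check:

```python
# Time of flight: light must travel out and back, so a depth resolution
# of 1 mm corresponds to a 2 mm round trip.
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(depth_resolution_m: float) -> float:
    """Timing resolution needed for a given depth resolution."""
    return 2.0 * depth_resolution_m / C

print(round_trip_time(0.001) * 1e12)  # ~6.67 picoseconds
```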

Another method, triangulation, uses a second rotating mirror to redirect the signal toward the receiver; by measuring the change in angle, you can derive a distance. It should be noted that at long distances the rotating mirror can complicate the system.

This system was perfected and made reproducible in 1850 by Léon Foucault; in short, nothing new under the sun.

Finally, by modulating the laser, it is possible to measure a phase shift in the modulation. Because of the periodic nature of the modulation, this cannot be used on its own to measure distance. Rather, it produces a list of possible distances that can be combined with another method to increase precision.
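A sketch of why modulation alone is ambiguous: a measured phase shift pins the distance down only up to whole modulation periods, so it yields a list of candidates. The 10 MHz modulation frequency below is an arbitrary illustration, not a value from the text:

```python
C = 299_792_458.0  # speed of light, m/s

def candidate_distances(phase_fraction: float, mod_freq_hz: float, n_max: int):
    """Distances consistent with a phase shift of phase_fraction (0..1)
    of one modulation period; ambiguous by multiples of half a wavelength
    (half, because the light travels out and back)."""
    half_wavelength = C / mod_freq_hz / 2.0
    return [half_wavelength * (n + phase_fraction) for n in range(n_max)]

# With 10 MHz modulation, half a wavelength is ~15 m, so a half-period
# phase shift could mean ~7.5 m, ~22.5 m, ~37.5 m, ...
print(candidate_distances(0.5, 10e6, 3))
```

A coarse second measurement (e.g., time of flight) then selects the correct candidate from the list.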

The LiDAR laser

One last thing to discuss: the laser itself. Despite its name, the majority of LiDAR systems employ infrared radiation rather than visible light.

Since the interaction of electromagnetic radiation with matter is governed by wavelength, some wavelengths work better for certain applications. One day microwave lasers (masers) or x-ray lasers ("xasers") could be used to build LiDAR systems, greatly extending their usefulness.

Tesla vs Google

Elon Musk has questioned Google's decision to use LIDAR sensors in the cars of its autonomous driving program. A comparison between Google's and Tesla's approaches to autonomous vehicles can explain why.

While Tesla CEO Elon Musk and Google co-founder Larry Page are friends, Musk has criticized Google's use of LIDAR in its first self-driving car.

Last October, Musk said that LIDAR technology "does not make sense" to implement in a car, and that he is not "a big fan of LIDAR". Although he did not directly state that Google is using the wrong technology for its cars, he believes it is not something Tesla will implement in its Autopilot systems.

Musk has made a name for himself by creating and popularizing ventures such as PayPal, SpaceX and Tesla. But is Google really wrong in its approach to creating the first self-driving car? Google has taught its cars to navigate city streets using sensors and software that detect and avoid objects such as animals, cyclists, pedestrians, vehicles and more.

Below is a short video showing how the self-driving car behaves around town:


Google's use of LIDAR

While Google and Tesla share the goal of making road transport safer and, eventually, driverless, their techniques are very different.

Google's self-driving cars use LIDAR, which maps the car's surroundings with lasers, the same technique used to measure the shape and contour of terrain from the air. It bounces laser pulses off the objects around the car and measures the distance and the time each pulse has traveled.

From these measurements, the LIDAR system can provide accurate information about the height and distance of objects.

While this system is making great strides toward a completely driverless car, it comes at a heavy cost. Google used Velodyne's 64-channel LiDAR sensor, which is priced at about $80,000.

This could be one of the reasons Elon Musk refuses to use LIDAR technology in Tesla's Autopilot systems.

Tesla's use of optical sensors and RADAR

Musk believes that passive optical sensors and a radar system can perform the same function as Google's LIDAR system.

Tesla vehicles are equipped with 12 long-range ultrasonic sensors that offer 360-degree coverage around the vehicle. In addition, each vehicle has a forward-facing radar system. The integration of these components enhances Tesla's Autopilot system.

Like a LIDAR system, a RADAR system sends out signals, but in the form of periodic radio waves that bounce off objects near the car. Once the waves hit an object they return; by measuring the time they take to travel to the object and back, you know the distance.

The advantage is that radio waves can travel through rain, snow, fog and even dust.

Different tools for different tasks

While Elon Musk has made some bold comments about Google's methods, most people do not realize that the two companies are working on two completely different problems. Google is using Velodyne's 64-channel (64-beam) LIDAR to position itself with an accuracy of 10 cm on an existing map. Google also uses the LIDAR system not only to create a 360-degree model around the car, but also to predict the movement of nearby pedestrians and vehicles.

The Tesla Autopilot system, on the other hand, uses forward-facing cameras produced by Mobileye.

These cameras can accurately detect the position and curvature of highway lane markings, which helps keep the vehicle in its lane and perform basic lane changes.

While each company's technology is outstanding, each yields different results. Tesla's Autopilot system is cheap and will prove useful for Elon Musk's initial goal: automating 90% of driving within a couple of years. The remaining 10% of driving scenarios, however, are rather difficult to implement.

Google began working toward the same goal some time ago and is now focusing on something different: a fully autonomous car that will completely eliminate human error.

But it is an expensive system that will eventually have to change if Google wants to make it accessible to consumers. Still, Google is working to eliminate the need to drive a car at all, while Tesla is working to remove part of the everyday driving we have to do. From that perspective, LIDAR could be the right choice after all.

Solid-state LiDAR

In recent years there has been a revolution in LiDAR. Largely, this has been driven by the growing boom in autonomous vehicles, where LiDAR is increasingly used to detect obstacles. There has also been a consequent drop in price, making the technology accessible to a wide range of designers. The next step in the evolution of LiDAR is, according to some, solid-state LiDAR.

The problem with rotating LiDAR sensors

The majority of current-generation LiDAR technology bounces a laser pulse off a rotating mirror toward distant objects, then records the return time of the reflected pulse. This technique offers full 360° coverage of an area.

With many high-precision moving parts, these mechanical LiDARs are expensive, sensitive to vibration and difficult to miniaturize. Furthermore, when mounted unobtrusively at the front of a car (rather than on top, as on current test vehicles), the car body eliminates 50% of the LiDAR's field of view.

To fully incorporate a unit into a car, automakers need a more economical and robust option.

One way to address this problem is to incorporate less expensive sensors. Many companies are now researching and developing solid-state LiDARs that are robust and inexpensive. Although they have a limited field of view, their low prices allow vehicle manufacturers to integrate multiple units into a car for a fraction of the cost of a single rotating LiDAR.

Solid-state LiDAR implementation methods

Solid-state LiDAR uses a laser emitter, a detector and sometimes a MEMS device ( https://it.wikipedia.org/wiki/MEMS ), mounted in a non-rotating housing, to illuminate a scene with pulsed laser light and record the reflected pulses. There are several ways to do this:

Crossed-array scanning

With this method, laser light is pulsed sequentially from a vertical array, while it is detected sequentially by a horizontal array. This crossed configuration yields a resolution equal to the product of the number of laser emitters and the number of photodiode detectors. The hardware keeps track of the laser's position, the detector's position and the time at which the light returns, generating a three-dimensional point cloud of the environment.
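The resolution bookkeeping for the crossed configuration is just a product; the 64 and 128 below are illustrative numbers of my own, not a real device's specification:

```python
# Crossed-array resolution: each (emitter, detector) pair addresses one pixel.
emitters = 64    # vertical laser array (illustrative)
detectors = 128  # horizontal photodiode array (illustrative)

pixels_per_frame = emitters * detectors
print(pixels_per_frame)  # 8192 addressable points per frame
```

This is why a modest number of components can still produce a usefully dense point cloud.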

MEMS-based solid-state LiDAR

With a MEMS-based solution, a single powerful pulsed laser is used in conjunction with a mirror to create the same effect as the laser array described above. As the mirror is manipulated to sweep the laser pulse across the scene, an array of sensors detects the reflected light and generates the three-dimensional point cloud of the environment.


The Israeli company Innoviz has a solid-state LiDAR called InnovizPro that you can purchase and mount on your own car right away. It is based on MEMS technology and has an expected price of a few thousand dollars.

Their next product, InnovizOne, an automotive-grade LiDAR, should be available in 2019 for a few hundred dollars. A press release Innoviz first issued at CES in January said that high-volume customers could get units at prices starting at $100 apiece.

When interviewed by email, Omer Keilaf, CEO and co-founder of Innoviz, said that their LiDAR "is designed to provide an extremely rich, high-density output in the form of a high-resolution point cloud, making it possible to extract advanced object-detection capability from extremely high data intensity. The Innoviz LiDAR can create additional layers of information and thus offer higher levels of object detection, classification and tracking."

Innoviz provides a complete software stack that includes object detection, classification and tracking. Classification and tracking are particularly important for automotive applications, where a changing environment poses serious threats to the safety of pedestrians and drivers.

LeddarVu by LeddarTech

LeddarTech ( https://leddartech.com/ ) also offers solid-state LiDAR solutions designed to be placed at several points on an automobile to provide full coverage.

The LeddarVu platform uses "patented signal processing algorithms" to create versatile solid-state LiDAR sensors.

LeddarTech's software implementation also enables object detection and classification.

Quanergy’s S3

Quanergy claims to have "the world's first solid-state LiDAR sensor" in its S3 model, which was presented back in January 2016. While the original LiDAR modules of half a decade ago generally cost tens of thousands of dollars, the price of the S3 was originally announced as $250.

The S3 fits in the palm of a hand and its lasers cover a 120° span. Quanergy emphasizes that the S3 has no moving parts, which it claims improves reliability. This is just one of the many ways companies like Quanergy hope to ease the design and deployment of LiDAR in the automotive industry, which demands high levels of reliability for safety reasons.

MapLite allows navigation using only GPS and sensors

The recent Uber fatality underlines the fact that the technology is not yet ready for widespread adoption. Companies like Google only test their fleets in major cities, where they have spent countless hours meticulously tagging the exact 3D positions of lanes, curbs, ramps and stop signs.
In fact, if you are traveling on roads that are unpaved or unreliably marked, there is a problem. One way around this is to create systems advanced enough to navigate without these maps. In an important first step, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed MapLite, a new framework that allows driverless cars to drive on roads they have never traveled before, without 3D maps.

MapLite combines the simple GPS data you find in Google Maps with a set of sensors that observe road conditions. Together, these two elements allowed the team to drive autonomously on several unpaved rural roads in Devens, Massachusetts, and reliably detect the road more than 30 meters in advance. (As part of a collaboration with the Toyota Research Institute, the researchers used a Toyota Prius they had equipped with a range of LIDAR and IMU sensors.)

How it works

Existing systems are still very reliant on maps, using vision sensors and algorithms only to avoid dynamic objects such as pedestrians and other cars.

By contrast, MapLite uses sensors for all aspects of navigation, relying on GPS data only for a rough estimate of the car's position. The system first sets both a final destination and what the researchers call a "local navigation goal", which must be within view of the car. Its perception sensors then generate a route to that point, using LIDAR to estimate the position of the road edges. MapLite can do this without physical road markings by making basic assumptions about how the road will be relatively flat compared to the surrounding areas.
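A toy sketch of that flatness assumption (my own simplification for illustration, not MapLite's actual pipeline): classify LIDAR points as "road" when their height stays near the vehicle's ground plane, and "off-road" otherwise:

```python
# Toy road-edge detector: assume the road is roughly flat at ground level,
# while verges, ditches and vegetation deviate vertically.
# A simplification for illustration, not CSAIL's real algorithm.
def classify_points(points, ground_z=0.0, tolerance=0.15):
    """Label each (x, y, z) LIDAR point as 'road' or 'off-road'
    by its height deviation from the assumed ground plane (meters)."""
    return ["road" if abs(z - ground_z) <= tolerance else "off-road"
            for (_, _, z) in points]

sample = [(5.0, 0.0, 0.02), (5.0, 2.0, 0.05), (5.0, 3.5, 0.60)]
print(classify_points(sample))  # ['road', 'road', 'off-road']
```

The boundary between the two labels, traced across the scan, gives a rough estimate of where the road edge lies.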

The team also developed "parameterized" models, meaning models that describe a family of broadly similar situations. For example, one model might be general enough to determine what to do at any intersection, or what to do on a particular type of road.

MapLite differs from other map-free driving approaches that rely mainly on machine learning, training on data from one set of roads and then being tested on others.

MapLite is still limited. It is not yet reliable enough for mountain roads, since it does not account for radical changes in elevation. As a next step, the team hopes to expand the variety of roads the vehicle can handle. Ultimately they aspire to reach levels of performance and reliability comparable to mapped systems, but with a much wider range.

How self-driving cars and humans face the risk of collision

In 1938, when there were about one-tenth as many cars as today, a brilliant psychologist and an engineer joined forces to write one of the most influential works ever published on driving. The killing of a pedestrian in Arizona by a self-driving car highlights how their work is still relevant today, particularly as regards the safety of autonomous vehicles. James Gibson, the psychologist in question, and the engineer Laurence Crooks evaluated a driver's control of a vehicle in two ways. The first was to measure the so-called "minimum stopping zone", the distance needed to come to a halt after the driver presses the brake. The second was to examine the driver's psychological perception of potential hazards around the vehicle, which they called the "field of safe travel". If someone drove so that all potential hazards stayed outside the range required to stop the car, that person was driving safely (which, to my mind, sounds a lot like stating the obvious).
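Gibson and Crooks' "minimum stopping zone" can be sketched with the standard stopping-distance formula; the reaction time and deceleration below are typical textbook values of my choosing, not figures from their work:

```python
# Stopping distance = reaction distance + braking distance.
def stopping_distance(speed_ms: float, reaction_s: float = 1.5,
                      decel_ms2: float = 7.0) -> float:
    """Distance covered while the driver reacts, plus the distance
    needed to brake to a halt at constant deceleration.
    reaction_s and decel_ms2 are typical textbook values (assumed)."""
    reaction_d = speed_ms * reaction_s          # still at full speed
    braking_d = speed_ms ** 2 / (2.0 * decel_ms2)  # v^2 / (2a)
    return reaction_d + braking_d

# At ~50 km/h (13.9 m/s) a human driver needs roughly 35 m to stop.
print(round(stopping_distance(13.9), 1))
```

The "field of safe travel" is then the driver's mental map of which hazards lie outside that distance; an autonomous car, with near-zero reaction time, computes a much shorter zone, which is part of why its behavior can surprise humans.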

However, this field of safe travel is not the same for driverless cars. They perceive the world around them using lasers, radar, GPS and other sensors, in addition to their on-board cameras. So their perceptions can be very different from what human eyes take in. At the same time, their response times can be much faster, or sometimes even excessively slow in cases requiring human intervention. To me it is clear that if people and cars each drive only according to their own, significantly different, perceptual and response capabilities, conflicts and collisions are almost inevitable. To share the road safely, each party will need to understand the other much better than it does now.

The interaction of movement and vision

For human drivers, vision is the key input. But what drivers see depends on how the car moves: braking, accelerating and steering change the car's position and therefore the driver's view. Gibson knew that this mutual interdependence between perception and action meant that, facing a particular situation on the road, people expect others to behave in specific ways. For example, a person watching a car approach a stop sign expects the driver to stop the car; to check for oncoming traffic, pedestrians, cyclists and other obstacles; and to resume only when the way is clear.


A Tesla Autopilot system steering directly into a barricade.

A stop sign clearly exists for human drivers. It gives them a chance to look around carefully without being distracted by other aspects of driving. An autonomous vehicle can scan the entire environment in a fraction of a second. It does not necessarily have to stop, or even slow down, to navigate the intersection safely. But a self-driving car rolling through a stop sign without stopping will look dangerous to nearby humans, who presume that human rules still apply.

Here is another example: think of cars entering a busy street from a side street. People know that eye contact with another driver can be an effective way to communicate. In a merging situation, one driver can ask permission to enter and the other will behave in a way that facilitates the maneuver. How exactly should people have this kind of interaction with a self-driving car? That has yet to be established.

Pedestrians, cyclists, motorcyclists, car drivers and truckers are all able to understand what other human drivers might do and to express their own intentions appropriately.

An automated vehicle is another matter. It will know little or nothing about the "May I pull in?" "Yes, go ahead" kinds of informal everyday interactions. Since few algorithms can understand these implicit human assumptions, such vehicles will behave differently from how people expect. Some of these differences may seem subtle, but some transgressions, such as failing to observe a stop sign, could cause injury or even death.

Furthermore, driverless cars may effectively be blinded if their various sensing systems fail or provide contradictory information. In the fatal 2016 crash of a Tesla in "Autopilot" mode, for example, part of the problem may have been a conflict between sensors that were supposed to detect a trailer across the road and others that probably did not, because it was backlit or too high off the ground.

As with all new technologies, there will be accidents and problems. But this kind of problem is not unique to self-driving cars. Rather, it is perhaps inherent in any situation in which human beings and automated systems share space.

Developments and Research

Coincidentally, this morning I received by e-mail an advertisement from a well-known semiconductor manufacturer for a very-high-speed MOSFET driver designed specifically for LiDAR.


Economical solid-state LiDAR units are now on the market from several manufacturers. Over time, their costs will continue to decline and their capabilities will continue to improve.

How long will it take before this technology is adapted for home use? From electric wheelchairs and drones to motorcycles and other mobile platforms, there are plenty of applications beyond the autonomous car that would benefit from environmental awareness and mapping.

Frankly, at the moment I believe the main brake on rapid development is the recent accidents involving this type of vehicle, attributable to a technology that is still too young, and above all to software not yet capable of making quick and effective decisions, such as changing lanes to avoid an imminent collision.

In short, we can only hope in the evolution of artificial intelligence, though frankly that scares me a bit.
