Self-Driving Cars — Part 1

Mark Subra
Aug 29, 2020
Image recognition and segmentation are essential for self-driving cars.

Autonomous vehicles could contribute to a paradigm shift in the economy. As the ongoing pandemic has shown, transportation is vital to maintaining the supply chain. In the United States, trucking is the most important link in that chain, moving goods to market as well as between businesses. The American Trucking Association estimates there are 3.5 million truck drivers in the USA, and truck driver is the most common job in 29 states. A shift to automation will almost certainly reshape that economic landscape.

How Does a Car See?

Cars need to be able to distinguish different objects and stay within a lane. If humans are to be replaced, the car needs to perform better than a human. This requires several types of sensors, such as cameras, radar, and lidar. The sensors must also be redundant: independent modes of sensing should confirm the same measurements, which gives the best assurance that what the car sees is accurate.

These sensors must also detect the speed and distance of surrounding objects as well as their size and shape, and they must track the car's own speed, acceleration, and location.

Sensors must be able to distinguish people from other objects as well as from the background.

The Camera

The camera is the most accurate way to build a visual representation of the car's environment. Cameras must capture a 360-degree view around the car, and they must be able to zoom in and out to focus on both short-range and long-range visuals.

Cameras have limitations as well. They can capture the fine details of the environment; however, they struggle when calculating distances and speeds. Low-visibility situations such as fog, heavy rain, and night driving also limit what cameras can do.
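
As a rough illustration of what a camera pipeline does with a single frame, here is a minimal lane-line sketch using OpenCV's Canny edge detector and Hough transform. The file name, region of interest, and thresholds are placeholder assumptions for this example, not values from any real self-driving stack.

```python
import cv2
import numpy as np

# Load a single frame from a forward-facing camera (placeholder path)
frame = cv2.imread("road_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Blur to suppress noise, then find strong intensity edges
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

# Keep only a triangular region in front of the car,
# where lane lines are expected to appear
mask = np.zeros_like(edges)
h, w = edges.shape
roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edge pixels
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180,
                        threshold=50, minLineLength=40, maxLineGap=20)

# Draw the detected segments back onto the frame
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
```

Notice that nothing in this pipeline tells us how far away anything is: it finds lines in a flat image, which is exactly the distance-and-speed blind spot described above.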

RADAR

Radar is an acronym for "radio detection and ranging," and it can see what the camera cannot. Most of us are familiar with speed-measuring radar guns: this type of radar transmits radio waves in pulses that reflect off an object and return to the sensor, and the speed of a car or baseball can be measured from the returning waves.
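
The physics behind that speed measurement is compact. Here is a sketch using the standard Doppler relation for a wave reflecting off a moving target; the carrier frequency and measured shift are made-up example numbers:

```python
C = 3.0e8  # speed of light in m/s

def doppler_speed(f_shift_hz: float, f_carrier_hz: float) -> float:
    """Radial speed of a target from the Doppler shift of a reflected
    radar wave: f_shift = 2 * v * f_carrier / c, solved for v."""
    return f_shift_hz * C / (2 * f_carrier_hz)

# Example: a 24 GHz traffic radar measuring a 4.0 kHz shift
v = doppler_speed(4.0e3, 24.0e9)
print(f"{v:.1f} m/s ({v * 3.6:.0f} km/h)")  # -> 25.0 m/s (90 km/h)
```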

Radar improves detection in low-visibility conditions and at night. Radar sensors placed around the car can measure speeds and distances, but they cannot distinguish what kind of object they are seeing.

LIDAR

Lidar began as a portmanteau of "light" and "radar," but now it stands for "light detection and ranging" or "laser imaging, detection, and ranging." Lidar has applications across a diverse spectrum of industries. For self-driving cars, lidar makes it possible to build a 3D view of the environment: shape, depth, and the surrounding geography become apparent, and it works in low-visibility conditions.
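
That 3D view comes from timing laser pulses. The sketch below shows only the underlying geometry: the round-trip time of a pulse gives range, and the beam's angles place each return as a point in 3D space. The timing and angles are illustrative numbers, not output from a real sensor:

```python
import math

C = 3.0e8  # speed of light in m/s

def lidar_point(round_trip_s: float, azimuth_rad: float, elevation_rad: float):
    """Convert one lidar return into an (x, y, z) point.

    The pulse travels out to the object and back, so range is half
    the round-trip distance; the beam angles then place the point
    in Cartesian coordinates around the sensor.
    """
    r = C * round_trip_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# Example: a return after 200 ns, 30 degrees to the left, level with the sensor
print(lidar_point(200e-9, math.radians(30), 0.0))
# -> a point about 30 m away: (~26.0, ~15.0, 0.0)
```

A spinning lidar repeats this for hundreds of thousands of pulses per second, and the accumulated points form the 3D "point cloud" that reveals shape and depth.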

Combining The Sensors — Redundancy

Combining the data from the different types of sensors paints a picture of the car's environment in space and time. Together, these methods approximate the way humans perceive their environment as a constant flow of information, and the overlap between sensors is what provides redundancy.
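
One simple way to combine overlapping measurements is to weight each sensor by how much it can be trusted. The toy sketch below fuses a camera and a radar distance estimate by inverse-variance weighting; real stacks use Kalman filters and far richer models, and the numbers here are invented for illustration:

```python
def fuse(estimates):
    """Inverse-variance weighted average: precise sensors count more.

    `estimates` is a list of (value, variance) pairs. The fused
    variance is smaller than any single sensor's variance, which
    is the statistical payoff of redundancy.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused_value, fused_var

# Camera says 41 m but is noisy at range; radar says 39.5 m and is precise
distance, variance = fuse([(41.0, 4.0), (39.5, 0.25)])
print(f"{distance:.1f} m (variance {variance:.2f})")  # -> 39.6 m (variance 0.24)
```

The fused estimate leans toward the radar because its variance is lower, but the camera still contributes, and a large disagreement between the two would itself be a useful warning sign.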

Redundancy is key to a successful AI for autonomous vehicles. Absolute certainty about the size, speed, and distance of everything in the environment is essential to eliminate mistakes and errors, and it is what allows the car to avoid catastrophic situations and accidents.

Mark Subra

I am a data scientist who recently graduated from the Flatiron School Immersive Data Science Bootcamp.