For self-driving cars to be a boon of technology, they need a system that can intelligently handle the surrounding environment: complicated traffic scenarios, roadblocks, potholes, drive paths, lane markings, or any vehicle passing by on the road. This can only happen when they have a sense of the outer world, just like humans. Self-driving cars need to communicate with other cars, passengers, and surrounding traffic participants so that they can determine their exact position on the road and decide how to behave in a given situation.
Car-to-car and car-to-infrastructure communication is essential for enabling autonomous driving, and it can only happen when vehicles have a centimeter-level accurate, digital 3D representation of the physical world on a map. The data on the map is the main source of guidance for autonomous vehicles; it acts like a set of eyes that gives situational awareness to self-driving cars.
As an integral part of the system, High Definition Maps bring functions such as high-precision localization, environment perception, planning and decision making, and real-time navigation cloud services to autonomous vehicles.
How HD maps help self-driving cars to communicate
An HD map that supports autonomous driving constantly detects, verifies, and updates changes that happen in the world. It is created in four simple steps, collection, aggregation, creation, and publishing, which together help cars communicate. Let's walk through each step to understand how the complete process helps autonomous cars run smoothly on the road.
Collection
Autonomous cars collect data with the help of various sensors fitted in them, such as cameras, LiDAR, and radar, and transmit this data back to the cloud. This crowdsourced data can be anything from lane closures and barriers to road signs and pavement markings, all of which provide important information for the functioning of automated vehicle systems and the decisions a vehicle makes. However, this sensor data alone is not accurate or complete enough to remove the driver from the equation entirely.
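To make the collection step concrete, here is a minimal sketch of what a single crowdsourced observation and its upload payload might look like. The SensorObservation fields and the JSON payload format are illustrative assumptions, not any map provider's actual interface.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SensorObservation:
    """One crowdsourced detection of a road feature (field names are illustrative)."""
    vehicle_id: str
    feature_type: str   # e.g. "lane_marking", "road_sign", "barrier"
    latitude: float
    longitude: float
    confidence: float   # detection confidence reported by the onboard perception stack
    timestamp: float

def build_upload_payload(observations):
    """Serialize a batch of observations for transmission to the map cloud."""
    return json.dumps([asdict(o) for o in observations])

# Example: a single camera/LiDAR detection queued for upload
obs = SensorObservation("car-042", "road_sign", 52.5200, 13.4050, 0.93, time.time())
print(build_upload_payload([obs]))
```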
Aggregation
Autonomous cars come in different shapes and sizes, their sensors are mounted in different positions, and they travel to different locations. As a result, each car perceives objects in its own way, and the captured data differs accordingly. With the help of machine learning algorithms, the data captured by different cars is fused together to produce accurate features, as sketched below.
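As a rough illustration of this fusion step, the sketch below groups nearby observations of the same feature type and averages their positions, weighted by detection confidence. Real aggregation pipelines rely on learned models and proper clustering; the grid-based grouping and the dictionary fields here are assumptions.

```python
from collections import defaultdict

def fuse_observations(observations):
    """Fuse crowdsourced detections of the same feature into single map features.

    Groups observations by feature type and a coarse lat/lon grid cell (a
    stand-in for proper clustering), then averages positions weighted by each
    car's reported detection confidence.
    """
    groups = defaultdict(list)
    for o in observations:
        # Rounding to 4 decimal places puts detections within roughly 11 m in one cell
        key = (o["feature_type"], round(o["latitude"], 4), round(o["longitude"], 4))
        groups[key].append(o)

    fused = []
    for (feature_type, _, _), group in groups.items():
        total_weight = sum(o["confidence"] for o in group)
        fused.append({
            "feature_type": feature_type,
            "latitude": sum(o["latitude"] * o["confidence"] for o in group) / total_weight,
            "longitude": sum(o["longitude"] * o["confidence"] for o in group) / total_weight,
            "num_observations": len(group),
        })
    return fused

# Two cars reporting the same road sign from slightly different viewpoints
reports = [
    {"feature_type": "road_sign", "latitude": 52.52001, "longitude": 13.40501, "confidence": 0.9},
    {"feature_type": "road_sign", "latitude": 52.52003, "longitude": 13.40499, "confidence": 0.7},
]
print(fuse_observations(reports))
```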
Creation
Once the data has been fused and accurate data has been generated, a map is created with the help of refined algorithms and the unique features extracted from that data. On the map, all accurate information about the physical world is represented, including the precise position of every object, whatever it may be. The process uses advanced algorithms that take into account the various variables collected by the cars and then create each feature for the map. For certain features, ten observations may be needed, or twenty, or even a hundred; it all depends on when the algorithm can converge the many observations into one accurate feature.
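A hedged sketch of such a convergence check is shown below: a candidate feature is accepted only once enough observations have been collected and their positions agree within a small spread. The observation count, spread threshold, and degree-to-meter conversion are illustrative assumptions, not values from the article.

```python
import statistics

def has_converged(positions, min_observations=10, max_spread_m=0.2):
    """Decide whether repeated observations of one candidate feature agree
    closely enough to be published as a single accurate map feature.

    positions: list of (latitude, longitude) tuples from different drives.
    Thresholds are illustrative assumptions.
    """
    if len(positions) < min_observations:
        return False
    lat_spread = statistics.pstdev(p[0] for p in positions)
    lon_spread = statistics.pstdev(p[1] for p in positions)
    # Rough conversion: one degree is on the order of 111 km (ignores longitude scaling)
    spread_m = max(lat_spread, lon_spread) * 111_000
    return spread_m <= max_spread_m

# A feature seen ten times at near-identical positions is ready to publish
sightings = [(52.520000 + i * 1e-7, 13.405000) for i in range(10)]
print(has_converged(sightings))  # True
```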
Publishing
Once all the data has been generated and the map has been created, it is updated and published. To ensure the most efficient data transmission, only the updates that occur within a specific tile for a specific layer (the Road Model, HD Lane Model, or HD Localization Model) are sent to the OEM's cloud and the vehicle. With this tiled format, over-the-air updates can be sent in a more condensed package, optimizing data exchanges. Once a new feature is published, there may be a specific area of a road where enhanced sensor data is needed. For example, a car might not identify a stop sign due to an obstruction. In this circumstance, the map will request that the next vehicle in that area take a video of the environment so that the system can better validate the data. This happens through the Sensor Data Request Interface (SDRI).
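The tiled update idea can be sketched as a simple filter that selects only the changes belonging to one tile and one layer before they are pushed over the air. The update records, tile IDs, and layer identifiers below are assumptions used purely for illustration.

```python
LAYERS = ("road_model", "hd_lane_model", "hd_localization_model")

def select_tile_updates(updates, tile_id, layer):
    """Pick only the changes for one map tile and one layer, so an over-the-air
    update carries just that slice instead of the whole map."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return [u for u in updates if u["tile_id"] == tile_id and u["layer"] == layer]

# Example: send only HD Lane Model changes for tile "t_1207" to the OEM cloud
updates = [
    {"tile_id": "t_1207", "layer": "hd_lane_model", "change": "lane_boundary_updated"},
    {"tile_id": "t_1207", "layer": "road_model", "change": "speed_limit_changed"},
    {"tile_id": "t_0991", "layer": "hd_lane_model", "change": "lane_added"},
]
print(select_tile_updates(updates, "t_1207", "hd_lane_model"))
```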
Three-layer information system
The published map has three layers, each providing detailed, accurate information that helps autonomous vehicles connect and communicate with the outer environment and other cars. The first layer is the Road Model, which offers global coverage and helps a vehicle understand local insights beyond the range of its onboard sensors, such as high-occupancy vehicle lanes or country-specific road classifications. The second is the HD Lane Model, which provides more precise lane-level detail such as lane direction, lane type, lane boundaries, and lane marking types; these details help self-driving vehicles make safer and more comfortable driving decisions. The third is the HD Localization Model, which helps the vehicle localize itself in the surrounding environment: the vehicle identifies objects such as guard rails, walls, signs, and poles, and then uses each object's location to measure backward and calculate exactly where it is located.
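As a simplified illustration of that last layer, the sketch below estimates the vehicle position by "measuring backward" from detected landmarks whose absolute positions come from the HD Localization Model. The local east/north coordinate frame and the plain averaging are assumptions standing in for a real localization filter.

```python
def localize_from_landmarks(detections, landmark_map):
    """Estimate the vehicle position from detected landmarks whose absolute
    positions are stored in the HD Localization Model layer.

    detections:   {landmark_id: (dx, dy)} offsets measured by onboard sensors,
                  in a local east/north frame in meters (assumption).
    landmark_map: {landmark_id: (x, y)} absolute map coordinates in meters.

    Each landmark yields one estimate (landmark position minus measured offset);
    averaging the estimates stands in for a proper estimator.
    """
    estimates = []
    for lid, (dx, dy) in detections.items():
        if lid in landmark_map:
            x, y = landmark_map[lid]
            estimates.append((x - dx, y - dy))
    if not estimates:
        raise ValueError("no detected landmark found in the localization layer")
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n, sum(e[1] for e in estimates) / n)

# Example: a sign and a pole from the localization layer pin down the vehicle
landmark_map = {"sign_17": (1050.0, 430.0), "pole_03": (1062.0, 441.0)}
detections = {"sign_17": (12.0, 5.0), "pole_03": (24.0, 16.0)}
print(localize_from_landmarks(detections, landmark_map))  # -> (1038.0, 425.0)
```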