This patent portfolio is available for sale:
COMBINING DRIVER ALERTNESS WITH ADVANCED DRIVER ASSISTANCE SYSTEMS (ADAS)
Object-Specific Detection Augmentation (OSDA) for Vulnerable User Detection in ADAS
One of the emerging challenges of ADAS is the reliable detection of vulnerable road users such as pedestrians and cyclists. These objects are smaller and often lower contrast than vehicles, and are therefore typically more difficult for an ADAS system to detect. The goal of object-specific detection augmentation (OSDA) is to seamlessly integrate the driver’s behavior with the varying detection probabilities computed by an ADAS system, optimizing safety and driver experience simultaneously (US patent 10,137,893, US patent app. 16/192529). For example, warning alerts or collision avoidance measures may be suppressed or initiated by combining information from an object detected only marginally by an ADAS system with information from the driver’s gaze pattern. A block diagram of the OSDA system is shown below.
Object-Specific Detection Augmentation (OSDA) System
In complex scenes, ADAS systems can have difficulty determining whether a detection is sufficiently certain to initiate warning alerts or collision avoidance measures. Complex scenes in this context are busy streets with many vehicles, pedestrians, and illumination artifacts such as shadows and reflections. At the same time, drivers have been found to become more engaged with the environment when driving through dynamic scenes like these. In scenes where the road is relatively clear and there are few artifacts, drivers can become less engaged, but these are also the scenes where ADAS systems can provide detection results with the highest certainty. OSDA aims to optimize safety and driver experience by combining an ADAS system’s probability of detecting an object with the driver’s behavior and awareness of that object, resulting in fewer false warning alerts and incorrect collision avoidance actuations across all scenes.
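As a rough illustration of how an uncertain ADAS detection might be reconciled with the driver’s gaze pattern, consider the following Python sketch. The confidence thresholds, the circular gaze-coverage test, and all names are hypothetical assumptions for illustration; the patents do not publish these implementation details.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # bounding-box center, image coordinates
    y: float
    confidence: float  # ADAS detection probability, 0..1

# Hypothetical thresholds; real values would be calibrated per system.
CONFIDENT = 0.8  # above this, the ADAS detection stands on its own
IGNORE = 0.2     # below this, the detection is treated as noise

def is_uncertain(det: Detection) -> bool:
    """A detection is 'uncertain' when its probability falls between
    the noise floor and the confident threshold."""
    return IGNORE <= det.confidence < CONFIDENT

def gaze_covers(det: Detection, gaze_points, radius: float = 50.0) -> bool:
    """Reconcile a detection with the driver's recent gaze pattern:
    has any recent fixation landed within `radius` pixels of the object?"""
    return any((gx - det.x) ** 2 + (gy - det.y) ** 2 <= radius ** 2
               for gx, gy in gaze_points)
```

In a real system the gaze test would account for fixation duration and the geometry between the driver-facing camera and the forward scene, but the principle is the same: an uncertain detection that the driver has demonstrably looked at need not trigger an alert.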
The table below shows three example OSDA output states. The icon in the first row indicates that there are no uncertain ADAS object detections. The icon in the second row indicates that there are uncertain ADAS object detections, but OSDA has avoided false warning alerts and unnecessary collision avoidance by reconciling the driver’s gaze pattern with the locations of the uncertain objects. The icon in the third row indicates that there are uncertain ADAS object detections that OSDA could not reconcile with the driver’s gaze pattern, so warning alerts and collision avoidance have been initiated.
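The three output states above can be sketched as a simple mapping from per-object flags to an OSDA status. The enum names and the decision rule are illustrative assumptions, not definitions from the patents.

```python
from enum import Enum

class OSDAState(Enum):
    CLEAR = 1       # no uncertain ADAS detections
    RECONCILED = 2  # uncertain detections, all covered by driver gaze
    ALERT = 3       # at least one uncertain detection the driver missed

def osda_state(uncertain_flags, gaze_covered_flags):
    """Map per-object (uncertain?, gaze-covered?) flags to one of the
    three example output states. Illustrative only: a real system would
    weigh detection probabilities and gaze history continuously."""
    covered = [g for u, g in zip(uncertain_flags, gaze_covered_flags) if u]
    if not covered:          # no uncertain detections at all
        return OSDAState.CLEAR
    if all(covered):         # driver has looked at every uncertain object
        return OSDAState.RECONCILED
    return OSDAState.ALERT   # some uncertain object was never fixated
```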
The video below shows a simulation of OSDA. The icon at the top middle shows the instantaneous OSDA status. Automatic braking of the vehicle is simulated by reducing the video playback speed. Potential obstacles detected by the ADAS system are shown in red, and objects detected by the simulated driver are shown in green. Note how the virtual speed of the vehicle is modulated seamlessly based on the results of both the ADAS system and the driver’s gaze pattern.
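The simulated braking in the video can be thought of as a function from risk to playback speed. The following sketch uses a linear mapping with a hypothetical minimum speed; the actual modulation used in the simulation is not specified in this description.

```python
def playback_speed(unreconciled_risk: float) -> float:
    """Simulate automatic braking by slowing video playback in
    proportion to the highest unreconciled detection probability.
    The linear mapping and the 0.25 floor are illustrative assumptions."""
    min_speed = 0.25  # strongest simulated braking (hypothetical)
    risk = max(0.0, min(1.0, unreconciled_risk))  # clamp to [0, 1]
    return 1.0 - (1.0 - min_speed) * risk
```

With this mapping, a scene with no unreconciled detections plays at full speed, and the playback slows smoothly as unreconciled detection probability rises, mirroring the seamless speed modulation described above.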