January 08, 2020


Phantom objects, where the system perceives an object when there is none, were also common. Crucially, because the method relies on a library of "sanity conditions," there is no need for humans to label objects in the test dataset, a time-consuming and often-flawed process. Typically, autonomous vehicles "learn" about the world via machine learning systems, which are fed huge datasets of road images before they can identify objects on their own. "The same way cars have to go through crash tests to ensure safety, this method offers a pre-emptive test to catch errors in autonomous systems," he explained.


While we wait for self-driving cars to become part of our everyday reality, safety issues could bring the autonomous dream to a screeching halt. The team of researchers formulated a new mathematical logic, called Timed Quality Temporal Logic, and used it to test two popular machine-learning tools, SqueezeDet and YOLO, on raw video datasets of driving scenes. The logic successfully homed in on instances of the machine-learning tools violating "sanity conditions" across multiple frames of video. Such failures can lead to devastating consequences in safety-critical systems like autonomous vehicles.
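The kind of temporal "sanity condition" described here can be illustrated with a simple frame-by-frame check. The sketch below is a loose approximation under assumed data structures (a set of detected class labels per frame), not the authors' actual TQTL formalism:

```python
# Illustrative "sanity condition" check over per-frame detections.
# The condition: an object class may not vanish for a single frame and
# then reappear (flicker), since real objects persist across frames.
# This is a simplified stand-in for a temporal-logic property, not the
# study's TQTL implementation.

def flicker_violations(frames):
    """Return (frame_index, missing_classes) pairs where a class is
    present in the frames before and after, but absent in between."""
    violations = []
    for t in range(1, len(frames) - 1):
        # Classes seen both before and after frame t, but missing at t.
        missing = (frames[t - 1] & frames[t + 1]) - frames[t]
        if missing:
            violations.append((t, missing))
    return violations

# Example: a cyclist detection drops out of frame 2 only.
frames = [
    {"car", "cyclist"},
    {"car", "cyclist"},
    {"car"},              # cyclist vanished for one frame
    {"car", "cyclist"},
]
print(flicker_violations(frames))  # [(2, {'cyclist'})]
```

A real checker would operate on bounding boxes and confidence scores rather than bare labels, but the flagged frames would point a developer at the same kind of bug.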


A recent study tackles a long-standing problem for autonomous vehicle developers: testing the system's perception algorithms, which allow the car to "understand" what it "sees." But the system can go wrong. In the case of a fatal accident between a self-driving car and a pedestrian in Arizona last March, the software classified the pedestrian as a "false positive" and decided it didn't need to stop. A phantom object, conversely, could cause the car to mistakenly slam on the brakes, another potentially dangerous move. As a result of the study, the researchers' new mathematical method is able to identify anomalies or bugs in the system before the car hits the road. If a detected object appears and disappears between frames, it violates a "sanity condition," or basic law of physics, which suggests there is a bug in the perception system. "We thought, clearly there is some issue with the way this perception algorithm has been trained. This is one of several 'sanity conditions' that we want the perception algorithm to satisfy before deployment," explained Jyo Deshmukh, co-author of the study. Findings of the study were discussed at the Design, Automation and Test conference.


The idea is to catch issues with the perception algorithm in virtual testing, making the algorithms safer and more reliable. If the system misclassifies or loses track of a cyclist, for instance, it might fail to correctly anticipate the cyclist's next move, which could lead to an accident. Perception algorithms are based on convolutional neural networks, powered by deep learning, a type of machine learning. "Making perception algorithms robust is one of the foremost challenges for autonomous systems," said Anand Balakrishnan, lead author of the study.


These algorithms are notoriously difficult to test, as we do not fully understand how they make their predictions. When a human being perceives a video, there are certain assumptions about persistence that we implicitly use: if we see a car within a video frame, we expect to see a car at a nearby location in the next video frame. For example, an object cannot appear and disappear from one frame to the next. Most commonly, the machine learning systems failed to detect an object or misclassified an object. In one example, the system failed to recognize a cyclist from the back, when the bike's tire looked like a thin vertical line; instead, it misclassified the cyclist as a pedestrian. The team's method could be used to identify anomalies or bugs in the perception algorithm before deployment on the road, and it allows the developer to pinpoint specific problems. "Using this method, developers can narrow in on errors in the perception algorithms much faster and use this information to further train the system."
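The spatial side of the persistence assumption, that a car seen in one frame should reappear at a nearby location in the next, can also be sketched as a check over detections. The bounding-box representation and the distance threshold below are illustrative assumptions, not values from the study:

```python
# Sketch of the spatial-persistence sanity condition: every labeled box
# in frame t should have a same-label box at a nearby location in frame
# t+1. Box format (x1, y1, x2, y2) and the pixel threshold are assumed
# for illustration only.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def persistence_ok(dets_t, dets_t1, max_shift=50.0):
    """Check that each (label, box) in frame t has a matching label in
    frame t+1 whose center moved at most max_shift pixels."""
    for label, box in dets_t:
        c = center(box)
        if not any(
            lbl == label and math.dist(c, center(b)) <= max_shift
            for lbl, b in dets_t1
        ):
            return False  # object vanished or jumped: sanity violation
    return True

frame_t = [("car", (100, 100, 180, 160))]
frame_t1_good = [("car", (110, 102, 190, 162))]  # small shift: fine
frame_t1_bad = []                                # car disappeared
print(persistence_ok(frame_t, frame_t1_good))  # True
print(persistence_ok(frame_t, frame_t1_bad))   # False
```

Violations of a check like this, gathered over a whole video, are exactly the kind of anomaly report that lets a developer pinpoint which frames and object classes the perception algorithm mishandles.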

Posted by: compositemachine at 01:19 AM







