If a Driverless Car Goes Bad We May Never Know Why

It’s incredibly difficult to figure out why the AI used by self-driving cars does what it does.

Two recent accidents involving Tesla’s Autopilot system raise questions about how computer systems that learn should be validated, and how they should be investigated when something goes wrong. A fatal Tesla accident in Florida last month occurred when a Model S controlled by Autopilot crashed into a truck that the automated system failed to spot. Tesla tells drivers to pay attention to the road while using Autopilot, and explains in a disclaimer that the system may struggle in bright sunlight. Today the National Highway Traffic Safety Administration said it was investigating another accident in Pennsylvania last week, in which a Model X hit the barriers…


Link to Full Article: If a Driverless Car Goes Bad We May Never Know Why
