
That's the point of the investigation. Many things could have happened. Maybe the driver fell asleep, maybe he tried to move the wheel and it got stuck. Maybe the car slipped on a slick patch of road and the crash had nothing to do with Autopilot. We don't know yet. It takes time to interview witnesses and review multiple sources of information, and that's the NTSB's job.

Tesla seems to think they know exactly what happened after reviewing only the data collected from the car. They see what they want to see and have become blind to any other possibility. I think that's terrifying. I couldn't trust an automaker with that kind of arrogance.



Given that this video exists, I'm putting my bet on the Tesla actively swerving into the barrier: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...


In that video, it looks like the driver had about one second to react to the Tesla's mistake and avoid hitting the barrier.
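
For rough scale (the 65 mph figure here is my own assumption for illustration, not a speed measured from the video), one second at highway speed covers a lot of road:

    # back-of-the-envelope: distance covered during a 1-second reaction window
    # 65 mph is an assumed highway speed, not measured from the video
    speed_mph = 65
    speed_mps = speed_mph * 1609.344 / 3600  # ~29 m/s
    reaction_s = 1.0
    print(f"{speed_mps * reaction_s:.0f} m covered in {reaction_s} s at {speed_mph} mph")
    # -> 29 m covered in 1.0 s at 65 mph

That's roughly six car lengths gone before the driver can even start correcting.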

Dumb question: if I have to stay constantly vigilant, with a one-second reaction time, because at any moment my assisted-driving car might try to kill me (and exactly how changes every time the software updates), isn't it less effort to just drive the car myself, so I mostly only have to worry about the drivers around me?


Not dumb at all; it's the elephant in the room. It is clear that the current and next iterative versions of Autopilot, self-driving, etc. are woefully behind the hype. While in theory a "perfect" AI driver is better, does that hold in the chaos of real-world traffic? Will drivers even want to be subject to NTSB-level scrutiny when an accident happens? There are still more questions than answers.


Hell, I don't even like letting cars shift for me. /sticksnob

;-)



