Hacker News

I'm pretty sure the visualization is only showing highly confident classifications (not sure about the SUV/pickup thing). Under the hood the algorithm is locating all kinds of objects that could be displayed on screen as some kind of "unknown" box, but aren't. Probably the reason Tesla isn't showing this is that the location and size of those objects are uncertain, and people would freak out if they saw all that traffic (some of it quite close) jumping around.
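To make the idea concrete, here's a minimal sketch (not Tesla's actual pipeline; all names, scores, and the threshold value are hypothetical) of how a display threshold would hide low-confidence detections that the detector still tracks internally:

```python
# Illustrative sketch: a detector emits many candidate boxes with
# confidence scores; the UI renders only those above a display
# threshold, so low-confidence detections exist but stay hidden.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "car", "pedestrian", "unknown"
    confidence: float  # score in [0, 1]
    box: tuple         # (x, y, w, h) in screen coordinates

def visible_detections(detections, display_threshold=0.8):
    """Keep only detections confident enough to show on screen."""
    return [d for d in detections if d.confidence >= display_threshold]

raw = [
    Detection("car", 0.97, (120, 40, 60, 30)),
    Detection("pedestrian", 0.91, (200, 55, 15, 40)),
    Detection("unknown", 0.35, (80, 60, 25, 20)),   # tracked, but hidden
    Detection("pickup", 0.55, (150, 45, 65, 32)),   # uncertain class, hidden
]

print([d.label for d in visible_detections(raw)])  # → ['car', 'pedestrian']
```

Everything below the threshold is still available to the planner; it just never reaches the rendered scene.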


If it’s a debug visualisation, why would it not display everything? Of course, it’s in a public release, so it’s probably, er, ‘tidied up’ a bit.



