
A different, darker way to interpret this is computers cannot be held accountable today

If systems (presumably AI-based) were conscious or self-aware, they would very much be incentivized not to make mistakes. (Not advocating for this)



But what would be the deterrent there? You would have to program the AI with some sort of "fear of death" or "fear of consequences", and if that's the case, wouldn't it be straight-up slavery?



