
e.g. Google's Gemini had some rather embarrassing biases in its output, to the point where asking it to "draw a 1943 German soldier" resulted in images of women and Black soldiers. https://www.nytimes.com/2024/02/22/technology/google-gemini-...

I wouldn't put that on the same level as "refusing to talk about massacres of civilians", but I wouldn't put it at the level of "free and unbiased" either.



The irony is that this probably came straight from a system prompt instructing the model to add diversity, precisely in order to AVOID biases in the training data.


I'm not sure it's avoiding biases so much as trying to have the currently favoured bias. Obviously it got it a bit wrong with the Nazi thing. It's tricky for humans too to know what you're supposed to say sometimes.



