Hacker News

Your take is that you’re wrong a lot, so it’s OK for others to be wrong? If the stakes are high, we should be learning how not to make mistakes rather than how to make excuses after the fact.


I think you misunderstood what I was saying, so I will try to be more clear:

I was describing myself as an observer and participant in different waves of AI tech. I was not admitting to being wrong, rather, just someone who was lucky enough to be a small part of AI.

What I don’t know right now: if AI will help solve difficult societal/energy/medical/scientific problems and propel humanity into a better future (I bet on this outcome) or that the AI-doomers are correct (not my view).

I try to absorb the AI-danger arguments and keep an open mind. A good resource: https://podcasts.apple.com/us/podcast/your-undivided-attenti... I also think the privacy advocates (see the books Surveillance Capitalism and Privacy Is Power) are useful in deciding what we should and shouldn’t do with AI.


Mine is purely the layman's view of someone on the outskirts of AI development: I knew the techniques at a basic conceptual level, watched ML and neural networks move into mainstream software development over the past 15-20 years (not from an academic background), and am now seeing the latest generative AIs and LLMs.

My biggest anxiety, with AI development rapidly increasing in pace and capability, is that we don't work in a system that will distribute the efficiency gains brought by those developments to people; they will be captured as profit. And we have nothing in place for how to absorb large swaths of workers out of a job once AI can help automate 90% of most office jobs.

I really try not to be a Mennonite about it. Technology invariably causes splash damage to jobs, from looms to robotic arms in factories, and we increase our efficiency to produce. But for some reason it feels like an AI boom will bring such a shift much faster and more broadly than many past inventions, and I don't think we are prepared for the aftermath.

Let's say that in 10-20 years some 50% of white-collar jobs can be made redundant by AI. There would be large swaths of workers with non-marketable skills; how do you retrain so many people so quickly? We already struggle to retrain small pools of the labour force from disappearing jobs, like coal miners or factory workers. Progress always comes at a cost, so what will the cost be when that many people are out of jobs, out of any prospect of jobs with their skills, and the value added by AI is captured by the few owners of capital?

That's my main anxiety. I have so far embraced AI as a way to make my job less tedious, but I've started to have this tingling feeling that I will see myself outskilled at my job by AI within my lifetime... and I have no idea what will be left to do. Given that predicting what comes next is nigh impossible, I don't think we will be prepared for the aftermath when it arrives, just as we were not prepared for the aftermath of social media and now simply live with the malaise.


I worry about this also: how does society function when only half the workers are required? We will need some sort of minimal social safety net that still allows people to do some work and get paid.

I view future AIs as partners, working with people. That said, much less human labor will be required.

Not really addressing your good points, but as an aside, I really hope that open AI models “win” rather than opaque AI systems owned and run by three or four giant tech companies.



