The offered discourse on AI is “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life on.”
That doesn’t exactly leave a lot of room for people to feel the need to be involved in a discourse about it. For one thing, most people aren’t workaholics looking for extra hobby time.
The author mentions that ChatGPT can search the web. Okay, calling a search engine and retrieving a result has been possible for a while. LLM companies just slapped a statistical response layer on top as the UI.
Maybe the discourse sucks because the reality of it sucks?
If anyone wants to know exactly how that would be achieved, just look at how Google does "support" right now. No need to predict the future.
Google's "support" is a robot that sends passive aggressive mocking emails to those who were screwed over by another robot that made up reasons to lock them out of their digital lives [1]. It allows Google to save a ton of money while evading accountability.
It's the same thing with the latest overhyped robots. It won't even matter whether or not they're actually competent at the thing they're supposed to do. They will replace people regardless.
> I think the problem is that the statement is more like:
> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."
Exactly. If...
>> “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life on.”
...was even remotely true, we'd have already had that outcome, before AI.
“Isn’t it cool that I don’t have to write boilerplate and can prototype quickly? My job isn’t replaced because coding is not my job; it’s solving domain-specific problems.”
I’m in my late 40s, have written code for three decades (yes, started in my teens), and have always known that the code was never the point. Code is a means of solving a problem, mostly unrelated to computers (unless you work on pure software tooling).
This is why I chose not to study computer science. I studied something else and kept coding. I’ve always felt that CS as a field is oversubscribed because of the $$$ dangled by big tech.
So many fields are computational these days, and the key is to apply coding to them. For instance, a PhD in biology alone gets you nowhere, so many biologists are now computational biologists or statisticians. Same with computational chemists, etc.
For most of my career I’ve written code, but in service of solving a real world physical problem (sensor based monitoring, logistics, mathematical modeling, optimization).
> My job isn’t replaced because coding is not my job; it’s solving domain-specific problems.
Why are you so complacent as to think next year's models won't be able to solve domain-specific problems? Look at how bad text generation, code generation, audio generation, and image generation were five years ago versus how capable they are today. Video generation wasn't even conceivable then.
As an equally middle-aged person with children I'm less worried about myself than the next generation. What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?
The economy only works because people consume goods and services. If they can't do that, then capital can't make any money. So whatever the case, capital needs to ensure that the ability to consume is preserved.
This is the same conversation that happens decade after decade.
I agree with you, but no one listened back then; why would they listen now?
Capital formation comes before everything else, not the other way around. When you have nothing of value to trade, it simply can't happen, and inevitable hyper-inflationary/deflationary cycles begin which, once started, can't be stopped.
These people think survival is guaranteed, jobs are guaranteed, and the how doesn't matter; things happen because some politician says they do; reality doesn't matter.
That's the line and level of thinking we are dealing with here. How do you convince someone that if they do something, they and their children may die as a consequence, when they can't make that connection themselves?
Per Shannon, communication over a sufficiently noisy channel ceases to be possible at a certain point. Pretty sure we've crossed that point: where we may once have been able to discern and separate out the garbage, mimicry now makes it all but impossible.
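(For reference, the result being invoked here is, I take it, Shannon's channel capacity, in the Shannon–Hartley form for a noisy channel; mapping discourse "noise" onto channel noise is this comment's analogy, not part of the theorem:

C = B \log_2\left(1 + \frac{S}{N}\right)

where C is the maximum rate of reliable communication, B the channel bandwidth, and S/N the signal-to-noise ratio. As S/N falls toward zero, so does C: past some noise level, essentially nothing gets through reliably.)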
Intelligent people don't waste their efforts on lost causes. People make their own decisions, and they and their children will pay the consequences of those choices, even if they didn't realize that was the choice they were making at the time.
> I agree with you, but no one listened back then; why would they listen now?
Because we lead vastly better lives today than 100 years ago, when everyone was also raging about technology stealing jobs. The economy has to adapt to technology changes; there is no other way. It is a self-healing system. If technology removes a lot of jobs, then new jobs are created. It has to be this way, don't you see?
It can be a self-healing system, and capitalism is generally self-healing, but that is not necessarily the case in all economic systems.
There is a critical point where factors of production and producers leave the market because profit requirements cannot be met in terms of purchasing power (adjusted for inflation). You might think those parties are all there is, but that's not the case; there is a third party, the state and its apparatus.
With money-printing, any winner chosen by the state becomes part of its apparatus. Money-printing takes many forms, but the most common is debt; more precisely, non-reserve debt.
That third entity is not bound by profit constraints and outcompetes everyone else, rising in the wake of the destruction it causes. This is not self-healing; it's self-sustaining, and slow, and it does collapse given sufficient time.
New jobs aren't being created in sufficient volume to provide for the population. If anything, jobs have been removed en masse on the mere perception that AI can replace people.
You seem to rely heavily on fallacy in your reasoning; specifically, survivorship bias. Things are being done that cannot be undone. There are fundamental limits past which the structures fail.
You're saying I rely on a fallacy, survivorship bias, but you have no way of knowing what is coming, and yet you state it so authoritatively.
I resort to evidence from history, because these same arguments happen decade after decade, and the doom scenario has not manifested yet. I also find the anti-AI view narrow-minded. You're only able to imagine one scenario, the dystopian one. And yet none of us knows this is the likely outcome. It could well be that AI actually does increase productivity: we invent new medical cures, we invent new ways to grow food, we clean up our energy generation, and work becomes more optional as governments (which desperately want people to keep electing them) find ways of redistributing all the newly created wealth.
I don't know which will happen, and neither does anyone else.
This is naïve; governments and corporations are already working towards the dystopian result. Just because we don’t “know” doesn’t mean people can’t make an educated guess. You need people to put LLMs on the good path before you can say the bad path won’t happen. Right now people are loyal to the corporations that offer them; that’s the bad path.
It's like predicting avalanches in avalanche-prone areas.
You may not know the individual particle interactions and forces that will inevitably set the next avalanche off, but you know it will happen based on factors that increase the likelihood dramatically.
For example, the likelihood of an avalanche increases the more snowpack there is, and it goes to zero when the snowpack is gone. The same could be said of LLMs.
You know corporations will do absolutely anything, even destroy their own business model, so long as they make more money in the short term. John Deere is a perfect example of this: Mexico just finally took action because we couldn't, and that culminated in a ~14bn drop in market cap for the stock on Wall Street. It was over 10 years in the making, but it happened.
The more concentrated the market share and decision-making, the greater the damage, and the more impact bad decisions have compared to good ones. You tread water until you drown.
> You're saying I rely on a fallacy, survivorship bias, but you have no way of knowing what is coming.
Just because you happen to be blind in this area doesn't mean all people are blind. In The Day After Tomorrow, there was that group at the library that chose to follow the police officer despite warnings that going out into the storm would kill them. What happened? They died.
That is how reality works; it doesn't care about belief. It's pass/fail, live/die.
The thing about a classical education (following Greek/Roman Western philosophy) is that you can see a lot more of reality accurately than someone who hasn't received it, and an order of magnitude more than someone who's been indoctrinated. You know the dynamics and how systems interact.
The dynamics of systems don't just disappear; there is inertia, and you can see where it is going even if you cannot predict individual details or a timeline. It is a stochastic environment, but you can make accurate predictions, like El Niño/La Niña weather patterns, with the right know-how and observation skills. Everything we know today originated from observation (objective measure) and trial and error.
This framework is called first principles, or a first-principles approach. It's the backbone of science, and it ties everything that is important to objective measure and the limits of error. When dealing with human systems of organization, you can treat the system in predictable ways at the sacrifice of some accuracy, but that doesn't negate it completely.
Some things matter more than others, and they let one predict the future of an existing system, if carefully observed. A dam where the concrete has started cracking, for instance, indicates structural weakness prior to a catastrophic collapse.
It is not the government's job to redistribute wealth. That is communist/Marxist/socialist rhetoric, and it fails for obvious reasons I won't get into; Mises sums it up in his writings from the 1930s. You like to claim you base your reasoning on history, but you have to include the parts you don't agree with to actually be doing that.
Just because you don't know what will happen doesn't mean others can't. These are fundamental biases in your perception that rigorous critical thinking teaches you to avoid, so you are not dead wrong.
There are people who see the trends before others because they follow a first-principles approach, and they save themselves, or may even profit from it when survival is not at risk.
The blind will often cause chaos to profit, thinking that no matter what they do individually, they can't end it all. That's the exact same kind of fallacy you seem to be falling into: survivorship bias.
There are phase changes in many systems. The specific bounds may not be known or knowable in detail ahead of time, but they have been shown to happen, and in such environments precursor details matter.
The moment you start dismissing likely outcomes without basis is the moment you and those you care about go extinct when those outcomes happen and you are in their path.
No one knows everything, but there are some people that know more than others.
It is a fairly short jaunt, in the scheme of things, from the falling dominoes caused by the elimination of entry-level positions (and capital formation as a whole) to socio-economic collapse (where no goods are produced or can be exchanged).
The major problem is that no one is listening to the smartest people, because they are no longer in the room; only yes-men get into the room, the blind leading the blind. That has only one type of outcome given sufficient time: destruction.
> Why are you so complacent as to think next year's models won't be able to solve domain-specific problems?
If your domain is complex enough and has a critical people-facing component, you generally still have some runway. If it's not, then it's ripe for disruption anyway, if not by LLMs then by something else. I pivoted at age 32 because of this. I pivoted again at age 40 (I took a two-level title drop, principal engineer to midlevel, but I got to learn a new domain, got promoted back to one level below my old title, and now make more money).
I always treat my marketability not as a one-and-done but as a perishable quantity. I’ve never taken for granted that I’ll have job security if I don’t strategize, because I grew up in a time of uncertainty and in a society where a high-paying job was not guaranteed (some jobs, like grocery clerk, were, however). People who talk about “job security” as an entitlement of life are the first ones to be wiped out.
That said, not everyone is capable of constantly upgrading their skills and pivoting; we need some cushion against economic disruption for folks who have limited retrainability. But I suspect this is not everyone; most people just haven’t had to do it, so they think they can’t.
Americans have not had to face this en masse in the last 30 years, but many people around the world have. If you’ve lived in competitive societies where there is job scarcity, you quickly get used to this reality.
> What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?
I think those jobs will still exist in some form, but there will be a painful period while everyone figures out how to be differentiated. I’m a hobbyist YouTuber in my free time (YouTuber was a job that didn’t exist before), and I think it’s hard to replace parasocial relationships; AI slop already exists on YouTube, and it gets views but few subscriptions.
The scope of jobs will also shift, and we will see things moving toward realms requiring human judgment: delivering things that require interpretation. Job scopes today already encompass much more than people think. Again, there is no guarantee against disruption, but job security was always an illusion anyway, and the sooner we realize this the sooner we can adopt a preparatory mindset. (In a way, Americans are actually well positioned due to our relationship with capitalism.)
Even the demise of radiologists has been overstated, because being a radiologist is much more than just detecting disease from an image.
Writers will still be around; they might not be able to charge per word, but they’ll pivot to a new model. The transactional model will be gone, but I’m convinced something else will replace it.
I’m not sure about any of this because I can’t predict the future, but I have seen the past, and the doomsday scenario doesn’t seem to me to be the inexorable one.
There are things being done which cannot be undone, and there are issues that were long predicted and ignored whose consequences are now bearing fruit.
If you haven't heard a real doomsday scenario that's likely, you haven't been listening to the right people, and you rely far too much on the fallacy of survivorship bias.
If you don't have a plan to replace a fundamental societal model, there are two potential outcomes: either someone comes up with a replacement because they've been working on it (and it works, which is rare), or all the dependencies that rely upon that system fail and the consequences occur. In other words, everyone starves.
Think about what exchange suddenly becoming impossible would mean, overnight, for supply chains built on just-in-time logistics. We saw it during the pandemic, but that was just a small disruption, and not a continuing one.
Imagine it. Nothing on the shelves. No amount of money that will let you get what you need (toilet paper). No means of getting it on the short timetables of need. What happens? Prior to 2020, people would have called you crazy if you said those things would happen.
Bad things happen if you don't have a plan to make sure they don't happen.
I think this is hilarious, because it's exactly the type of low effort response that tends to dominate general conversations about AI.
You are making the author's point.
I think there's a lot more nuance in
> “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life on.”
than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like
> "Maybe the discourse sucks because the reality of it sucks?"
Maybe the reality of it doesn't suck?
In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
It absolutely sucked for some people, some of the time - and that's an important part of the conversation, but it's not the conversation "end of sentence".
> than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like
Sure, the author wants to talk about the technical specifics of LLMs. Yet LLMs enable a lot of people to avoid understanding even the technical points. That disincentivizes people from understanding enough to have the discourse the author considers valuable.
> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
I really don’t care about grand-scheme-of-things type responses to criticism of LLMs. But for the sake of argument, why should I care about discussing LLMs and their technical aspects if, in the grand scheme of things, we’re all going to die eventually?
It is the end of the sentence because most people can’t imagine what comes next besides not having a job. It’s not that they won’t be fine if a super AI takes over tomorrow; it’s that the literal limit of their concerns today is making money for themselves.
It might be different if LLMs actually made their users richer, but they don’t; they make the corporations richer.
> But for the sake of argument, why should I care about discussing LLMs and their technical aspects if, in the grand scheme of things, we’re all going to die eventually?
Why do anything, then? This is the laziest possible retort I can imagine.
> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
So you’re allowed that type of rhetoric, but when I use it, it’s lazy.
My point has been that it sucks, now. Right now, it’s hysterical on both sides of the conversation. So yes, it sucks. In the grand scheme of things it may not suck, or it could get even worse. Again, one side of the conversation is choosing to promote only one of those ideas, even though there is no evidence we will end up in a utopia; in fact, there’s a lot of evidence to the contrary. So yes, the conversation sucks. The reality right now sucks.
Yes, because - to be blunt - yours is so much lazier.
I picked machines that were undeniably controversial at the time they were introduced, because they did all the things you're claiming to be upset about here: They put people out of work, they enriched capital owners, they changed social structures, they altered governments.
Essentially, they are relevant discussion items for the topic at hand (if you're unaware, the general term "Luddite" used to mean "anti-technology" comes directly from the English textile workers who were replaced by looms, which they tried repeatedly to destroy, and who were eventually suppressed with military force, with sentences including execution and exile to penal colonies).
That's not some blasé "waves hand 'technology good'" reference I'm making, and I think your response is partially so annoying because we likely agree on a lot of things about the potential negative impacts of AI.
I just think the way you're articulating it is relatively low effort, and I think the original post is absolutely allowed to say that. You'll get dismissed because you're so obviously wrong about the easily verifiable things that it's hard to take you seriously about anything.
Which is exactly the impact of comments like "Why talk about this because we'll all eventually die" - they alienate your allies because they are trivial and trite trash.
Okay well as long as we’re delivering low effort attacks, I totally agree and think the same of you. I can’t take your response or ANYTHING you say seriously. Good talk, you’re right there’s plenty of good discourse on AI between people. This conversation is a winning example.
No - I picked it precisely because it's a machine that improved efficiencies but undeniably had negative impacts as well.
I think that's my whole point - I'm not saying that the person I initially responded to is incorrect in not liking the impacts AI might have. I think it's a perfectly reasonable take to be concerned about how AI might impact you, and to express that, along with negative sentiments.
I'm saying that the argument they are currently making
> "Maybe the discourse sucks because the reality of it sucks?"
and even the slightly better
> "Okay calling a search engine and retrieving a result has been possible for a while. Llm companies just slapped statistical response on top as the UI."
is a guaranteed way to be ignored and dismissed, because it's a low-effort emotional response, not an actual argument.
Those technological advancements with LLMs are low-effort advancements, so you only get low-effort responses.
Do you understand why maybe no one’s wowed by browser automation/automated web search? Can you extrapolate why no one’s stoked to talk about LLM bots replacing them with low-effort, inaccurate, “good-enough” fly-by research summarization?
These points are obvious to most people; that’s why the discussion is low effort. You shouldn’t need to mount a high-effort discussion just because you feel the low-effort discussion doesn’t make a clear point or makes LLMs look bad. The points are well discussed, and obvious. Hence low effort, hence sucky discourse.
Feel free to ignore and dismiss my perspective; that doesn’t make me wrong or you right. It just makes you a bully.