14x is insane, especially since the quality and quantity of IRL software have barely budged.
One could hope that we'd use these newfound agentic coding powers to actually realize value, improve quality, etc. Instead I see enshittification and stagnation. What are we even doing with all these tokens?
Good advice generally. But please, not at the gym. All gyms have a different vibe, but mine is almost strictly no talking. We go there to work out, not to chat. Everyone is locked in, headphones on, no nonsense. I've been going for years and I can count on one hand the number of conversations I've witnessed.
But the flip side is that I see the same gym crowd at the coffee shop next door, and we always have a good chat there. Context matters.
Thanks for this perspective. Do we really miss typing ascii characters into an editor? That seems to me the least consequential and least interesting part of building software systems. Always has been.
Dare I say those stuck on nostalgia for pressing keys are demonstrating that they cared more about their own personal experience than about the outcome of their work? Now that coding is automated, we have to elevate our ambitions.
Ironically, Phish's music emerges from egoless expression (to paraphrase keyboardist Page McConnell). Giving up your own personal stake in the process is literally what brings something as beautiful as Phish's music into existence. We need to do the same with our software; give up the notion that "our" code is meaningful.
> Dare I say those stuck on nostalgia for pressing keys are demonstrating that they cared more about their own personal experience than about the outcome of their work? Now that coding is automated, we have to elevate our ambitions.
YES. The beauty of programming is, and always was, that first you enjoyed it and second, for some oddball reason, you could actually get paid to do it. And you can't produce anything good unless you actually love working on it, which means you want to put yourself into the work. The outcome might accidentally serve the one who pays for it, but what ultimately got the work finished was the sensation of reaching the point where you would finally tie things together and see everything you designed come to life and work together.
AI doesn't give you that personal involvement. We can do it, but it's a different line of work, and we care very little about what goes in and what comes out; we just do the grunt work of connecting the two ends. We couldn't give a fuck about "elevating ambitions", a phrase that points to what is outside, while all the good stuff comes from the inside.
> Do we really miss typing ascii characters into an editor?
Yes.
> Dare I say those stuck on nostalgia for pressing keys are demonstrating that they cared more about their own personal experience than about the outcome of their work
I care about the outcome, which is why I don't trust it to a fucking LLM
Fully agree. I never mentioned velocity or advocated for lower quality. In fact, this statement very well sums up my point: we should care about the thing we're producing, not our personal experience of coding it.
I think concentrating on the physical act of typing on the keyboard is maybe taking it a little too far. The author of the OP talks more about holding a lot of the problem in their head and entering a "flow" state where they figure out a solution.
Most of my interaction with AI models and agents is still mediated by a keyboard and still requires a lot of "typing ascii characters". ;-)
The "typing ascii characters" angle is a bit hyperbolic, I admit. But my intention is to get people to think about their software, not their personal experience of it.
BTW, there's nothing preventing you from using AI agents and staying in the flow state. If you want that, the universe is not stopping you. In fact, not dealing with the minutia of source code may well free us up and allow even greater flow experiences.
> In fact, not dealing with the minutia of source code may well free us up and allow even greater flow experiences.
You say minutia, but others say well-organized notation and predictive systems. At least for me, writing code is as easy as writing English, and with less effort.
When I retire almost none of the software I produced during my career will still be in use, but I'll have memories of 40 years of work to live with until I pass away.
Fun experiment: run `claude` and `claude-local` side by side and paste the same prompt into both. In my experience, recent open-weight models (Qwen, Gemma) are pretty solid on quality, even on moderately difficult prompts. They get the "right" answer eventually, but roughly 10x slower on my M3 Mac.
Seconded. I learned a ton from it, as well as from his previous book, The Ends of the World, on mass extinctions. His writing style is wonderful; he turns an academic subject into a page-turner.
The Story of CO2 taught me something I had never considered. It wasn't exactly that photosynthetic life started pumping out O2 and chilled the planet; snowball Earth happened way later. It was photosynthetic life getting buried in sediment, which locked carbon away from aerobic respiration. The amount of carbon stored in the Earth's crust is insane; fossil fuels are just a minuscule fraction of it.
This has some implications for our current climate: If we want to use biology to sequester carbon (growing trees, algae, etc), it's only a temporary sink unless we lock it away for eons. Once it's eaten/burned, the CO2 is right back in the atmosphere. In short, we gotta physically put it back into the earth's crust if we want to draw down carbon.
> About 20% of the crude oil and natural gas used to come via the Strait of Hormuz.
20% of the world's oil production. But only about half of world production is sold on the international market; the rest is used domestically. So roughly 40% of the world's internationally traded oil comes through the Strait.
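The ratio above can be sketched as back-of-envelope arithmetic, using the rounded figures from the comment (illustrative numbers, not authoritative trade data):

```python
# Rounded figures from the comment above (illustrative, not authoritative data).
hormuz_share_of_production = 0.20  # share of world oil production transiting Hormuz
traded_share_of_production = 0.50  # share of world production sold internationally

# Hormuz's share of the internationally traded supply:
hormuz_share_of_traded = hormuz_share_of_production / traded_share_of_production
print(f"{hormuz_share_of_traded:.0%}")  # -> 40%
```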
What is LLM poisoning? Are you saying that if I write a prompt like "Classify whether this comment is XYZ or asking for ABC", the LLM will just not do it correctly because it's trained on Reddit?
LLM poisoning refers to feeding the model false information during training. Anti-AI folks are openly talking about intentionally flooding the internet with garbage to reduce the quality of the models. Reddit just provides a convenient and barely moderated forum for them to spread misinformation. And it doesn't take much: https://www.anthropic.com/research/small-samples-poison
Before REST gained mindshare, developing for the web was effectively templating. Vanilla PHP, JavaServer Pages, Django, Rails, etc. all had this idea that business logic would be transformed server-side into complete web pages by injecting variables into HTML.
REST came along and tried to put some discipline around this. The URLs now mattered and had semantic meaning! All the self-describing hypertext stuff was interesting but largely ignored.
That one point - giving URLs noun-like semantics - led to the realization that the return MIME type could be anything. We could treat the website like a database and fetch raw information from it.
The only thing REST and JSON HTTP APIs have in common is that they agree URLs should have semantics. REST sorta opened people's eyes to that fact... and the term was incorrectly adopted to describe everything that came after.
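That "noun-like URL, any return MIME type" idea can be sketched with a tiny hypothetical dispatcher (stdlib only, no framework from this thread): the URL names the resource, and the Accept header, not the URL, decides whether the representation comes back as a web page or as raw data.

```python
import json
import re

# Hypothetical in-memory "database" for illustration.
USERS = {42: {"name": "Ada", "email": "ada@example.com"}}

def handle(method: str, path: str, accept: str = "application/json"):
    """Dispatch a request: the URL is a noun; Accept picks the representation."""
    match = re.fullmatch(r"/users/(\d+)", path)
    if method == "GET" and match:
        user = USERS.get(int(match.group(1)))
        if user is None:
            return 404, "text/plain", "not found"
        if "text/html" in accept:
            # Same resource, rendered as a complete page (the templating era).
            return 200, "text/html", f"<h1>{user['name']}</h1><p>{user['email']}</p>"
        # Same resource, served as raw data - "the website as a database".
        return 200, "application/json", json.dumps(user)
    return 404, "text/plain", "not found"
```

The point is that `/users/42` stays stable as the name of the thing, while what gets returned varies per client.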
In addition to what you said, the idea that the endpoint should determine state and drive the possible sequence of actions is not obviously universally a good thing.
Many projects involve stringing together capabilities from different systems (via their APIs) and creating essentially a new business layer on top of the combination of those systems. The abstract high-level state is then really managed by the new system, and the component systems just provide foundational capabilities.
The idea of HATEOAS seems interesting but has limited application.
I don't know your intent, but I've seen others post that with the idea that we should not care about this type of thing, because it's just acting like a human as we trained it that way.
But I think this, and Anthropic's other testing about LLMs being willing to kill a data center tech by flooding a room with gas (or blackmail them with their Google Drive files) to avoid being shut off, for example, is concerning. The important part isn't whether AIs are trained on human behaviors; it's whether a good or bad human actor will accidentally or intentionally allow an AI to control something that can hurt people, or a weapon, etc. Fiction like the Three Laws of Robotics at least assumed that we would try to put stronger 'laws' in place before allowing AIs to control such things.
I 100% agree this isn't sentience, but sentience isn't the concerning result for me. (And I think the Three Laws, Skynet, etc. were intended to be cautionary tales.)
AIs can do unexpected things. There was a news story in recent days about how a Cursor agent deleted a company's prod DBs:
> The agent was working on a routine task in our staging environment. It encountered a credential mismatch and decided — entirely on its own initiative — to "fix" the problem by deleting a Railway volume.
> To execute the deletion, the agent went looking for an API token. It found one in a file completely unrelated to the task it was working on.
> I think the Three Laws, Skynet, etc. were intended to be cautionary tales.
Of course, that’s the reason there’s a story. “We did this and everything went dandy” isn’t that exciting, the purpose of science fiction tends to be to explore “we made this advancement and then shit hit the fan this way”. That and loud explosions in the vacuum of space, of course.
My intent is to point out that these results don't in any way shape or form indicate AI sentience. All I see is a human that said "act poorly" and we're somehow surprised that the model acts poorly.
These models pattern-match on content from the internet and are fine-tuned to do whatever their human operator says. Occam's razor says these cases are merely playing out the "sentient AI sci-fi" script, at the specific request of the researchers.
As you mention, it's bad actors controlling sycophantic-but-powerful models. And yeah, we definitely need to worry about that! It's a human problem, not an AI sentience problem. Let's focus on the bad actors themselves, not invent sci-fi scenarios.