Hacker News | soulofmischief's comments


The first-gen MacBook shared a lot of internals with certain Dell laptops of the era. In 2010 I was homeless and attending high school at a boarding school, and I didn't have a nice MacBook like my classmates, but I cobbled together my first laptop that summer from a few different old salvaged Dell models.

Dual-booting into a hackintosh was a breeze. I eventually salvaged an old T60 and it was a similar case, enough crossover in components that it wasn't any trouble running macOS.

This was in an era where you wanted Apple software even on non-Apple hardware. Today, it's the opposite.


A cornerstone philosophy behind the American legal system is that we must view every single increase in State power as a potential slippery slope, and must prove that it isn't.

In this case, it's a slippery slope; if we're normalized to this, what other incursions into our 1A rights to free speech, religious freedom and public gathering will we allow?

And I say religious freedom, because these kinds of laws are largely peddled by religious folk or people who otherwise have been deeply influenced by early American Puritan religious culture.

Neither I nor my children should be subjected to such religiously-motivated laws. I can decide for myself, and for my child, what is appropriate.

Neither I nor my children can be compelled to enter personal information into a machine created by someone who is, in turn, illegally compelled to require it.

Neither I nor my children can be compelled to avoid gathering publicly on the internet just because we don't want to show identification and normalize chilling surveillance capitalism.

I thought this was fucking America.


Claude Opus does this constantly for me, no matter how I prompt it or what is in my AGENTS.md, etc. It is the bane of my existence.

Seeing the source for a project doesn't prevent me from ever creating a similar project, just because I've seen the code. The devil is in the details.

Agreed, but the courts could conclude that any LLM that is not open about its decisions has stolen things, so LLMs would auto-lose in court.

Or they can conclude otherwise.

Not to mention the danger of energy production, even nuclear, becoming resource-constrained to the point where datacenter power plants leave no room for municipal plants. We're seeing it happen with consumer hardware; make no mistake on who will get preference.

The README has clearly been touched by an LLM. Count the idiosyncrasies:

“chardet 7.0 is a ground-up, MIT-licensed rewrite of chardet. Same package name, same public API — drop-in replacement for chardet 5.x/6.x”

Do people not write anymore?


I finally had to mute r/isthisai on Reddit because there’s now a subset of people who see the hand of AI in everything. Could that be generated by a clanker? Sure, but it’s also exactly what I would write if I wanted a quick pitch for a library that addresses some immediate concerns. It’s also what I would focus on if we had just finished a rebuild from scratch.

As Freud famously said, sometimes an em dash is just an em dash.


FWIW, I don't think there's even room for interpretation here, given the commit that created the README (and almost all commits since the rewrite started 4 days ago) is authored by

> dan-blanchard and claude committed 4 days ago


Sure, I just could use a break from the needless side tracks.

We are both trying to steer an internet towards a place we want to be.

The em dash is just a bonus, the grammatical structure is the giveaway. I'd invite Blanchard to argue that it wasn't LLM-generated.

I use AI tooling all day every day and can easily pick out when something was written by most popular modern models. I welcome an agentic web; it's the inevitable future. But not like this. I want things to get better, not worse.


For me, some projects start with writing a readme.txt by hand. That saves me time in cases where I realize I'd be making something pointless. (I don't use chatbots when coding, though.)

This post smells of LLM throughout. Not just the structure (many headings, bullet lists), but the phrasing as well. A few obvious examples:

- no special framework. No library buy-in. Just a URL

- Advance clock. Fire callbacks. Capture. Repeat. Every frame is deterministic, every time.

- We render dozens of frames that nobody will ever see, just to keep Chrome's compositor from going stale.

- The fundamental insight that you could monkey-patch browser time APIs ... is genuinely clever

- Where we diverged

The whole post is like this, but these examples stand out immediately. We haven't quite collectively put a name on this style of writing yet, but anyone who uses these tools daily knows how to spot it immediately.

I'm okay with using LLMs as editors and even drafters, but it's a sign of laziness and carelessness when your entire post feels written by an LLM and the voice isn't your own.

It feels inauthentic, and companies like Replit should consider the impact on their brand before letting people publish these kinds of phoned-in blog posts. Especially after the catastrophe that was the Cloudflare Matrix incident (which they later "edited" and never owned up to).

And the lede is buried at the very end: This is just a vibe-coded modification of https://github.com/Vinlic/WebVideoCreator, and instead of making their changes open source since they're "standing on the shoulders of giants", the modifications are now proprietary.

In the end, being an AI company is no excuse for bad writing.


Their whole product is about vibe-coding unmaintainable "apps"; I'm not surprised they put the same level of (in)attention into their blog too.

Also yikes for the proprietary modifications. AI companies: "what's yours is mine, and what's mine is mine only"


Unfortunately, people seem to organically love this sort of writing, since at least one or two of these get to near the top half of the front page here every day.

I'm not even against using AI per se, but when something is obviously written in ChatGPTese I'm not going to read it if I don't have to.


Yes, this kind of writing is rampant on X. Once you know it's coming from an LLM (mostly ChatGPT in my opinion as it uses this style often) you can't unsee it. And that immediately makes me skip it.

> - We render dozens of frames that nobody will ever see, just to keep Chrome's compositor from going stale

What's the issue with this one? It sounds like something I might write, tbh.


I don't have an issue with any particular form of writing, it's just that the current generation of LLMs often write this way and it's an indicator of possible LLM use.

"We X, just to keep Y from Z" and its variations are a pattern I've seen come up a lot.


You forgot the first part, the famous x, y, and z: "by virtualizing time itself, patching key browser audio APIs, and waging war against headless Chrome's quirks."


Thanks, it's helpful to put a name to it.

Yep, that's a good one. "Virtualizing time itself" is itself such a dead giveaway. What a nonsensical phrase.

Virtual Time is a feature of Chrome to fast forward when rendering.

See --virtual-time-budget

https://peter.sh/experiments/chromium-command-line-switches/
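For reference, a minimal sketch of how that flag is typically used (the URL and budget value here are arbitrary examples, and the binary may be named `chrome` or `google-chrome` on your system):

```shell
# Render a page headlessly, fast-forwarding its clocks by up to 5 seconds
# of virtual time: timers, requestAnimationFrame, etc. run as fast as the
# CPU allows instead of in real time, then the resulting DOM is dumped.
chromium --headless --disable-gpu \
  --virtual-time-budget=5000 \
  --dump-dom "https://example.com" > rendered.html
```

This is the built-in mechanism being referred to; tools in the WebVideoCreator style achieve a similar effect by monkey-patching the page's time APIs from the outside instead.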


Yes, but "virtualizing time itself" as phrased is meant to sound grandiose; LLMs do that kind of thing a lot. It makes it sound like some kind of mystical or novel approach even though the actual pattern is already common knowledge / explicitly supported.

Yeah it does have that flowery turn of phrase.

Lesson: if you're going to do LLM-assisted writing, say to it "make sure this has a distinct tone that is consistent and clearly quite different".


I think specifically, certain psychological modes require different levels of articulation, and language is one way to get there in a bandwidth-limited system.

See also: https://en.wikipedia.org/wiki/Newspeak


People are fascinated by controlling the vocabulary for political purposes but I think it mostly doesn't work. "Illegal Alien" is the exception that proves the rule.

Usually it results in an "equal and opposite backlash". Once they started calling children "Special" in school, "Special" became the ultimate insult.


It is a wordcel problem, i.e. the belief that language is all there is for modeling reality, even though this is obviously false and has been clearly disproven by decades of research in psychology, cognitive science, and neuroscience. At best we can say that sometimes language has a strong influence on our perceptions of reality.

EDIT: For a neuroscience reference that also argues why the general perspective is obviously false: https://pmc.ncbi.nlm.nih.gov/articles/PMC4874898/. But really, these things ought to be obvious from introspection.


Also, in my dealings with birds and animals of all sorts, I've come to believe that they are very capable in many forms of cognition without the use of language.

There was a fad called "structuralism" that liked to imagine that such and such is "structured like a language", but then when we got a paradigm for language it was one of those "normal science" paradigms that Kuhn warned you about: you could write papers grounded in Chomsky's theory for a lifetime, but it wouldn't help you learn to read Chinese more quickly, or speak German without an accent, or program a computer to parse tweets. That is, the structure of language is absolutely useless except for writing papers about linguistics, and the "language instinct" becomes some peripheral that grafts onto an animal; you still need the rest of the animal for it to work.

Now LLMs may not be a model for how we do it but they are certainly going to bring back structuralist and "wordcel" positions because they do seem to show, somehow, that "language is all you need" to accomplish whatever it is LLMs accomplish.


> Now LLMs may not be a model for how we do it but they are certainly going to bring back structuralist and "wordcel" positions because they do seem to show, somehow, that "language is all you need" to accomplish whatever it is LLMs accomplish.

People will try to bring back these obviously false models of cognition, but, so far, the dismal performance of LLMs on e.g. SpatialBench [1], and, almost certainly ARC-AGI-3, or e.g. the kind of data and effort required to get something like V-JEPA-2 [2], will be strong counter-examples to this. And, yeah, obviously animal cognition, esp. smart animals like birds, or the crazy stuff we see in chimp and gorilla ethology (border patrols, genocides, humor, theory of mind, bla bla bla).

[1] https://spicylemonade.github.io/spatialbench/

[2] https://arxiv.org/abs/2506.09985


Location: Southern US

Remote: Yes

Willing to relocate: No

Technologies: JavaScript/TypeScript/Web, Python, C, Full-stack, AI/ML

Email: hello @ bad-software dot com

GitHub: https://github.com/soulofmischief

Full-stack, JavaScript-focused engineer/entrepreneur with lots of experience building scalable single-page apps in different frameworks, web3, multiplayer web games, transformers and other networks, all sorts of things. Product-oriented, with executive and leadership experience, comfortable in both autonomous and collaborative settings, and a capable Linux sysadmin. I know how to ship, end to end.

I've focused for several years now on agentic and generative work such as simulating networked LLM-augmented embodied agents, interface research, too much to list here but always happy to talk more over email. While the rest of the industry is just starting to latch onto agentic systems, I can offer years of experience, product insight and leadership.

Available for consulting, startups, projects, anything web or agentic.

