> DigitalOcean App platform[…] only connect to GitHub
They also support deployments from GitLab (so long as you're using the gitlab.com-hosted instance and not a self-hosted GitLab instance). If you've deployed your own self-hosted forge, then you can connect DigitalOcean App Platform to it by using gitlab.com as a bridge—register an account on gitlab.com once and instruct your self-hosted forge to replicate copies to gitlab.com. You don't really need to actually use GitLab.
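A minimal sketch of that bridge using plain git, assuming you can push from any machine that already has the repo (the remote name and URL are placeholders; Forgejo and Gitea can also automate the same thing with a scheduled push mirror in the repo settings):

    # Add gitlab.com as a mirror target (account and repo names are placeholders).
    git remote add gitlab-bridge https://gitlab.com/your-account/your-repo.git

    # Push every ref so the gitlab.com copy stays an exact replica; rerun this
    # (or cron it) after pushing to your self-hosted forge, then point
    # DigitalOcean App Platform at the gitlab.com repo.
    git push --mirror gitlab-bridge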
Having said that, considering that DigitalOcean is in the business of selling IaaS/PaaS, it's loony that they don't let you connect to, say, your own self-hosted Forgejo running on their infrastructure…
(Indeed, considering how many people would like to self-host their own forge but how few people want to actually set up and do admin for it, it's loony that DigitalOcean doesn't pick up, say, Forgejo and/or an alternative and offer a sharply discounted (e.g. $20/year) quasi-managed one-click deployment option with first-class support for connecting to their App Platform.)
This post highlights, but never explicitly addresses, the current trend of corrupting the word "agent" to mean ChatGPT- and Anthropic-era AI.
It also speculates about Microsoft's practices despite their opaqueness, but doesn't recognize the development methodology used for Linux despite its transparency, or acknowledge how it differs from the pilloried style of code review that the audience is most familiar with, even though the kernel development project is the cradle of the VCS that everyone decided to use (but also to never use correctly).
Overall, there aren't really any insights here. The solution described just highlights that high-trust teams are composed of members who (are|can be) trusted by one another. But being on that kind of team is a luxury. The introduction of coding agents doesn't change anything. Take out the LLM-powered patch iteration, and it works for all the reasons it already worked before the advent of coding agents. It's a little like the obliviousness-to-privilege of folks who try to address the problems of people who experience poverty with advice that reduces to the question, "Have you tried <thing that precludes someone who is poor>?"
> PS: Really cool static site generators that shoot for simplicity don't require you to create extra template files written in a new, made-up template language. When you want to create a new post, you give it (a) the static files from your existing site and (b) the markdown for your new post. The "templating" engine inspects your existing posts (incl. e.g. class attributes) and then copies the same document structure into a new file, except with the right stuff (timestamp, title and heading, post content...) substituted into the places where it's supposed to go.
> In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".
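For anyone not steeped in npm, this is the distinction that quote is drawing; the commands are the standard ones, with nothing project-specific assumed:

    # npm install resolves the ranges in package.json and may rewrite package-lock.json.
    # npm ci installs exactly what the existing lockfile records and fails if the
    # lockfile and package.json disagree.
    npm ci

    # The pnpm spelling of the same idea (note the double dash; pnpm also defaults
    # to this behavior when it detects a CI environment):
    pnpm install --frozen-lockfile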
The grugbrain developer says, "I can use git-add to keep a version controlled copy of the library in my app's source tree with no extra steps after git-clone."
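That workflow, sketched with a made-up library name and path; the point is that a fresh clone needs no install step at all:

    # Vendor the dependency straight into the app's tree.
    cp -R ~/Downloads/some-lib-1.2.3 vendor/some-lib
    git add vendor/some-lib
    git commit -m "Vendor some-lib 1.2.3"

    # Later, on any machine (URL is a placeholder):
    git clone https://example.com/you/your-app.git   # the library is already in the tree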
(Pop quiz: what problem were the creators of NPM's lockfile format trying to solve?)
Lock files were begrudgingly introduced after people who weren't playing the "move fast and break things" game cried foul about dependencies being updated unexpectedly. The "semantic versioning" dogma, and the illusion of safety it brings, was the original motivation for floating versions in the first place. At NPM's creation time, mature dependency-management ecosystems did not have floating versions; versions were always pinned.
When you check your dependencies into the source tree, you are effectively pinning exact versions rather than using floating/tilde version-range syntax.
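For concreteness, here's how that distinction shows up on the npm side; lodash is just an example package, and the version numbers are illustrative:

    # Default: records a caret range ("lodash": "^4.17.21") in package.json,
    # so a later 4.x release can be picked up on a fresh install.
    npm install lodash

    # --save-exact records "lodash": "4.17.21" instead; the floating behavior is
    # gone, which is what checking the code into your tree gives you by default.
    npm install lodash --save-exact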
> setting up native binaries, or native modules linked against the specific Node version
So the majority of projects—those that don't use binary NodeJS modules—don't have a reason for sidestepping the primary VCS and going along with npm's shoddily designed overlay version control approach?
> However, I would just stick with regular pnpm and installs.
You're not answering the question. npm isn't bedrock, and pnpm certainly isn't. If you're going to introduce (mandate) the use of a tool in the workflow, you should be able to justify it by explaining your rationale for introducing it (and making everyone deal with the associated costs). You should at minimum be able to provide a lucid explanation of the tradeoffs. For good measure, you should be able to disprove the "NPM Null Hypothesis"; you should be able to state a straightforward answer to the question, "What problem is this supposed to be solving?"
Since this is largely/almost entirely private data, with occasional publicly accessible content exposed through share links, it's a good candidate to be rebased onto remoteStorage, which gives you auth and storage for free, and there's always an escape hatch for the user to get at their data and to permit other apps to access it, too.
> Wasn't this specifically some lame-ass attempt to combat some click fraud or something these extensions were doing?
No. That you believed that was just an unfortunate consequence of HN's kneejerk tendency to upvote middlebrow dismissals to the top comment, which resulted in people rushing to craft apologetics for what is in reality bona fide scumminess on LinkedIn's part, which in turn resulted in confabulations like the claim that "It was all extensions related to spamming and scraping LinkedIn last time this was posted", which is simply untrue.
> Lets say on the low side for their specific use case they spend ~10 hours. That’s $1k of their time
Only if there is perfectly elastic demand for their time at the given rate. It's often the case that for people putting in a typical 35–45 hours Monday through Friday, the number of hours in a week is not the limiting reagent, though (else they would be putting in 50, 60, or 70+ hours instead).
> Atproto isn’t “many servers sending messages to each other”. It’s structured more like RSS
Except that, crucially, RSS/Atom plays well with static nodes (e.g. personal websites generated with Jekyll/Hugo/whatever—or even written by hand[1]), and Atproto does not. (Nor does Mastodon; previously: <https://news.ycombinator.com/item?id=30862612>.)
It'd be great if the complexities needed to support the "Atmosphere" were widely recognized/acknowledged to be overkill and soon enough ended up going the way of things like CORBA and WSDL, while in their place a resurgence of interest in the Atomsphere emerged.
Atom was designed for news, before social media existed, in an era when 15+ minute polling times were (borderline) acceptable. Atproto was designed for social media, in an age of Twitter users getting their news in seconds, to the point of being able to comment on live events play-by-play. There's no coming back from that world.
With that said, I wish both Mastodon and Atproto supported opt-in pull-based, static sources.
> Atproto was designed for social media, in an age of Twitter users getting their news in seconds, to the point of being able to comment on live events play-by-play.
And this is widely recognized by now to have been a very bad thing, even/especially by those most susceptible to its draw. It's strange that you're framing it as a strength and not a lament.
> There's no coming back from that world.
You can't say that when everyone just begs the question and shoves application-server-needed-here protocol designs to the fore.
> I wish both Mastodon and Atproto supported opt-in pull-based, static sources.
Bridgy web (brid.gy) does that somewhat: it converts static sites into site.standard atproto records and whatever the corresponding activitypub format is.
There's always some Gemini protocol faction that shows up to yell that everything is wrong and we have to keep hand-assembling our packets or it'll never work.
Atproto's PDS is the root idea that everything extends off of; it's the "social filesystem" that you control. There's a protocol objective to be able to spread your data around widely and for folks to be able to cryptographically check that that data came from you (even if you have to change hosts, or even if someone sneakernets your data around). That's going to have some complexity! But it allows aggregation and is essential to how we are able to syndicate data so widely in atproto. It's so important it's in the name: Authenticated Transfer protocol.
And that in turn enables systems like Tangled here to be built, layered atop the personal data servers and relays. These work because there is identity.
If you need your static site to be on atproto (yay!), you can just have one of the various PDS hosts (such as Bluesky or eurosky or black sky or npmx) host the PDS for you. Since it is authenticated and user-sovereign, you can permissionlessly move to a different host whenever you please, should that go awry. It's unclear to me why static-site needs are an interesting or useful target that social networking ought to conform to.
If you want to make a simpler network where we don't have those guarantees, please go right ahead. It feels to me like a snap reaction, though, one that doesn't bother weighing what we have gotten or why things are this way, and that is reflexively demanding.
> If you need your static site to be on atproto (yay!), you can just have one of the various PDS hosts (such as Bluesky or eurosky or black sky or npmx) host the PDS for you. Since it is authenticated and user-sovereign, you can permissionlessly move to a different host whenever you please, should that go awry.
This seems to defeat the purpose of the relative sovereignty that hosting a static site gives you compared to depending on a PDS.
> It's unclear to me why static-site needs are an interesting or useful target that social networking ought to conform to.
Your data is still signed by you, and you still have the keys to move your PDS no matter what happens to your host. Do you have an actual threat model or reason why you are so afraid / unwilling to accept any compromise?
Your lack of a reply at the end, refusing to support basically your entire ask with even a modicum of supporting cause, feels a bit vindicating: it suggests that you are indeed a hostile agent, not here to engage or discuss, but to throw bombs.