I wrote this in 2012 - almost exactly ten years ago - and it remains true.
Looking at freebsd.org right now I see that my options for production, release systems are 12.3 and 13.0 ...
... which means that if 12.1-RELEASE came out at the end of 2019 and 13.0-RELEASE came out in April of 2021, you had all of 16 months to make investments in the 12 branch before it was hopelessly passé and all queries regarding it would be answered with:
"That's fixed in current ..."
The 4.x nostalgia is justified: it was polished and polished and polished over many years, and many organizations were incentivized to invest in that branch.
> ... which means that if 12.1-RELEASE came out at the end of 2019 and 13.0-RELEASE came out in April of 2021, you had all of 16 months to make investments in the 12 branch before it was hopelessly passé and all queries regarding it would be answered with:
So basically the same as Debian and Ubuntu LTS? Debian releases every ~2 years with 5 years of Security+LTS support, which is the same as Ubuntu (LTS):
- Resolving our arbitrary (and unofficial) 5-year branch lifetime
guarantee. The support policy is that the stable/X branch will be
supported for 5 years (minimum) from the point X.0-RELEASE is released.
We now guarantee a 5-year lifetime on the branch, regardless of how many
releases are built from the branch. Additionally, a "last minute"
release from the stable/X branch does not constitute expanding the support
lifetime for the branch as a whole for an additional two years.
[…]
- A new stable/ branch release will not occur before two years after the
X.0-RELEASE from the prior branch. This limits the number of
simultaneous supported branches, which will greatly reduce the overall
number of branches that must be maintained and build-tested for
Security Advisories and Errata Notices, reducing turnaround time.
1. GP makes me wonder whether the FreeBSD definition of LTS is less than people expect.
2. Maybe to properly admin a modern FreeBSD box the admin is supposed to make the underlying system version easily replaceable, i.e. make changes to the actual OS files have no effect on hosted system/application stability.
> Maybe to properly admin a modern FreeBSD box the admin is supposed to be making the underlying system version easily replaceable.
This is how things worked at Yahoo and WhatsApp.
For the most part, a server kept whatever version of FreeBSD it was installed with, and the application logic lived in /usr/local/ or /home/y/ or /home/whatsapp/
Keep a 'build' machine that compiles production executables on the oldest OS release in your fleet, and everything should run everywhere. (You'll need the compat-X packages so your FreeBSD X+1 machines can run FreeBSD X binaries, but that's easy.)
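A sketch of the compat-X part (the package name and binary path here are assumptions; the exact compat package depends on the release pair):

```shell
# On a FreeBSD 13 host, pull in the 12.x compatibility libraries so
# binaries built on the FreeBSD 12 build machine keep running.
# misc/compat12x is the assumed port origin for this release pair.
pkg install -y misc/compat12x

# Confirm an executable built on the older release resolves all of its
# shared libraries; /usr/local/bin/myapp is a hypothetical binary.
ldd /usr/local/bin/myapp
```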
You can use stuff from the base system, but not if you expect recent changes.
I don't remember ever doing an OS upgrade at Yahoo, although it must have happened. In my group, we only got updates with new hardware, which was fine; hardware was changing rapidly.
At WhatsApp, I drove the OS updates towards the end of WhatsApp's use of FreeBSD. A fair amount was just not wanting to have 4 different major versions out there, but there were also forced updates to get important network bugfixes or performance improvements. Sometimes it was easier to find the handful of boxes that were too old to use something nice in a script and update them, rather than fixing the script to not use the nice feature.
When we ran into bugs on older versions, upgrading to a version that already had it fixed was sometimes an option, but sometimes it'd need to be backported, and sometimes the bug was still in -current. We didn't run into too many of those though, our local patch set was never more than about 10 patches.
At Yahoo, there was a tool that did in-place OS upgrades for FreeBSD, but you're right - in most cases, people installed whatever was current at the time and then relied on yinst to make it work. ybsdupgrade did work and was mostly a non-event. I don't recall whether we had it work across major version upgrades though.
Not surprised there was a tool, but I don't remember it. :) most of my time at Yahoo was on 4.x, so minor upgrades only would have been fine. (Some 6 and 7 towards the end, but also a little bit of RHEL, which I was happy to get away from)
I wonder if FreeBSD would benefit from some documentation or packaging to make this more of a covered area?
Personally, the times I upgraded a FreeBSD box the process felt very smooth. To the point that I’m trying to understand @rsync's point and seem to be missing something critical.
FreeBSD has a tool for binary upgrades now, and has had a process for source based upgrades forever. This is all documented in the FreeBSD Handbook, and I'd be surprised if the source based upgrade wasn't in there since the beginning of the Handbook, which I think came out with FreeBSD-3.
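For reference, the binary-upgrade path with freebsd-update(8) is roughly as follows (a sketch; 13.1-RELEASE is just an example target):

```shell
# Fetch and stage the upgrade to the target release.
freebsd-update -r 13.1-RELEASE upgrade

# Install the new kernel, then reboot into it.
freebsd-update install
shutdown -r now

# After the reboot, run install again to update the userland.
freebsd-update install
```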
But, darkhelmet and I were discussing updating FreeBSD at Yahoo; doing the update from the Handbook wouldn't be the right way to do it, because you would miss out on Yahoo specific patches, which were likely required for some key Yahoo software. I don't think it was common knowledge how to do it, so some documentation might have been nice, but mostly teams at Yahoo just didn't do it at that time for better or worse.
The tooling at Y! sounded quite good from the descriptions. Kind of confirmed my thoughts that the proper way to run a FreeBSD system is to make hopping versions not affect hosted applications through specific tooling and/or infrastructure. Especially the bit about building on older versions and such. Not really sure what more would address the version complaints from @rsync.
There could be policy reasons why someone may decide not to do it: if you have a large fleet of cattle machines it may be easier to just redeploy from scratch. Homelab users and pet machines at companies may be treated otherwise.
I would still have built rsync.net on FreeBSD because of how familiar I was with it and how much I had personally (and institutionally) invested in it from JohnCompanies.
Since ZFS is now so core to what we do, it was a very natural evolution.
I have zero experience with Dragonfly BSD.
I love that FreeBSD is still a very simple UNIX without complications like systemd. I was recently tinkering with a Raspberry Pi and had to work out wifi connectivity and had to interface with ... what is that weird wifi subsystem that Linux uses? Supplicant? Wow.
I just want to pipeline commands and edit config files. I don't care that it's inefficient or doesn't scale. It has scaled just fine for me.
I believe wpa_supplicant is still needed for wifi access on freebsd (or, it was in 2017 when I was last using freebsd on my laptop).
OpenBSD has better integration between ifconfig and wpa, though; it’s possible that’s been ported over by now, but wifi config on FreeBSD is (or had been for quite a long time) the same as on Linux.
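For concreteness, a minimal FreeBSD wifi setup via wpa_supplicant looks roughly like this (a sketch: the device name iwm0 and the network credentials are placeholders):

```shell
# /etc/rc.conf -- iwm0 is an assumed Intel wifi device; adjust to yours
wlans_iwm0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"

# /etc/wpa_supplicant.conf -- placeholder network credentials
network={
    ssid="example-ssid"
    psk="example-passphrase"
}
```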
Completely agree about the opaque and incidental complexity of the systemd ecosystem though, I’m glad people love it but it’s not for me.
As someone without skin in this particular game, I'm curious to hear your view of the OS landscape today. Have you considered (or perhaps even tried) migrating rsync.net out of FreeBSD given these issues?
Either there is a code contributor pushing for functionality (for example, sendfile for nginx), or a donor suggesting a path forward, or a volunteer giving free time.
Every foundation's goals can be driven via sponsorship. In the end, everything simplifies to: who pays the bills at the end of the month?
This raises the old question: Who pays for the free in free software?
> "It's difficult to escape the notion that FreeBSD is becoming an operating
system by, and for, FreeBSD developers."
I don't know how many FreeBSD devs are paid full time and how much of it is voluntary, but this is the outcome that I would expect absent a traditional customer-vendor relationship. This is basically every open source project ever where most of the work being done is non-commercial.
We could quibble about what exactly "primarily" means, but that's not the phrase he used, which is "by, and for" without the qualifier. So here are a few reasons to make FreeBSD for others as well:
- They use a lot of code that they don't develop. If FreeBSD is not for others then those external projects and developers would be disinclined to make their stuff work on FreeBSD.
- Every new FreeBSD developer comes from a non-FreeBSD developer who is interested in FreeBSD and probably uses it. More developers ~= better FreeBSD for FreeBSD developers.
- More users (very roughly) means more money. Whether that's money to pay for more FreeBSD developers, or incentive to make your hardware work with FreeBSD or port your software to it, there are some positive effects on the system and possibly your wallet.
- Personal satisfaction to develop software lots of people use. Also the recognition that comes with that can get you a job or help you meet people and go places.
So lots of reasons. Even being purely selfish and hoping to extract the most from it, there are plausible reasons why some amount of focus on others might be the best way to go about developing the project.
If you have a large installation, why depend on FreeBSD releases at all instead of choosing (arbitrarily) a newer STABLE snapshot, testing it and eventually rolling into production same way you’d do with a release?
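Tracking a stable/X branch from source might look roughly like this (a sketch; the branch name and flags are examples, and the Handbook covers the full procedure):

```shell
# Check out a stable branch snapshot and build world and kernel.
git clone -b stable/13 https://git.freebsd.org/src.git /usr/src
cd /usr/src
make -j"$(sysctl -n hw.ncpu)" buildworld buildkernel

# Install the new kernel first and reboot into it.
make installkernel
shutdown -r now

# After the reboot (ideally in single-user mode), update config
# files that installworld depends on, then install the userland.
etcupdate -p
make installworld
etcupdate
shutdown -r now
```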
The transition from the 4.x branch of FreeBSD to 5.x.
5.x had all kinds of problems and then 6.x was the first release that barely got out into public use before 7.x undercut it ... and that is the cycle we have been in ever since.
4.x had minor releases going all the way to 4.11. Nowadays, major versions generally stop at X.3 (for instance, 11.3).
It also seems like 10 years ago folks were a lot more willing to have discourse, where today… if you have difference(s) of opinion, it's not received as well.
Is this true? When I was a kid using the internet to learn to code in the 2000s, people seemed a lot meaner in general than they are now. It was hard to ask a question on a forum without being made to feel like an idiot.
When I was much younger I once wrote some random person, whose email I found in newsgroups, about my troubles getting Linux to compile. They responded politely and quickly, with a fix.
That person was Linus.
I've found my share of trolls and assholes, but likewise I've found that being forthright, polite and respectful tends to yield answers.
It's difficult to quantify that but I feel that you are right.
Imagine a term like "RTFM" being coined and used in popular tech forums today. Back in the day, "RTFM" was a valid one-word response to any number of questions from a n00b; if said question was asked in an IRC channel you'd often get kicked, or even kick-banned for good measure.
>Imagine a term like "RTFM" being coined and used in popular tech forums today
or "flamewar", which almost sounds a bit anachronistic now. I don't think I've been in one in a long time actually. The early 2000s were a lot rougher and often toxic compared to what you have today.
Internet discourse is still hit and miss but it's more mature.
I think today it's a lot less explicitly aggressive, but it's actually more meanspirited. It might be controversial to say this, but online discourse is a lot less male and a lot more female than it was. Is this any great surprise?
Strongly agree. It used to be the accepted norm that you’d relentlessly ban-evade to get back at the admin who flamed you in their ban message. However, there was the tacit understanding that it was the internet and none of it was real: just log off and go outside once you got sick of it.
On one hand, I see so many occasions where someone new to something (even if they aren’t new to programming) asks questions whose answers could be trivially found in a second.
On the other hand, I’ve seen enough handwavy RTFM copypastas from people who clearly neglected to read the question being asked, as well as enough manuals [documentation, etc.] that make me want to bash my head in with how unusable, unspecific, and difficult to navigate they are.
There’ve been many times I’ve been hand-waved at in IRC channels with RTFM when the “manual” is multiple wikis or documents lacking essential points between them. The best ones are the wiki migrations where you can tell someone got in over their head trying to migrate to the new hotness.
Absolutely right, and you even provided a meta-example. Words like “feel” to describe thoughts would rarely be used, except to indicate a conclusion reached with minimal or no reasoning. “Think” vs. “feel” still rages today in some circles, and it mostly functions as a proxy for the person’s age.
I’ve always believed that much of emergent online behavior can be explained by innate human social processes seeking a path to goal when the more likely paths aren’t present. Absent the normal cues of physical aggression or dominance/submission signaling, emergent processes bubble up and get adopted by the group based on utility. Our brains are constantly engaged in ways to determine our place in the hierarchy. For better or worse, most non-topical subtext in forums falls into “I’m better than you” or “we’re better than them”.
As far as the internet being nicer now than it used to be, I’m not so sure. As someone active on BBSs from the 1980s onward to the internet, there is a lot more passive aggression these days, most of it via leveraging of unspoken community norms. The notion of an “online community” couldn’t predate the concept of “online”, and the early BBS/networking felt more like operating a radio in isolation. The fact that you were interacting with someone far away was still a novelty. e.g., “someone from Singapore called my CatFUR line!”.
Because most people know the documentation exists, and people responding with RTFM aren’t doing so to be helpful but to be impolite. If you’re gonna respond with RTFM, just don’t respond and it’ll have the same effect.
Yes, but some people ask questions rather than try to look for answers in the documentation, since it is perceived as lower effort than looking in the documentation themselves. It is quicker to ask a question than look through the documentation, especially if you are not sure about the right keyword for your query. Plus, if you ask a question you get to define the criteria for a valid answer to include an example that almost exactly matches your situation.
Edit: I feel like I should point out that I agree that RTFM is an incredibly offensive term, but some form of answer like: “it is covered in the documentation” actually conveys one important bit of information: The answer can actually be found in the documentation.
When I was learning to code in the late 90s early 2000s people seemed much nicer and more patient then than they do now… I try to avoid asking questions on SO because of the negativity there.
You could learn how to write a better question. I've got ~140 questions on SO and none of them ever had hostile responses. Usually questions are downvoted because they're:
* vague
* no code example
* so basic that a Google search could answer your question
* same old, same old repetitive questions answered many times over
Back in the days of usenet there would be a FAQ and likely a set of HOWTOs; you had to read these so as not to fall foul of pushback from the group (which would probably be considered hostile today).
If you are asking what might be a repeat question but didn't find a solution either on SO or elsewhere, then quote the sources that came close but didn't help. That shows you at least did some research. Both usenet and the forums of the 1990s/2000s were no different.
I quit answering questions on SO because I felt people were lazy and wasting my time, and I was done with that, mostly for the reasons above. I still monitor the same tags I was active in, see the same old crap, and really cannot be arsed because it's clear the OPs did no research. Also, when you do assist, some users become quite indignant that you must solve their exact use-case. Many users never even say thank you for an answer that might take 30-45 mins to produce. So feck that shite. I don't have the time or energy to be hostile, I just went dark. And sadly many early enthusiastic answerers that helped bootstrap the site in the early years did the same.
I’ve seen really good answers which, despite being correct, were downvoted in favor of someone with a higher SO score.
I posted an answer to a question like 10 years ago before dynamic or anonymous types existed in .NET. It was valid and I later prefixed it to say another answer is now correct but mine was correct for the version of .NET in question.
Still has people downvoting today.
It really sucks answering questions and investing time when people downvote for silly stupid reasons.
I’ve also seen questions asked which were good questions with examples and such. But downvoted because “that’s not how you do it” despite “I can’t do it that way because of constraints in the system”.
Part of SO is really good! It’s been a game changer. But it’s really a different community now than when it started. I miss the early days.
I once answered a question on SO and included a link to some sample code (production-quality actually) that I had written several years before. The answer was (I thought) on-topic, clear, and helpful (it gave the questioner everything they needed to solve their problem). It was downvoted to oblivion, apparently because I included the link. Then I removed the link and just pasted the 150 lines or so of sample code directly into the answer, but my “reputation” is still negative as a result so I can’t answer any questions. As a result, I have zero interest in ever trying to contribute to SO again.
> I posted an answer to a question like 10 years ago before dynamic or anonymous types existed in .NET. It was valid and I later prefixed it to say another answer is now correct but mine was correct for the version of .NET in question.
> Still has people downvoting today.
Aye that sucks. I have a PHP answer that is the accepted answer, but the PHP wonks got all up in arms about it because the immediate solution for the OP was to use the global keyword in their class method to get access to an array that's being created/modified in the global scope. I did caveat by letting them know this isn't a fantastic solution. But it looks like their app had a shedload of bigger problems that weren't going to be solved in my answer.
I also miss the early days.
Apropos my previous reply, I wasn't having a go at you, though re-reading it does seem that way, so I do apologise. I'd had a couple of wines, and whenever stuff like this comes up I do feel slightly hacked off at all the effort I'd put into helping build that empire (I was a <500 beta user, and was also a diamond mod). And all I got was a lousy mug and some crappy pens :)
My biggest gripe with SO is when someone asks "How do you do X?" and someone responds, "you don't need to do X to solve your problem, you can do Y."
Y solves their problem and is marked as the answer.
But X is exactly what _I_ am trying to do, and would solve my problem. But any other question regarding X is marked as a duplicate of this first question which only solves Y and not X.
StackOverflow isn't really for beginner questions/"help me learn" kind of stuff. In the course of trying to come up with answers to most questions, they knocked out the easy/fundamental ones early.
I used to answer questions on the askprogramming subreddit sometimes, and I generally didn't see a ton of negativity there.
I wonder if I’m the only one who read your comment and flashed back to a time in the distant past when some genuinely, maliciously, irredeemably batshit crazy people used to post on the FreeBSD mailing lists. One memorable guy who I will not name went on later to become of all things an investor and finance blogger and apparently claims to have helped found the Tea Party, which is all just absurdly hilarious.
Some do, I'm sure, and I guess someone who posts normal stuff under their name and insane stuff under a pseudonym probably counts as "self-censoring," in a sense.
I don't see how this can be true overall, though. Ten years ago, the time the grandparent poster mentioned, was about when I was realizing that even though the election was over, people I'd known all my life weren't going to stop posting (under their own names) racist Obama memes on Facebook or Twitter, implicitly daring the still-sane people to call them on it. Things haven't improved notably since then. It's not something I think about a lot, but my sentiment about internet dialogue over the past decade is definitely not "hey, look at all the self-censorship going on." People are clearly concerned with how their comments will be perceived in the future.
I've been self-censoring pretty much since Dejanews ca. 1995. Ironically, Google have made the Deja archives progressively harder to discover since acquisition.
> ... which means that if 12.1-RELEASE came out at the end of 2019 and 13.0-RELEASE came out in April of 2021, you had all of 16 months to make investments in the 12 branch
A worst-case 16 months sounds like an absurdly long support window by the standards of most actively developed projects; anything but the largest and most corporate of projects is unlikely to do better than that.
And it seems contradictory to complain that stable releases aren't supported long enough while also complaining that you're waiting too long for new functionality. Of course everyone wants daily releases that are supported forever, but realistically, with finite development effort, most projects have to settle on something equivalent to "you can run -CURRENT or you can run LTS, pick your poison".
This has not been my experience over the last 20 years across multiple industries. Generally I'd expect (and have written into an SLA) a minimum of 2 years support for anything even vaguely critical. If using Open Source I'd be buying from a vendor who would guarantee that level of support. I acknowledge it's different if you're working for an early stage startup designing the next tinder for marmosets or whatever in which case you probably don't care about support windows because you're likely to crash and burn within your support window.
I've worked across industries in everything from a 2-person startup to an F500 investment bank, and I've very rarely seen a 2 year SLA for anything. (Rolling 6 months was the standard at the investment bank). Buying all your open source frameworks is an expensive luxury (and rarely worth anything), at least if you're in a position where you actually need to deliver.
So you're saying that you've never worked somewhere where the base OS was an old version of Red Hat that was still supported, probably 5-7 years after release?
Now that I think about it, I have seen a computer running an OS like that, but only for the machine that was running the Oracle database that let the CTO get his golf buddy a $50k/month sale, never on one that was actually being used to do something.
This is standard operating procedure for financial institutions in Europe, at least. And many, many other types of enterprises.
For core banking and other mission critical systems. Nobody upgrades an OS for fun, especially when the checklist for doing so is longer than both my arms put together.
Not every financial institution in Europe, I assure you. The one I worked with actually had a policy that anything that hadn't been updated in six months had to be signed off as it was considered a risk factor.
Hmm. The policy was "a release published in the last 6 months", which sure, could theoretically be an old LTS branch. There certainly wasn't a culture of doing that though. I remember a huge scramble to move everything over to Java 11 (honestly a lot scarier than an OS upgrade in that context) which I'm pretty sure was based around 6 months after the first release of Java 11 rather than the EOL date for Java 8, but it's possible those were the same thing.
Well, I'll tell you how I know the policy I'm talking about is more popular than the one you're talking about: LTSes themselves, and vendors offering extended, enterprise support for old software in general and OSes in particular.
If enterprises would consistently and massively upgrade early (for example within 1 year of a new release), vendors wouldn't need to offer this super long support on such a widespread scale. There's a reason Windows Server is supported for so many years, or RHEL, or SLES, or Solaris, or whatever.
As others have pointed out, LTS exists elsewhere - the truth is LTS is the minimum level of support necessary for most users. Everything else is essentially bleeding edge - too much work to maintain, constantly bugged, etc.
The previous model where a major change could have support last for decades, where the system was modular in that patches could apply to any major version - that was gold.
Unfortunately today we have this "yesterday's tech is garbage" attitude, and honestly it is mostly making all software garbage.
Do you ever think about your toaster? No of course you don't. It turns on when you need it, toasts the bread, and then turns off. It doesn't require your attention. Ever.
Then you look at your computer and wonder what buggy hell you'll end up with if you roll the dice with the latest updates.
Something is fundamentally wrong about how we are using computers.
Well, now your toaster is slowly getting wifi modems and processors so it can be "cloud connected" and require a subscription. So don't worry - the same excitement you have updating your computer today will soon come to your toast and bagel situation.
If you're running Debian Stable or RHEL, updates aren't buggy hell. If you're running Arch, Gentoo, or Alpine: of course updates are buggy hell, that's what you signed up for.
Here's a summary of what happened: Firefox got a new password database format. Originally, it would write the new database file and use it if present, but leave the old one there indefinitely. A little while later, people realized this was a security vulnerability: if you create or change your master password, the old database will still let attackers in without one or with the old one. The vulnerability was fixed by having Firefox delete the old database on startup.
The code to read and write the new password database was a new feature, so Red Hat deemed it unworthy to add to their "stable" distro. The code to delete the old password database was a security fix, so Red Hat cherry-picked it into their distro.
But now the problem should be obvious: Red Hat's version of Firefox now deleted the old password database on startup, but lacked the code to write out the new version of it. As a result, as soon as you started Firefox, your saved passwords would be completely wiped from disk. This problem was inherent to the "stable" model that RHEL uses, and would have never happened with distros that just take new upstream versions and avoid cherry-picking patches, like Arch.
> but in practice the opposite often turns out to be true
One backwards-compatibility issue in how many years, builds, and packages? I would say _often_ is highly exaggerated. I've had many more issues with rolling-release distros than I've ever had with CentOS; I can basically do a yum upgrade and reboot with my eyes closed and not have any issues.
One issue that eats your data is worse than 100 issues that just take up your time to fix. And sure, you can say backups are important, but we know that in practice, a lot of people don't keep good backups.
And besides, it isn't just one issue. https://bugzilla.redhat.com/show_bug.cgi?id=2039993 is another example of a problem caused by incorrectly cherry-picking a security patch: OpenSSL was affected by CVE-2021-3712, which they quickly and correctly fixed upstream. But RHEL still uses OpenSSL 1.0.2, which has been EOL for more than 2 years, so Red Hat had to backport the patch themselves. When they did so, they made a silly typo (forgot the "!"), which made Web servers all crash with a double free error very shortly after every startup.
https://bugzilla.redhat.com/show_bug.cgi?id=1861977 was yet another, in which trying to fix a Secure Boot-related vulnerability rendered computers unbootable. A fourth one, which I unfortunately don't remember the bug ID for, was when a patch added to NSS in a security update made programs hang when you tried to use a smart card.
To be clear, I didn't go searching for these problems. I know about them because I was affected by them all.
OpenSSL 1.0.2 is a library used by third-party software. The value proposition of SLES/RHEL/Ubuntu LTS is binary-compatible drop-in replacements, so you don't have to rebuild your userland.
Firefox is a runtime environment for the UI of POS/kiosk systems, with horrific customer applications running on top. I can only imagine the internal discussions inside these distros about whether to upgrade firefox/chromium or whether to backport patches.
That's why each of the professionally backed distributions has some strategy for mixing the frozen, maintained baseline with some more recent subsystems "from the future"; SUSE calls it "Leap", for example.
So I'm not sure: do you suggest completely abandoning frozen-version supported platforms, or improving upon the model so fewer issues like the ones you listed occur? "Even fewer" I should say, because the events are rare. Painful, but rare.
Arch at least is super stable in terms of bugs. The problem comes with changes in software versions. Too often people think that stability is only about bugs. I would even argue that Arch has fewer bugs than Debian.
Arch and Debian user here. I find that hard to believe. I love using Arch and I am fine with the occasional (though rare) update problem, because I'm the only user of my system and I'm learning something when something stops working. But Debian stable is really stable, because other distros ironed out the bugs before the version came into stable.
I can count myself as a hardcore user as well. I mainly use Arch Linux, but for servers and some build environments I use Debian and Alpine. It is always Debian that has a package with a bug that goes unfixed for some reason. I work with quite complex systems, like libguestfs, and somehow there is always a broken dependency in some narrow use case. These are often fixed on Arch already, but in Debian you might need to wait a year for an official fix.
Based on my time with Arch, I’d agree. The fact is bugs exist even given N amount of time to find them, what matters is how quickly they are fixed - and it doesn’t get much faster than Arch.
An operating system dedicated to toasting bread would be fully possible to make in a way that did not need updates or attention, as long as it had enough testing first.
Is this partly because there is no (or very little) commercial support for FreeBSD and therefore nobody is all that interested in the boring bits like long-term stability releases such as the 5 year release, 5 years of support suggestion in that thread? The volunteers want to do new and interesting things. They don't want to backport fixes for 5 year old bugs anymore when most likely the subsystem that contained that bug has already been replaced in current anyway, so "what's the point?". And if nobody is being paid to do it, if there's nobody _to pay_ to do it, then nobody is going to do it. This has always been the problem with FreeBSD. Despite some great technology the operating system itself has mostly fallen by the wayside in the grand scheme of things. There's way too much FreeBSD "religion" in that community IMO.
There is plenty of commercial support, though; nobody publishes anything publicly thanks to the BSD license, thus everybody re-invents the wheel internally.
Btw, I'm an ex proprietary-solution-based-on-FreeBSD developer.
Differentiation from Linux is a hot topic whenever FreeBSD is discussed. I could get behind “converge to a fixed scope, and polish relentlessly” as the primary ethos.
Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished? At the moment, all OS releases are just waymarkers on an endless, goalless roadmap of perpetual development.
>Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?
I don't mean this in a rude or dismissive way but these comments make me want to pull my hair out. Deciding your software has a fixed scope and is finished doesn't stop the wheels of progress from turning. Worst case, the rest of the world will decide the scope you chose was crap, for reasons beyond your control, and then your project is dead. I get that it is an engineer's fantasy to build a theoretically "perfect" system but such a thing can't exist in practice. I think you'd have a better chance of proving P=NP or something.
I don’t mean finish FreeBSD forever and go home - I mean finish a release.
Set some goals, release software that achieves those goals, and keep fixing bugs that are found.
If you have some new goals (support new hardware, add new feature), work on those in the next release. But don’t abandon the old release, because it still works to achieve the original goals.
What I’m suggesting to avoid is the situation where a bug is found and the user is told “it’s fixed in the next release”. If the bug compromises the goals of a release, it should be fixed in that release!
Sure you’d have to support old releases for much longer, but that’s not the hard part.
The hard part is deciding what the goals are in a precise and coherent way (for this purpose, goals should be the set of promises made to the user, which when broken give rise to a bug). I don’t think any large software project has good discipline here. I don’t think it’s been done before for an OS. But I’m talking about differentiation here.
Thanks for the clarification. In practice what that means to me is you need more people backporting fixes to the old branch. My experience: it's boring, thankless work and it doesn't pay unless there is some serious investment behind it, think heavily-audited systems deployed in government offices that have to stay the same for decades.
There is a good reason for new features. But it might be nice if those features came as part of an official set of extensions around a stable core, to get the best of both worlds.
Not necessarily. They're all features that currently exist in the core product, making that product larger and more error prone. Thus, the support story is either the same, or smaller if one of those features isn't activated.
Having them run as extensions doesn't mean that there's a public extension API, you can keep that interface all to yourself. It means that from the developer's perspective, there's a core offering that the most senior members can handle, but the smaller features are both bundled, and can be more easily passed to other teams.
It's the same idea behind using libraries, instead of shoving all the code into one place. But it forces you to have a single structured interface to extend the product, instead of touching every other file when you want to add something.
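The "core plus bundled extensions" split described above can be sketched as a minimal plugin registry. This is an illustrative sketch only; all names (`Core`, `register`, `word_count`) are hypothetical and not from any actual FreeBSD code:

```python
# Minimal sketch of a core product that exposes one structured
# extension point instead of letting every feature touch every file.
# All names here are hypothetical, for illustration only.

class Core:
    def __init__(self):
        self._extensions = {}

    def register(self, name, handler):
        # The single, structured interface: every optional feature
        # plugs in here rather than patching core internals.
        self._extensions[name] = handler

    def run(self, name, *args):
        if name not in self._extensions:
            raise KeyError(f"no extension named {name!r}")
        return self._extensions[name](*args)

# A smaller feature bundled as an extension; another team can own it,
# and the senior maintainers only need to guard the Core interface.
def word_count(text):
    return len(text.split())

core = Core()
core.register("wc", word_count)
print(core.run("wc", "polish the stable core"))  # prints 4
```

The point is not the mechanism but the discipline: the extension interface can stay private, yet every feature still goes through it, so removing or reassigning a feature never means untangling it from the core.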
Spend any amount of time in the FreeBSD kernel and you'll figure out that the "polish" aspect is just a myth. The Linux kernel is MUCH cleaner.
Kernel subsystems are written by a couple of developers at most, in private, and submitted as a single +20k/-5k diff on a public ML for "discussion".
Some developers I spoke with privately were able to make things move a bit by getting their commit bit and starting to make aggressive changes without discussion / consensus. That wasn't my style.
> Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?
Hardware eventually fails. Replacement hardware is eventually incompatible with previous hardware. "Finished" would in this case be a synonym for "dead".
> I could get behind “converge to a fixed scope, and polish relentlessly” as the primary ethos.
> Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?
By OS do you mean just the kernel or also the userspace?
Then I think the challenge is to determine what gets included under the purview of relentless polishing.
Assuming you mean just the kernel, then probably bug fixes are included. But what about drivers for new hardware? And when researchers at Stanford develop a new file system algorithm with foo bar baz properties? Or a new scheme for sandboxing user programs?
Maybe all new features go in a new major version? New FS = next version. But drivers should be buildable by the HW vendor, right? And porting a driver to the next kernel version shouldn't be crazy-hard?
I don’t think I understand the (many) purpose(s) of OS versioning. Vendors use them to make specific maintenance commitments? “We will continue to make bug fixes for version XYZ until 2025.”
And if end users care about a particular program, “Okay, version XYZ of my favorite OS supports programs A and B.”?
> But drivers should be buildable by the HW vendor right?
The vendor may not create a driver for this OS. “Only 10k people use this OS, therefore it’s likely not profitable for us to make a driver for it.”
All the companies making tivoed devices with walled gardens on BSD/MIT-basis would absolutely love it, if the worldwide group of enthusiasts would finally come together to eliminate most of their in-house software development costs by delivering the perfect free solution, that only needs some branding, marketing and lock-in applied.
Sarcasm aside, I agree that lots of software could feel more "finished"/refined. I'm trying to think which of the xkcd comics fits best here...
The non-portable part of MS-DOS essentially is implemented in the BIOS. When PCs no longer support BIOS booting then MS-DOS will not work, but until then it is sustained by a stable interface.
Am I understanding this correctly? FreeBSD y releases (x.y) are not snapshots of CURRENT, but rather are backports that are maintained separately from the really LTS x release?
I always assumed FreeBSD had a major release and then minor releases that were snapshots of CURRENT like Ubuntu's non-LTS releases were of Debian Unstable.
Ubuntu's numbering scheme is year.month, not major.minor. FreeBSD's major releases are branches of CURRENT. Minor releases are branches from the stable branch.
FreeBSD creates a branch off current very close to a major release. So work for a new major release happens on current. At some point current blocks new features and focuses on fixing bugs. Then a branch for the major version is created and current can move on to the next major release.
After that, minor releases mostly contain backported material.
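The branching scheme above follows FreeBSD's actual branch naming: development happens on CURRENT ("main" in git), each major version gets a stable/X branch, and each minor release is cut as releng/X.Y. A small sketch (the function name is my own) maps a release string, as reported by `uname -r`, back to those branches:

```python
# Sketch: map a FreeBSD release string to the source branches it is
# built from, following FreeBSD's branch-naming convention:
#   main (CURRENT) -> stable/X (major branch) -> releng/X.Y (release).
# The function name is hypothetical, for illustration only.
import re

def freebsd_branches(release: str) -> dict:
    m = re.match(r"(\d+)\.(\d+)-RELEASE", release)
    if not m:
        raise ValueError(f"unrecognized release string: {release!r}")
    major, minor = m.groups()
    return {"stable": f"stable/{major}", "releng": f"releng/{major}.{minor}"}

print(freebsd_branches("12.3-RELEASE"))
# {'stable': 'stable/12', 'releng': 'releng/12.3'}
```

So 12.3-RELEASE is a snapshot of releng/12.3, which was branched from stable/12, not from CURRENT, which is why minor releases are backports rather than snapshots of the development branch.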
Who uses FreeBSD these days? As a company that runs on the cloud (AWS/GCP/Azure) you run Windows/Linux, and on the desktop people mainly run macOS/Windows/Linux.
I am genuinely asking: what is the main use case of FreeBSD in 2022?
We[1] have our entire infrastructure on FreeBSD - and always have.
That's why this has been frustrating - we have a history of committing real money[2][3] to the project in an attempt to make investments ... but there is never a fixed target to make those investments in.
It is a matter of fact that our own use of FreeBSD - in live production for a business - is completely divorced from the experience and day to day usage of FreeBSD developers.
MacOS isn’t FreeBSD. I believe they forked the FreeBSD userland a long time ago but used GNU for their shell and toolchain (and the kernel etc is completely different.)
>This is as much a myth about macOS as about FreeBSD; that macOS is just FreeBSD with a pretty GUI. The two operating systems do share a lot of code, for example most userland utilities and the C library on macOS are derived from FreeBSD versions. Some of this code flow works in the other direction, for example FreeBSD 9.1 and later include a C++ stack and compiler that were originally developed for macOS, with major parts of the work done by Apple employees. Other parts are very different.
>Darwin - which consists of the XNU kernel, IOkit (a driver model), and POSIX compatibility via a BSD compatibility layer - makes up part of macOS (as well as iOS, tvOS, and others) includes a few subsystems (such as the VFS, process model, and network implementation) from (older versions of) FreeBSD, but is mostly an independent implementation. The similarities in the userland, however, make it much easier to port macOS code to FreeBSD than any other system - partially because a lot of command-line utilities were imported along with the BSD bits from FreeBSD. For example, both libdispatch (Grand Central Dispatch in Apple's marketing) and libc++ were written for macOS and worked on FreeBSD before any other OS.
>Apple's kernel programming guide goes into more extensive detail about the similarities and differences.
>The BSD portion of the OS X kernel is derived primarily from FreeBSD, a version of 4.4BSD that offers advanced networking, performance, security, and compatibility features. BSD variants in general are derived (sometimes indirectly) from 4.4BSD-Lite Release 2 from the Computer Systems Research Group (CSRG) at the University of California at Berkeley.
And those core utils are generally seen as a hindrance by developers using it.
Plus how long did MacOS diverge from FreeBSD? 20+ years ago? Does it even resemble current FreeBSD enough that this observation makes sense, except from a software history perspective?
Edit: Actually, considering that the divergence started with NextStep in 1988, from 4.3 BSD and not FreeBSD, and that Unix was created in 1969, this becomes a bit like comparing Unix 1969 to Linux 1999... so not really relevant anymore.
> Plus how long did MacOS diverge from FreeBSD? 20+ years ago? Does it even resemble current FreeBSD enough that this observation makes sense, except from a software history perspective?
You'll always find people who'll say that Android and ChromeOS are Linux distributions and MacOS is based on FreeBSD. I guess it makes them feel good.
They are, but they are not GNU/Linux. Alpine is not GNU/Linux either, but BusyBox/musl/Linux; maybe it makes others sad that you don't know the difference.
>MacOS is based on FreeBSD. I guess it makes them feel good
For years exactly that was written on Apple's own Mac OS X page, but you know better, right?
> Alpine is not Gnu/Linux too, but BusyBox/musl/Linux, maybe it makes others sad that you don't know the difference.
I'm more concerned with the practical significance of calling Android or ChromeOS a Linux distribution than being pedantic for the history books.
Sure, Alpine doesn't use the usual userland you'll find on most Linux distributions but it isn't an alien experience and you can use it mostly like any other Linux distribution out there. You'll still find a POSIX shell and a package manager to install packages. There's Xorg or Wayland for the GUI and you'll find familiar desktop environments and window managers.
Unlike Android, Alpine doesn't lock you out of root access. There's no restriction on the placement of data in directories like /usr and /var. You won't find dozens of UIDs for different processes running at the same time on a single-user system. You can't just slap Android on any x86_64 hardware you want and expect it to work fine. Hell, you can't even do that on ARM devices if you're not using out-of-tree patches and firmware blobs. The Bluetooth and audio stacks on Android are completely different from what you'll use on any Linux distribution.
So yeah, if we're being pedantic, sure, Android is a Linux distribution because it uses the Linux kernel. Good luck using it like a typical Linux distribution though.
> For years exactly that was written on Apples own macOSX page, but you know it better right?
What I wrote above. Sure, pages written decades ago indicate that Apple used FreeBSD as the base for its kernel, but I doubt their kernel is anything close to upstream FreeBSD at this point. The same goes for their userland and graphics stack. Would you call OrbisOS, used by the PS4, a distribution of FreeBSD? Can you do anything meaningful with it like you can with FreeBSD?
> You'll always find people who'll say that Android and ChromeOS are Linux distributions and MacOS is based on FreeBSD. I guess it makes them feel good.
A "linux distribution" just requires the kernel, and the version they use isn't that modified.
If you want a wrong statement, it would have to be more like saying they're the same distribution as the internal builds used by Android Inc. in 2005.
Yes, macOS is a BSD, but it isn't FreeBSD: it was forked two decades ago and has a completely different architecture. You wouldn't say DragonFly is FreeBSD; why would macOS be?
>And those core utils are generally seen as a hindrance by developers using it.
I don't like Mac, nor do I like GNU coreutils... but that's my personal taste, and not my problem.
>Plus how long did MacOS diverge from FreeBSD? 20+ years ago? Does it even resemble current FreeBSD enough that this observation makes sense, except from a software history perspective?
I said which kernels it uses (FreeBSD and Mach), and I don't know if Apple rebases their code on current FreeBSD code... you can look that up for yourself. Don't start twisting facts because you didn't know better; at least now you know.
>but when millions of developers like GNU coreutils over BSD coreutils
I don't care if they cannot install GNU/Linux on their machine but are forced to use a proprietary system like Mac. Not my problem. Don't want BSD coreutils? Don't buy Apple; it's that easy.
>There's a reason GNU became popular when there were other tools available before it appeared.
That's not the reason, and you know it. Stop with that half-knowledge you think you have.
It's funny you wrote that because when browsing HackerNews, for me the 3rd item above this was:
We're migrating many of our servers from Linux to FreeBSD (dragas.net)
The (currently) top-voted comment of that thread is somebody describing how they went back to Linux after the honeymoon period with FreeBSD, among other reasons because of systemd.
The BSDs have always had a following in network-related roles, since Linux has had some issues in the past (e.g. accept filters used to be useful, and iptables is more complicated than pf), and OpenBSD has cultivated a security-minded reputation.
Have you done any writing about that experience and setup? Both Postgres and ZFS are COW, I seem to recall some warnings back in the day about conflicts between the two systems but I have no first hand experience.
I've seen ZFS being used with Postgres in a few different environments. Seems to work fine for the most part- surprisingly good compression (~8X in one case, usually lower), with the major downside being increased CPU usage when taking advantage of said compression.
I think that only one or two of those environments were heavily used production instances, so if there is a serious gotcha here it might not have been apparent to me.
No we’re kind of a lean shop so we don’t do much tech blogging - it’s something we’re thinking of doing soon though.
As for CoW, we just turn it off in the Postgres config and rely strictly on ZFS. We also turned off checksumming and compression in Postgres and use zstd-3 on the file system. Beyond that we just followed the run-of-the-mill tuning guides you'd find on Google.
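A setup like the one described might look roughly like the following. This is a hedged sketch, not the poster's actual configuration: the pool/dataset names, mountpoint, and recordsize are assumptions, and the Postgres settings are the ones such tuning guides typically adjust:

```shell
# Illustrative only -- dataset name, mountpoint, and recordsize are
# assumptions, not the poster's actual values.
# Let ZFS handle compression (zstd level 3) and checksumming:
zfs create -o compression=zstd-3 \
           -o recordsize=16K \
           -o mountpoint=/var/db/postgres \
           tank/pgdata

# In postgresql.conf, settings commonly changed when ZFS already
# provides compression and copy-on-write integrity:
#   wal_compression = off
#   full_page_writes = off   # guides argue this is safe on ZFS because
#                            # blocks are never partially overwritten;
#                            # verify against your own durability needs
```

Note that `full_page_writes = off` is the controversial one; it is only considered safe because ZFS's copy-on-write design rules out torn pages, so it should not be carried over to other filesystems.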
It's used to build and distribute closed source operating systems and devices to users. Every Mac and iPhone user has a little bit of 2003-era FreeBSD running that provides XNU its BSD "flavor".
SwitchOS is not based on FreeBSD. It has a network stack forked from FreeBSD code, but if that makes it FreeBSD, then Windows and Linux are both BSDs, I guess.
It's both worse and better.
OpenBSD is unashamedly an operating system for OpenBSD developers, but they don't mind if you use it. I find OpenBSD far more comfortable than FreeBSD; FreeBSD has more features, but OpenBSD feels more usable. It's a subtle thing.
OpenBSD makes no guarantee of a stable ABI, and they are not shy about changing it. The API, however, is remarkably stable: you have to recompile your projects every time you upgrade, but they keep building without source changes.
OpenBSD sort of plays its own game with regard to releases. They develop on "current"; if you use this, it is dynamic like Arch. Every 6 months (like clockwork, literally) they cut a release. If you use a release, it's stable and out of date like RHEL. Note that you can't make any assumptions about what changed based on the release number: OpenBSD 6.8 to 6.9 to 7.0 was just three releases; nothing special changed between 6.9 and 7.0.
Personally I run my desktop on current and find it usually takes around 3 weeks before they break the ABI, at which point I upgrade to the latest current. I run my servers on a release and update them every 6 months.
If you want to keep a stable system longer than a year, go for it, but it is up to you to make sure you have a copy of all the packages you may ever need. Note that you will not find any security patches, or sympathy, online after two releases.
OpenBSD is much more strict than FreeBSD with regards to release cycles. You get N-1 releases, period, which ends up being any version gets you about a year of support.
FreeBSD release notes[1] include pointers to EC2 AMIs, Google Compute Engine deployment, as well as links to download generic virtual machine images. I personally run a FreeBSD vm on DigitalOcean, but I suspect they should work most places.