I ran this for many years (back in 1997/1998 and onwards) on recycled System 7 hardware[1]. The hardware was very reliable. Mac OS pre-X was horrible, just constantly crashing. MkLinux made the hardware pleasant for web servers and firewalls.
The 8.5 and 8.6 branches of Mac OS were great, I feel. I hosted lots of servers on them, especially 8.6.2. However, every single other version of the OS was a buggy, crashing nightmare if you were looking for extended uptime. 7.6.1 was halfway OK, but man, I never kept a Mac OS running as long as on 8.6.2. I had months of uptime, I'm telling you. Months!
There is no Mac OS 8.6.2. Thinking about it, Blue Box running on NuKernel with Copland-style background server apps would probably not have been bad, especially with the use of Pascal strings instead of C strings, which helped security.
I must mean 8.6. I think I am confusing it with 8.5.1.... It has been 16 years..... 8.5.1 and 8.6 were the greatest, though...
Anyway, the great thing about hosting on Classic Mac OS is that there was nothing to exploit. With no remote admin capabilities beyond some AppleTalk-related stuff, even if someone did find a way to compromise your system, the only thing they could really do was freeze it. I threw a Mac OS 7.5-ish (obviously I am forgetting numbers) Mac up on the Defcon LAN and put the IP on the whiteboard for targets, and not a damn thing happened. Not even a hang...
Survives to this day, as the OSF Mach sources in MkLinux were the alleged basis for Mach kernel module emulation in the NextBSD project. This according to Kip Macy's testimony over IRC, at least.
I was working at Brazil's largest ISP at the time and we had stacks of abandoned Power Macs. I grabbed one and installed it as my second desktop. Life wasn't exactly easy and compiling things like a browser was a pain, but it was an excellent X terminal.
I left the company to come back a couple months later as a consultant. Just about every sysadmin had one on their desks by then.
Why is it that Apple apparently is so keen on Mach? As I understand it, both this and XNU/Darwin/OS X have Mach in the lower layers. And if it adds so much value, why don't regular Linux distributions build on top of it?
I was big into OS dev when Mach was coming around but I was a newbie. I read all the Mach papers with stars in my eyes, it all sounded so good. I wasn't alone in getting drawn in by the research papers.
I went to Sun and learned how a kernel could work and could perform and the allure of Mach, for me, started to wane. Reading the mach code made it worse, especially compared to the Sun code.
All these years later, I remain underwhelmed. Linux has a nicer VM system, performs better, and is more readable. Not Sun level readable by any stretch, but better than my memory of mach (which, to be fair, is in the distant past).
If someone can point to actual real world data that shows Mach to be better performing on the same hardware I'd be interested in seeing that, I tend to think that's not possible.
If you want to see what a real microkernel done right looks like, go look at QNX before they added the POSIX conformance. You have 4-5 people logged into an 80286 doing work. Very, very lightweight and well done. They were the only guys that understood what the "micro" in microkernel meant. I remember Dan Hildebrandt telling me that the microkernel easily fit in a 4K instruction cache (it pretty much had to; it didn't do much, so you needed some space for whatever it was dispatching).
As someone that loves to dig into source code to see how things are done...
Solaris source is the state of the art. If there's a "beautiful" kernel, Solaris is definitely the one. QNX is also a very well-made kernel (it's like an OS created by a zen master).
It's a pity that Sun took so long to open the Solaris source, or else I could be typing this from a Solaris kernel instead of Linux.
The Solaris source code was opened, I believe, in 2005... then came OpenSolaris in 2008... and then the Illumos fork, if I recall correctly in 2010, after Oracle bought Sun.
But even in 2005, when they decided to open the source, it was already too late, because Linux was already a big hit by that time.
But using Illumos as a first machine can be really painful, precisely because of the lack of adoption, and the consequent lack of drivers and software... that's the reason for my comment about how long it took to open the code... and Oracle closing it up again didn't help either.
I would definitely consider Illumos as a server node... but for a personal machine it can get painful really easily if you use it for development like I do.
It's called Darwin: Mach + kernel-mode components for things like graphics and networking + some modifications. Mach used straight-up as a microkernel wouldn't have been fast enough. I've heard people on their developer side admit Mach was a mistake and want to ditch it. Just would be hard at this point.
Animats and I both think QNX was the best microkernel of the period, used successfully from embedded to [briefly] desktop prototypes. It managed to get microkernel advantages while being fast and deterministic. People working on new ones should take lessons from them. The INTEGRITY RTOS has some nice design choices, too, like making apps donate their own resources to kernel functions they call and using only predictable instructions in critical paths.
Mach is crap compared to the competition. Despite all the attempts to make it not so.
It was lightning-fast but too primitive, and it had weaknesses. Something like EROS (capability-secure kernel) or INTEGRITY-178B (separation kernel) was more ideal. Back when I was implementing, I did work with L4 (esp OKL4) based on the Nizza Security Architecture:
You can do a lot with that model whether you use a recent L4 kernel or some other microkernel. If you're into that stuff, I'd say check out the GenodeOS project at genode.org. Unlike most, their OS is using as many robust components from academia as possible while attempting to integrate them into something usable. They have a microkernel, a resource architecture, Nitpicker GUI, and so on. It needs help wherever people can get it.
I'm a fan of EROS as well. But primitive is the point, no? Support only what minimal feature set is necessary to implement the rest in higher layers with optimal performance, while adding some sort of security guarantees (even if just about threads & memory).
Sort of. The point is less primitive and more verifiable. A lot of the security kernels of the past were a decent size. So was Dijkstra's THE OS, which pioneered robust construction processes. Hamilton's team's flawless code for Apollo certainly wasn't small either:
We tend to keep the TCB small as less code = less bugs. We simplify it because simpler code = easier verification. Usually... More important is you know what states or execution effects each component can have in a way that can be abstracted into analyzing those above it. So, the thing can be 4Kloc or 100Kloc, but it should be easily analyzed and composed modules.
One problem I noticed, though, is that systems without needed functionality get continuous, ad hoc versions of that functionality from their developers. C and UNIX were perfect examples: all kinds of extensions made a mess trying to cover stuff that came by default in prior languages and platforms. So, there was less safety & consistency anyway despite the underlying tool being primitive.
This led me to shift from "simple & tiny as possible" to the simplest version of tools that make it as easy as possible to do things right. Bernstein et al's Ethos & NaCl projects are perfect examples where internally they're a bit complex but the interface is simple to use securely. You've just got to find the right balance.
A lot of good wisdom on such things is from Karger et al in the link below on high assurance virtualization. See "layered design" especially for verbal and visual explanation:
I agree that it looked good at the time far as open stuff goes. I looked at it in terms of INFOSEC research. I followed all the attempts to make high-security, microkernel platforms based on it and variants. These included Trusted Mach (TMach), Distributed TMach, DTOS, and so on. That line of research led to Flask architecture, bases of SELinux. It also fed into decisions by Cambridge team to do capability-based, app sandboxing with Capsicum vs prior methods like MAC & microkernels. That's been applied to FreeBSD, CheriBSD, and Apple's stuff.
So, there were lots of interesting papers and work on it. Anyone could've gotten sucked in. After a while, though, it became clear that Mach was a bad track with research shifting to L4 and commercial microkernels (esp RTOS, sep kernels). Torch passed to groups like Dresden and NICTA.
Mach had an unusually large amount of fanfare starting from its introduction at USENIX circa 1985 up to the mid-90s. It was envisioned as a generic resource multiplexer to build all sorts of platforms on top of, and particularly for emulating Unix. You had single-server Unixes like Lites and OSF/1 (the latter sold commercially by DEC as Tru64 UNIX), plus multi-server systems like Mach-US, MASIX, MK++ (for high assurance) and GNU Hurd.
In the late 80s, the Open Software Foundation (OSF) was founded as a consortium of seven major tech companies spearheaded by DEC to standardize Unix in response to AT&T and Sun's adoption of SVR4. They adopted Mach as part of this and ended up extending it to add locksets, semaphores and resource ledgers. MkLinux was then a joint effort in 1996 by Apple and the OSF to both have Linux on the PPC and exploit Mach features in the single-server emulation.
As such, the history is convoluted. It's not that Mach is so great, but rather that circumstances made it creep in various places.
Avie Tevanian was a key player in the Mach research project at CMU, and when Jobs brought him onboard at NeXT, he made his work with Mach the basis of NeXTStep.
When Apple bought NeXT, Tevanian became CTO as part of the reverse management takeover. The Mach version underlying OpenStep was updated for OS X, but kept around since Tevanian was very much a Mach booster.
Just looking at it from the outside, I have to guess it's because they have had people who know Mach very well going back to the NeXT days, when key Mach figure Avie Tevanian was there. Even if a lot of those original people are gone now I am sure a lot of Mach knowledge and momentum has been built up over the years due to this early decision.
It's my understanding that Apple has, over the years, gradually supplanted the parts of Mach that really matter.
For example, Apple no longer relies on the Mach port mechanism for IPC, instead mostly relying on their own, better XPC, which forms the foundation for libdispatch (aka Grand Central) and OS X's sandboxing.
Of course, XNU is also a monolithic kernel, not a microkernel.
Transitioning to a different kernel would probably be a costly mostly-zero-velocity project for Apple, especially now that they are no longer in the server OS space. A different kernel might require replacing IOKit, unless one could write a compatibility layer for it.
What I still don't understand is, for a project like MkLinux to exist, they must have looked at Linux and decided that a (non-mk-)linux kernel was lacking something?
They probably thought that porting Linux to run on Mach and porting Mach to PowerPC was less work than porting Linux to PowerPC. In retrospect, this was not true.
There was a period of time in the '80s when it was widely assumed by the software cognoscenti that microkernels were the future. A lot of companies were experimenting with them. A lot of these experimental OSes were based on Mach. GNU Hurd has always been based on Mach. This makes sense because it began development in 1990, and at the time that might've seemed like both a forward-thinking and sensible thing to do.
NeXTStep was based on a Mach+BSD kernel, which at the time would've been an obvious thing to do if your goal was to create a modernized Unix system with all the latest cutting-edge (in the mid-80s!) technologies available.
That kernel lives on in OS X and its descendants. It was always built as a monolithic kernel, AFAIK. It just differed from a traditional BSD kernel in that the lowest layers (VM, process, thread, and scheduler stuff) were replaced by the Mach microkernel. I believe that this kind of monolithic Mach+BSD arrangement was a pretty common thing to do when developing a Mach-based Unix implementation.
From the outside it's almost indistinguishable from a BSD kernel. So why not just use an ordinary BSD kernel? Because it future-proofs the design. It gives you an easy path to gradually moving kernel-space services out of the core kernel and into separate address spaces, as microkernel technology improves and it becomes practical to do so. Of course, that never really happened, but it was a very logical choice at the time.
There were other benefits to Mach- you also got multithreading (this was a big deal, mid-80s remember), as well as Mach IPC. Mach IPC has its pros and cons, but just being able to have user and kernel services communicate via arbitrarily defined protocols rather than having to constantly add new system calls is very convenient.
That's pretty much it. This system has been around for > 30 years now, and even though it never turned into a true microkernel system, there probably was never much of a reason to de-Machify it either. It would be a ton of work and you wouldn't gain anything. It's not that Mach is so awesome (although it has some niceties), it's more that there's a lot of momentum there, and you wouldn't really gain anything by rewriting all that code.
As for why Linux distributions don't use it, the same reasons apply. It'd be too much work for relatively little benefit. It made sense to base your cutting edge kernel on Mach in 1985, not so much 30 years later. It doesn't make sense for OS X to stop using it for the reasons given above, but it also doesn't make much sense for anyone who isn't already using it to base new stuff on top of it today.
After Copland was killed Solaris and Windows NT were the first options Apple considered. They also had discussions with Be before finally going with NeXT.
This was all pre-Cathedral and the Bazaar (1997), so Linux wasn't really on the radar at most companies.
Back in 2000 I found we had a Power Mac 6100/60 at work which had the Copland developer preview loaded on it. I've never seen a copy of it since. I wonder if anyone still has the media; I looked all around the office for it, and I'd love to take a look at it again. One of those rare birds from Apple, like MAE. Although, I think MAE could actually run some programs, unlike the Copland DR. I don't remember being able to do anything in it. If I found a copy of MAE I would totally go get myself an Ultra 1 from eBay all set up just for the thrill :-)
I'm pretty sure I have a copy of MAE on CD around somewhere - possibly multiple versions. If you really want it, let me know and I'll see if I can dig it up.
Yeah, send me a message at s.d.m at ieee.org. The only time I got to play with it was at the Sun office here in Seattle. We tossed out all our Sun gear years ago now, but I saved all the media and books going back to Solaris 2.6 in case I ever wanted to set up a system again.
I remember running this on a Power Computing machine during college. It was a great alternative to System 7 at the time, and easier to be productive in than something like BeOS DR3 (although BeOS was fun in its own right).
I'm surprised to see this got updates all the way up to 2009!
I switched to BeOS on my Motorola StarMax when it got stable around the second release and used it as my primary OS for quite a while (all the way through version 5, when I switched to x86). I had tried MkLinux, but was using MachTen before that. I had used MachTen on 68k and it was rock solid; I never had the same luck with the PowerPC version. I honestly can't remember why I didn't stick with MkLinux. Much later I ran YellowDog on my G3 machines.
EDIT: I'm thinking back... I was pretty young at the time, 14-15 yo, and just learning to program, and what I really loved about BeOS was how easy the API was to learn. I had been learning C/C++ in MachTen and never could wrap my head around making Macintosh GUI applications. But sitting down with the online documentation in BeOS, it was VERY easy to make a program with 3D graphics and a native GUI. Later on, QNX gained a lot of popularity I think for the same reasons (fantastic documentation and API). I had worked on a project under Solaris which relied on POSIX.4 extensions (yikes) before moving it to QNX.
My first experience with UNIX in general and Linux in particular. In some very real ways I owe my career to playing with MkLinux on my G3 PowerMac in high school.
Edit: [1] I think the hardware was this: https://en.wikipedia.org/wiki/Power_Macintosh_7600