In WSL1, running "wsl git status" on a moderately sized repo on an NTFS (Windows side) drive or SMB file share is nearly instantaneous.
In WSL2, running the same command takes over 30 seconds. WSL2 is a massive hit to the seamless experience between the two operating systems, with filesystem performance from Linux to Windows files orders of magnitude worse. Yes, unzipping tarballs and stat-heavy operations are cheaper now on the Linux side. The performance loss is, however, staggering for the files I had in C:\
Don't even get me started on how long an npm install took.
One of the truly wondrous things about WSL1 was the ability to do something like this in a PowerShell window:

C:\some-code-dir\> wsl grep -R "something" | Some-PowerShell | ForEach-Item { }

Now performance across the OS boundary is so bad, I wouldn't even think of using "wsl grep" in my C drive. Or "wsl npm install" or "wsl npm run test" or any of that.
It's very depressing because WSL1 is so, so promising and so close to feature parity. WSL2 should definitely stick around and has its use cases; for Docker it's unparalleled. But as a daily driver for mixed-OS use, WSL2 has made me very unhappy. I think I'll be converting my various WSL distributions back, because the performance hit was too much; it was absolutely unbearable to even run simple commands on my Windows side.
My understanding was that there were some hard-to-impossible problems to solve to really accelerate the filesystem access from the Linux side under WSL1.
That meant that people doing disk-intensive workloads on the Linux side noticed a big slowdown compared to a native Linux system; running a big test suite or a git checkout, for example, felt incredibly slow.
The switch to a VM flipped this relationship round - so now the formerly native / NTFS side is the second-class citizen, but you get the expected performance when putting your files on the "Linux side". For me (doing fs-intensive Rails development), this was a big win.
WSL2 will also quite happily gobble so much memory that Windows slows to a crawl (especially when file copies fill Linux's disk buffers). That seemed like an odd default; you just have to pop a .wslconfig in to restrict its usage.
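For anyone hitting this, a minimal .wslconfig saved as %UserProfile%\.wslconfig caps the VM; the limits below are example values, tune them to your machine:

```ini
[wsl2]
; hard cap on how much RAM the VM (and thus Linux's page cache) can take
memory=4GB
; number of virtual processors exposed to the VM
processors=2
; size of the VM's swap file
swap=2GB
```

Run `wsl --shutdown` afterwards so the VM restarts with the new limits.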
I agree with the other posters here that the WSL1 approach seemed far more elegant, and probably the only way to "not see the joins". With WSL2 we're worrying about filesystem boundaries _and_ memory now, probably forever. So I hope someone is still working on that nice seamless syscall layer for a future WSL3.
Long bet: WSL3 will just be Microsoft dropping the NT kernel altogether and replacing it with the opposite compatibility layer (like Wine) running on top of the Linux kernel.
It probably won't happen anytime soon, but to me it looks pretty inevitable in the long run: because of Azure they already spend tons of engineering time on the Linux kernel nowadays, and maintaining their own proprietary kernel won't make much economic sense for long, exactly like maintaining their own browser engine.
That was not my experience with WSL1; I was regularly running into unimplemented features. Some examples: the Z3 solver used clock_gettime for timeouts, and their specific usage was broken in WSL1 so you'd get random failures depending on how long the solve took. And don't get me started on running Chromium.
It felt like WSL1 would be running into the long tail of compatibility issues for years.
But I'll admit that I don't try to use it the way you do, I run WSL precisely so I'll never have to launch cmd.exe.
Some useful network-related stuff wasn't implemented either in WSL1 (NETLINK_ROUTE/RTM_GETROUTE, the AF_PACKET family). This meant even good ol' nmap was out.
Clearly the best solution is for Microsoft to (a) write a proper ext4 driver for Windows and (b) find some way of embedding SIDs into ext4, then you could just format the drive as ext4, boot off it, and have the improved performance.
(This is mostly a joke, but the performance of NTFS for certain operations has always been abysmal, and having a virus scanner injecting itself into all the operations only makes it worse.)
AFAIK the main problem is that Unix's file permissions do not cover Windows' permission model. That would be tolerable on a data partition, but a system partition is going to use all kinds of very particular permission setups on system binaries etc.
You might be able to model that stuff as xattr, but then it could be problematic to mount that ext4 partition into Linux because applications might be copying files without respecting the xattrs.
>AFAIK the main problem is that Unix's file permissions do not cover Windows' permission model.
Well, since Microsoft has been borrowing more and more ideas from the Linux ecosystem, it would not surprise me that a Windows 10 successor would include some kind of compatibility layers for different file systems.
Why don't they just replace Windows with their own Linux distro? :D WSL2 cannibalizes Windows from the inside out, and all that's left is Sphere. Seems like the most efficient solution.
I wonder if this is like the IBM PC, which was a GOOD THING invented by a sort of offshoot of IBM culture. Then IBM higher-ups stepped in and tried to control the platform (PS/2, OS/2, Micro Channel, etc.).
WSL is attracting people to Windows. But the endgame isn't to lose them to Linux. So they have to tie it into Windows more. But if they make it too slow and bloated, they might lose them anyway.
Much of the performance problem comes from layers on top of NTFS itself; it's not just the virus scanner. Ext4 might be faster, but I doubt it would be enough to ditch WSL2 for those use cases that need it.
Also, some of the "performance problems" are simply different access models. Windows and NTFS try to provide some database-like ACID characteristics, including transactions over batches of file updates with commit/rollback support. Ext4 and Linux (intentionally) make few such guarantees, so it shouldn't be surprising that they have very different performance profiles, just as you might expect between a NoSQL database that makes no ACID guarantees and a SQL database with multiple types of locks and several types of transaction behaviors.
I feel like they don't advertise this enough. I was under the impression that it's a one-way street from WSL1 to WSL2. Since WSL2 is not strictly better than WSL1, it's nice to be able to convert and pick the trade-offs you want.
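For reference, the conversion is a one-liner in either direction from PowerShell; `Ubuntu` below is just an example, the distro name is whatever the list command reports:

```powershell
wsl --list --verbose       # shows each installed distro and its WSL version
wsl --set-version Ubuntu 1 # convert a distro back to WSL1
wsl --set-version Ubuntu 2 # or forward to WSL2 again
```

The conversion copies the distro's filesystem, so it can take a while on a large install, but nothing is lost either way.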
Yeah, the mistake of using numbers instead of names gives the impression that 2 is strictly better than 1 and that migrations are unidirectional upgrades.
Even just letters like WSL-A and WSL-B might have given a better impression.
Unfortunately Git on Windows is also extremely slow. Especially Magit in Emacs, which makes a lot of git calls, works much, much faster for me when it's sshing to a Linux VM for each call than when running natively on Windows.
Git is not slow for me. I don't know about Emacs, but I'm using git from the command line and from IDEA and it works just fine, instantly for ordinary tasks. Committing 1000+ files takes a few seconds.
Given the fact that Git is used for Windows development with its monster monorepo, I think something's wrong with your setup rather than with Git on Windows in general.
Git itself is decent; the problem is that Magit calls git a lot of times for a single GUI action. For some things, it can call git 5-10 times for a single key press. If every git invocation takes around 1 second, that becomes a noticeable delay...
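The multiplication effect is easy to see even without Magit. This throwaway sketch (the temp repo and the call count of 10 are arbitrary) times repeated git invocations the way a front end that shells out once per query would issue them:

```shell
# Create a throwaway repo so the timing isn't polluted by real working-tree size.
demo=$(mktemp -d)
git -C "$demo" init -q

# Ten back-to-back status calls: per-process overhead dominates, so whatever
# one invocation costs, a single GUI action paying it 10x feels 10x worse.
time (for i in $(seq 10); do git -C "$demo" status --porcelain >/dev/null; done)
```

On a native filesystem this finishes in a blink; with ~1s per invocation across the WSL2/NTFS boundary, the same loop would be a ten-second stall.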
I use Emacs in WSL, along with a suite of other tools like rust-analyzer, and the experience is _lightyears_ beyond trying to run those tools under regular Windows.
I think the popular Windows development tools will get support for remote development with WSL. JetBrains is working on it for IntelliJ and I can't imagine Visual Studio will be far behind.
WSL2 is really not designed for using Linux tools on your NTFS-based filesystem. Store everything on the WSL filesystem, that works perfectly. If you need GUI tools try VcXsrv.
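A rough illustration with hypothetical paths (both project locations are invented for the example): the same command against a tree under /mnt/c pays the 9P/drvfs toll on every syscall, while a tree in the Linux home directory stays on ext4 inside the VM:

```shell
time git -C /mnt/c/src/project status   # slow: every stat crosses the OS boundary
time git -C ~/src/project status        # fast: native ext4, never leaves the VM
```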
What do you propose if you want to use a windows program to edit those files?
For example I use intellij on Windows but want to compile and test on the Linux machine. If it takes 30 seconds longer than wsl 1, why would I bother changing?
What is the actual point of wsl if not for the cross compatible filesystems
WSL never really got the cross-compatible filesystems working though: I eventually found myself giving up on it and just using Cygwin. I honestly don't understand why WSL gets so much attention when Cygwin is just so much more compatible with everything and includes essentially every package I have ever wanted.
I've wondered the same. A lot of it is PR from Microsoft, including this post here probably.
There is a Reddit sub for WSL (/r/bashonubuntuonwindows) and it's apparent that MS has PR people on it pimping each new release. Reminding these mostly new developers that Cygwin has been around (with all its problems) for years brings on a flood of downvotes.
And now it's exactly the same with WSL2; when someone has an issue with some esoteric networking feature that is still not supported on WSL2 beta versions, I'll often remind them that VMWare Player and VirtualBox have been around for a decade and will solve their problem, while also including all sorts of nice features like shared folders, drag & drop, copy and paste integration, etc. But they don't want to hear it. They've been fed so much marketing that WSL and WSL2 are really something incredible...
That's not my experience with VS Code on WSL 2. I have been using it for months using the remote extension, hosting my git repos in the Ubuntu subsystem, it works like a charm and feels very responsive.
Maybe wait for an IDE update that properly handles WSL 2?
Then there is no WSL 2 handling in your editor. VSCode remote extension works in the same way for either WSL 2, or a full-fledged Linux VM, or even a remote Linux server.
As someone who runs a Linux VM side by side at all times, I really don't get WSL 2.
VSCode made some unique design choices which enable them to support connecting to any Linux server, VM or not. In contrast, these design choices may not be possible for other IDEs. So, because WSL 2 is, effectively, a Linux VM, supporting it in editor is harder than supporting WSL 1.
As for "I really don't get" part, I wanted to say that WSL 2 sounds like a regression to me, WSL 1 makes it possible to achieve something (namely, local-ish cross-"os" net/process/file-system integration) that is entirely impossible otherwise, while WSL 2 is a nice packaged-up solution but functionally does not do more than people already get (Hyper-V).
One thing that's very valuable to me in WSL (both 1 and 2) is the automagic network settings that make ports available between systems - so if I start listening on 127.0.0.1:1234 in Linux, I can connect to that on Windows and vice versa.
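A quick sketch of that in action; port 1234 and Python's built-in server are just examples. Run the listener in the WSL shell, and the same URL then answers from a Windows browser or curl.exe:

```shell
# Start a throwaway HTTP server bound to localhost inside WSL.
python3 -m http.server 1234 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# From the Linux side this is an ordinary local request; under WSL the very
# same http://127.0.0.1:1234/ also answers on the Windows side (and Windows
# listeners are reachable from Linux the same way).
status=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:1234/)
echo "$status"

kill "$server_pid"
```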
For wsl2, Vscode has integrations that let you do exactly that. I use python primarily and it lets you use the python interpreter installed on wsl. I assume other IDEs would have something similar or at least let you develop using a remote machine, but in this case you would configure it to point at your VM instead.
As long as your processes and files are from the WSL VM, it is extremely fast. I'd rather use the WSL shell anyway, so all of my files are in the VM.
The problem is that special integration is required.
Personally I don't like VS Code, I too use IntelliJ IDEA, which will probably end up having support, but it didn't last time I tried.
On my MacBook I also use Emacs, and GUI versus terminal shouldn't be an issue. I'd want Emacs from inside a WSL bash, and I'd want it from the Windows GUI too. So that's going to be a headache.
Since version 1903, the proper way to access Linux files for writing is to invoke explorer.exe from within WSL. A transparent 9P mount is created for the working directory and files are made accessible through a regular Explorer window.
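Concretely (the distro and user names in the UNC path are placeholders):

```shell
# From a WSL shell, open the current directory in a Windows Explorer window:
explorer.exe .

# Or browse the distro's files from the Windows side at:
#   \\wsl$\<DistroName>\home\<user>\project
```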
This has been changed. The WSL 2 VM now runs a 9P file server, and on the Windows side it's mounted at \\wsl$. Of course, performance is degraded. It would certainly take longer for IntelliJ to index your project.
I have my projects in WSL and IDE (jetbrains) in Windows. Works fine, obviously IDE file system responsiveness is lower than native but the execution / build performance of project in WSL makes up for it.
I tried this on WSL1 and it absolutely didn't work for any project larger than the typical Hello World example. Trying to use a polyglot project with a bunch of Java, Scala, Go, various plugins like DB views, etc. would grind Jetbrains on Windows to a halt as it simply couldn't sync with the project files on Linux due to slow IO.
I've used Linux VMs on Windows before - VMware Workstation has been around for over a decade and has a lot of bells and whistles that make the experience tolerable, but again, the IO is too slow to share Windows and Linux apps between filesystems, so you're basically forced to develop 100% in the VM, IDE included. If you're locked to a Windows laptop because of your employer's IT rules, it's better than nothing, but not optimal, and I wonder why people are so excited for WSL2 when VMs that have more features have been around for over a decade.
I've been trying to find a non-Apple solution for a decade, and it just doesn't exist. And as Apple has been ignoring developers and MacOS itself for the last 5 years, and Linux is still riddled with the same problems it has had for 20 years, the options for developers are becoming less and less.
Deactivating Windows Defender for the subsystem‘s storage folder helps somewhat, but I agree that the situation makes the WSL close to broken for many potential users.
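Assuming the built-in Windows Defender, the exclusion can be added with the `Add-MpPreference` cmdlet from an elevated PowerShell; the package-folder path below is a placeholder, since the exact directory varies per distro and user:

```powershell
# Exclude the distro's backing storage from real-time scanning.
# Replace <YourDistroPackage> with the actual folder under Packages.
Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Packages\<YourDistroPackage>\LocalState"
```

The usual caveat applies: anything in an excluded path is no longer scanned, so keep the exclusion as narrow as possible.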
Yeah I really don't get the desire to do unixey development on Windows. Boot up a VM and SSH into it, if you really must have Windows. It's not like you have to buy a license to use Linux. I keep struggling to understand what WSL brings to Windows. First it was a totally incomplete distro, and now it's just a fucking VM. Seems like a gimmick more than anything.
Virtualbox, HyperV, etc will all allow you to access your Windows files on the guest OS. If that doesn't work, just set up an SMB share and map it. Why all the complication? Does clicking one button to install a distro really serve anybody? Why do you want to use the fucking awful Windows Update mechanism to update your kernel? Updating the kernel in Linux is so fast and easy...
You learn so much more about Linux by running Linux. Why are we trying to abstract that away? I think it's Microsoft's desperation to keep devs from continuing to jump to MacOS and Linux.
I too have found this, with one exception: fork/exec is appallingly slow. This means ad-hoc scripts in Cygwin should be written as much as possible as pipelines and not loops; bash string functions should be used over sed, grep etc. whenever you can help it.
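A small sketch of the difference (file names are arbitrary): the slow variant forks `sed` once per iteration, which is exactly what Cygwin's expensive fork/exec punishes, while the fast variant uses a bash parameter expansion and spawns nothing:

```shell
# Slow under Cygwin: one pipeline (two forks) per loop iteration.
strip_slow() { for f in "$@"; do echo "$f" | sed 's/\.txt$//'; done; }

# Fast everywhere: ${f%.txt} is a bash builtin expansion, no forks at all.
strip_fast() { for f in "$@"; do echo "${f%.txt}"; done; }

strip_fast alpha.txt beta.txt
```

On Linux the two are close; under Cygwin the gap grows with the iteration count, which is why pipelines over whole datasets beat per-item loops there.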
But with that caveat in mind, Cygwin turns Windows into an acceptable Unix for command-line purposes.
I don't think it's quite as good as a development environment when you're targeting Linux though. That's where WSL (in either form) makes a lot more sense.
Apparently[1] virtio-fs wasn't mainlined in Linux until 5.4, which was released last November. I thought it had been mainlined years ago. That helps explain why support is lacking everywhere.
But what's the point of that? If I wanted my files to be on Linux I'd use Linux. I'm on Windows exactly because of things like this: the ability to use proper UI tools like Explorer to manage my files. How do I do that with WSL2?
It's not a network path, it's a pseudo-device. Windows uses the \\ UNC file paths for a lot more things under the hood than just network access. There's a bunch of rare device file paths that you'll get UNC paths for, and every folder path canonicalizes to at least one UNC path for multiple reasons.
Though that pseudo-device is powered by a 9p-based file server under the hood, it's not a network-accessible path; it's only available to the local system.
The trade-off between WSL1 and WSL2 (and you can have both on the same system and migrate distros both directions between the two) is mostly how often and where do you expect to need to deal with the 9p file server between your operations. In both versions Windows needs to use the 9p server to access Linux files, in WSL2 Linux also needs the 9p server to access Windows files.
At a high level it's much closer to a Win32 Namespace [1] that appears like a network path. UNC stands for Universal Naming Convention, there's no "Network" in the UNC abbreviation as there are many namespaces other than just network paths. Which is why the $ was chosen for the name because it is a valid Namespace character but not a valid system name in Windows, and they wanted to avoid the problem of people with systems named "wsl" suddenly unable to be accessible over the network, because Namespaces have higher priority than network paths. You could think of it as bypassing the network, but it is maybe more accurate to view it that network access is a fallback of UNC paths after all local Namespaces have been checked if they support the path.
Also, yes, the current implementation backing that namespace/path is a Plan 9-based network file server, but that's an implementation detail that could change. It seems to be handled under the covers of the Namespace a little more directly than usual network access (including avoiding a localhost "loopback"), and it's probably subject to change as WSL's needs change.
I don't blame MS for giving up. Think about how complex it is to maintain a custom version of the Linux kernel which isn't a kernel but a wrapper for your very foreign OS. I'm surprised they even went that route.
My guess is the people who put it together were under the assumption that Linux would be as simple to implement as the old Windows Subsystem for UNIX and its POSIX API.
So if accessing ntfs from wsl is now slow, you can put the files in wsl instead. The problem then, if you're using this as a dev machine, is how do you edit them in Windows? I want to use my JetBrains IDEs to edit wsl files, if that doesn't work I'd just stick with dual boot.
This was the use case I was really hoping for, also. I've been back and forth with various employers over the years, depending on their requirements - Windows only (it works for Java), Cygwin, VMware Workstation with Linux VMs, MacBook, Linux on the hardware, etc.
WSL1 was unusable as there was no way to run IntelliJ or Eclipse on Windows and have it work on large projects sitting in the WSL filesystem - the file IO was way too slow, and the instantaneous feedback you expect from JetBrains products just wouldn't work.
VMs on Windows would work, but again only if you were developing 100% in the Linux VM, IDE included, and just used Windows for Office and whatever else was required. But at that point, you still had to deal with all the Linux issues like ugly fonts and broken plugins, with all the problems of slow VM file IO.
Cygwin was similar to WSL1 - it just wouldn't work for anything that required real Linux underneath.
Linux direct on the laptop works, but with all the same problems that have been around for 20 years and never seem to get fixed - broken multi-monitors, ACPI issues, driver support for Nvidia, video conferencing being too slow or unsupported, no MS Office, etc. I just don't have the time or motivation to spend hours every week babysitting a Linux laptop.
Macbooks are definitely the way to go, I'm just worried that my employer will balk at the cost of the new $3K 16" MBPs next upgrade cycle.