When WSL came out I was absolutely overjoyed - finally an actual linux shell on windows! I use windows for my gaming pc, and I wanted to have a unified gaming/dev box. It felt like the solution.
Over time, though, more and more small issues came up: packages not working quite right, friction at the boundary between the two systems, etc. It always felt like there was a little bit of extra friction in the process.
With Valve really pushing Proton and the state of Linux gaming, I've recently swapped over to Ubuntu and NixOS. The friction point moved to the gaming side, but things mostly just work.
Things on linux are rapidly getting better, and having things just work on the development side has been a breath of fresh air. I now feel that it's a better experience than windows w/ WSL, despite some AAA titles not working on linux.
WSL 1 was supposed to be like "Windows on NT": an NT subsystem that translated Linux kernel calls into NT ones. They skipped a ton of features, then dumped the whole thing for a containerized virtual machine for version 2. I wish the NT approach had worked out, but I get that it was complicated.
If WSL 1 had ended up working, it would have been one of the best historical coincidences in MS's history. A long-forgotten feature of the NT kernel, pretty much unique among OSes out there, used to push its dominance in the 90's, revived almost 30 years later to fight for relevance with Unix-based OSes once again. To quote George Lucas: it's like poetry, it rhymes.
I can tell you that if the POSIX subsystem in Windows NT had actually been a good enough UNIX experience, I would never have bothered with those Slackware 2.0 install disks.
And the subsystems concept was quite common in the micro-computer and mainframe space; Microsoft did not come up with the idea for Windows.
The original POSIX subsystem was just there so MS could say that it exists (and pass DoD requirements).
It actually got somewhat usable with the 2k/XP version, slightly better in Vista (notably: the utilities installer had an option to make bash the default shell), and IIRC with 7 MS even mentioned the thing in marketing again (under some cool new name).
Indeed, and that is why, if I wanted to do university work at home instead of fighting for a place at one of the DG/UX terminals on campus, I had to find something else.
I am aware it got much better later on, but given the way it was introduced, and the mess with third-party integrations (Microsoft always outsourced the development effort: MKS, Interix, ...), it never got people to care afterwards.
Realistically anyone who cared would be using something like Cygwin, and the original UNIX server market segment evaporated due to Linux and had zero interest in migrating to NT in that form; some did migrate for application-layer benefits like .NET, but not for the same workloads.
There is an alternative universe where Windows NT POSIX is really what it should have been in the first place, and Linux never takes off because there is no need for it.
Just as there is another one where Microsoft doesn't sell off Xenix and keeps pushing it, as Bill Gates was actually a big fan of it.
Obviously we'll never know, but I seriously doubt that parallel universe would've had a chance to materialize, not least due to the "free as in beer" aspect of Linux while the web/Apache was growing at the pace it did. All proprietary unices are basically dead. Sun was likely the sole company with the right attitude to live alongside open source, but it also proved not to be a good enough business after the bubble burst. NT and Darwin remain alive due to their desktop use, not server.
With Microsoft having either Windows NT with proper UNIX support, or a real UNIX in Xenix, there would have been no need for Linux, regardless of it being free as in beer.
Whatever computer people were getting at the local shopping mall computer store would already have had UNIX support.
Let's also not forget that UNIX and C won over the competing timesharing OSes exactly because AT&T wasn't allowed to sell it in the first place. There was no Linux in those days, and had AT&T not sued BSD, hardly anyone would have paid attention to Linux. Yet another what-if.
IBM z/OS is officially a Unix (a very weird Unix which uses EBCDIC): it passed the test suite (an old but still valid version, which makes the certification somewhat outdated) and IBM paid the fee to The Open Group, so officially it is a Unix. (On a related note, they recently added a partial emulation of the Linux namespace syscalls, clone/unshare/etc., in order to port K8s to z/OS; but that's not part of the Unix standard.)
If Microsoft had wanted, Windows could have officially been Unix too: they could have licensed the test suite, run it under their POSIX/SFU/SUA subsystem, fixed the failures, and paid the fee, and then Windows would be a Unix. They never did, not (as far as I'm aware) for any technical reason; simply as a matter of business strategy, they decided not to invest in this.
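To make the EBCDIC point concrete, here's a minimal C sketch (assuming a POSIX iconv with the IBM-1047 codeset available, which glibc ships) showing that even plain ASCII text has to be transcoded before z/OS's EBCDIC world can read it:

    #include <stdio.h>
    #include <string.h>
    #include <iconv.h>

    int main(void) {
        /* IBM-1047 is the EBCDIC codepage z/OS uses. */
        iconv_t cd = iconv_open("IBM-1047", "ASCII");
        if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

        char in[] = "hello";
        char out[32] = {0};
        char *inp = in, *outp = out;
        size_t inleft = strlen(in), outleft = sizeof out;
        if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
            perror("iconv"); return 1;
        }
        /* 'h' is 0x68 in ASCII but 0x88 in EBCDIC: every byte differs. */
        for (char *p = out; p < outp; p++) printf("%02x ", (unsigned char)*p);
        putchar('\n');
        return iconv_close(cd);
    }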
NT underlies the majority of M365 and many of the major Azure services. Most F500s in the US will have at the very least an Active Directory deployment, if not other ancillary services.
IIS and SQL Server (Win) boxes are fairly typical, still.
I am not suggesting NT is dead on servers at all. I am positing it would be dead had it not been for owning the majority of desktops. Those use cases are primarily driven as an ancillary service to the Windows desktop[1], and where they have wider applicability, like .NET and SQL Server, they have been progressively unleashed from Windows. The realm of standalone server products was bulldozed by Linux; NT wouldn't have stood a chance either.
[1]: In fact, Active Directory was specifically targeted by EU antitrust lawsuit against Microsoft.
For all large corps, users sit at 1990s-style desktop computers that run Win10/11 and use Microsoft Office, including Outlook that connects to an Exchange server running on Windows Server. I'm not here to defend Microsoft operating systems (I much prefer Linux), but they are so deeply embedded. It might be decades before that changes at large corps.
I don't think it is fair to brush it off under "same bucket; doesn't count." The syscalls are still different and there's quite a bit of nuance. I mean the lines you're drawing are out of superficial convenience and quite arbitrary. In fact, I'd argue macOS/Darwin/XNU are really Mach at their core (virtual memory subsystem, process management and IPC) and BSD syscalls are simply an emulated service on Mach, which is quite different from traditional UNIX. The fact that as a user you think of macOS much more similar to Linux is not really reflective of what happens under the hood. Likewise NT has very little to do with Win32 API in its fundamentals but Win2k feels the same to the user as WinME, but under your framing, you'd same-bucket those.
> Likewise NT has very little to do with Win32 API in its fundamentals but Win2k feels the same to the user as WinME, but under your framing, you'd same-bucket those.
I probably would, in this context. Well, maybe not WinME, because that was a dumpster fire. But any Windows coming down from NT line, which is what's relevant in the past 20 years, sure. Same bucket.
The essential problem was that critical Windows APIs like CreateProcess and the NTFS file system were far too slow to be used in UNIX-like ways. If you tried to run git or build things in WSL1 - a key use case - it was way slower than doing so on native or VM Linux.
Performance was one problem, but IMHO the biggest was that mmap semantics were inherited from the NT side and made a lot of applications crash (a mapping could only be as large as the file's current size, as on Windows, while Linux/BSD semantics allow an mmap larger than the file that becomes usable without constant remapping as the file grows).
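For the curious, a minimal C sketch of the Linux/BSD behavior being described (hypothetical file name; on a pre-fix WSL1, the mmap call for a size beyond EOF was the part that failed):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        int fd = open("grow.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Map 1 MiB even though the file is currently empty. On Linux/BSD
           this succeeds; pages past EOF just can't be touched yet. */
        size_t map_len = 1 << 20;
        char *p = mmap(NULL, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Grow the file: the existing mapping now covers valid pages,
           no munmap/mmap cycle needed. */
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }
        const char msg[] = "written through a mapping made before the file grew";
        memcpy(p, msg, sizeof msg);

        munmap(p, map_len);
        close(fd);
        return 0;
    }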
They didn't prioritize fixing it until a late stage, barely before WSL 2 came out. Sometimes I do wonder if they made a premature decision to move to WSL2, since quite a lot of basic applications/runtimes were crashing because this fix was lacking. (Naturally, a lot of other new Linux APIs like io_uring probably would have made it an API-chasing treadmill that they just wanted to circumvent.)
> (a mapping could only be as large as the file's current size, as on Windows, while Linux/BSD semantics allow an mmap larger than the file that becomes usable without constant remapping as the file grows)
I thought you could do it using ntdll functions, no?
Good to know. Still, the obscurity of this function and its semantics left WSL1 incompatible for a long time. (Also, skimming that article, it touches on 0-sized mappings being an issue?)
Regardless, this left WSL1 with fatal incompatibilities for a long time; IIRC basic stuff like the rpm system, or something similarly fundamental for some distros/languages, relied on it. And once WSL2 existed, people just seem to have moved over.
Win32 APIs like CreateProcess suck because they have to spend so much time setting up the stuff that allows Win32's application model to mimic that of 16-bit Windows, which was cooperatively multitasked. The NT kernel is much faster at creating processes when it doesn't need to worry about that stuff.
As for NTFS: it's not NTFS specifically, it's the way the I/O system is designed in the NT kernel. Imagine any call from outside that layer transitioning through a stack of filter drivers before actually reaching the implementation. Very powerful stuff, but also very bad for performance.
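To see why this bites tools like git and make, which spawn thousands of short-lived processes, here's a minimal POSIX timing sketch; the absolute numbers are system-dependent, the point is that per-process overhead multiplies fast:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        /* Spawn 1000 trivial child processes, the kind of workload a
           configure script or a recursive make generates constantly. */
        for (int i = 0; i < 1000; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                execl("/bin/true", "true", (char *)NULL);
                _exit(127); /* only reached if exec failed */
            }
            waitpid(pid, NULL, 0);
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("1000 fork+exec cycles: %.1f ms (%.3f ms each)\n", ms, ms / 1000.0);
        return 0;
    }

On native Linux each cycle typically costs a fraction of a millisecond; on WSL1 every cycle also paid NT's process-creation and filter-driver costs described above, which is where the build slowdowns came from.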
Hm. I used Git on WSL1 for many years, with medium sized repos hosted on a Windows drive, and it worked great. When I moved to WSL2 Git became a whole lot slower - it now takes about 5-8 secs to execute 'git status' where before it was instant.
Windows actually created a new process type for this: Pico processes[1]. This allows WSL1 to perform quite a bit better than Cygwin on something like Windows XP.
I know -- I was super excited to see WSL1 and wished it had worked out. NT started out with the OS/2 personality, and back at that time I was excited to see NT as the OS to end all OSes (by running them all as personalities).
But WSL2 is freaking incredible. I'm super excited to see this, and I just wish the rest of Windows would move to a Linux kernel and support bash natively everywhere. I was never a fan of PowerShell; sh/dash/ash/bash seem fine.
It's good. But if/when you start using it as your main work platform, nagging issues start cropping up. The native Linux filesystem inside it cannot actually reclaim space. This isn't very noticeable if you aren't doing intensive things in it, or if you are using it as a throwaway test bed. But if you are really using it, you have to do things like zero out a bunch of space on the WSL disk and then compact it from outside, in the Windows OS (sketched below).

Using space from your NTFS partition/drive isn't very usable either: the performance is horrible, and you can't do things like put your Docker graph root there, as it is incompatible. It also doesn't respect capitalization or permissions, and I've had to troubleshoot very subtle bugs because of that. Another issue is raw network and device access: it basically isn't possible.

Some of these things are likely beyond the intended use of WSL2, in its defense. Just be aware before you start heavily investing your workflow in it. For these use cases a traditional dual boot will work far better and save you much frustration.
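For reference, the reclaim dance usually means filling the ext4 disk's free space with zeros from inside WSL, then shutting WSL down and compacting the VHDX from Windows (diskpart's compact vdisk, or Optimize-VHD where the Hyper-V tooling is installed). A minimal sketch of the zero-fill half, with a hypothetical temp path:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <errno.h>

    /* Fill free space with zeros, then delete the file. Afterwards the
       VHDX can be compacted from the Windows side, because unused
       blocks now read back as zero. */
    int main(void) {
        int fd = open("/tmp/zerofill", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        static char zeros[1 << 20]; /* 1 MiB of zeros per write */
        for (;;) {
            ssize_t n = write(fd, zeros, sizeof zeros);
            if (n < 0) {
                if (errno == ENOSPC) break; /* disk full: done */
                perror("write"); break;
            }
        }
        close(fd);
        unlink("/tmp/zerofill"); /* free the space again */
        return 0;
    }

I believe newer WSL builds have been experimenting with sparse VHDs to automate this, but I wouldn't rely on it yet.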
The whole point of Windows right now is having a kernel that a) does not shove the GPL down device manufacturers' throats and b) cares about driver API stability, so that drivers actually work without manufacturer or maintainer intervention on every kernel upgrade.
People like to talk like GPL is evil, but it's underpinning more of the world than many people see.
And thanks to no ABI/API stability guarantees, Linux can innovate without caring what others might say. Considering Linux is developed mostly by companies today, the standard upkeep of a driver is not a burden, unless you want to shove planned obsolescence down consumers' throats (cough Win11 TPM requirements cough).
The obvious answer: you can't. I work in a constrained environment with an IT department that provides the hardware and (most of) the software I develop on. I agree with all the WSL cheering here; it integrates almost seamlessly.
But you're asking the wrong question. It should be "why not use MacOS?" if you need a stable UI with UNIX underneath :).
That's another sound option, but for a person who doesn't like Homebrew and stuffing /usr/local with tons of things, a lightweight Linux VM becomes mandatory at some point on macOS, too.
Other than that, with macOS plus some tools (Fileduck, Forklift, Tower, Kaleidoscope, to name a few), you can be 99% there.
I use macOS as my daily driver, but any real work on it happens in a Linux container or VM. Using one of {Cursor, VS Code, Windsurf} with a devcontainer is a much better approach for me.
Current macOS is going in the Windows direction with some of its architecture choices (uninstallable default software, settings panel mess, meaningless updates, ...).
Sure, but consider that some people might not be able to just make that choice in any given context.
I was working as a freelancer where a lot of my job meant interfacing with files other people made in software that only runs reliably on Windows or Mac (and I tried regularly).
So WSL provided me with a way to run Linux stuff without having to run a fat VM or dual boot. In fact my experience with WSL is probably why I run Linux as my daily driver OS in academia now, since here the context differs and a switch to Linux was possible.
Whether a thing is useful is always dependent on the person and the context. WSL can absolutely be a gateway drug to Linux for those who haven't managed to get their feet wet just yet.
> I was never a fan of PowerShell; sh/dash/ash/bash seem fine
It depends on what you're doing. PowerShell is incredible for Windows sysadmin work, and the way it pipes objects between commands rather than text makes it really easy to compose pretty advanced operations: pipe Get-Process into Where-Object, for example, and you filter on real properties instead of parsing columns of text.
However, if I'm doing text manipulation, wrangling logs, etc, then yes, absolutely I'm going to use a *nix scripting language.
For anyone curious (as I was): the basic difference is that WSL1 implemented the Linux syscall table directly, whereas WSL2 actually runs a Linux kernel on top of a hypervisor with virtual drivers.
WSL 2 runs a full Linux kernel under Hyper-V. There are some out-of-tree or staging drivers included in Microsoft's Linux kernel derivative and they publish their kernel sources at https://github.com/microsoft/WSL2-Linux-Kernel.
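One easy way to see the difference from inside: ask for the kernel version. Under WSL1 the "kernel" is a version string faked by the syscall-translation layer; under WSL2 it's Microsoft's real kernel build. A minimal sketch:

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) { perror("uname"); return 1; }
        /* WSL1 reports something like "4.4.0-19041-Microsoft" (no real
           kernel behind it); WSL2 reports e.g. "5.15.x-microsoft-standard-WSL2",
           the actual kernel built from the sources linked above. */
        printf("%s %s\n", u.sysname, u.release);
        return 0;
    }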
I had the same experience. Even installing Linux is easier for me now. And with the new spyware features of Windows, there is really no incentive to use it.
I've been using windows since I was 6 or 7. I currently work in a Mac environment and hate it. I worked in a linux one for 5 years. Nothing feels like the first language you learned I guess?
My home computer is windows and it'll be that way until windows stops existing.
Edit: when I say we I mean the people still on windows.
Definitely not for me. Was in Windows between 95 and XP, never looked back. Same for my first programming languages, glad I am not stuck still doing PHP and Java.
Switched my main Linux and desktop environment multiple times as well.
For the corps it's a legacy issue, but that may slip away as a side effect of Trump destroying global soft power and making it a hard sell to remain on a US-led platform, purely on op-sec grounds; the spyware issue will add more weight to that.
Businesses would. The problem with that is you have decision makers in said businesses who don't know any better, so Microsoft-all-the-things gets pushed down the line. Offices are all trapped on Windows 10/11 and using Teams/Outlook with Exchange/Entra/Azure chugging along in all its misconfigured glory. Heck, half the MSPs I work side-by-side with seem to only offer support on Windows machines.
It gets worse. When we go to the manufacturing side of the building, there's a high chance they're still using Windows 7. Yeah, still! And IT or Controls has no idea what to do with it since, well, it's still working. Is it secure? They don't know because the team is comprised of kids who memorized the CompTIA exams and use Windows 11 at home.
Trying to get the business world to switch to Linux with all that in mind is an impossible task. It's the same as asking an American city to rip out all its old infrastructure at once and replace it with new instead of patching the old. The cost and knowledge required for such a task is unthinkable, to them. Believe me, I've tried.
Microsoft was quite brilliant in the way that they shoehorned their way into the fabric of the way we do business, not just in the US, but on a global scale.
I would be very happy with Windows 7 on manufacturing side - lots of CNCs that are still in use and supported by manufacturers are still on Windows 98.
I left some room for myself with "a good reason" :)
When company is forcing you to use something out of inertia, then it's probably not for a good reason.
Actually, regarding the "global scale": I'm not really sure it's true. I think MS has influence mostly in the US; many EU and Asian companies I worked with were using OSX/Linux.
Yeah, I totally agree with what's being said here. It's a tough pill to swallow when you realize just how entrenched Microsoft is in the business world, and how difficult it would be to get everyone to make the switch to Linux.
I mean, think about it - most companies are still stuck on Windows 10 or 11, and they're using all those Microsoft services like Teams, Outlook, and Exchange. It's like they're trapped in this Microsoft ecosystem, and it's gonna take a lot more than just a few people saying "hey, let's switch to Linux" to get them out of it.
And don't even get me started on the IT departments in these places. A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing. They're using Windows 11 at home, but they have no idea how to deal with all the outdated Windows 7 machines that are still being used in manufacturing.
Microsoft, on the other hand, has been really smart about this. They've managed to get their products and services woven into the fabric of how we do business on a global scale. It's gonna take a lot more than just a few open-source projects to change that.
They're "trapped" because there is no answer to the Exchange/Outlook combo for business purposes and it's very inexpensive for the value it provides. There are of course alternatives to Teams until you pair Teams with SharePoint/OneDrive/Copilot/Exchange/3rd party market.
> A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing.
Well, this is true throughout IT, even those who went to college for a CS or IT-based degrees. People want to make money, and IT has been a safe haven so far to do so.
> They're "trapped" because there is no answer to the Exchange/Outlook combo for business purposes and it's very inexpensive for the value it provides. There are of course alternatives to Teams until you pair Teams with SharePoint/OneDrive/Copilot/Exchange/3rd party market.
Yep, it's mostly this. Especially for businesses under 300 users, you get Exchange, EntraID, Defender EDR, InTune(MDM) + the Teams/SharePoint/OneDrive/Copilot all integrated for $22/user/month. For a little extra you get a half way decent PBX for VoIP too.
If you tried to piece all that together yourself with different services, then integrate them to the same level, it's going to cost a hell of a lot more than that.
Microsoft is smart too, as none of that requires Windows either. Even if these companies switched to Linux or macOS en masse, they'd still be using Microsoft.
Plus, there's still no competitor to Excel for business types. We might be able to use Google Sheets to great effectiveness, but the finance department at the behemoths can't. The world runs on Excel, like it or not.
> A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing.
This is true for all fields not just tech/IT. Competent windows sysadmin work nowadays isn't all that different from macOS endpoints or Linux. Everything can be scripted/automated with PowerShell, or just using the Graph API for 365 stuff. You can effectively manage a windows environment and never touch a GUI if you don't want to.
Microsoft usually isn't the best at anything, but what they excel at is being "good enough" and checking boxes.
For larger orgs and enterprises, it is Active Directory/Entra. That is the true Microsoft killer app and lock-in driver. There is no comparable Linux solution that I am aware of.
You're saying it like there is no alternative and you can't just open and edit the same Excel files in LibreOffice Calc, Google Sheets, or Numbers without any problem whatsoever.
Can you give me an example of such advanced features? I really don't understand what outstanding feature did they pack in this "Excel" which has no alternative?
If the only problem is migrating from XLSX to some other format I'm sure this is trivial and some tooling must be available.
There are complex reports that every European-regulated finance entity needs to submit to their regulator. They are always complicated, but they are only sometimes well-specified. The formats evolve over time.
There is a cottage industry of fintech firms that issue their clients with a generator for each of these reports. These generators will be (a) an excel template file and (b) an excel macro file.
The regulators are not technically sophisticated, but the federated technology solution allows each to own its regional turf, so this is the model rather than centralised systems.
If the regulator makes a mess of receiving one of your reports, they will probably suggest that you screwed up. But if you are using the same excel-generator as a lot of other firms, they will be getting the same feedback from other firms. If you did make a mistake, you can seek help from consulting firms who do not understand the underlying format, but know the excel templates.
There are people whose day-to-day work is updating and synchronising the sheets to internal documentation. It gets worse every year.
Sometimes the formats are defined as XBRL documents. Even then, in practice it is excel but one step removed. On the positive side - if you run a linux desktop you have decent odds to avoid these projects, due to the excel connection.
The problem is not the "advanced features" within Excel but how they are used. If an Excel sheet is basically just a front for a Visual Basic program, it doesn't easily open anywhere else.
Likewise, Google's JavaScript API doesn't work in OpenOffice or whatever else; they all have their own extra layers.
I'm not sure when or why I last encountered such software, but my dad is a Visual Basic guy and has made a lot of these weird sheets for internal business stuff.
VBA is the famous example, but Power Query deserves a shout out. I use it to make tables that pull their data from other tables with custom transformation logic.
Google Sheets didn't even support tables until fairly recently.
LibreOffice still doesn't have tables! Not to mention the new(ish) functions in Excel, like LET and LAMBDA.
Power Query the language is nice, I kinda like it. I've read the UI and engine works quite well in PowerBI, but I haven't used it.
The Excel engine is way too slow though. Apparently they're two entirely separate implementations, for some architectural reason, not exactly sure why.
Excel's Power Query editor on the other hand, is an affront to every god from every religion ever. Calling it an "advanced editor", while lacking even the most basic functionality, is just further proof of their heresy.
You didn't really mention any real feature besides Visual Basic, which clearly has alternatives in other spreadsheet apps. You have to run your VBA through a converter script and then fix incompatibilities in your macros, but again, for a Visual Basic guy that's trivial... The rest of the things you mentioned are good old `rsync` repacked.
But you're right, they surely added a bunch of smaller stuff to keep everything connected, and I'm kind of underestimating it since I never used that ecosystem but heard rumors and complaints from other people who had to use it :)
I'm not dismissing OneDrive here, but I wanted to say monsieur was cheating when he mentioned OneDrive/SharePoint as real features of the Excel application. They are not directly related to the essence of spreadsheet editing and can be substituted with any solution that does the job, even Dropbox itself.
>There's no serious alternative to Excel for those who rely on its advanced features.
this is just silly, it really means "There's no serious alternative to Excel for those who rely on exclusive Visual Basic macros"
> I'm not dismissing OneDrive here, but I wanted to say monsieur was cheating when he mentioned OneDrive/SharePoint as real features of the Excel application. They are not directly related to the essence of spreadsheet editing and can be substituted with any solution that does the job, even Dropbox itself.
Not true. Sharepoint and OneDrive are key enablers for real time collaboration. It lets multiple people work on the same file at the same time using native desktop applications. Dropbox has tried to bolt stuff like that on, but it is janky as heck. OpenOffice, etc can't integrate with Excel for real time collaboration (honestly, I'm not sure they support any level of real time collab with anything). Google Sheets won't integrate with Excel for real time. Google is great for collaboration, but sticking everything in Google's cloud system isn't dramatically better than being stuck on Microsoft's stuff. Also Google Sheets just doesn't work as well as Excel.
SharePoint/OneDrive Lists can be directly edited in Excel. The Power platform can directly access/manipulate/transform Excel files in the cloud or on-prem via the Power BI Gateway.
You don't seem to have much of a familiarity with this ecosystem. If you're curious, I'd suggest hunting down these things on learn.microsoft.com, but to dismiss them is only showing your lack of understanding.
So you do all this work, retrain other users, spend a not-so-trivial amount of time and money and risk breaking stuff, all for not paying $22 monthly per user?
I get it, it would be a technically better solution, remove Microsoft lock-in etc, but the cost-benefit analysis isn’t that good in this case.
The percentage difference in usage between the #100 command ("Accept Change") and the #400 command ("Reset Picture") is about the same as the difference between #1 and #11 ("Change Font Size").
The commands you mentioned seem irrelevant here. I never use any advanced features, i.e. those not available in LibreOffice or incompatible with MS Word, and I don't know anybody who does.
Not only is it about lack of features on the open source side, it's about workflow.
Sure Photoshop and Gimp both edit pictures, but the workflow is so different that professional users of Photoshop aren't going to switch just because it's FOSS.
The market is getting more diverse (mobile, Steam Deck-alikes, laptops, consoles, etc.), but I guess if you want to quickly earn the most money on your (huge) development investment, you'd better try to take the biggest piece of the pie first.
Personally I don't really believe in AAA (or Ubisoft's AAAA) titles that much anymore. Strange exclusivity for some console or device may bring some money early on, but I have plenty of games in my Steam library that could run perfectly on many platforms. And most AAA games drop heavily in price after a few months, Nintendo being the sole exception.
AAA and AAAA games became (expensive) gateways to microtransaction-based money extraction, in my opinion.
I enjoy older, smaller games disproportionately more than big titles which demand far more resources and time. Yes, they look nice; yes, they use every documented and undocumented feature of my GPU; yes, "it's so fluffy"; but it is not enjoyable, especially with microtransactions shoved down your throat.
If we're talking FPS, give me any Half-Life (and Portal) title and I'm good. Gameplay first, unique art direction, good story, and a well built universe which is almost palpable with lore.
If we're talking RTS, the C&C series, Emperor: Battle for Dune, Supreme Commander and StarCraft are enough.
I have an ARM Mac and it's the most painful machine you can own as someone who likes games... Supreme Commander FAF is what I miss the most; it's unfortunately unplayable online due to floating-point calculation differences between ARM and x64 which are apparently untranslatable.
I have more than 2000 games on Steam and i love my Steam Deck which i got for pretty cheap. It's a very fun game system and you can tinker a lot with it. Upgrading (bigger disk capacity) is very easy.
Just bought Black Mesa for two bucks. Works almost flawlessly. Ten-year-old game, but much fun to be had. Most games I buy on the very, very cheap. Bought Skyrim a couple of weeks ago for five bucks.
Sure, I click on the free Thursday game on the Epic Games store, but I hate that interface with great passion.
Curious, if you don't mind answering: do you mainly use Ubuntu or NixOS, and which one do you like more ATM?
Regarding Steam, do you install it from the distro's packages or through Flatpak?
What is the spec of your machine that you do Linux gaming on? I've noticed a notable performance penalty (around 10%, even higher on GPU heavy games) when running games with Proton, which is mainly why I haven't dropped Windows yet.
I try to use Debian, since it's a bit older (read: stable) than Ubuntu, and I've found that if something compiles and runs on Debian it'll run on Ubuntu and others, but the inverse is not true.
I quite like CachyOS currently. I see no performance penalty (but I also have only a 75 Hz monitor and I haven't tested VR games all that much yet). Currently I'm playing through Kingdom Come Deliverance 2 on ultra with no issues.
CachyOS provides packages for Steam, handles nvidia drivers for you and they even provide their own builds of proton and wine, allegedly compiled with flags for modern hardware + some patches (not sure how much they help though - before Cachy I used Pop OS and also had no problems with performance).
Cachy is based on Arch though, so unless you're ready for your system to potentially break with an update, maybe use something more stable (again, I quite liked Pop OS; it was extremely stable for me).
I've been using Arch for 1-3 years now; as far as I can remember, the only time my system "broke" was when the pacman lock got stuck somehow. Aside from that it's pretty stable in general.
> I've noticed a notable performance penalty (around 10%, even higher on GPU heavy games) when running games with Proton, which is mainly why I haven't dropped Windows yet.
I don't mean to dismiss your comment at all, but I'm surprised that such a low overhead would be the primary reason holding you back from switching. The difference between, say, 100 FPS and 91 FPS seems so negligible in my mind that it would be pretty near the bottom on the list of reasons not to switch to Linux.
If you don't have an adaptive sync (variable refresh rate) monitor with everything set up to use it, and you don't like screen tearing (so you enable vsync), overrunning the frame budget (e.g. 16.7 ms for 60 Hz) can mean dropping down to half the frame rate: a frame that takes 17 ms misses the vblank and waits for the next one, so you present at 30 FPS despite being only a couple of percent over budget.
But I'm hunting for reasons here. A gaming setup should be using adaptive sync so those concerns mostly go away. But there may be problems with Linux support.
I think Linux has actually come a long way. Recently I dual-booted Fedora alongside Windows, and Fedora was easily my main choice except for gaming. Unfortunately, when updating from 41 to 42 there was clearly an issue with the GPU not having drivers for acceleration or CUDA. Updating the drivers bricked the OS immediately, and while I could recover, I spent hours and hours on this and could never get the GPU drivers installed again without bricking it. Ultimately I realised how much Linux is at the mercy of drivers. I hope things improve in the next few years, though, as Windows is dismal to work on these days.
I just had a problem with Windows and Nvidia drivers/CUDA not working properly on a two year old Windows 11 install. I had to reinstall the operating system after days of troubleshooting and attempting different things to get it operational again. It can happen on there as well.
Unfortunately many of the more popular multiplayer games with anti-cheat tend to consider "made working on Linux" a bug rather than a feature. E.g. Easy Anti-Cheat and Unreal Engine both support Linux natively but Epic still doesn't want to allow it for their own game, Fortnite. https://x.com/TimSweeneyEpic/status/1490565925648715781
There are even games like Infinity Nikki with anti-cheat that allows the Steam Deck but specifically detects and blocks desktop Linux. You have to wonder if that gets them any real security since the method they use to detect the Deck is probably spoofable.
There is more nuance to the anti-cheat systems supporting Linux argument than "it supports it but they won't use it". Turning on Linux support does weaken the security posture of the anti-cheat system, so it's not simply a decision of "it works with Linux, but they won't do it". It is moreso a question of whether the security posture changes for the game with this platform support enabled meet the business requirements. It's not a surprise that games with high MTX revenue do not turn this on, as I imagine this would be the biggest concern with this weaker security posture.
One of the boons of console hardware is also the strict execution environment that is presented on the system. While this of course doesn't prevent all cheating behavior in online games, a large selling point of it as a platform to publishers is not only the market segment available, but the security aspects of the runtime environment.
I'm not familiar with what new changes Valve has been working on in the anti-cheat space but historically most major anti-cheat systems, such as Easy Anti-Cheat, already have long included a server-side anti-cheat component. The catch rate (and overall accuracy) with both is just always going to be higher than only going with one approach.
I think you're hitting on ideal vs. constrained wants (or, at least, that's how I've always referred to them). That is: what they want to be able to allow in itself vs. what they want to allow given the trade-offs with other wants.
E.g. "I'm going to go to the beach all day" and "I'm going to keep my job" are both likely the results of ideal type wants whereas "I'm going to go to my job today and then the beach tonight" would likely be the result of a constrained want.
Scrolling to Medals: 50% of all 25,000+ games tracked by the site are playable, either working perfectly or mostly (Platinum or Gold ratings). Another 20% can be alright under specific circumstances, with compromises (Silver rating).
It's been waffling back and forth but always had a "gold" rating even when I verified it was broken. I haven't tried recently (haven't really played video games in years), but there's a comment from 5 days ago saying it's broken again.
At some point, Proton users reported success using some patch, then that stopped working, then there was a different patch... A lot of user reports say "thumbs up" then have a comment explaining how it goes out-of-sync unless you fiddle with it, so it's hard to trust.
Seems the root of the problem is this game's picky netcode, which is similar to the original 1998 game I played as a kid. If your game state diverges from the other players' at all, it goes oos and ends the game for everyone. And yes this happened often enough that people had an abbreviation for it.
I worked on this problem for a bit. What's going on is the game relies on the OS-provided C runtime libraries ("msvcrt"-style things) to do its math. Wine's implementation of these libraries does not match Windows's perfectly. If all players are using the same implementation, then they will agree, and there are no problems, so people think it is working. But if a player on Wine tries to play against a player on Windows, they will fall out of sync because the math errors eventually add up.
That was as far as I was able to take it. Another much more skilled dev at CW dug in a lot deeper and wrote a blog post about it[1], but as far as I know the problem remains unsolved.
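Not the actual game code, but a minimal C sketch of the mechanism: lockstep netcode assumes every machine computes bit-identical results, so you can fingerprint a platform's math library by hashing the exact bit patterns of its outputs. If Windows's CRT and Wine's built-in CRT disagree in even the last bit on one input, the simulations eventually diverge. Compile with -lm and compare the printed value across systems:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        /* Fold the exact bit patterns of many sin() results into one hash.
           Two machines in a lockstep simulation must agree on every bit;
           comparing this value across Windows/Wine/Linux exposes any
           difference in the math-library implementation. */
        uint64_t hash = 1469598103934665603ULL; /* FNV-1a offset basis */
        for (int i = 0; i < 1000000; i++) {
            double x = sin(i * 0.001);
            uint64_t bits;
            memcpy(&bits, &x, sizeof bits);
            for (int b = 0; b < 8; b++) {
                hash ^= (bits >> (8 * b)) & 0xff;
                hash *= 1099511628211ULL; /* FNV-1a prime */
            }
        }
        printf("libm fingerprint: %016llx\n", (unsigned long long)hash);
        return 0;
    }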
Oh interesting, I always wondered what the underlying issue was and why downloading some obscure looking dll solves it.
For a practical solution, just using the Windows DLLs seems to work fine. Without them, AoE2:DE goes out of sync immediately; with them, I've played hour-long games.
I remember it being interesting to work on. It's been years, but if I remember right, there is some way to convince the game to dump a log of unit positions during a multiplayer match, possibly as part of its desync handling. I enabled that on both Win & Linux hosts, ran a match between the machines until they desynced, and diff'd the game's own logs, then confirmed from the Wine logs that the faulty values were coming from CRT-related math functions. It's always fun when you get to use a game's own debug utils to track down a problem.
Anyway it'd be great if the game devs included their own math libraries instead of relying on the OS's. That would fix the problem quite nicely.
It's been several months since I played but getting ucrtbase.dll always worked for me and it was the only thing I ever had to do for the game. You need to redownload it after every update because it gets wiped though.
OOS can still happen, but as you said it can also happen on Windows; hard to blame Wine for that.
Since gold means "works as well as on Windows, with workarounds", I think that's a correct rating.
I can only testify to OOS being common in the Mac version of the original game, and I've heard of it happening in the OG Windows game. In DE under Windows I've never seen it happen, so I'd be concerned if you're still seeing it occasionally.
Also, "gold" should mean that it works by default, not that you have to patch in a DLL. The only place the site even says "playable with tweaks" is in a tooltip if you hover over the gold symbol, right above a separate list of details that doesn't mention tweaks. I didn't even know until now.
We can argue all day over what a rating means, but if it would work without a tweak I'd say it should be rated platinum. (The only other thing I know is missing is Xbox live login, but I don't really care about that)
Yeah, there are a lot of random issues with different games. If user experience is the main goal, I always recommend going with the main supported way, which in this case would be Windows 11. I personally try things first on my Linux machine, but I always keep a backup Windows install just in case.
Overwatch is the big one - lots of random issues with it. But basically any game with Denuvo DRM is extremely high risk, resulting in either a ban or the game not running at all.
Can you remember any particular problems in Overwatch? I've been down that road, so there's a chance I might have some info that you would find useful.
One problem that was unsolved last time I checked: Saving highlight videos. It used to work if you told Overwatch to use webm format instead of mp4, but Blizzard broke that somewhere along the line, possibly in the transition to Overwatch 2. (I worked around this with OBS Studio and its replay buffer feature.)
When I ran a two-month experiment: Hogwarts Legacy and Anno 1800.
The former ran slowly at low settings, with occasional complete slowdowns into the single digits. On the same laptop under Windows 10, it ran at medium settings and easily twice the frame rate, no issues.
The latter wouldn't connect to multiplayer, and would occasionally just crash out.
Isn't Pop!_OS shipping ancient components at this point due to their hare-brained idea to try and create their own DE and pinning their next release to it?
i think everyone tried that. gpu (games etc) are the only thing holding windows relevant at this point.
i have some 2012 projects where the makefiles also build in msvc. never again.
then 2015 projects with build paths for cygwin. never again.
then some 2019 projects with build scripts making choices to work on msys2/git-bash-for-windows. never again.
now we can build on WSL with just some small changes to an env file because we run a psql container in a different way under wsl... let's see how long we endure until saying never again.
It's the other way around. You can do very few productive things without Windows other than software development. Almost all other professional software assumes Windows.
It always infuriates me when people say Windows is all about games. Techies are so detached from reality they forget that people have creative hobbies and have to use industrial-grade software. Doing creative hobbies on Linux is an act of sadomasochism. On top of that, Linux and macOS cannot run software from 3 years ago while Windows can run software from 35 years ago. On top of that, Linux is completely unusable for Japanese/Chinese speakers due to how hard it is to input the moon runes, and Wayland breaks the least painful input setup you could have built earlier. And on top of that, the Wayland people showed a middle finger to all the people who need accessibility features.
No, Windows is not about games. Windows is about being objectively the most stable pile of garbage there is.
A fair comment, but the argument I'd make against that is that a lot of those creative tools are moving to the web. I personally work for Figma and have seen that first hand. UI/UX design was entirely OSX/Windows centric for the last 40 years, and now it's platform agnostic. Even video editors are just at the nascent stage of looking at the web as an editing surface.
Totally hear you though for things like CNC milling software that's meant to stay static for the lifetime of the mill - that's not going anywhere.
No, it's definitely a win for Linux. I get it. I've dabbled in software minimalism. I love native dev. I know the web "sucks." But the range of mainstream software available for Linux has exploded now that software is moving to the web (including Electron) and I can't see how that's a bad thing from the perspective of a Linux user. Of course I'd rather open a web browser to run an app than change my entire operating system to run an app.
By using non-free software, you're compromising on politics that don't really affect anything directly - not unless great many others suddenly embrace the ideas behind Free Software.
The compromise of using SaaS in the cloud in lieu of regular, native software, is affecting both you and society directly.
Yeah, I really like my Mac, but third-party software isn't its strong suit. It's hilarious how often Apple will wholesale break like half the software in existence.
How many months can you use a Linux desktop for daily, externally mandated processes without dropping down to a bash shell at some point?
Average consumers and users do not want to use the unix utilities that Linux people love so much. Hell, developers barely want to use classic unix utilities to solve problems.
Users do not know what a "mount point" is. Users do not want a case sensitive file system. Users do not want an OOM killer that solves a poor design choice by randomly culling important applications at high utilization.
Users do not care for something that was designed in the 60s before we understood things like interface design and refuses to update or improve due to some weird insistence on unix purity.
Users do not care about ABI stability. They care about using the apps they need to use. That means your platform has to be very easy to support, Linux is not at all easy to support, and at least part of that is a weird entitlement Linux users feel and demonstrate in your support queue.
Hilariously, users DO WANT a centralized app repository for most day-to-day apps! Linux has had this forever, though it had mediocre ergonomics and it was way too easy for an average computer user to nuke their system, as Linus Sebastian found out with very unfortunate timing. Linux never managed to turn this potential victory into anything meaningful, because you often had to drop into a bash shell to fix, undo, or modify an install!
> gpu (games etc) are the only thing holding windows relevant at this point.
I actually switched to Linux full-time when Starfield wouldn't run on Windows but worked in Proton. We are now in a world where Valve provides a more stable Windows API than Microsoft. The only limitation now is anti-cheat, but that's a political problem, not a technical one.
I was excited about it too, even just having tmux and using it for grepping and file copying. Then after a year or two on Windows, my computer started slowing down. Tale as old as time. I'm not surprised, and some of the issues aren't MS's fault, but nevertheless I see CPU spikes to 100% with several browser tabs open, or the drawing tablet driver goes to 100% CPU usage even though I never even use it. The UX shouldn't degrade like a mechanical system.
Their GTX series cards all used proprietary blobs that required unmanageable device specific interfaces.
Starting from the RTX series cards, they still have proprietary blobs but instead of having device specific interfaces, they all use a shared public interface which makes compatibility and performance much better.
It's not across the board, but there are instances of gaming benchmarks showing more performance under linux than windows.
I'd trade half my GPU performance for the NVIDIA drivers not freezing my system on wake-up. The new half-open ones arguably made it worse, it consistently freezes now.
If you're using DisplayPort, try switching to HDMI. (Really.) For me it made the freezes much shorter. It's a bug in their driver related to the connected monitor(s).
I had switched back to Windows after years of issues with Linux drivers, I needed a new PC, and I needed CUDA for college and tinkering.
Now, it's been barely a couple of months since I reinstalled Ubuntu, and a couple of weeks since I found out the latest release runs even worse, so this is new to me. I don't plan to use Windows at home ever again, so I could sell my GPU and buy AMD, but so far I'm simply disappointed.
Ugh, that sucks. It makes sense. I'm somewhat optimistic that as the open-sourcing effort continues, more and more of NVIDIA's driver stack will be open-source and it will see significant improvements, too.
I'm using 4070 Ti with open kernel module on Wayland.
It's MOSTLY painless. Some GNOME extensions seem to randomly hang everything on startup (I'm currently investigating which ones, I believe Dash to Dock and/or Unite are to blame) and there's a weird issue with VR when streaming via ALVR: SteamVR launches, but games crash unless I disable the second monitor (no such issues with WiVRn, so not entirely sure if it's a driver problem or not)
Besides that in my daily driving I saw no other issues.
Been using Nvidia+Wayland for years now, even on an optimus laptop.
I'm convinced that many of these people saying Nvidia has serious issues on Linux must be (through no fault of their own) going by habit and downloading the driver installer .bin from the Nvidia website and trying to install drivers that way. So yes, if you do that you're going to have issues.
Learn to do things the way your distro does them (use a package manager) and most problems go away.
I feel I'm in the same boat. For several months I've been thinking my GPU was on its way out (it's a pretty old 2080 now). My desktop freezes randomly. I can log into it remotely, but all the USB devices stop working and the screen goes blank. I took a good look at the logs and noticed a bunch of pageflip timeouts followed by USB disconnections. I later discovered the Nvidia forums have many recent complaints (with similar logs), especially around their latest drivers and Plasma + Wayland compatibility.
StarCraft 2 definitely works on Linux, with a relatively simple act of adding it to Steam as a non-Steam title, and then letting the Proton layer do its thing.
And this is coming from a very Linux-hesitant newbie who mostly uses Windows.
Fortnite doesn't work because Tim Sweeney doesn't want it to work: both BattlEye and EAC can work on Linux; Epic just chooses not to enable that functionality.
I would do it the other way round: use Windows in a virtual machine from Linux. If you are in Windows and have the urge to use Linux, do the proper switch once and for all. You will never look back. I haven't in almost 15 years.
Given what Windows has become and already discussed here on HN I would even hesitate to run it in a virtual machine.
Except that if you require anything GPU-related (like gaming, Adobe suite apps, etc.) you'll need a secondary GPU to pass through to the VM, which is not something everyone has.
So, if you don't have a secondary GPU, you'll need to live without graphics acceleration in the VM... so for a lot of people the "oh you just need to use a VM!" solution is not feasible, because most of the software that people want to use that does not run under WINE do require graphics acceleration.
I tried running Photoshop under a VM, but the performance of the QEMU QXL driver is bad, and VirGL does not support Windows guests yet.
VMWare and VirtualBox do have better graphics drivers that do support Windows. I tried using VMWare and the performance was "ok", but still not near the performance of Photoshop on "bare metal".
"For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software."[1] vibes.
Convenience features in software are huge and even if a system is well designed a system that abstracts it all away and does it for you is easier, and most new users want that, so it often wins. Worse is better etc
The comment you linked is one of the most misunderstood comments on this site, which makes sense because it's one of the most cited comments on this site.
Perhaps I should have put a larger explanation around it but I am mocking neither sureglymop nor BrandonM but we can still learn lessons from hindsight.
Sure, it's trivial to flip the virtualisation switch in the BIOS and download a couple of libraries, but people like computers doing things for us; we like abstractions, even when they sacrifice flexibility, because they facilitate whatever real-world application we are attempting.
I think power users of any technology will generally overvalue things that 80% to 95% of the user base simply don’t care about.
I admit that having touched Windows twice in the last 5 years I wouldn’t know but I would be willing to wager that WSL has very few drawbacks or shortcomings in the minds of most of its users.
Because it's only silly sounding because of hindsight. With today's context of file sync applications being a huge industry, that comment seems silly. But that was the prevailing opinion at the time. Check out this blog post: https://www.joelonsoftware.com/2008/05/01/architecture-astro...
>Jeez, we’ve had that forever. When did the first sync web sites start coming out? 1999? There were a million versions. xdrive, mydrive, idrive, youdrive, wealldrive for ice cream. Nobody cared then and nobody cares now, because synchronizing files is just not a killer application. I’m sorry. It seems like it should be. But it’s not.
That's just what a lot of competent people thought back then. It seems hilariously out of touch now.
But it wasn't my opinion at the time, and I didn't hear from those people. I was in middle school, kids were commonly frustrated syncing their homework to/from a flash drive, family members wanted to sync photos, and everyone wanted something like this.
Before Dropbox, the closest thing we had was "the dropbox," a default network-shared write-only folder on Mac. Of course you could port-forward to a computer at home that never sleeps, but I knew that wasn't a common solution. I started using Dropbox the same month it came out.
You can do GPU passthrough in a Gnome box, as in, your VM can see the host's GPU (let's say Nvidia) and it works exactly the same as on the host? Or another metric is if you can run Photoshop in a VM with full hardware acceleration. I haven't tried Gnome box in particular, but this isn't what I'm seeing when I search.
Yep, regular VMs where you basically only care about the CPU and RAM are easy, provided nothing in the VM is trying to not run in a VM. USB and network emulation used to be jagged edges, but that was fixed. VirtualBox was my go-to. It never had great GPU support, but the rest was easy.
I'm pretty sure there are solutions to assign an entire GPU to a VM, which of course is only useful if you have multiple. But those are specialized.
Not even close. I mentioned a software package that literally offers a full gui for all your virtualization needs.. how is that comparable to the things mentioned in that comment?
That really depends on what you want to run. Dipping into a Linux laptop lately (Mint), there are things, old things (think 1996-1999), that somehow "just work" out of the box on Windows 10, but configuring them to work under WINE is a huge PITA, coming with loads of caveats, workarounds and silent crashes.
I'm hoping that IOMMU capability will be included in consumer graphics cards soon, which would help with this
iirc there are rumors of upcoming Intel and AMD cards including it
Really? I recall installing it 3 years ago, and aside from some oddities with popups, it worked just fine. I think it was this script [0]. I don't know if they broke it, I switched to OpenSCAD, which meets my needs.
Sadly I'm not one of those people because I have a desktop with an AMD Ryzen 7 5800X3D, which does not have an integrated graphics card.
However now that AMD is including integrated GPUs on every AM5 consumer CPU (if I'm not mistaken?), maybe VMs with passthrough will be more common, without requiring people to spend a lot of money buying a secondary GPU.
Yes, my Ryzen 7600 has an integrated GPU enabled. AMD's iGPUs are really impressive and powerful, but I do not have any idea what to do with it and despite that I moved to an Nvidia GPU (after 20 years of fanboyism) specifically because I was tired of AMD drivers being terrible on Windows, I STILL have to deal with AMD drivers because of that damn iGPU.
I could disable it I guess. It could provide 0.05% faster rendering if I ever get back into blender.
True, but I don't have the need to run applications that require GPU under WSL, while I do need to run applications that require the GPU under my current host OS. (and those applications do not run under Linux)
I don’t know why there aren’t full fledged computers in a GPU sized package. Just run windows on your GPU, Linux on your main cpu. There’s some challenges to overcome but I think it would be nice to be able to extend your arm PC with an x86 expansion, or extend your x86 PC with an ARM extension. Ditto for graphics, or other hardware accelerators
There are computers that size, but I guess you mean with a male PCIe plug on them?
If the card is running its own OS, what's the benefit of combining them that way? A high speed networking link will get you similar results and is flexible and cheap.
If the card isn't running its own OS, it's much easier to put all the CPU cores in the same socket. And the demand for both x86 and Arm cores at the same time is not very high.
You may be interested in SmartNICs/DPUs. They're essentially NICs with an on-board full computer. NVIDIA makes an ARM DPU line, and you can pick up the older gen BlueField 2's on eBay for about $400.
There is ongoing work on supporting paravirtualized GPUs with Windows drivers. This is not hardware-based GPU virtualization, and it supports Vulkan in the host and guest not just OpenGL; the host-based side is already supported within QEMU.
Windows in a vm with a passed through GPU is really nice. Although still pretty niche these days it's easier than it used to be. It also works with a single GPU, e.g. on a laptop.
I personally have a desktop PC with an AMD GPU, and then another Nvidia GPU that I pass through to Windows guests. I have a hook that changes the display output and switches the inputs using evdev.
He’s right. Laptops have integrated graphics, but all mid-tier and higher laptops also have a dedicated GPU. Desktops are similar, though my guess is a lot of business desktops have only the integrated graphics.
If you can do GPU passthrough (it's quite simple to set up), this is not a large issue. You're right that Linux is sorely lacking in native creative software, though!
> who need to use Windows for productivity apps and those who don’t.
LibreOffice has gotten quite good over the years, including decent(ish) MSO file format interoperability, and Thunderbird seems to support Exchange Server.
So, I suppose things like MS Project or MS Visio may not have decent counterparts (maybe, I don't really know), but otherwise, it seems like you don't need-need to use Windows for productivity apps.
Counterpoint: things like the Valve Index for VR simply don't behave well in this environment no matter how much I've worked on getting it there.
I'm not a novice either; $dayjob has me working on the lowest levels of Linux on a daily basis. I did Linux From Scratch on a Pentium II when I was 12. All that to say: yes, I happen to agree, but edge cases are out there. The blanket statement doesn't apply to all use cases.
IMO this is the real blindspot: it's VR support, not Photoshop, or MS Office, or CAD tools (all of which I've got running fine via Wine). I'm guessing the intersection between VR users and Wine users must be really small and I suspect it's because of this that support is so lacking.
I used Linux as my daily driver for years, before finally switching back to Windows, and then to the Mac. I got tired of things like Wine breaking on apps. I got tired of the half-assed replacements for software available on Windows, like GIMP compared to Photoshop. I got tired of the ugly desktop that inevitably occurs once you start needing to mix Qt and GTK based apps. Linux is not a panacea.
I hate the half-assed, commercialised approach to software on both Mac and Windows, where you download 50 MB+ of Electron bullshit for what would be a two-line bash script with default tools on Linux.
Mostly on Windows: after I installed 5+ tools from untrustworthy websites (which they all look like if you aren't used to that), it feels like my computer is likely forever busted with some scamware. But there is no dd, no proper editor, no removing adware and "news" without these tools.
On Windows, if you want to configure something, it's like going into a computer museum where you start in the Metro area and end up in UIs straight out of Windows 95. That's better on Mac, but the UI is depressing (in my opinion), and I always had the feeling my Mac wouldn't need to run that hot if it didn't draw shadows, mirroring and weird effects I never asked for.
Running Windows from a ZFS partition with its own dedicated GPU, viewed through Looking Glass on the Linux host at 1440p@120Hz, has been super useful.
I set it up originally for gaming, but nowadays I install a lot of disposable software there.
I use Linux guest VMs too (a la Qubes), but sadly there's no guest support for Looking Glass on Linux. Native rendering speeds in VMs are hard to let go of.
I used to do VFIO with hardware passthrough so I could have Linux but still run Windows software like CAD that takes advantage of the graphics card. That was a pain to set up and use.
The other way around, it's very simple: WSL2 can run ML tasks with just a tiny bit of overhead in moving the data to the card.
> We currently package our virtual machines for four different virtualization software options: Hyper-V (Gen2), Parallels, VirtualBox, and VMware. These virtual machines contain an evaluation version of Windows that expires on the date posted. If the evaluation period expires, the desktop background will turn black, you will see a persistent desktop notification indicating that the system is not genuine, and the PC will shut down every hour.
Edit: Oops, dead link -- the dev tools evaluation VM hasn't been released for 6+ months. But they do offer Windows evaluation ISOs after registration.
That's how I do it. I don't see the draw for Windows as the main OS, especially with Windows 10+ being dumbed down beyond belief and having seconds of lag to do anything at all. Seems even from this thread that people just want the convenience of a gaming rig in the same box as their work (which is a security issue because games are full of remote code execution vulnerabilities).
It's funny, more than any productivity app (though I do have a few of those), the Directory Opus [1] Explorer replacement is one of the things that I've yet to find a viable replacement for on both Linux and macOS. Unparalleled customisability, scriptable actions, outstanding performance (thumbnailing 10,000 images in a folder never causes slowdown), incredible search and "huh, why doesn't anyone else do this" features everywhere. I use my file explorer a lot so the friction is felt daily.
I'm using Forklift [2] on my mac at work, but it's a pale imitation of what a file explorer can truly be. I did some searching for Linux but it's all pretty pedestrian.
I feel like every conversation about this is the bell curve/midwit meme[1], with the middle being the argument over “Windows VM on Linux” and “Linux VM on windows”, and the edges being “own multiple computers”.
Right! Use Linux, because it is your preference [1]. It doesn't cause harm to others (the side effects, incompatibility and vendor lock-in, come from the mass effect).
We need to remember why Microsoft created WSL. Microsoft wants to prevent users (i.e. developers) from migrating to Linux. It is the old approach: Embrace, Extend, and Extinguish [2].
Monopolies are made by users and politics, because we don't consider vendor lock-in and the mass effect. I wish for strong regulation of all information technology. We saw the wonderful effects of regulation with AT&T {UNIX, C, Open-Source, Open-Documentation}, and then a mistake was made: the company was split up, which in hindsight was a complete failure.
[1] Meaning: it is a better operating system and adapts to users' needs, whether novice user or programmer.
[2] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish
I've considered it, but there are two Windows features I need that sound like they'd require some time investment to set up correctly on Linux.
1. I use UWF on Windows (Education edition). All disk writes to C: are ephemeral. On every single reboot, all changes are discarded and my PC is back to the exact same state as when I first set it up. I do keep a separate partition for documents that need persistence.
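(For the curious: this is driven by the uwfmgr tool on the Education/Enterprise/IoT SKUs. Roughly, from an elevated prompt, with a reboot to apply; the exclusion path is an example:)

    rem turn on the Unified Write Filter and make C: discard all writes
    uwfmgr filter enable
    uwfmgr volume protect c:
    rem optionally let one folder persist across reboots
    uwfmgr file add-exclusion c:\persistent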
As for 1: if you ever have some free time on your hands and want to take declarative configs to the next level, you can check out Impermanence for NixOS: https://news.ycombinator.com/item?id=37218289
I think the biggest problem with VirtualBox on arm64 is that it is only for arm64 guests, unlike qemu-system-x86_64, which colima et al. use and which allows booting up "normal" guest OSes.
Also, VBoxManage was created by someone who firmly subscribes to the "git UX is awesome" school of thought :-(
It is slowly improving (albeit with some egregious bugs, like losing EFI data on export), but TBH even their x86 product pales in comparison to Parallels or VMware Fusion in terms of machine performance.
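To illustrate the VBoxManage ergonomics in question, here's a representative (sketched) session for standing up a single VM; names and sizes are examples:

    # every aspect is its own subcommand with its own flag conventions
    VBoxManage createvm --name devbox --ostype Ubuntu_64 --register
    VBoxManage modifyvm devbox --memory 4096 --cpus 2 --nic1 nat
    VBoxManage storagectl devbox --name SATA --add sata
    VBoxManage storageattach devbox --storagectl SATA --port 0 \
        --device 0 --type hdd --medium devbox.vdi
    VBoxManage startvm devbox --type headless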
Okay. Then you had a Mac. Then you need to run Linux in a VM anyway, because, similar to Windows, macOS is also a dumpster fire. Then why bother? You are going to have a Linux VM anyway. I usually just sync my VM disk between all my laptops & desktops, no matter what host OS it runs.
WSL 2 is one of the biggest reasons I'm able to be productive as a blind software developer. With it I'm able to enjoy the best desktop screen reader accessibility (Windows and NVDA) as well as the best developer tools (Linux). I hate Microsoft's AI and ads force-feeding as much as anyone else but trust me, you'd do the same if you were in my shoes. Screen reader accessibility on macOS is stagnating even faster than the OS itself, and even though Linux / Gnome accessibility is being worked on, it's still ready only for enthusiasts who don't mind their systems being in a constant state of somewhat broken, as illustrated by this series of blog posts from just a few weeks ago: https://fireborn.mataroa.blog/blog/i-want-to-love-linux-it-d...
>Screen reader accessibility on macOS is stagnating
Apocryphally, a lot of this was apparently developed at the direct insistence of Steve Jobs, who had some run-ins with very angry visually impaired people who struggled to use the early iPhone/iPad.
That said, my source for this is one of the men who claims to have spoken to Mr Jobs personally, a visually impaired man who had lied to me on several fronts and was extremely abusive. However, I couldn't find anyone inside Apple management or legal who would deny his claim. And he did seem to have been given the expectation that he could call the Apple CEO at any time.
Thanks for pointing this out. I'm not visually impaired but even so the graphics and presentation features on Windows seem noticeably better than the competition.
I've been using WSL on and off for Linux development for the last few years.
When it works, it's great! When it doesn't... oh man, it sucks. It has been non-stop networking and VPN problems, X server issues, window scaling issues, hardware-accelerated graphics not working, etc. this whole time. I've spent more time trying to fix WSL issues than actually developing software. It's never gotten better.
It's fast. It's powerful. But using it as a daily driver is very painful in my experience. I avoid it as much as possible and do most of my work in MSYS2 instead. Sure, it's much slower. But at least it works consistently and has for years.
Not available in Win10 until recently, and broken and fixed even more recently... but thank you for the heads up. It seems this is finally a thing.
I will have to see if it actually works in my case. The devices are intolerant of timing. Even using USB-serial instead of legacy hardware, let alone the IP stack, can be a problem unless you use real FTDI adapters.
Basically, virtualizing RS-232's hardware flow control into USB packets was technically invalid from the beginning, but if the host is overwhelmingly fast, you usually get away with it, and by now all new serial devices have adapted to expect the behavior of USB-serial adapters, since that's what everyone has. For that reason, new devices generally tolerate even worse timing, and you can even get away with going over IP. But the fact is the timing is garbage by that point and not everything works.
Still, I'm sure it's working well enough for most things or else there would be more reports that it doesn't work.
Since WSL2 is basically a VM now, I guess we can pass the USB device through to the VM and skip the whole IP stack; the latency is still there, but it's much better than usbipd.
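For reference, the usbipd flow it would replace looks like this today (from an elevated Windows prompt; bus IDs are examples):

    rem share the device once, then attach it to the running WSL distro
    usbipd list
    usbipd bind --busid 4-2
    usbipd attach --wsl --busid 4-2
    rem inside the distro it then shows up as a normal /dev/ttyUSB* device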
I've tried WSLg a couple of times, and all I ran was something like xclock to ensure it works. I literally have zero interest in running GUI Linux apps, so for me it's all smooth sailing.
The beta version actually updates more often than the release group. I use the beta so I get the updates sooner. It's been rock stable for me for YEARS.
Every time I praise WSL on HN I pay the karma tax, but I will die on this hill. WSL is more powerful than Linux because of how easy it is to run multiple OSes on the same computer simultaneously. It's as powerful as Linux plus what would otherwise be janky custom wrappers for device support, local storage mapping, and network mapping, except it's not janky at all. It's an absolute delight to use, out of the box, on a desktop or laptop, with no configuration required.
Edit: for clarity, by "multiple OSes" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu 22.04 and another is easier with Ubuntu 24.04. You don't have to stress over "do I update my OS?"
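Concretely, that juggling is a handful of commands (distro names as listed by wsl --list --online):

    wsl --install -d Ubuntu-22.04    # one distro for the legacy project
    wsl --install -d Ubuntu-24.04    # another for the new one
    wsl --list --verbose             # see what's installed and running
    wsl -d Ubuntu-22.04              # drop into whichever one you need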
You can accomplish the same with Distrobox on Linux, but there's definitely something to be said about having the best of both worlds by running Windows + WSL.
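For anyone who hasn't tried it, the Distrobox flow is about two commands (it shares your home directory, and graphical apps just work):

    distrobox create --name ubuntu22 --image ubuntu:22.04
    distrobox enter ubuntu22    # you're now in an Ubuntu userland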
I honestly think Microsoft could win back some mind share from Apple if they:
* Put out a version of Windows without all the crap. Call it Dev Edition or something, and turn off or down the telemetry, preinstalled stuff, ads, and Copilot.
* Put some effort into silicon to get us hardware with no compromises, like the MacBooks.
I'm on Mac now, and I jump back and forth between Mac laptop and a Linux desktop. I actually prefer Windows + WSL, but ideologically I can't use it. It has potential - PowerToys is fantastic, WSL is great, I actually like PowerShell as a scripting language and the entire new PC set up can now be done with PowerShell + Winget DSC. But, I just can't tolerate the user hostile behavior from Microsoft, nor the stop the world updates that take entirely too long. They should probably do what macOS and Silverblue, etc. do and move to an immutable/read-only base and deploy image based updates instead of whatever janky patching they do now.
Plus, I can't get a laptop that's on par with my M4 Pro. The Surface Laptop 7 (the arm one) comes close, but still not good enough.
I'm not saying it's a perfect solution, but with Windows 11 Pro and group policy I was able to disable all of the annoying stuff, and because it is group policy it has persisted through several years of updates. It is annoying you have to do this, and it does take some time to get set up right. But it's a solution.
That said I'd pay for a dev edition as you described it, that would be fantastic.
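Coming back to the group-policy route: for anyone on an edition without gpedit, most of these policies are plain registry values under HKLM\SOFTWARE\Policies. Two common examples (from an elevated prompt; note that AllowTelemetry=0 is only fully honored on Enterprise/Education):

    rem disable "consumer experiences" (suggested apps and the like)
    reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\CloudContent /v DisableWindowsConsumerFeatures /t REG_DWORD /d 1 /f
    rem dial telemetry down to the lowest level the edition allows
    reg add HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection /v AllowTelemetry /t REG_DWORD /d 0 /f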
it's kind of ridiculous that techy people in a tech forum don't know how to do it.
Why? HN has traditionally always largely been a macOS and Linux crowd. Why do we have to care about fixing an OS that is broken out of the box (that most of us don't use anyway)?
Because someone cannot make informed comments about the "other" party unless they have a reasonably deep knowledge of it, too.
Far too many Linux users, especially, make fun of Windows, and if you dig a bit you see that most of their complaints are things that are solved with 5 minutes of googling. Some complaints are philosophical, and those I agree with, but even in that case, I'd be curious how consistent they are with their philosophy when, for example, Linux desktop environments do weird things.
Summarizing a bit: Linux users with years or decades of experience tinkering as sysadmins with Linux frequently make junior-level user complaints about Windows usage, often based on outdated information about it.
I say this as someone who has been using both Linux and Windows for a few decades now and has a fairly decent level of sysadmin skills on both.
I didn't know about this. My knowledge of Windows is very limited. I use it every day for work, but it's managed by our IT and Security departments. It's locked down. You cannot use external drives. You can't install applications yourself and you can't run un-approved applications. So, I learned over the years to never touch anything that already hasn't been approved, even settings. If you want to apply for something to be approved, you can submit a written justification co-signed by your manager. My manager has never rejected anything I requested, but it's a huge hassle. Most of us just don't bother, even developers.
There is no flavor of Windows 11 that is acceptable. Even the UI itself is a disaster. A cornucopia of libraries and paradigms from React Native to legacy APIs as if an interdimensional wave function of bad ideas had collapsed into an OS, but with ads.
Windows LTSC already exists, but Microsoft, in all their wisdom, restricts it to enterprise licensees only, and seems to actively discourage using it as a desktop OS. The first problem is of course fixable with some KMS server shenanigans, but the second can be kinda painful when it comes to keeping drivers up-to-date, installing apps that rely on features LTSC excludes (and doesn't provide an easy way to install manually), etc.
I've often said that if Microsoft had just iterated on Windows 2000 forever I'd probably still be a full-time Windows user. If Microsoft had maintained an LTSC-like Windows variant that was installable from the normal retail installation media and with a normal retail product key (at the very least Pro, but ideally Home), that also likely would have kept me on Windows full-time instead of switching to Linux as my daily driver.
I use Windows 11 IoT Enterprise LTSC, which as far as I'm aware has all the features that Pro has (plus the IoT Enterprise stuff) and zero bloat. I switched to it from my already de-bloated 11 Pro installation (because it removes some telemetry you're normally unable to disable) and have had 0 issues with it. I can't say I activated it using a normal retail product key, however, there are easy solutions to that.
Ya, I totally get that. The way I view it is that Windows is a glorified driver-support layer, and any actual work I do is almost exclusively in the Linux container.
When I used to have free time, it was great for games too.
> I can't get a laptop that's on par with my M4 Pro.
This is the only reason I have not requested a Windows laptop from my company. WSL is better for Docker development in basically every way than a Mac can be (disclaimer: haven't tried OrbStack yet, heard good things, but my base assumption is it can't be better than WSL2), except it is literally impossible to get hardware as good as the M3 or M4 for any OS other than macOS.
I replaced my M1 with a Snapdragon laptop running Win11 and upgraded that to Pro. For what I do with it, it runs great with very long battery times, for less than Apple quoted to repair the M1. I don't use the Copilot features and haven't seen any ads so far, except maybe for Office during setup.
Outside the US and countries of a similar income level, Windows is doing quite alright in mindshare, and will keep doing so unless Apple stops pretending to be the computer version of audiophile gear.
I on the other hand cannot get an affordable Mac that has the same GPU, disk space and memory size as my workstation class laptop.
The biggest difference between OSX and Windows is that Apple adds (some say steals) functionality from the competition and from open source, and makes it neat. On Windows, to have something working you need WezTerm, Everything for search, Windhawk for a vertical taskbar on the right, PowerToys for an app launcher, Folder Size for disk management, etc. If you spend a lot of time on it, Win11 can be OK to work with.
If PowerPoint and Affinity worked on Linux, I'd use Linux, though.
Maybe just for your specific preferences. Windows Terminal is plenty fine. A vertical taskbar on the right is straight-up user preference. PowerToys for an app launcher? Like Alfred? The Start search does a decent enough job of that. Folder Size is nice, but enumerating all files is very taxing.
It was removed in Win11, when they rewrote the taskbar to pretend it's the macOS Dock (icons centered by default). Today your only options are a horizontal taskbar along the top or the bottom edge, and icons aligned left or center.
Last time I checked, Windows 11 lost this capability and 3p solutions like Windhawk are needed. I'd be very happy if they brought this back though, feel free to share a link to some info about how to do it natively.
To the tech savvy, there is essentially only one advantage to running Windows, and that is the ability to run Windows-only software. In all technical respects - control, performance, flexibility - it is inferior to the alternatives. Don't confuse vendor lockin with technology.
I find it dismaying that people on Hacker News willingly submit to incredibly user-hostile behavior from Microsoft and call it "the best of both worlds". Presumably a nontrivial proportion here are building the next generation of software products - and if we don't even respect ourselves, how likely is it that we will respect our users?
"I find it dismaying that people on Hacker News willingly submit to incredibly user-hostile behavior from Microsoft"
And I find it funny that the crowd that spends whole days implementing user-hostile features in yet another SaaS crapware has so much to say about Microsoft's bad behavior.
There is an additional reason: Some (many?) people simply prefer the Windows UI conventions (once you remove all the enshittifications post Windows 7).
I'm not aware of any particular UI convention that's in Windows that isn't available in, say, Plasma. Day-to-day usage is extremely similar, and where they diverge it's usually because 1) Plasma has a feature that Windows doesn't, or 2) someone at Microsoft opted for senseless change for change's sake: a toy interface is layered over a functional one, often (but not always) grudgingly allowing access to the old behavior with extra steps, in a tacit admission of no-confidence. This behavior is pervasive: the "new control panel", the new context menu ("show more options" to get to the original, an extra click that yields a menu with many of the same options but in a different order with different icons), and best of all moving the "Start button" to the center, a change which more than any other exemplifies the silliness, because it 1) at best achieves nothing, and 2) flies in the face of the original UI research based on Fitts's law that informed 30 years of Windows UI tradition.
I honestly can't imagine anyone preferring all that. </rant>
What Apple hardware tax? The MacBook Air is the best-value laptop there is. If the latest version is out of budget, you can buy older generations used. Even an M1 Air would be better than any Windows laptop at a comparable price point.
Superior hardware with terrible software. Also, they straight up artificially limit their hardware so they don't cannibalize their sales, which is slightly understandable, but they do it in the dumbest ways. My SO's MacBook Air can only do one external monitor, even though it has the same specs as her work Pro. Oh, and good luck actually getting that external display to work; I swear only like 50% of USB-C docks work on the platform.
Funny how that was the other way around just a few years ago. Macs had inferior hardware, but they were supposed to have better software. At least that's what the Mac users claimed.
I fell for that, years ago. No, the software wasn't superior either. I remember having to manually install codecs, which on Linux had been a problem many, many years before but had been solved already.
>Macbooks are winning the laptop war because of superior hardware.
No. This is just you repeating marketing.
No Nvidia chip = B tier at best.
I have a $700 Asus with a 3060 that is better. Go ahead and scale up to a $2000 computer with an Nvidia chip and its so obviously better, there is nothing to debate.
No one cares about performance per watt; it's like someone ran a 5K race, came in 3rd and said, "Well, at least I burned fewer calories than the winner!"
1. Turning them on/off à la Bumblebee isn't a solved problem. It's buggy, especially on not-Windows. Even on Windows it's going to be buggy, especially in regards to sleep.
2. You obviously lose the advantage of an Nvidia GPU that way. If you have to keep it off to get decent battery life, which you do, then it's kind of moot. If you turn it on for your 30-minute workload, there goes 70% of your battery.
You can, I just think it's inconvenient so I favor laptops with better battery. Besides, I almost never find myself being on the go and needing a dedicated GPU.
Well, I'll have to strongly disagree. You want a laptop whose battery life is not 1 hour at best. That wasn't a thing in Windows/Linux laptops until the M1 brought arm64. 6 hours of intense work? Good luck with that.
Not only that, but being able to run very intensive work (Pro Audio, Development...) seamlessly is an absolute pleasure.
Its screen is one of the best screens out there.
The trackpad (and some keyboards) are an absolute pleasure.
The robustness of the laptop is amazing.
I don't care about the marketing of Apple, I don't buy anything new they launch, and I condemn all of their obscure pricing techniques for the tech they sell. But my M1 is rocking like the first day, after four years of daily use. That's something my Windows laptops have never delivered to me.
Apple has done a lot of things wrong, and I will not buy another Apple laptop in the future, but I don't want Nvidia on a Laptop, I want it to be portable, powerful and durable.
That is changing now, and it's amazing. I want my laptop to be mine, and to be able to install any OS I like. New laptops with arm64 and Intel Lake CPUs are promising, but we're not there yet, at least not in my experience.
To each their own, for sure, and for you the Nvidia requirement is important. For me it's not about brands, but usability for my work and hobbies.
I have a ThinkPad T560 with only 8GB. I develop using Docker and I use Kate with python3-pylsp for completion. And of course the occasional Zoom/Teams.
Instead of Slack I normally use localslackirc, so that alone probably saves a ton of battery compared to the Electron client.
When I compile a lot I still manage to get half a day on battery. If I want to save power I just ssh to a server and do everything there :)
edit: that model also has a hot-swappable battery, so if you really, really need more battery life you can buy a spare.
The 6 hours of real work battery that Apple manages with ARM is genuinely impressive, and finally I think shifted the landscape to take ARM seriously as a CPU for consumers.
But it's just not that big a deal. Sure, I COULD spend a day working without power, but it's 2025 and USB-C power delivery is a mature spec. My desk has power. My work desk has power. My living room has power. My bedroom has power. The coffee shop has power. Airplanes have power. My fucking CAR has power.
Where are you working that you need a full 6 hours of hard working power without occasional access to a power outlet and a battery bank won't meet your needs?
I would be satisfied with 2 hours of hard-working battery, which is what Ryzen-powered Windows laptops deliver. My girlfriend uses her $800 mid-range Ryzen laptop to play games and other power-hungry things off charger every single day. It's also what work laptops other than Macs have always provided. Sure, my ThinkPad from 2012 needed a giant tumor of a battery to provide that, but it was always an available option, and you could swap it out for a tiny battery if you really wanted to slim it down.
Never an option in apple land. Battery not good enough? Fuck you, too bad.
> *You* want a laptop whose battery life is not 1 hour at best.
But why?
I mean, I can see why some want that. But why would I, or most devs in general, want that? I very rarely code on a laptop, and almost never when not at a desk.
But the increasing market share of Macs and even Linux these days, plus the ever-increasing number of OSS initiatives from Microsoft, points to Microsoft knowing that a lot fewer of their users are as captive as they were in the '90s, for example.
More specifically: a lot fewer developers are as captive as they were in the '90s. And while normal users vastly outnumber developers, Microsoft has figured out that those normal users ain't inclined to stick around if those developers jump ship and stop developing for Windows.
In other words, specifically those of a former Microsoft CEO (who understood the problem but not the solution): "Developers! Developers! Developers!"
Even among regular users, a big chunk are looking at other platforms:
- "creatives" have always been a core Apple market and they've grown, so that market has grown; plus, since Windows is globally less dominant, a lot of "Photoshop/video editing software/3D modeling + Windows" folks are now on Macs
- gamers now have Proton + Steam on Linux + SteamOS so quite a few more of them are on Linux now, especially since Valve is pushing in this direction to keep Microsoft honest
- a large number of regular office workers have iPhones, especially as you move towards the top of the hierarchy, and are far more tempted than they would have been in the past to try or use a Mac
- in many schools there are now Chromebooks instead of Windows laptops; this is primarily a US thing, but it does pop up in some other places, too
Windows is sort of stable but probably still bleeding users slowly.
There's a dedicated settings page for quickly setting popular dev settings such as showing extensions and full paths. Getting rid of the rest just involves tweaking a few other settings like don't show tips or welcome screen. I also hide the weather and news widget because it's tabloid rubbish but many people seem to love it.
> nor the stop the world updates that take entirely too long
Interestingly enough, beyond release upgrades, which happen maybe once a year, all (or maybe 99%) of updates have taken ~5 minutes of interruption for me, including the needed reboot. I really wonder how others manage to have "entirely too long" updates.
That can't be helped. I go for a smoke, and when I come back the system is already upgraded.
I've not been using Debian setups lately, but on Ubuntu, an alert about need-to-reboot packages after the daily unattended-upgrades run happens almost every month. I'm fairly sure Debian is on a similar schedule here.
> "Microsoft doesn't make any release from the Long-Term Servicing Channel available for regular consumers. The company only makes it available to volume licensing customers, typically large organizations and enterprises. This means that individual users cannot purchase or download Windows 11 LTSC from Microsoft's website."
"More powerful than Linux" is silly. It's a VM. The most useful thing is that it does a bunch of convenience features for you. I am not suggesting that it is not extremely convenient, but it's not somehow more powerful than just using Linux.
You know what's even more convenient than a VM? Not needing a VM and still having the exact same functionality. And you don't need a bunch of janky wrapper scripts; there's more than one tool that gives you essentially the same thing. I have used both Distrobox and toolbx to quickly drop into an Ubuntu or Fedora shell. It's pretty handy on NixOS if I want to test building some software in a more typical Linux environment. As a bonus, you get working hardware acceleration, graphical applications work out of the box, there is no I/O tax for going over a 9p bridge because there is no 9p bridge, and there are no weird memory ballooning issues to deal with because there is no VM and there is no guest kernel.
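The toolbx flavor of the same idea, for comparison:

    # create and hop into a Fedora 40 toolbox; $HOME, graphics, and
    # devices are shared with the host, no VM involved
    toolbox create --distro fedora --release 40 fedora40
    toolbox enter fedora40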
I get that WSL is revolutionary for Windows users, but I'm sorry, the reason there's no WSL on Linux is that we don't need VMs to use Linux. It's that simple...
Yeah, if you are working with Linux only, it's better to go full Linux.
WSL2 is really handy when you want to run other software, though. For example, I use SolidWorks, so I need to run Windows. FORScan for Ford vehicles also has to run under Windows. Having WSL2 means that I can have just one laptop and run any software that I want.
My development is mainly Windows, and I prefer a Linux host with Windows VM guests. The experience is more stable, and I can revert to a snapshot when a Windows or Microsoft product update breaks something, or a new test configuration does. It also allows me to back up and retain multiple QA environments that are rarely used, like a client's Oracle DB. It is nice being able to save the VM state at the end of the week and shut it all down, so you can start the next week right where you left off. You cannot do that when your development environment is the bare-metal OS. Windows also has known issues with waking a sleeping laptop.
I too think a Linux host with Windows VM guests would definitely be more stable, but I can see the other way around being more convenient to get commercial support for. Though with the VMware licensing changes, I think what is by default easier for commercial support options may be changing too.
Can you share more details of how you make that work well? What hypervisor, what backup/replication, for instance? I can only imagine that being a world of irritation.
It's been a few years since I used it, but Virtualbox (free) had perfectly good suspend/restore functionality, and the suspended VM state was just a file.
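From the CLI that cycle is just (VM name is an example):

    VBoxManage controlvm "WinDev" savestate   # save machine state to disk and stop
    VBoxManage startvm "WinDev"               # resume right where you left off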
I'm on a Lenovo Yoga 6, Gentoo, 6.12 kernel, Xfce 4.20. Sleep works perfectly. Same on my Asus+AMD desktop. I've not had sleep-related issues for years. And last time I did, it was an out-of-tree Wifi driver causing the whole mess.
I discovered over the weekend that only 1 monitor works over HDMI, and DisplayPort isn't working at all; I tried different drivers.
Suspend takes a good 5 minutes, and on resume the UI is either torn or things barely display.
I might buy a Windows license, especially if I can't get multi-screen to work.
This has been a pain point for us and our development process… not all versions of Nvidia drivers are the same, even released ones. You have to find a "good" version and keep to it, and then selectively upgrade… at least this has been the case for the last 5 years. Folks, shout out if you have had different experiences.
Side note: our main use case is using CUDA for image processing.
"Works on my machine!" is stupid when it comes to software running under an OS, because a userland program that is correct shouldn't work any differently from box to box. (Exceptions you already know notwithstanding.) It is very different when it comes to an operating system.
I know people here hate this, but if you want a good Linux experience, you need to start by picking the right hardware. Hardware support is far and away the number one issue with having a good Linux experience these days. It's, unfortunately, very possible to set out to pick good hardware and still get burnt, for various reasons: people misrepresenting how well a given device works, or very similar SKUs having vastly different hardware/support. Still, I'm not saying you have to buy something from a vendor like System76 that specifically caters to Linux. You could also choose a machine that just happens to have good Linux support by happenstance, or a vendor that explicitly supports Linux as an option.

I'm running a Framework Laptop 16 and it works just fine, no sleep issues. As far as I know, the sole errata that exists for this laptop is... Panel Self Refresh is broken in the AMDGPU driver. It sorta works, but it's a bit buggy, causing occasional screen artifacts. NixOS with nixos-hardware disables it for me using the kernel cmdline argument amdgpu.dcdebugmask=0x10. That's about it. The fingerprint reader is a little fidgety, and Linux could do a better job at laptop audio out of the box, but generally speaking the hardware works day in and day out. It's not held together with duct tape.
I don't usually bother checking to see if a given motherboard will work under Linux before buying it, since desktop motherboards tend to be much better about actually running Linux well. For laptops, the Arch wiki often has useful information for a given model; it has a page on the Framework 16, for example.
It's fair to blame Linux for the faults it actually has, which are definitely numerous. But let's be fair here, if you just pick a given random device, there is a good chance it will have some issues.
I recall having a sleep issue with Linux 15 years ago; I think it was fixed long ago. Except on some very new hardware, or if you install the wrong Linux on an M-series Mac, you could still have issues with sleep.
The less coupled software is to hardware, the less likely it is tested in that hardware and the higher likelihood of bugs. Linux can run fine but arbitrary Linux distros may not. This is not the fault of hardware makers.
> The less coupled software is to hardware, the less likely it is tested in that hardware and the higher likelihood of bugs.
Yes, exactly! There are whole teams inside Dell etc. dealing with this. The term is "system integration." If you're doing this on your own, without support or chip info, you are going to (potentially) have a very, very bad time.
> This is not the fault of hardware makers.
It is if they ship Linux on their hardware.
This is why you have to buy a computer that was built for Linux, that ships with Linux, and with support that you can call.
Hardware support is more than just kernel support. Additionally, not every kernel release works well for every piece of hardware. Each distro is unique and ensuring the correct software is used together to support the hardware can be difficult when you are not involved in the distro. This is why vertical integration between the distro and hardware leads to higher quality.
ChromeOS, where sleep presumably worked, is also Linux. You just exchanged a working Linux for a distro with more bugs. The fact that you're able to do that is pretty cool.
That's not to detract from the larger point here though. It's pretty funny that all of the replies in this thread identify different causes and suggest different fixes for the same symptom. Matches my experience learning Linux very well.
In the same spirit of "it depends", there are other options that may work for people with different Linux/Windows balance points:
* Wine is surprisingly good these days for a lot of software. If you only have an app or two that need Windows it is probably worth trying Wine to see if it meets your needs.
* Similarly, if gaming is your thing Valve has made enormous strides in getting the majority of games to work flawlessly on Linux.
* If neither of the above is good enough, dual booting is nearly painless these days, with easy setup and fast boot times across both OSes. I have GRUB set to boot Linux by default but give me a few seconds to pick Windows instead if I need to do one of the few things that I actually use Windows for (config sketched below).
Which you go for really depends on your ratio of Linux to Windows usage and whether you regularly need to mix the two.
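For reference, the GRUB side of that is a couple of lines in /etc/default/grub (then regenerate the config with update-grub or grub-mkconfig):

    GRUB_DEFAULT=0      # boot the first menu entry (Linux) by default
    GRUB_TIMEOUT=5      # seconds to interrupt and pick Windows instead
    # or have GRUB remember the last choice:
    #GRUB_DEFAULT=saved
    #GRUB_SAVEDEFAULT=true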
I'm struggling to find an option for running x86 Windows software on macOS/Apple Silicon performantly. (LiDAR point cloud processing.)
The possibilities seem endless and kind of confusing: Windows on ARM vs Rosetta and Wine, and I think there are some other options which use macOS's included virtualization frameworks.
(Edit: just so you know, the UI is a bit weird; there is a bit of a learning curve. But the app behaves in a very sane manner: with every step the previous state is maintained and a new node is created. It takes time to get used to it, but you'll learn to appreciate it.
May your cloud have oriented normals, and your samples be uniformly distributed. Godspeed!)
Have you tried to install Windows 11 ARM under UTM on Mac? UTM is a kind of open source Parallels. Then you'll run x86 software using Windows' variant of Rosetta. Probably slower than Rosetta but perhaps good enough.
I wanted to play around with Windows 11 for a while now. It boots in UTM just to the degree that I can confirm my suspicions that Windows 11 sucks compared to Windows 10, but is not otherwise usable. (MacBook Air M3, slightly outdated macOS)
The thing about WINE is that it's not necessarily solid enough to rely on at work. You never know when the next software upgrade will break something that used to work.
That's always true, of course. But, compared to other options, relying on WINE increases the chances of it happening by an amount that someone could be forgiven for thinking isn't acceptable.
In my mind, I almost feel like the opposite is true. Wine is getting better and better, especially with the amount of resources that Valve is putting into it.
If you want a stable, repeatable way to wrangle a Windows tool: Wine is it. It's easy to deploy and repeat, requires no licenses, and has consistent behavior every time (unless you upgrade your Wine version or something). Great integration with Linux. No Windows Updates are going to come in and wreck your systems. No licensing, no IT issues, no active directory requirements, no forced reboots.
You can fix this issue by using a Wine "bottle manager" like... Bottles. This allows you to easily manage multiple instances of Wine installations (like having multiple Windows installations) with better and easier-to-use tooling around it. More importantly, it also allows you to select across many system-agnostic versions of Wine that won't be upgraded automatically, thus reducing the possibility of something that you rely on breaking.
I used to, a long time ago, but even back then I was getting more value out of q4wine (a defunct project now) than from CodeWeavers' stuff. Granted, I was perhaps too "enthusiast", using git versions of Wine with staging patches and my own patches rolled in, so q4wine's (and I guess now Bottles') more DIY approach won me over.
That all said, I haven't tried CodeWeavers in almost 10 years so it might have improved a lot.
Wine is fantastic, but it is fantastic in the sense of being an amazing piece of technology. It's really lacking bits that would make it a great product.
It's possible to see what Wine as a great product would look like. No offense to crossover because they do good work, but Valve's Steam Play shows what you can really do with Wine if you focus on delivering a product using Wine.
Steam offers two main things:
- It pins the version of Wine, providing a unified stable runtime. Apps don't just break with Wine updates, they're tested with specific Proton versions. You can manually override this and 9 times out of 10 it's totally fine. Often times it's better. But, if you want it to work 10 out of 10 times, you have to do what Valve does here.
- It manages the wineserver (the lifecycle of the running Wine instance) and wine prefix for you.
The latter is an interesting bit to me. I think desktop environments should in fact integrate with Wine. I think they should show a tray icon or something when a Wineserver is running and offer options like killing the wineserver or spawning task manager. (I actually experimented with a standalone program to do this.[1]) Wine processes should show up nested under a wineserver in system process views, with an option to go to the wineprefix, and there should be graphical tools to manage wine prefixes.
To be fair, some of that has existed forever in some forms, but it never really felt that great. I think to feel good, it needs to feel like it's all a part of the desktop system, like Wine can really integrate into GNOME and KDE as a first-class thing. Really it'd be nice if Wine could optionally expose a D-Bus interface to make it so that desktop environments could nicely integrate with it without needing to do very nasty things, but Wine really likes to just be as C/POSIX/XDG as possible so I have no idea if something like that would have a snowball's chance in hell of working either on the Wine or desktop environment side.
Still, it bums me out a bit.
One pet peeve of mine regarding using Wine on Linux is that EXE icons didn't work out of the box on Dolphin in NixOS; I found that the old EXE thumb creator in kio-extras was a bit gnarly and involved shelling out to an old weird C program that wasn't all that fast and parsing the command line output. NixOS was missing the runtime dependency, but I decided it'd be better to just write a new EXE parser to extract the icon, and thankfully KDE accepted this approach, so now KDE has its own PE/NE parser. Thumb creators are not sandboxed on KDE yet, so enable it at your own risk; it should be disabled by default but available if you have kio-extras installed. (Sidenote: I don't know anything about icons in OS/2 LX executables, but I think it'd be cool to make those work, too.) The next pet peeve I had is that over network shares, most EXE files I had wouldn't get icons... It's because of the file size limit for remote thumbnails. If you bump the limit up really high, you'll get EXE thumbnails, but at the cost of downloading every single EXE, every single time you browse a remote folder. Yes, no caching, due to another bug. The next KDE frameworks version fixes most of this: other people sorted out multiple PreviewJob issues with caching on remote files, and I finally merged an MR that makes KIO use kio-fuse when available to spawn thumb creators instead of always copying to a temporary file. With these improvements combined, not just EXE thumbnails, but also video thumbnails work great on remote shares provided you have kio-fuse running. There's still no mechanism to bypass the file size limit even if both the thumbcreator and kio-fuse remote can handle reading only a small portion of the file, but maybe some day. (This would require more work. Some kio slaves, like for example the mtp one, could support partially reading files but don't because it's complicated. Others can't but there's no way for a kio-fuse client to know that. Meanwhile thumb creators may sometimes be able to produce a thumbnail without reading most of the file and sometimes not, so it feels like you would need a way to bail out if it turns out you need to read a lot of data. Complicated...)
I could've left most of that detail out, but I want to keep the giant textwall. To me this little bit of polish actually matters. If you browse an SMB share on Linux you should see icons for the EXE files just like on Windows, without any need to configure anything. If you don't have that, then right from the very first double-click the first experience is a bad one. That sucks.
Linux has thousands of these papercuts everywhere and easily hundreds for Wine alone. They seem small, but when you try to fix them it's not actually that easy; you can make a quick hack, but what if we want to do things right, and make a robust integration? Not as easy. But if you don't do that work, you get where we're at today, where users just expect and somewhat tolerate mediocre user experience. I think we can do better, but it takes a lot more people doing some ultimately very boring groundwork. And the payoff is not something that feels amazing, it's the opposite: it's something boring, where the user never really has any hesitation because they already know it will work and never even think about the idea that it might not. Once you can get users into that mode you know you've done something right.
Thanks for coming to my TED talk. Next time you have a minor pet peeve on Linux, please try to file a bug. The maintainers may not care, and maybe there won't be anyone to work on it, and maybe it would be hard to coordinate a fix across multiple projects. But honestly, I think a huge component of the problem is literally complacency. Most of us Linux users have dealt with desktop Linux forever and don't even register the workarounds we do (any more than Windows or Mac users do, albeit they probably have a lot fewer of them). To get to a better state, we've gotta confront those workarounds and attack them at the source.
If you (or whoever is reading this) want(s) a more refined Wine, I highly recommend CodeWeavers. Their work gets folded back into open source WINE, no less.
> To get to a better state, we've gotta confront those workarounds and attack them at the source.
To my eye, the biggest problem with Linux is that so few are willing to pony up for its support. From hardware to software.
Buy Linux computers and donate to the projects you use!
That's true, but even when money is donated, it needs to be directed somewhere. And one big problem, IMO, is that polish and UX issues are not usually the highest priority to sort out; many would rather focus on higher impact. That's all well and good and there's plenty of high impact work that needs to be done (we need more funding on accessibility, for example.) But if there's always bigger fires to put out, it's going to be rather hard to ever find time to do anything about the random smaller issues. I think the best thing anyone can do about the smaller issues is having more individual people reporting and working on them.
If you're at work, it's probably a Windows shop. Use Windows. At home you can chance a bad update, and you probably also have access to Windows. You can always use a VM; Wine is great in some cases, like WSL. Neither meets every use case.
Why bring Wine into a VM discussion? Just run Windows in a VM too. Problem solved, without getting into whining about Wine not being better than Windows itself.
I work in embedded systems. In that space, it's pretty common to need some vendor-provided tool that's Windows-only. I often need to automate that tool, maybe as part of a CI/CD pipeline or something.
If I were to do it with a Windows VM, I'd need to:
1. Create the VM image and figure out how to build/deploy it.
2. Sort out the Windows licensing concerns.
3. Figure out how to launch my tool (maybe put an SSH server into the VM).
4. Figure out how to share the filesystem (maybe rsync-on-SSH? Or an SMB fileshare?).
If I do it with Wine instead, all I need to do is (see the sketch below):
1. Install some pinned version of Wine.
2. Install my tool into Wine.
3. Run it directly.
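In script form the whole flow is tiny. A sketch, with hypothetical installer/tool names and paths, and assuming the vendor's installer has a silent mode:

    # a project-local prefix pins the Wine state alongside the repo
    export WINEPREFIX="$PWD/.wine-vendor-tool"
    wine vendor-setup.exe /S                          # hypothetical silent install
    wine "C:\\Program Files\\Vendor\\flashtool.exe" \
        --input build/firmware.bin                    # hypothetical CLI tool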
I'm sure with enough tinkering I could get SolidWorks to run. The thing is, I don't want to spend time tinkering, I want to spend time doing. WSL2 gives me the optimal solution for all of that + dev.
I really want to like Windows 11, and I enjoy using WSL, but Microsoft treats me too much like an adversary for me to tolerate it as a daily driver. Only a complete scumbag of a product manager would think pushing Candy Crush ads is a good idea.
I’ve got an airgapped Toughbook that I use for the few Windows apps I really need to talk to strange hardware.
You don't need LTSC, you just need Windows Pro versions.
Lots of people bitch and moan about Windows problems that only exist because they buy the cheaper "Home" or whatever license and complain that Microsoft made different product decisions for average users than for people who have bought the explicitly labeled "power user" version.
Remember, the average computer user IS a hostile entity to Microsoft. They will delete System32 and then cry that Windows is so bad! They will turn off all antivirus software and bitch about Windows being insecure. They refuse to update and then get pwned and complain. They blame Microsoft for all the BSODs that were caused by Nvidia's drivers during the Vista era. They will follow a step by step procedure in some random forum from ten years ago that tells them to turn off their entire swap file despite running with lots of RAM and spinning rust and then bitch that Windows is slow.
Don't expect Microsoft to not deal with morons using their software. Buy the Pro versions if you don't want the version meant for morons.
I shouldn’t need to spend this much time and energy turning off AI rubbish, bypassing cloud features, or knobbling telemetry and ads because some shitbag at Microsoft decided this was a good way of getting a promotion.
My computer is supposed to work for me, not the other way around.
> For example, I use SolidWorks, so I need to run Windows.
Right. One of the things a lot of people don't get is the extent to which multidisciplinary workflows require Windows. This is particularly true of web-centric software engineers who simply do not have any exposure to the rest of the engineering universe.
Years ago this was the reason we had to drop using Raspberry Pi's little embedded microcontroller. The company is Linux-centric to such an extent that they simply could not comprehend that telling someone "Just switch to Linux" lands somewhere between impossible and nonsensical. They were, effectively, asking people to upend their PLM process just for the sake of using a little $0.50 part. You would have to do things like store entire OS images and configurations just to be able to reconstruct and maintain a design iteration from a few years ago.
WSL2 is pretty good. We still haven't fully integrated it into PLM workflows, though. That said, what we've done on our machines is install a separate SSD for WSL2. With that in place, backing up and maintaining Linux distributions, or distributions created in support of a project, is much, much easier. This effectively isolates WSL2 distributions from Windows. I can clone that drive and move it from a Windows 10 machine to a Windows 11 machine and life is good.
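The per-project distribution idea is also easy to script with the built-in export/import (names and paths are examples):

    rem snapshot a base distro, then stamp out a copy per project on the
    rem dedicated SSD
    wsl --export Ubuntu D:\wsl\ubuntu-base.tar
    wsl --import ProjectX D:\wsl\ProjectX D:\wsl\ubuntu-base.tar
    wsl -d ProjectX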
For AI workflows with NVIDIA GPUs, WSL2 is less than ideal. I don't know if things have changed in this domain since I last looked. Our conclusion from a while back was that, if you have to do AI with the usual toolchains, you need to be on a machine running Linux natively rather than a VM running under Windows. It would be fantastic if this changed and one could run AI workflows on WSL2 without CUDA and other issues. Like I said, I have not checked in probably a year; maybe things are better now?
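If anyone wants to re-check the current state, the sanity test inside a WSL2 distro is quick (this assumes a recent Windows NVIDIA driver, which exposes the GPU to the guest, plus a CUDA build of PyTorch; package details vary):

    nvidia-smi    # should list the GPU; no driver install inside WSL2 itself
    python3 -c "import torch; print(torch.cuda.is_available())"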
EDIT: The other reality is that one can have a nice powerful Linux machine next to the Windows box and simply SSH into it to work. Most good IDE's these days support remote development as well. If you are doing something serious, this is probably the best setup. This is what we do.
My coworkers stubbornly try to use WSL instead of Linux directly. They constantly run into corner cases and waste time working around them compared to just using Linux. Some tooling detects that it is running on Windows, and some detects that it is running on Linux. In practice, it's the worst of both worlds.
What might your workload be? The only things that aren't working on Linux on day 1 are GPUs, and that's mostly because of kernel/distro timing (we haven't had a GPU release without mainline kernel support in years).
I am into small and portable, decently powerful, high DPI laptops (battery be damned), ideally with touch support. And this category just gets no love in the linux world.
I was holding out hope for the Framework 12", but they cheaped out on the screen to target the student market, with no upgrade option at this point.
Or a way worse touchpad experience. No swipe gestures. No smooth scrolling. Fn keys not working. Or any of a million other issues. I have never been able to install Linux on a laptop and get everything working within a weekend. And then I revert, because I need my computer.
If you're thinking of Apple… as a former Apple owner and current ThinkPad owner… the build quality of Apple is severely overrated. Please come back with comments that are not just shilling.
That was kind of my point: we're still at a stage where checking a list of supported laptops and vendors is pretty much mandatory.
This is totally laptop vendors' fault, but that doesn't change the fact of the matter.
PS: it would be fine if there were a few good options in all categories. Right now I see nothing comparable to an Asus Z13 but with first-class Linux support, for instance.
Why would your primary work device be running an OS not supported by the device vendor? That's just bizarre.
I use Linux as my primary OS, and while Proton/Steam are pretty good now I'm still rebooting into (unactivated) Windows for some games. It's fine. It's also the only thing I use Windows for.
On an unrelated note, I'm frankly confused about who wants Apple's janky OS, because I've been forced to use it for work and it is very annoying.
What modern hardware isn't supported by Linux? I haven't had driver problems in probably over a decade. I don't even target Linux for my builds, it just works. Same with the pile of random laptops I've installed it on. Wifi out of the box etc.
Fingerprint sensors and IR login cameras that are pre-installed on many laptops, and have Windows-only drivers.
As an end-user (yes, I'm an engineer too, but from the perspective of the OS and driver developers I am an end-user) I don't care who is in charge of getting the device to work on an OS; I only care whether it works or not. And these devices don't, on Linux. So, they are broken.
Yesterday, they tried to get a Python library that built a native library using Meson to work. They were working under WSL, but somehow, Meson was attempting to use the MSVC toolchain and failing.
And they were using pip/uv or whatever from Linux, the Linux version.
One of the most common issues is calling a Windows executable from within WSL… it's a "convenience" feature that takes about 2 seconds to disable in the WSL config but causes these kinds of weird bugs.
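For reference, that switch lives in /etc/wsl.conf inside the distro (restart the distro with wsl --shutdown afterwards):

    [interop]
    enabled = false             # don't launch Windows .exes from Linux
    appendWindowsPath = false   # keep Windows directories off $PATH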
For me, the best part of running Linux as the base OS is not having to deal with Windows.
No ridiculous Start menu spam; a sane, non-bloated operating system (imagine being able to update user-space libraries without a reboot, thanks to being able to delete files that other processes still have open!); being able to back up my data at the file level without relying on weird block-level imaging shenanigans; and so much more.
How is inverting the host/guest relationship an improvement on that?
> For me, the best part of running Linux as the base OS is not having to deal with Windows.
This is correct, but let's not pretend that linux is perfect. 99% of linux _for me_ is my terminal environment. WSL delivers on that _for me_.
I don't see any start menu spam because I rarely use it; when I do, I type what I'm looking for before my eyes even move to the start menu.
Oh, and I can play Destiny 2 and other games without shenanigans. I also don't need to figure out why Slack wants to open links in Chromium but Discord in Firefox (I do have to deal with Edge asking to be the default browser, but IMO that's less annoying).
Oh and multi-monitor with multiple DPI values works out of the box without looking up how to handle it in one of the frameworks this app uses.
> when I do I type what I'm looking for before my eyes even move to look at that start menu.
That's a /s, right? When I start typing immediately after the windows button, the initial letters are lost, the results are bad either way, and most turn into just web suggestions rather than things named exactly like the input.
> That's a /s, right? When I start typing immediately after the windows button, the initial letters are lost, the results are bad either way, and most turn into just web suggestions rather than things named exactly like the input.
No, I rarely have issues with search in start menu.
> imagine being able to update user space libraries without a reboot
That's... a very weird criticism to level at Windows, considering that the advice I've seen for Linux is to reboot if you update glibc (which is very much a user space library).
Why? It directly results in almost every Windows update requiring a reboot to apply, compared to usually only an application restart or at most desktop logout/login on Linux.
Having to constantly reboot my computer, or risk missing important security patches, was very annoying to me on Windows.
I've never had to reboot after updating glibc in years of using Linux, as far as I can remember.
Running programs will continue to use the libc version that was on disk when they started. They won't even know glibc was upgraded. If something is broken before rebooting, it'll stay broken after.
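You can see this directly: after an upgrade, processes started beforehand still map the old, now-deleted library file. A crude check (paths vary by distro):
$ sudo grep -l 'libc.*(deleted)' /proc/[0-9]*/maps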
This is not true. Different programs on the same system that interoperate and use different versions of the same shared library can absolutely cause issues.
For a trivial change to glibc, it won't cause issues. But there's a lot of shared libraries and lots of different kinds of changes in different kinds of libraries that can happen.
I still haven't nailed down whether it was due to a shared library update, but just the other day, after running upgrades, I was unable to su or sudo / authenticate as a user until after rebooting.
It does happen, but it's pretty rare compared to Windows in my experience, where inconvenience is essentially guaranteed.
Firefox on Linux did not really enjoy being updated while running, as far as I remember; Chrome was fine with it, but only since it does some extra work to bypass the problem via its "zygote process": https://chromium.googlesource.com/chromium/src/+/main/docs/l...
I responded "This is not true" to a sibling comment about this same topic, but about "shared libraries", which is the opposite problem (multiple programs could load the same shared library and try to interact).
This is absolutely not true for Linux kernel updating. While you won't be using the new kernel before rebooting, there's 0 risk in not rebooting, because there's exactly 1 version of the kernel running on the machine -- it's loaded into memory when your computer starts.
There's of course rare exceptions, like when a dynamically linked library you just installed depends on a minimum specific version of the Linux kernel you also just installed, but this is extremely rare in Linux land, as backwards compatibility of programs with older kernels is generally a given. "We do not break userspace"
One problem with not rebooting after a kernel update is drivers. They aren't all built in.
Most distros leave the current running kernel and boot into the new one next time.
Some, like Arch, overwrite the kernel on an update, so modules can’t be loaded. It is a shock the first time you plug in a USB drive and nothing happens.
Windows at its core just does not seem like a serious operating system to me. Whenever there are two ways to do something, its developers seem to have picked the non-reasonable one compared to Unix – and doing that for decades adds up.
But yes, first impressions undoubtedly matter too.
I have no idea what Windows does with the various network services, but my Pi-hole gets rate-limited when it connects to the network: there are just constant DNS lookups to countless MS domains, far beyond what could reasonably be expected from a barebones install.
This isn't even a corpo-sloptop with Qualys and Zscaler and crap running, just a basic Windows box I rarely boot. It's deeply offensive to me.
When you compare things at the API level, NT is generally superior to POSIX: just look at what a mess fork() is, for one example, or fd reuse, or async I/O.
So, basically yesterday; and it's not the default the way it is with execve, and you can never know whether the command you're calling implements the same escaping rules or does something different.
Care to explain how fork "breaks" threaded apps? You can't mix them for doing multiprocessing, but it's fine if you use one model or the other.
Win10 has been around for literally a decade now. So long, in fact, that it's going out of support.
fork() breaks threaded apps by forking the state of all threads, including any locks (such as e.g. the global heap lock!) that any given thread might hold at that moment. In practice this means that you have to choose either fork or threads for your process. And this extends to libraries - if the library that you need happens to spawn a background thread for any reason, no more fork for you. On macOS this means that many system APIs are unusable. Nor is any of this hypothetical - it's a footgun that people run into regularly (just google for "fork deadlock") even in higher level languages such as Python.
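A minimal sketch of the deadlock in Python (the explicit lock stands in for hidden ones like the allocator's; this script intentionally hangs):
import os, threading, time

lock = threading.Lock()

def worker():
    with lock:         # worker grabs the lock...
        time.sleep(2)  # ...and holds it across the fork below

threading.Thread(target=worker).start()
time.sleep(0.5)        # make sure the worker holds the lock by now
pid = os.fork()        # child inherits the locked lock, but not the worker thread
if pid == 0:
    lock.acquire()     # deadlock: the only thread that could release it doesn't exist here
    os._exit(0)
os.waitpid(pid, 0)     # parent hangs waiting on the stuck child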
How long has fork() existed? Is it less than 10 year? Is it much much more?
> just google for "fork deadlock"
I did; the results were completely unrelated to what you're talking about.
Anyway, libraries spawning hidden threads… I bet they don't even bother to use reentrant functions? I mean… OK, they are written by clueless developers. There are lots and lots of them, and they exist on Windows too. What's your point?
It is not the standard in Windows land to run processes by handing them fifty commandline arguments. Simple as that. Win32 apps have strong support for selecting multiple files to pass to the app from within the file select dialog, as long as you follow the documentation.
It's like complaining that Unix is hard to use because I can't just drop a dll into a folder to hook functionality like I can on Windows. It's a radically different design following different ideologies and you can't magically expect everything to transfer over perfectly. If you want to do that on Linux land, you learn about LD_PRELOAD or hook system calls.
If you want to build powerful, interoperable modules that can pipe into each other and compose on the command line, PowerShell has existed since 2006. IMO, passing well-formed objects from module to module is RADICALLY better than passing around text strings that you have to parse or mangle or fuck with if you want actual composability. PowerShell's equivalent of ls doesn't have to go looking at whether it is being called by an actual terminal or by an app pipe, for example, in order to support weird quirks. PowerShell's support for Windows internals and functionality is also just radically better than mucking around in "everything is a file" pseudo-folders that are a hacky way to represent important parts of the operating system, or calling IOCTLs.
I also think the way Windows OS handles scheduled tasks and operations is better than cron.
I also think Windows Event logging is better than something like dmesg, but that's preference.
Also, EVERYTHING in Windows land is designed around remote administration. Both the scheduled tasks and event logging systems are transparently and magically functional from other machines if you have your AD setup right. Is there anything in Linux land like AD?
> Win32 apps have strong support for selecting multiple files to pass to the app from within the file select dialog
The problem is when you want to click a file on your file manager and you want it to open in the associated application. Because the file manager can only hope the associated application parses the escapes the same way it generates them. Otherwise it's file not found :)
I'm not going to bother to reply point by point since you completely missed the point in the first few words.
I have used Windows for years, and I loved it. I never understood why Linux and Mac users kept bashing on it. I just didn't know any better.
These days I'm avoiding booting into Windows unless I really have no choice. The ridiculousness of it is simply limitless. I would open a folder with a bunch of files in it and Explorer shows me a progress bar for nearly a minute. Why? What the heck is it doing? I just want to see the list of files; I'm not even doing anything crazy. Why the heck does no other file navigator do that: not on Linux, not on Mac, darn, even the specialized apps built for Windows work fine, but the built-in thing just doesn't. What gives? I would close the window and re-open the exact same folder not even three minutes later, and it shows the progress bar again. "WTF? Can't you just cache it, you fucker? Da fuk you doing?"
Or I would install an app. And seconds after installing it I would try to search for it in the Start menu, and guess what? Windows instead opens Edge and searches the web for it. wat? Why the heck can't I remove that Edge BS once and for all? Nope, not really possible. wat?
Or, like, why can't I ever rebind Win+L? I can disable it but can't rebind it; there's just no way. Is it trying to operate my computer, or does the 'S' in 'OS' stand for 'soul'?
Or for whatever reason it can't even get the time right. Every single time I boot into it, my clock time is wrong. I have to manually re-sync it. It just doesn't do it, even with the location enabled. Stupid ass bitch.
And don't even let me rant about those pesky updates.
I dunno, I just cannot not hate Windows anymore. Even when I need to boot in it "for just a few minutes", it always ends up taking more time for some absolute fiddlesticks made of bullcrap. Screw Windows! Especially the 11 one.
> Or for whatever reason it can't even get the time right. Every single time I boot into it, my clock time is wrong.
Dual booting will do that, because Linux and Windows treat the system clock differently. By default, Windows sets the hardware clock to local time, while Linux keeps it in UTC and applies the timezone offset on top.
The most reliable fix is to get Windows to use UTC for the hardware clock, which is usually the default on Linux. (It's more reliable because it means the hardware clock doesn't need to be adjusted when DST begins or ends, so there's no need for the OSs to cooperate on that.)
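For reference, the two knobs, either of which resolves the mismatch (the Windows one is the registry flag the next comment calls broken; run it from an elevated prompt):
$ timedatectl set-local-rtc 1      # Linux side: use local time like Windows does
> reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1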
That flag has been broken for at least several Windows versions, unfortunately. A shame, given that that's the only sane way of using the RTC in the presence of DST or time zone shifts...
That's exactly the type of Windows-ism I'm talking about. Two options (use UTC or the local time), and Windows chose to pick the nonsensical one.
Yeah, well, I use NTFS on Linux. It somehow knows how to treat the partitions, even though it can't fix the issues when they arise (which almost never happens): there's no chkdsk for Linux. So I just don't understand why Windows can't automatically sync the clock (as it's explicitly set to do) when it boots. Why does one have to get creative to fix the darn clock? If I can't even trust the OS to manage the time correctly, what can I trust it with, if anything at all?
I loved Windows XP and Windows 7. They were a bit brittle regarding malware, but I was using a lot of pirated software at the time, so that may have been me. Win 8 was bad UX-wise, but 8.1 resolved a lot of the issues. Since then, though, I've barely touched Windows.
I want an OS, not an entertainment center, meaning I want to launch programs, organize my files, and connect to other computers. Anything that hinders those is bad. I moved away from macOS for the same reason, as they are trying to make those difficult too.
Exactomundo! I'm a software developer, not a florist. I don't care about all those animations, transitions, dancing emojis, styled sliding notifications, windings and dingleberries. If I want to rebind a fucking key I should be able to. If I want to replace the entire desktop with a tiling manager of my choosing — that should be possible. And definitely, absolutely, in no way, should just about any kind of app, especially a web-browser, be shoved in my face. "Edge is not that bad", they would say. And would be completely missing the whole point.
Are you one of those guys that fiddles with registry settings and decrapifiers? To me, it sounds like you turned off file indexing. I turn it off when doing audio recording and yeah, that slows down file browsing.
The reason varies by the decade. Microsoft has a tendency to fix one thing, then break another.
That said, a distaste for advertising goes beyond OCD. Advertisers frequently have questionable ethics, ranging from intruding upon people's privacy (in the many senses of the word) to manipulating people. It is simply something that many of us would rather do without.
Advertising triggers a lot more than OCD in me outside of my start menu. On my machine, where I spend most of my waking hours, it was certainly the last straw for me.
But there's also the thing where Microsoft stops supporting older machines, creating a massive pile of insecure boxes and normie-generated e-waste; and the thing where it dials home constantly; and the thing where they try and force their browser on you, and the expensive and predatory software ecosystem, and the insane bloat, and the requiring a Microsoft account just to use my own computer. Oh yeah, and I gotta pay for this crap?!
I went full Linux back when Windows 11 came out and will only use it if a job requires. Utterly disgusting software.
What makes you think I’m not chill already? You engaged in a slightly rude trope, and I provided a very mild push back, at least from my point of view the stakes are all correctly low.
But you still get the worst of the Windows world, which is more than many are willing to deal with. I used Windows for years as my main gaming OS, but after they announced W11 as the only way forward, I switched. Linux on the desktop was like a breath of fresh air. I'll leave it at that.
If I were to run an OS in a VM, it'd be Windows, not Linux.
You obviously don't. Maybe WSL is the best compromise for people who need both Windows and Linux.
But it's ridiculous to think that WSL is better than just Linux for people who don't need Windows at all. And that's kind of what the author of this thread seems to imply.
I think that case could be made. For example for people who have a laptop that is not well supported by linux. With WSL they get linux and can use all of their hardware.
If it’s impossible to massage Linux into working well with your laptop – sure. But you’re missing out so much, like, well, not having to deal with Windows.
Similarly powerful would be totally fine. More powerful really is silly. Personally I couldn't make a lot of my workflows work very well with WSL2. Some of the stuff I run is very memory intensive and the behavior is pretty bad for this in WSL2. Their Wayland compositor is also pretty buggy and unpolished last I used it, and I was never able to get hardware acceleration working right even with the special drivers installed, but hopefully they've made some progress on that front.
Having Windows and Linux in the same desktop the way that WSL2 does obviously means that it does add a lot of value, but what you get in the box isn't exactly the same as the thing running natively. Rather than a strict superset or strict subset, it's a bit more like a Venn diagram of strengths.
By default WSL2 grabs half of the system memory, but that's adjustable. The biggest pain point I have is running servers inside WSL that serve to non-localhost (localhost works auto-magically).
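For anyone searching later: the memory cap goes in %UserProfile%\.wslconfig on the Windows side, and the usual workaround for non-localhost serving is a portproxy (the port and the <WSL-IP> placeholder here are illustrative):
[wsl2]
memory=8GB

> netsh interface portproxy add v4tov4 listenport=8000 listenaddress=0.0.0.0 connectport=8000 connectaddress=<WSL-IP>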
I am surprised you had such problems with wsl2 graphics acceleration. That just worked for me, including CUDA accelerated workloads on the linux side.
As everyone has said, WSL2 is actually a virtual machine, and it is what most people are actually using now. That said, I feel the need to chime in and say I actually love WSL1, and I love Windows NT the kernel. It bums me out all the time that we probably won't get major portions of the NT kernel, even an out-of-date version, in some open source form.
I like Linux, and I use Linux as my daily desktop, but it's not because I think Linux or even UNIX is really that elegant. If I had to pick a favorite design it would be Windows NT for sure, even with all its warts. That said, the company behind Windows NT really likes to pile a lot of shit I hate on top of that pretty neat OS design, and now it's full of dubious practices. Automatic "malware submission" on by default, sending apps you download and compile yourself to Microsoft and even executing them in a VM. Forced updates with versions that expire. Unbelievable volumes of network traffic, exfiltrating untold amounts of data from your local machine to Microsoft. Ads and unwanted news all over the UI. Increasing insistence in using a Microsoft account. I could go on and on.
From a technical standpoint I do not think the Linux OS design is superior. I think Linux has some amazing tools and APIs. dmabufs are sweet. Namespaces and cgroups are cool. BPF and its various integrations are borderline insane. But at its core... it's kinda ugly. These things don't all compose nicely, and the kernel is an enormous, hard-to-tame beast. Windows NT has its design warts too, all over: the amount of involvement the kernel has in the GUI for historical reasons, the enormous syscall surface area, untold amounts of legacy cruft. But all in all, I think the core of what they made is really cool, the subsystems concept is super cool, and it is an OS design that has stood up well to time. I also think the PE format is better than ELF, and that it is literally better for the capabilities it doesn't have w.r.t. symbols. Sure it's ugly, in part due to the COFF lineage, but it's functionally very well done IMO.
I feel the need to say this because I think I probably came off as a hater, and tbh I'm not even a hater of WSL2. It's not as cool as WSL1 and subsystems and pico processes, but it's very practical and the 9p bridge works way better than it has any right to.
Turns out that it's easier to emulate a CPU than syscalls. The CPU churns a lot less, too, which means that once things start working things tend to keep working.
You're thinking of the POSIX personality of Windows NT of old. This was based on Interix and has been deprecated about two decades ago and is now buried so deep that it couldn't be revived.
The new WSL1 uses kernel call translation, like Wine in reverse and WSL2 runs a full blown Linux kernel in a Hyper-V VM. To my knowledge neither of these share anything with the aforementioned POSIX subsystem.
I mean... WINE does the same for Windows, but Microsoft refuses to release docs for all its internal APIs. They get to ship WSL by relying on Linux's openness, while refusing the same openness themselves.
A big one of those reasons was Docker. Docker was still fairly niche when WSL was released in 2016, but demand for it grew rapidly, and I don't think there was any realistic way they could have made it work on the NT kernel.
I think the two fairly deep integrations are Windows' ability to navigate WSL's filesystem and wslg's fairly good ability to serve up GUIs.
The filesystem navigation is something that AFAIK can't easily be replicated. wslg, however, is something that other VMs have and can do. It's a bit of a pain, but doable.
What makes WSL nice is the fact that it feels pretty close to being a native terminal that can launch native application.
I do wish that WSL1 had been taken further. My biggest gripe with WSL is the fact that it is a VM and thus takes a large memory footprint. It'd be nice if the WSL1 approach had panned out and we instead had a nice clean compatibility wrapper over winapi for Linux applications.
> The filesystem navigation is something that AFAIK can't easily be replicated.
The filesystem navigation getting partially open sourced is one of the more interesting parts being open sourced per this announcement. The Plan9 file server that serves files from Windows into Linux is included in the new open source dump. (The Windows filesystem driver that runs a Plan9 client on the Windows side to get files from Linux is not in the open source expansion.)
It's still fascinating that the whole thing is Plan9-based, given that the OS never really succeeded, but apparently its network file system makes a really good file-sharing compatibility layer between Linux and Windows.
> I do wish that WSL1 was taken further.
WSL1 survives, and there's still a chance it will see more work eventually, as the tides shift. I think the biggest thing that blocked WSL1 from more success was the lack of partners and user interest in Windows Subsystem for Android apps. That could still have been a good idea for Windows had it been allowed "real" access to Google Play Services and the Play Store, rather than a second-rate copy of Amazon's copy of Google Play Services and the Fire App Store. An actual Google partnership seems doomed, given that one of the reasons to make Windows Subsystem for Android competitive was fear of ChromeOS, but Google still loves to talk about how "Open" Android is despite the Google Play Services moat, and that still sounds like something a court with enough fortitude could challenge (even if it is probably unlikely to happen).
> The integration between Windows and the WSL VM is far deeper than a typical VM hypervisor.
Sure, but I never claimed otherwise.
> You cannot claim with a straight face that Virtualbox is easier to use.
I also didn't claim that. I wasn't comparing WSL to other virtualization solutions.
WSL2 is cool. Linux doesn't have a tool like WSL2 that manages Linux virtual machines.
The catch-22 is that it doesn't need one. If you want to drop a shell into a virtual environment, Linux can do that six ways to Sunday with no hardware VM in sight, using the myriad namespacing technologies available.
So while you don't have WSL2 on Linux, you don't need it. If you just want a ubuntu2204 shell or something, and you want it to magically work, you don't need a huge thing with tons of integration like WSL2. A standalone program can provide all of the functionality.
I have a feeling people might actually be legitimately skeptical. Let me prove this out. I am on NixOS, on a machine that does not have distrobox. It's not even installed, and I don't really have to install it since it's just a simple standalone program. I will do:
$ nix run nixpkgs#distrobox enter
Here's what happened:
$ nix run nixpkgs#distrobox enter
Error: no such container my-distrobox
Create it now, out of image registry.fedoraproject.org/fedora-toolbox:latest? [Y/n]: Y
Creating the container my-distrobox
Trying to pull registry.fedoraproject.org/fedora-toolbox:latest...
...
0f3de909e96d48bd294d138b1a525a6a22621f38cb775a991974313eda1a4119
Creating 'my-distrobox' using image registry.fedoraproject.org/fedora-toolbox:latest [ OK ]
Distrobox 'my-distrobox' successfully created.
To enter, run:
distrobox enter my-distrobox
Starting container... [ OK ]
Installing basic packages... [ OK ]
Setting up devpts mounts... [ OK ]
Setting up read-only mounts... [ OK ]
Setting up read-write mounts... [ OK ]
Setting up host's sockets integration... [ OK ]
Integrating host's themes, icons, fonts... [ OK ]
Setting up distrobox profile... [ OK ]
Setting up sudo... [ OK ]
Setting up user groups... [ OK ]
Setting up user's group list... [ OK ]
Setting up existing user... [ OK ]
Ensuring user's access... [ OK ]
Container Setup Complete!
[john@my-distrobox]~% sudo yum install glxgears
...
Complete!
[john@my-distrobox]~% glxgears
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
302 frames in 5.0 seconds = 60.261 FPS
^C
No steps omitted. I can install software, including desktop software, including things that need hardware acceleration (yep, even on NixOS where everything is weird) and just run them. There's nothing to configure at all.
That's just Fedora. WSL can run a lot of distros, including Ubuntu. Of course, you can do the same thing with Distrobox. Is it hard? Let's find out by using Ubuntu 22.04 instead, with console output omitted:
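(The commands themselves, for reference:)
$ distrobox create --image ubuntu:22.04 --name ubuntu-22
$ distrobox enter ubuntu-22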
To be completely, 100% fair: running an old version of Ubuntu like this does actually have one downside: it triggers OpenGL software rendering for me, because the OpenGL drivers in Ubuntu 22.04 are too old to support my relatively new RX 9070 XT. You'd need to install or copy in newer drivers to make it work. There are in fact ways to do that (Ubuntu has no shortage of repos just for getting more up-to-date drivers and they work inside Distrobox pretty much the same way they work in real hardware.) Amusingly, this problem doesn't impact NVIDIA since you can just tell distrobox to copy in the NVIDIA driver verbatim with the --nvidia flag. (One of the few major points in favor of proprietary drivers, I suppose.)
On the other hand, even trying pretty hard (and using special drivers) I could never get hardware acceleration for OpenGL working inside of WSL2, so it could be worse.
That aside, everything works. More complex applications (e.g. file browsers, Krita, Blender) work just fine and you get your normal home folder mapped in just like you'd expect.
> I get that WSL is revolutionary for Windows users
It is... These days I'm working on bringing a legacy Windows-only application into the 21st century.
We are throwing a WSL container behind it and relying on the huge ecosystem of server software available for Linux to add functionality.
Yes that stuff could run directly on windows, but you'd be a lot more limited in what's supported. Even for some restricted values of supported. And you'd have to reinvent the wheel for a few parts.
With WSL you can use “Linux the good parts” (command line tools, efficient-enough paradigms for fork() servers) and completely avoid X Windows, the Wayland death spiral, 100 revisions of Gnome and KDE that not so much reinvent the wheel but instead show us why the wheel is not square or triangular…
It's all opinion of course, but IMO Windows is the most clumsy and unintuitive desktop experience out there. We're all just used to the jank upon jank that we think it's intuitive.
KDE is much more cohesive, stable, and has significantly more features.
It blows my mind that people can complain about the direction KDE is going when trying to paint a picture about how it's so much nicer to use Windows. I know the boiling frog experiment is fake, but just checking: are you sure the water isn't getting a little uncomfortably warm in the Windows pool right now?
Agreed. I used tiling WMs for a long while (ion3, XMonad) and it was such a productivity boost.
Then I was forced to use a Mac for work, so I was using a floating WM again. On my personal machine, ion3 went away and I never fully got around to migrating to i3.
By the time I got enough free time to really work on my personal setup, it had accumulated two huge monitors and was a different machine. I found I was pretty happy just scattering windows around everywhere. Especially with a trackball's cursor throw. This was pretty surprising to me at first.
Anyway this is just my little personal anecdote. If I go back to a Linux install I'll definitely have to check out i3 again. Thanks for reminding me :)
Compiling and testing cross-platform software for Linux lately (Ubuntu and similar)... You can't even launch an application or script without the CLI. Bad UX, IMO. For these decisions there are always reasons, a justification, something about security. I don't buy it.
I compile my program using WSL, or Linux native. It won't launch; not an executable. So, into the CLI: chmod +x. Ok. It's a compiled binary program, so semantically I don't see the purpose of this. Probably another use case bleeding into this. (I think there's a GUI way too). Still can't double click it. Nothing to launch from the right-click menu. After doing some research, it appears you used to be able to do it (Ubuntu/Gnome[?]), but it was removed at some point. Can launch from CLI.
I make a .desktop file and a shell script to move it to the right place. Double-click the shell file: it opens a text editor. Search the right-click menu: still no way. To the CLI we go; chmod +x, and launch it from the CLI. Then, after adding the desktop icon, I can launch it. (The .desktop file itself is just the standard minimal form; see the sketch below.)
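(Name and path here are made up:)
[Desktop Entry]
Type=Application
Name=MyProgram
Exec=/home/me/bin/myprogram
Terminal=false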
On windows, you just double click the identified-through-file-extension executable file. This, like most things in Linux, implies the UX is designed for workflows I don't use as a PC user. Likely servers?
This sounds very weird to me. Any sane build toolchain should produce a runnable executable that already has +x. What did you use to compile it?
Removing double-click to run an executable binary certainly sounds like something either Gnome or Ubuntu would do, but thankfully that's not the only option in town. In KDE I believe the same exact Windows workflow would just work.
Yeah, the typical way programs are run is via a .desktop file that's installed. The reason nobody cares is that running random executables that have a GUI is a pretty rare use case on Linux desktops. We don't have wizards or .msi installers; we just install using the package manager, and then it shows up where it needs to.
If you're on KDE, you can right-click the start menu and add the application. Also, right-click menu should give you a run option.
Just FYI, you may also enjoy systemd-nspawn (via machinectl). It's essentially the same thing as toolbx, but it handles the system bus much more sanely, and you can see everything running inside the guest from the host's systemctl.
This is very much YMMV thing. There is no objectively best platform. There are different users and requirements.
I've been a software developer for 20 years, and in _my_ opinion Windows is the best platform for professional software development. I only drop down to Linux when I need some of the excellent POSIX tools, but my whole work ergonomics is based on Windows shortcuts and Visual Studio.
I’ve been forced to use Mac for the past 1.5y but would prefer not to.
Why would Windows be superior for me? Because that’s where the users are (for the work stuff I did before this latest gig). I started in real time graphics and then spent over a decade in CAD for AEC (developing components for various offerings including SketchUp). The most critical thing for the stuff I did was the need to develop on the same platform as users run the software - C++ is only theoretically platform independent.
Windows APIs are shit, for sure, for the most part.
But still, from this pov, WSL was and will be the best Linux for me as well.
I fully agree with you - "YMMV" is the one true take. Visual Studio has never been particularly attractive to me, my whole workflow is filled with POSIX tools, and my code mostly runs on Docker and Linux servers. Windows is just another thing to worry about for me, be it having to deal with the subtle quirks of WSL not running on raw metal or having to deal with running UNIX-first tooling (or finding alternatives) on Windows. If it wasn't for our work provided machines being Windows by default, and at home, being into VR gaming and audio production (mostly commercial plugins), I'd completely ditch Windows in a heartbeat.
If Windows provided easier access to hardware, especially USB, from WSL it would be nice. In fact, if WSL enumerated devices and dealt with them as native Linux does, even better.
Windows has many useful software that is not available on Linux.
So, for me Windows + WSL is more productive than just using Linux.
The UI is still better on Windows (basic utilities like File Explorer and config management are better on Windows). No remoting software beats RDP: when I remote into a Windows workstation through RDP, I can't tell the difference. VNC is always janky. And of course there are Word/Excel/Illustrator, which are simply not available on Linux.
File Explorer is better on Windows? How? I tried Windows 11 for the first time a month ago and it takes several seconds for file explorer to open, it's asynchronously loading like 3 different UI frameworks as random elements pop in with no consistency, there's two different rightclick menus because they couldn't figure out how to make the new one have all the functionality of the old one so they decided to just keep the old one behind "Show More Options", and it's constantly pushing OneDrive in your face. I'm offended that this is what they thought is good enough to ship to a billion users.
The File Explorer on Windows 11 is the worst experience ever. Windows 7 was snappy as hell, but I don't know what they did to damage it so badly. I use XYplorer, which is written in Visual Basic (so a 32-bit application), but it is so much faster than the native Explorer (and is full of features).
> No Remoting Software beats RDP. When I remote to a Windows workstation through RDP, I can't tell the difference. VNC is always janky
Any recent distro running Gnome or KDE has built-in support for connecting and hosting an RDP session. This used to be a pain point, you don't need to use VNC anymore.
It's actually worse on windows since you need to pony up for a pro license to get RDP hosting support...
> The UI is still better on Windows(basic utilities like File Explorer and Config Management is better on Windows).
5 years ago, we would be comparing old GNOME 3 or KDE Plasma 5 on X11 and Windows 10. I would be forced to agree. The Windows UI was better in many ways at that point.
Today we have KDE Plasma 6.3 on Wayland and Windows 11. This is an entirely different ball game. It's hard to explain. Wayland feels like it has taken an eternity to lift off, like well over a decade, but now things change dramatically on the scale of months. A few months ago HDR basically didn't work anywhere. Right now it's right in front of me and it works great. You can configure color profiles, SDR applications don't break ever, and you even get emulated brightness. Display scaling? Multiple monitors with different scale factors? What about one monitor at 150% and another at 175% scale factor? What about seamlessly dragging windows between displays with different scale factors? Yes, Yes, Yes, and Yes. No `xrandr` commands. You configure it in the GUI. I am dead serious.
File Explorer? That's the application that has two context menus, right? I think at this point Windows users might actually be better off installing KDE's Dolphin file manager in Windows for the sake of their own productivity. If I had the option to use Windows File Explorer on KDE I would impolitely decline. I have not encountered any advertising built into my file explorer. I do not have an annoying OneDrive item in the menu on the left. I have a file tree, a list of Places, and some remote file shares. When I right click it does not freeze, instead it tends to show the context menu right away. And no, I'm not impressed by Tabs and Dark Mode, because we've had that on Linux file managers for so long that some people reading this were probably born after it was already supported.
Windows still has the edge in some areas, but it just isn't what it used to be. The Linux UI is no longer a toy.
> When I remote to a Windows workstation through RDP, I can't tell the difference. VNC is always janky.
I don't really blame you if you don't believe me, but I, just now, went into System Settings, went to the Remote Desktop setting, and clicked a toggle box, at which point an RDP server spawned. Yes, RDP, not VNC, not something else. I just logged into it using Reminna.
Not everything on Linux is seamless and simple like this, but in this case it really is. I'm not omitting a bunch of confusing troubleshooting steps here, you really can do this on a modern Linux setup, with your mouse cursor. Only one hand required.
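Connecting from another machine is just as unexciting; any RDP client works, e.g. with FreeRDP (hostname and user made up):
$ xfreerdp /v:mydesktop.local /u:john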
> Of course there is Word/Excel/Illustrator which is simply not available on Linux
True, but if you want to use Linux and you're held back by needing some specific software, maybe it's not the end of the world. You have many options today. You can install VirtualBox and run your spreadsheets in there. You can use Office 365 in a browser. You can run Crossover[1] and emulate it. You can use an office alternative, like possibly WPS Office. You can dual boot. You can go the crazy route and set up a KVM GPU passthrough virtual machine, for actually native performance without needing to reboot.
The point I'm making here is not "Look, Linux is better now! Everyone go use it and get disappointed ASAP!" If you are happy with Windows, there's literally no point in going and setting yourself up for disappointment. Most people who use Linux do so because they are very much not happy with Windows. I'm sure you can tell that I am not. However, in trying to temper the unending optimism of Linux nerds, sometimes people go too far the other way and represent Linux as being in far worse of a state than it actually is. It really isn't that bad.
The worst thing about modern Linux is, IMO, getting it to work well on your hardware. Once you have that part figured out, I think modern Linux is a pretty good experience, and I highly recommend people give it a shot if they're curious. I think Bazzite is a really nice distro to throw on a random spare computer just to see what modern Linux is actually capable of. It's not the absolute most cutting edge, but it gives you a nice blend of fairly up-to-date software and a fairly modern RPM ostree base system for better stability and robustness, and it's pretty user-friendly. And if you don't like it, you can easily get a full refund!
> You can use an office alternative, like possibly WPS Office.
Or ONLYOFFICE, which is FOSS (and what I use personally). Or LibreOffice (also free/libre software, of course). I don’t miss MS Office one bit, the compatibility is nothing short of excellent nowadays, and the speed and UX both surpass it.
There are specialized software packages that are Windows-only, of course, but at least office programs ain’t it.
IDK how many VMs you've used, but there has been a lot of work specifically with x86 to make VMs nearly as fast as native. If you interact with cloud services everything you do is likely on a VM.
It's handy if you have other services that are Windows-based, though. And, being a VM, it's fairly convenient to have multiple versions and to back up.
Linux doesn't need VMs, people need VMs. If you spend most of your time in Windows-exclusive apps and use WSL2 on occasion, then you already know what you want, why are you worried about arguing about it on the Internet?
For many software engineers, a lot of our work is Linux, and it wouldn't be atypical to spend most of the time doing Linux development. I work on Linux and deploy to Linux, it's just a no-brainer to run Linux, too, aside from the fact that I simply loathe using modern Windows to begin with.
(Outside of that, frankly, most people period live inside of the web browser, Slack, Discord, and/or Steam, none of which are Windows-exclusive.)
My point isn't that Linux is better than Windows, it's that WSL2 isn't better than literally running Linux. If you need to do Linux things, it is worse than Linux at basically all of them.
You still have to go and make sure that what you want is there and works, but it's not a bad bet. With a few major omissions aside, there is a pretty big library of supported games.
> For anything that is PvP multiplayer, this is very much not a given because of how pervasive kernel-level anti-cheat solutions are today.
To be fair, though, you probably still have a better shot of being able to play the games you want to under Linux than macOS and that doesn't seem to be that bad of an issue for Mac users. (I mean, I'm sure many of them game on PC anyways, but even that considered macOS has greater marketshare than Linux, so that's a lot of people either able to deal with it or have two computers.)
Speaking as a Mac user, it's really bad. Much worse than Linux/SteamOS actually. Not only most games just aren't there, many games that are advertised as Mac-compatible are actually broken because they haven't been updated for a long time, and macOS is not particularly ABI-stable when it comes to GUI. Sometimes they just don't support hi-DPI, so you can play it but forget about 4K. But sometimes it just straight up won't start.
I do indeed have two computers with a KVM setup largely for this reason, with a secondary Windows box relegated to gaming console role.
Fair point. I know it was rough when Apple broke away from 32-bit.
Still, the point is that you can make it work if you want to make it work. Off the top of my head:
- Two computers, completely separate. Maybe a desktop and a laptop.
- Two computers, one desk and a KVM like you suggest.
- Two computers, one desk. No proper KVM, just set up remote desktop and game streaming.
- (on Linux) KVM with GPU passthrough, or GPU passthrough with frame relay. One computer, one desk.
- Game streaming services, for more casual and occasional uses.
- Ordinary virtualization with emulated GPU. Not usually great for multimedia, but still.
- And of course, Steam Play/Heroic Launcher/WINE. Not as applicable on macOS, but I know CodeWeavers does a lot to keep macOS well-supported with Crossover. With the aforementioned limitations, of course.
Obviously two computers has a downside, managing two boxen is harder than one, and you will pay more for the privilege. On the other hand, it gives you "the real thing" whenever you need it. With some monitors having basic KVM functionality built-in, especially over USB-C, and a variety of mini PCs that have enough muscle to game, it's not really the least practical approach.
I suspect for a lot of us here there is a reasonable option if we really don't want to compromise on our choice of primary desktop OS.
Windows supports Linux because the latter is open source, it's a lot easier than the reverse.
Linux, on the other hand, barely supports Windows because the latter is closed. And not just closed: Windows components ship updates that specifically check whether they are running under Wine and refuse to run, actively hostile to a potential Linux host.
The two are not equivalent, nobody in the Linux kernel team is actively sabotaging WSL, whereas Microsoft is actively sabotaging wine.
Do you have a link to where I can read more about this? My understanding is that Microsoft saw Wine as inconsequential to their business, even offloading the Mono runtime to them [1] when they dropped support for it.
> Until 2020, Microsoft had not made any public statements about Wine. However, the Windows Update online service will block updates to Microsoft applications running in Wine. On 16 February 2005, Ivan Leo Puoti discovered that Microsoft had started checking the Windows Registry for the Wine configuration key and would block the Windows Update for any component.[125] As Puoti noted: "It's also the first time Microsoft acknowledges the existence of Wine."
Microsoft seems to be taking an outside-in, "component at a time" approach to open sourcing Windows: Terminal, Notepad, Paint, Calculator, the new Edit.com replacement, a lot of WSL now, etc.
This approach has been fascinating so far, but yeah not "exciting" from "what crazy things can I do with Windows like put it in a toaster" side of things.
It would be great to see at least a little bit more "middle-out" from Windows Open Source efforts. A minimal build of the NT Kernel and some core Windows components has been "free as in beer" for a while for hobby projects with small screens if you really want to try a very minimal "toaster build" (there's some interesting RPi out there), but the path to commercialization is rough after that point and the "small screens" thing a bit of a weird line in the sand (though understandable given Microsoft's position of power on the desktop and sort of the tablet but not phone).
The NT Kernel is one of the most interesting microkernels left in active use [0], especially given how many processor architectures it has supported over decades and how many it still supports (even the ones that Windows isn't very commercially successful on today). It could be a wealth of power to research and academia if it were open source, even if Microsoft didn't open source any of the Windows Subsystems. It would be academically interesting to see what sort of cool/weird/strange Subsystems people would build if NT were open source. I suppose Microsoft still fears it would be commercially interesting, too.
[0] Some offense, I suppose to XNU here. Apple's kernel is often called a microkernel for its roots from the Mach kernel, but it has rebuilt some monoliths on top of that over the years (Wikipedia more kindly calls it a "hybrid kernel"), and Mach itself is still so Unix flavored. NT's "object oriented" approach is rather unique today, with its more VMS heritage, a deeply alternate path from POSIX/Unix/Linux(/BSD).
I doubt it would happen, large projects that aren't open source from the onset and are decades old can have licensed or patented code, Microsoft would have to verify line by line that they can open source it.
Wait long enough and it will happen, the question is just "how long". (Microsoft has open-sourced OS and languages from the 1980s) Some days it seems like Microsoft is more interested in Azure, Copilot and GAME PASS and Windows is an afterthought.
I would certainly love it if Microsoft stopped trying to sell Windows and just open sourced it. I think Windows is a much more pleasant desktop operating system than Linux, minus all the ads and mandatory bloat Microsoft has put in lately. But if Windows was open source the community could just take that out.
I really don't see it happening any time in the next decade at least, though. While Windows might not be Microsoft's biggest focus any more it's still a huge income stream for them. They won't just give that up.
I preferred WSL to running linux directly even though I had no need for any windows only software. Not having to spend time configuring my computer to make basic things work like suspend/wake on lid down/up, battery life, hardware acceleration for video playback on the browser, display scaling on external monitor and so on was reason enough.
I use Windows with wsl for work, and Linux and MacOS at home. Windows is a mess, it blows my mind that people pay for it. Sleep has worked less reliably on my work machine than my Fedora Thinkpad, and my Fedora machine is more responsive in pretty much every way despite having modest specs in comparison. Things just randomly stop working on Windows in a way that just doesn't happen on other OSes. It's garbage.
That was certainly not the case ~2 years ago, the last time I installed linux on a laptop.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget, and for each laptop searched "<laptop name> linux reddit" on Google and filtered for results <1 year old. Every laptop's reports included some bug or other.
The laptop with the best reported linux support seemed to be Thinkpad P14s but even there users reported tweaking some config to get fans to run silently and to make the speakers sound acceptable.
You are going to find issues for any computer for any OS by looking things up like this.
And yeah, it's best to wait a bit for new models, as support is sorted out, if the manufacturer doesn't support Linux itself. Or pick a manufacturer that sells laptops with Linux preinstalled. That makes the comparison with a laptop with Windows preinstalled fair.
> You are going to find issues for any computer for any OS by looking things up like this
I wasn't cherry-picking things. I literally searched for laptops available in my budget in my country and looked up what was the linux support like for those laptops as reported by people on reddit.
> Or pick a manufacturer that sells laptops with Linux preinstalled
I suppose you are talking about System76, Tuxedo etc. These manufacturers don't ship to my country. Even if I am able to get it shipped, how am I supposed to get warranty?
You weren't cherry picking but the search query you used would lead to issue reports.
HP, Dell and Lenovo also sell Linux laptops on which Linux runs well.
I sympathize with the more limited availability and budget restrictions, but comparisons must be fair: compare a preinstalled Windows and a preinstalled linux, or at least a linux installed on hardware whose manufacturer bothered to work on Linux support.
When the manufacturer did their homework, Linux doesn't have the issues listed earlier. I've seen several laptops of these three brands work flawlessly on Linux and it's been like this for a decade.
I certainly choose my laptops with Linux in mind, and I know just picking random models would probably lead to little issues here and there, and I don't want to deal with that. Although I have installed Linux on random laptops for other people and fortunately haven't run into issues.
As a buyer, how am I supposed to know which manufacturer did their homework and on which laptops?
> it's been like this for a decade
Again, it depends on the definition of "flawlessly". AFAIK, support for hardware-accelerated video playback in browsers was broken across the board only three years ago.
> As a buyer, how am I supposed to know which manufacturer did their homework and on which laptops?
Your first option is to buy a laptop with Linux preinstalled from one of the many manufacturers that provide this. This requires no particular knowledge or time. Admittedly, it may steer you toward more expensive options; entry-level laptops won't be available.
Your second best bet is to read tech reviews. Admittedly this requires time and knowledge, but often enough people turn to their tech literate acquaintance for advice when they want to buy hardware.
> Afaik, support for hardware accelerated videoplayback on browsers was broken across the board only three years ago.
Yes indeed, that's something we didn't have. I agree it sucks. Now, all OSes have flaws that the others don't, and it's not as if the videos didn't play; in practice it was an issue if you wanted to watch 4K videos for hours on battery. Playing regular videos worked, and you can always lower the quality if your situation doesn't allow the higher ones. Often enough, you could also grab the video and play it outside the browser. I know, not ideal, but also way less annoying than the laptop not suspending when you close the lid because of a glitch or something like that.
> You first option is to buy a laptop with linux preinstalled
I have earnestly spent >20 minutes trying to find such a laptop from any reputed manufacturer in my country (India) and come up empty-handed. Please suggest any that you can find. Even with ThinkPads, the only options are "Windows" or "No Operating System".
>Your second best bet is to read tech reviews.
Which tech reviews specifically point out linux support?
>Playing regular videos worked, and you can always lower the quality if your situation doesn't allow the higher qualities
The issue was never about whether playing the video worked. CPU video decoding uses much more energy and leads to your laptop running hot and draining battery life.
Can we at least agree to reduce the timeframe for things working flawlessly to "less than two years" instead of "a decade"? Yes you were able to go to the toilet downstairs but the toilet upstairs was definitely broken.
If buying with Linux is not an option where you live, you can always buy one of the many models found with this search without an OS and install it yourself. Most ThinkPads should be all right. Most EliteBooks should do. Dell laptops sold with Ubuntu somewhere on the planet should do. I'm afraid I can't help more; you'll have to do your own search. Finding out which laptops are sold with Linux somewhere should not be rocket science. I don't buy laptops very often, I tend to keep my computers for a healthy amount of time, and I can't say what it's like in India in 2025.
> Can we at least agree to reduce the timeframe for things working flawlessly to "less than two years" instead of "a decade"? Yes you were able to go to the toilet downstairs but the toilet upstairs was definitely broken.
No. I understand that it can be a dealbreaker for some, but that's a minor issue for me on laptops, even unplugged, and I do watch a lot of videos (for environmental reasons I tend to avoid watching videos in very high resolutions anyway, so software rendering is a bummer but not a blocker). There are still things that don't work, like Photoshop or MS Office, so you could say that it's still not flawless, still, that doesn't affect me.
Many results, including a US-specific page of the Lenovo website.
>If buying with Linux is not an option at your place, you can always buy one of the many models found with this search without OS and install it yourself.
>Finding out which laptops are sold with Linux somewhere should not be rocket science.
It should not. Given the amount of time I have already spent on trying to find one, it is fair to say that there are none easily available in India, at least in the consumer laptop market.
> I understand that it can be a dealbreaker for some, but that's a minor issue for me on laptops
Stockholm Syndrome was bullshit made up on the spot to cover for the inability of the person making it up to defend their position with facts or logic, and...that fits most metaphorical uses quite well, too, though its not usually the message the metaphor is intended to communicate.
> Many results, including a US-specific page of the Lenovo website.
Are you failing to see that this US-specific page gives you a long list of models you can consider elsewhere?
> Stockholm syndrome.
Yeah, no. It just appears I have different needs than you and value different tradeoffs. It appears that the incredible comfort Linux brings me offsets the minor inconvenience software rendered browser video playback causes me.
I'm done with this discussion; we've been quite far away from the kind of interesting discussions I come to HN for, for a few comments now.
On Windows, I don't have to pick my hardware accordingly.
I have to onboard a lot of students to work on our research. The software is all linux (of course), and mostly distribution-agnostic. Can't be too old, that's it.
If a student comes with a random laptop, I install WSL on it, mostly Ubuntu. apt install <curated list of packages>. Done. Linux laptops are OK too, I think, but so far I've only had one student with one. macOS used to be easy, but gets harder with every release, and every new OS version breaks something (mainly, CERN ROOT) and people have to wait until it's fixed.
> On Windows, I don't have to pick my hardware accordingly.
Fair enough. I think the best way to run Linux, if you want to be sure you won't have to tweak stuff, is to buy hardware with Linux preinstalled. That your choice is more limited is a separate matter from "Linux can't suspend".
Comparing preinstalled Windows with Linux installed on a random laptop whose manufacturer can't be bothered to support it is a bit unfair.
Linux on a laptop where the manufacturer did their work runs well.
Yes, machines with Linux preinstalled normally work quite well. But it's still a downside of choosing Linux that the choice of laptops is so much smaller. Similar to the downside of Mac OS that you are locked in to pricey-but-well-built laptops, or the downside of Windows that "it runs Windows" doesn't mean the hardware is not bottom-of-the-barrel crap with a vendor who doesn't care about Linux compatibility. WSL allows to run a sane development environment even then :)
> You can use Wine/Crosseover, which is cool, but even now the number of software products it supports is tiny. Steam has a lot of games.
This isn't really the case, and hasn't been for some years now, especially since Valve started investing heavily in Wine. The quality of Wine these days is absolutely stunning, to the point that some software runs better under Wine than it does on Win11. Then there's the breadth of support, which has moved the experience from a slight chance of something running under Wine to it now being surprising when something doesn't.
The important bit though is that Docker containers are not VMs or sandboxes, they're "just" a combination of technologies that give you an isolated userland using mostly Linux namespaces. If you're running a Linux host you already have namespaces, so you can just use them directly. Distrobox gives you basically the same sort of experience as WSL2 except it doesn't have any of the weird parts of running a VM because it's not VMs.
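To make "you already have namespaces" concrete, here's a minimal sketch using util-linux's unshare - the underlying primitive, not what distrobox literally does internally:

    # enter fresh user, mount and PID namespaces as an unprivileged user
    unshare --user --map-root-user --fork --pid --mount --mount-proc bash
    # inside: 'ps aux' sees only this shell; mounts don't leak to the host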
This is the kind of statement that makes you pay the karma tax. WSL is great, I use it on a day to day basis. I also use Linux on a day to day basis. And as great as WSL is, for running Linux software on supported hardware, Linux beats WSL hands down. And I mean, of course it does, do you expect a VM to beat native? In the same way that Windows software runs better on Windows. (with a few exceptions on both sides).
Compared to Linux, WSL I/O is slow, graphics is slow and a bit janky, I sometimes get crashes, memory management is suboptimal, networking has some quirks, etc... These problems are typical of VMs as it is hard for the host and guest OS to coordinate resource use. If you have an overpowered computer with plenty of RAM, and are mostly just using the command line, and don't do anything unusual with your network, then sure it may be "better" than Linux. But the truth is that it really depends on your situation.
WSL 1 had fast IO but couldn't support all features.
WSL 2 supports all features but has famously slow IO.
Example:
1. Shell into WSL
2. Clone a repo
3. Make a bunch of changes to the repo with a program within WSL
4. Run git status (should finish in less than a second)
5. Open repo from a Windows IDE
6. Run git status. This makes Windows change each file's permissions, ownership, etc. so it can access the files, as git status recursively travels through every file and folder
7. Go for coffee
8. Go for lunch
9. Git status finished after 35 minutes.
10. Close IDE
11. Shell back into WSL
12. Make a change in WSL
13. Run git status from within WSL
14. Wait another 35 minutes as Windows restores each file's ownership and permissions one by one
------------------------------------
The IO overhead is so bad that Microsoft built two new products just to get around it:
1. VSCode WSL remote-client architecture.
VSCode acts as a server within WSL and a client within Windows. Connect both VSCode instances (through proxy/tunnel if needed), and the server can perform the client's file IO ops on its behalf, rather than letting an application on Windows try to interact with any of WSL's file systems.
2. Windows DevDrive
Basically, set aside a virtual-disk/partition and set it up as a different file system (ReFS) that doesn't use Windows file permissions and ownership, doesn't decrypt-then-decompress on each file read, doesn't compress-then-encrypt on each file write, and doesn't virus scan the files on use.
TL;DR Store the files on a network drive and hope race-condition ops from both WSL and Windows don't corrupt any files.
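If I remember the cmdlet right, setting one up is roughly this on a recent Windows 11 build (the drive letter is hypothetical):

    # PowerShell: format an existing volume as a ReFS Dev Drive
    Format-Volume -DriveLetter E -DevDrive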
The problem is Windows IO filters and whatnot, Microsoft Defender trying to lazily intercept every file operation, and if you're crossing between windows and Linux land, possibly 9pfs network shares.
WSL2's own disk is just a VM image and fairly fast - you're just accessing a single file with some special optimizations. Usually far, far more responsive than anything done by Windows itself. Don't do your work in your network-shared Windows home folder.
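An easy way to check which side of that boundary a path is on, from inside WSL2 (output abridged, from memory):

    findmnt -o TARGET,SOURCE,FSTYPE /
    #  /       /dev/sdc  ext4    <- the VM image: fast
    findmnt -o TARGET,SOURCE,FSTYPE /mnt/c
    #  /mnt/c  C:\       9p      <- network filesystem: slow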
Those filters aren't the biggest issue, though: 'find' and 'git status' on a big project under WSL2 are still >100 times slower on a Windows Dev Drive, which avoids those filters, than with WSL1 on the same Dev Drive.
WSL1 on regular NTFS with Defender disabled is about 4x slower than WSL1 on a Dev Drive, so that stuff does cause some of it, but WSL2 feels hopelessly slow. And WSL2 can't share memory as well, or take as much advantage of the filesystem cache (doubling memory use if you work on the Windows drive from both sides, I think, unless the network-drive representation of it doesn't get cached on the WSL2 side).
WSL2, in my testing, is orders of magnitude faster at file-heavy operations than anything outside WSL, Dev Drive or not. We have an R&D department that's using WSL2 and jumping through hoops of forwarding hardware because it's night and day compared to trying under Windows on the same machine. It provided other benefits too, but the sheer performance was the main selling point.
WSL2 does not take less advantage of filesystem caches. Linux's block cache is perfectly capable. Hyper-V is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image. Memory is also ballooned, and can dynamically grow and shrink depending on memory pressure.
Linux VMs are something Microsoft has poured a lot of money into optimizing, as that's what the vast majority of Azure is. Cramming more out of a single machine, and therefore more things into a single machine, directly correlates with profits, so that's a heavy investment.
I wonder why you're seeing different results. I have no experience with WSL1, and looking into a proprietary legacy solution with known issues and limited features would be a purely academic exercise that I'm not sure is worth it.
(I personally don't use Windows, but I work with departments whose parent companies enforce it on their networks.)
> Linux's block cache is perfectly capable. Hyper-V is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image.
Files on the WSL2 disk image work great. They're complaining about accessing files that aren't on the disk image, where everything is relayed over a 9P network filesystem and not a block device. That's the part that gets really slow in WSL2, much slower than WSL1's nearly-native access.
> Memory is also ballooned, and can dynamically grow and shrink depending on memory pressure.
In my experience this works pretty badly.
> a proprietary legacy solution with known issues and limited features
Well at least at the launch of WSL2 they said WSL1 wasn't legacy, I'm not sure if that has changed.
But either way you're using a highly proprietary system, and both WSL1 and WSL2 have significant known issues and limited features, neither one clearly better than the other.
> WSL2 does not take less advantage of filesystem caches.
My understanding is that when you access files on the Windows drive, the Linux VM in WSL2 caches them in its own memory, and the Windows side caches them in its own: now you have double the memory usage for disk cache wherever files are active on both sides, taking much less advantage of the cache than with WSL1, where Windows serves as the sole cache for Windows drives.
I'm only comparing working on Windows filesystems that can be accessed by both. My use case is developing on large Windows game projects, where the game needs the files fast when running, and WSL needs the files fast when searching code, using git, etc. WSL1 was usable on plain NTFS, and much closer to ext4 with a Dev Drive. WSL2 I couldn't make fast.
You could potentially keep the Windows files on native ext4 on the WSL2 side, exposed to Windows as a network drive, but then you get the double filesystem-caching issue, you might slow a game editor launch on the Windows side by way too much, your files are inaccessible during upgrades, and you always have to keep WSL2 running, with RAM dedicated to it, to be able to read your files. MS Store versions of WSL2 will even auto-upgrade while running and randomly make that drive unavailable.
Running WSL2 on Dev Drive means that you're effectively doing network I/O (to localhost); of course it's slow. It's also very pointless since your WSL2 FS is already a separate VHD.
Not pointless if you are working on a Windows project but using Unix tools to search code, do commits, etc. WSL2 just isn't usable for it in large projects; git status can take 5 minutes on Unreal Engine.
I use it, I am required to use Windows, and it’s a huge improvement over doing Data Science on native Windows, but the terrible filesystem access ruins what otherwise would be a seamless experience.
It’s fine for running small models but when you get to large training sets that don’t fit in RAM it becomes miserable.
There is a line where the convenience of training or developing locally gives way to a larger on demand cloud VM, but on WSL the line is much closer.
I still use WSL1, also because VMware runs dreadfully slowly with any kind of Hyper-V enabled - once Hyper-V is on, VMware must run on top of it too, so you get a Type-2 hypervisor running under a Type-1, and the lag and performance are untenable.
I'm guessing they use plan9 because distros already ship support for it, and it's super simple compared to NFS? It doesn't seem like CIFS/NFS would be any faster, and they introduce a lot more complexity.
Where are you experiencing filesystem slowness? I've been using WSL in some advanced configurations (building Win32 apps by cross-compiling from Linux CLANG and dropping the .exe into a Windows folder, copying large files from Linux->Windows and vice versa, automating Linux with .BAT files, etc.) and I haven't seen this slowness at all.
While I can see the subtle distinction you're trying to draw people's attention to (NTFS is not the problem, filesystem operations generally on Windows are the problem) I have to say it seems like a distinction without a difference in real terms. They made a range of changes that seem to produce more complicated code everywhere because the overhead of various filesystem tasks are substantially higher on this OS vs every other OS.
But in the end they had to get the OS vendor to bless their process name anyway, just so the OS would stop doing things that tank the performance for everybody else doing something similar but who haven't opened a direct line up with the OS vendor and got their process name on a list.
This seems like a pain point for the vendor to fix, rather than everybody shipping software to their OS
I find it to be incredibly janky. Pretty much every time my computer sleeps (so every morning, at least) I have to restart it, because somehow the VM-host networking gets screwed up and VS Code connections into the VM stop working. You also can't just put things in your Windows user directory, because the filesystem driver is so slow that git commands will take multiple seconds, so now you have two home directories to keep track of. There were also some extremely arcane things I had to fix when setting it up, involving host DNS and VPN adapter priority not getting propagated into the VM, so networking was completely broken. IIRC time would also drift from the host after a sleep and get extremely far out of sync, though I haven't run into that for a while since now I have to reboot Windows constantly anyway.
I don't have a need to run multiple OSes though. All of my tools are Linux based, and in companies that don't let people run Linux, the actual tools of the trade are almost all in a Linux VM because it's the only reasonable way to use them, and everything else is cross-platform. The outer OS just creates needless issues so that you now need to be a power user with two operating systems and their weird interactions.
> extremely arcane things I had to fix when setting it up involving host DNS and VPN adapter priority not getting propagated into the VM so networking was completely broken
Are you sure you set up the VPN properly? Messing around with Linux configs is a good way to end up with "somehow" bugs like that.
I don't know how it's set up. That's kind of my point though. I now have to be an expert in both Linux and Windows to debug this stuff, which is a waste of my time as someone whose job it is to develop (server, i.e. Linux) software. I had exactly zero issues when I was using Fedora. At one point my company made all of the Linux users move off (we do now have an IT-supported Linux image, but I haven't found the time to re-set up my laptop and don't fully trust that it will work without a bunch of trouble/IT back-and-forth, because they also made Windows users start using passkeys), and since then I've seen way more issues with Windows than Linux (e.g. one day my start menu just stopped reacting to me clicking on programs), in addition to things like ads on the lock screen and popups for some Xbox pass thing that I had to turn off, which is just insane in a "professional" OS. A lot of days I end up having to hold down the power button to reboot because it just locks up entirely.
OSX was a bit janky with docker filesystem slowness, homebrew being the generally recommended package manager despite being awful (why do I sometimes tap a cask and sometimes pour a bottle? Don't tell me; I don't care. Just make it be "install". Also, don't take "install" as a cue to go update all of my other programs with incompatible versions without asking), annoying 1+ second animations that you can't turn off that make it so the only reasonable way to use your computer is to never maximize a window (with no tiling support of course), and completely broken external monitor support (text is completely illegible IIRC), but Windows takes jank to another level.
By contrast, I never encounter the issues people complain about on Linux. Bluetooth works fine. Wifi works fine. nVidia GPUs and games work fine. Containers are easy to use because they're natively part of the OS. I prefer Linux exactly because I stopped enjoying "tinkering" with my computer like 10 years ago, and I want it to just quietly work without drawing attention to itself (and because Windows 8 and the flat themes that followed were hideous and I was never going to downgrade to that from Windows 7).
That's odd. I have none of these problems. Sleep doesn't interrupt the VM, and I regularly use the git CLI through WSL on projects living within Windows user directories. Both work fine.
OpenSSH should have been a game changer, but they made a classic OpenSSH porting bug (not reading all bytes from the channel on close) and have now sat on the fix in "prerelease" for years. I prodded the VP over the group about the issue, and they repeatedly made excuses about how the team is too small and getting updates over to the Windows team is too hard. That was multiple Windows releases ago. Over on GitHub, if you look up git receive-pack errors being frequent clone problems for Windows users, you'll find constant reports ever since the git distribution stopped using its own ssh. I know a bunch of good people at Microsoft, but this leadership is incapable of operating in a user-centric manner and shouldn't be trusted with embedded OSS forks.
I'm a simple man, if I open the shell and `ssh foo@bar.com` doesn't work, I don't use that computer. Idk if Windows has fixed that yet or why it's so hard for them. Also couldn't even find the shell on a Chromebook.
openssh has been an optional windows component for... almost a decade now? including the server, so you can ssh into powershell as easily as into any unix-like. (last time I set it up there was some fiddling with file permissions required for key auth to work, but it does work.)
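For anyone looking for it, it's roughly this from an elevated PowerShell (the capability version suffixes may differ by build):

    # client and server are separate optional capabilities
    Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    # start sshd and have it come up on boot
    Start-Service sshd
    Set-Service -Name sshd -StartupType 'Automatic'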
OpenSSH on Windows is great for the odd connection and SFTP session, but I still feel strongly that any serious usage should just stick with PuTTY and WinSCP. The GUI capabilities these provide are what Windows users are used to. The only benefit of built-in SSH is if you're working with some minimal image stuff, like Windows Server Core or Tiny11. IMHO.
Running a Linux VM on Windows is nicer than just booting into Linux? That's quite a take. Windows is so user-hostile these days that I feel bad for those who have to deal with it. Calling it delightful must be symptomatic of some sort of Stockholm syndrome.
I have since moved to macbooks for the hardware, but until not too long ago WSL was my linux "distro" of choice because I didn't want to spend time configuring my computer to make basic things work like suspend/wake on lid down/up, battery life, hardware acceleration for video playback on the browser, display scaling on external monitor and so on.
That was certainly not the case ~2 years ago, the last time I installed linux on a laptop.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget, and for each laptop searched "<laptop name> linux reddit" on Google and filtered for results <1 year old. Every laptop's reports included some bug or other.
The laptop with the best reported linux support seemed to be Thinkpad P14s but even there users reported tweaking some config to get fans to run silently and to make the speakers sound acceptable.
Not all distros that exist in the current year are "modern". Mint for example, still ships with X11 and old forks of Gnome. Lots of people are running Arch with weird components that don't work well for whatever reason. And so on...
Modern means systemd, pipewire, Wayland, Gnome, an up to date kernel, etc... So the current Ubuntu and Fedora releases.
I've had 100% working laptops for 15 years now. Because I always run the newest Ubuntu.
I run Ubuntu and suspend is pretty much a nightmare to the point I just gave up pretending it exists. These are Dell computers sold with supposed Ubuntu support. Close the lid and put it in a backpack is inevitably an invitation for a hot laptop or empty battery when you pull it out a few hours later (for the record: Windows isn't any better at this in my experience so WSL never solved that problem either).
On previous laptops (all ThinkPads) I used to be able to get everything to work (Debian), but it did take effort and finding the correct resources. Unfortunately all the old documentation about this stuff is pre-systemd and pre-UEFI, and it's not exactly straightforward anymore.
Google "Dell suspend issues". It's just their computers, it doesn't work any better on Windows. My wife has had 2 Dell laptops now, neither suspended properly ever (and she only runs Windows). According to the internet, this is a Dell problem. One of her laptops also had the Wifi card break within 4 hours of use, brand new. But she likes the "design" and is stubborn.
Google harder. It's a general Windows problem. Microsoft can't even get it to work on their own Surface devices. Show me a Windows laptop that suspends properly and I'll show you a liar.
Well there you go. Meanwhile Linux suspend does work more often than not in my experience. I've had a ThinkPad, Acer and MSI laptop with working suspend on Linux.
Other than an up to date kernel, your list of what "modern" means is entirely wrong. The rest of the entries are polarizing freedesktop-isms. There's nothing out of date about, e.g., KDE Plasma.
I read all the links; most of the problems weren't bugs (Fan runs loud? Fans run under Windows as well... Only modern suspend? Literally created for Windows...). From all those links, the only thing that was a bug was an issue with a kernel regression, and 4 of the 5 distros he listed weren't ones I listed.
Maybe I was too positive on Fedora (I was going by its reputation; I use Ubuntu for work). Ubuntu is solid.
Link 1: screen only updating every 2 seconds, visual glitches.
Link 2: brightness reset to full on screen unlock, fans turning on when charging.
Link 3: bluetooth troubles, speakers can't be muted if headphone jack is on mute.
Link 4: audio quality and low volume, wifi not coming back after sleeping.
Link 5: fans being too loud, poor sound quality.
Either your Stockholm syndrome is affecting your reading comprehension or you just take bugs like these as part of the normal "working perfectly" linux experience.
Nothing works out of the box with Linux. Things may "seem" to work out of the box, but you realize how many little tweaks go into making a laptop/consumer device work fully when you work as an embedded dev. It is quite difficult to get to the same power-consumption levels and the same exact hardware/software driver capabilities under Linux. There are simply no APIs for many things. So the entire driver has to live in userspace, using some ioctls to write random stuff to memory, or it cannot exist. There are also algorithms that the hardware manufacturer wants to keep closed.
Note that NVIDIA drivers didn't get better because they are more open source now. They are not. GPUs are now entire independent computers with their own little operating system, and some significant parts of the driver now run on that computer.
Yes, the manufacturers may allocate some people to deal with it and with the corrosiveness of the kernel community. But why? Intel and AMD use that as a marketing and sales strategy. If the hardware manufacturer is the best one there is, where is the profit in supporting Linux? Even ThinkPads don't have 100% support for all the little sensors and PMICs.
The HiDPI issue hasn't been completely solved yet. Bluetooth is still quite unreliable. MIPI support should be the best due to the number of devices, until you realize everybody did their own shitty external driver and there are no common good drivers for MIPI cameras, so your webcam doesn't work. The USB stack is still dodgy. Microsoft in the 90s had a cart of random hardware populating the USB tree completely, and they just hammered the NT kernel with plugging and unplugging until it didn't break anymore, for love's sake. Who did that level of testing with Linux?
Then you cannot claim that Linux works out of the box. It doesn't if you need to select hardware for it. However, I already know that, since I actually used Linux for 15 years as a normal user on the consumer side, and now I am an embedded Linux developer. The underlying architecture of GNU/Linux distros is heavily server-biased, which is often the polar opposite of a consumer system.
Except for Apple (and maybe Framework), all laptops are designed by contract original design manufacturers (ODMs) in Taiwan, Korea and China. Your usual Linux laptop OEMs like System76 and Tuxedo just buy better combinations of the whitelabel stuff. They are inferior to the actual big OEMs' designs, which contain more sophisticated sensors, power management, and extra UEFI features. This includes business laptops like Dell Latitudes, HP EliteBooks and Lenovo ThinkPads. None of those manufacturers actually do Linux-based driver development. All the device development, manufacturing and testing is done under Windows and only for Windows. The laptops are booted into Windows for functional tests at the factory, not Linux.
Linux is an afterthought for all OEMs. After the Windows parts are released and tested, the Linux kernel changes are added. That is rudimentary support which doesn't include 100% of the featureset. Many drivers today have a quite proprietary user-space side. You'll get none of that from any laptop manufacturer. You may say you don't care about those and you're okay with a 10-20% power loss. That's not the definition of out-of-the-box for me.
There is a reason why 1) people whose main environment is Linux feel (correctly) that these problems have been solved a long time ago, and 2) people whose main environment is not Linux but who try Linux occasionally feel (correctly) that these problems still occasionally crop up.
People whose main environment is Linux intentionally buy hardware that works flawlessly with Linux.
People who try Linux occasionally do it on whatever hardware they have, which still almost always works with Linux, but there are occasional issues with sketchy Windows-only hardware or insufficiently tested firmware or flaky wifi cards, and that is enough for there to be valid anecdotes in any given comments section with several people saying they tried it and it isn't perfect. Because "perfect" is a very high bar.
>People whose main environment is Linux intentionally buy hardware that works flawlessly with Linux.
Hm, recently I bought a random "gamer PC" for the beefier GPU (mainly to experiment with local LLMs), installed Linux on it, and everything just worked out of the box. I remember having tons of problems back in 2009 when I first tried Ubuntu, though. I have dual boot, just today I ran a few benchmarks with Qwen3. On Windows, token generation is 15% slower. Whenever I have to boot into Windows (mainly to let the kid play Roblox), everything feels about 30% slower and clunkier.
At work, we use Linux too - Dell laptops. The main irritating problem has been that on Linux, Dell's docking stations are often buggy with dual monitors (when switching, the screen will just freeze). The rest works flawlessly for me. It wasn't that long ago that my Windows (before I migrated to Linux) had BSODs every other day...
> people whose main environment is Linux feel (correctly) that these problems have been solved a long time ago
There is also the quiet part to this. People who religiously use Linux and think that it is the best OS that can ever be, don't realize how many little optimizations go into a consumer OS. They use outdated hardware. They use the lower end models of the peripherals (people still recommend 96 DPI screens just for this). They use limited capabilities of that hardware. They don't rely on deeply interactive user interfaces.
I own a 2011 thinkpad, a 2014 i7 desktop and a "brand new" 2024 zen5 desktop. They all work wonderfully and all functionality I paid for is working. I haven't had a single problem with the newest machine since I bought it other than doing the rigmarole to get accelerated video encoder/decoder to work on Fedora. Sucks but I can't complain.
The older machines I've owned since around 2014, and I remember the hardware support was fairly competent but far from perfect; graphics and multimedia performance was mediocre at best, with ZERO support for accelerated video encode/decode. Fast forward to around the last year or two, and Linux on both of these machines is screaming fast (within those machines' capabilities...), graphics and multimedia are as good as you could get on Windows (thanks Wayland and PipeWire!), and accelerated video decode/encode works great (still have to do the rigmarole in Fedora, but it's ootb in Manjaro).
Both the 2014 machine and the 2024 one sport a 4K display @120Hz (no frame drops!) with no issues, using 200% scaling for hi-DPI usage. Pretty much all of the apps are hi-DPI aware, with the exception of a few running on WINE, which until a few months ago wasn't hi-DPI aware (this feature is experimental and, among many other improvements in WINE, may take another year to mature and be 100% stable).
200% is just rendering the same pixels and then drawing them 4 times, and driving a single monitor at a single resolution is easy stuff. Would your HiDPI setup work with one monitor at 125%, one at 100% and another at 150% scaling? That is when the font rendering gets fucked up and your hi-DPI native toolkits start blurring icons. That's my setup. Windows is perfectly capable of making this work. GTK wasn't able to do fractional scaling until recently, and Qt has 100s of papercuts.
I got a ThinkPad to run exactly this setup under Linux in 2020. AMD didn't solve the problem in their driver until 2022, when I was finally able to drive all of the monitors at 60 Hz.
No, 200% is rendering 4 pixels with "features" 2x larger in each axis. You may get 200% scaling as you said with some legacy apps that give zero fucks about DPI scaling but are still scaled through some mechanism to properly match other apps.
Fractional scaling has been a problem across all platforms, but I agree Linux has taken its time to get it right and still has some gotchas. You should try to avoid it on any platform, honestly; you can sometimes get blurry apps even in Windows. AFAIK KDE is the first to get it right in these complex situations where you mix multiple monitors with different fractional scaling ratios and have legacy apps to boot. GNOME has had experimental fractional scaling for a while, but it's still hidden behind a flag.
It also helps to not have nVidia trash on your old (and sometimes even new) computers if you want longevity. My old machines have intel and AMD graphics with full support from current kernel and mesa.
Linux is basically everyone's go to for older devices. Windows 10 will run like shit on a 10 year old laptop with 4GB RAM but latest Ubuntu is nice and snappy.
I have a 13 year old laptop that runs Windows 10. I cannot run Linux because neither nouveau nor Nvidia drivers support its GPU. It has 8 GiBs of RAM and it works perfectly for light browsing and document editing.
I don't need new reasons to hate Linux. Like I said, I have moved to macbooks as my personal computing device because of the better hardware.
> solved a while ago
That cannot be the case, because I was facing these issues less than a couple of years ago.
I was responding to the "Stockholm syndrome" comment specifically because there are a number of hardware and software problems (e.g. https://jayfax.neocities.org/mediocrity/gnome-has-no-thumbna...) with using linux as a desktop operating system that linux users have to find their way around, so I found the comment rather full of irony.
PS: I already know that the file-picker issue has been fixed. That does not take away from the fact that it was in fact broken for decades. It is only meant as an example.
If there's some set of fully Linux-capable laptops out there, it's a small subset of the Windows-capable ones.
And it's not clear what the Linux ones are. Like, our dept ordered officially Linux-supported Thinkpads for whoever wanted them, and turns out they still have unsolved Bluetooth audio problems. Those people use wired headphones now.
I'm writing this from Purism Librem 14, which works flawlessly, including suspend. There's also System76, Framework and more. See also: https://news.ycombinator.com/item?id=32964519.
System76 is my go-to. There are others. You can even get some major vendors (Dell, Lenovo) to ship with Linux preinstalled, though I don't know if the firmware or chips diverge from the Windows variants.
> Running a Linux VM on Windows is nicer than just booting into Linux
Indeed, it is. Having a stable system, not dealing with Linux on the desktop, and clear tradeoffs (like "just add another 16GB RAM stick in the laptop/desktop and you are golden") is great for peace of mind.
The average uptime on my laptops (note the plural) is ~3 weeks, until the next Windows Update has to be applied. I have no nostalgia for the days of using Linux on the desktop (~2003 student times, ~2008 giving it one more try, ~2015 as required by my day job).
Of course it adds up that I can tell people around me (who are not tech guys often, but smart enough to know basic concepts and be able to run bash scripts provided to them) - "yep, machine with 32GB+ of RAM will work fine, choose any you like" - and it works.
This is the opposite of what I've heard. Most often you hear of people installing Linux on old machines due to it performing better than Windows on low resources.
I'm talking about the more typical situation where you deal with new hardware - why on earth would I go with an outdated and limiting T480 when the T16 Gen 4 is around the corner? Or ARM-based laptops.
If for some reason I could never use a MacBook again, it wouldn't be easy to decide between Windows or Linux as the host OS on a laptop. Do I want something that's intentionally user-hostile or something that's unintentionally broken a lot?
I'd at least try Linux cause I abhor Microsoft, but idk if it'd work out.
At least the nags in Windows look like modern web-based UI (insofar as ‘use Electron’ seems to be the post-Win 8 answer to ‘how to make Windows apps’), in contrast to macOS, which drove my wife crazy with nag dialogs that look like a 1999 refresh of what modal dialogs looked like on the classic Mac in 1984.
My acid test for WSL2 was to install the Linux version of Google Chrome in it, and then play Youtube videos fullscreen with that. It worked. Somehow WSL1 was the more impressive hack but how can you argue with what works? WSL2 works fine.
Also 1980s style X11 widgets on the Windows desktop in their own windows? Cool.
I have to say too, though, once you get the hang of the way an EFI system boots, it's really good for dual boot. I let the Linux installer mount the undersized existing one as /boot/orig_efi and made a new, bigger EFI system partition. Not only was the UEFI on that particular laptop fine with it, scanning both EFI system partitions for bootable stuff, but also, grub2 installed in the new one automatically included the Windows boot in the old one as a boot option.
Cool because nothing about how Windows boots is intercepted; you can just nuke the new partitions (or overwrite them with a new Linux installation). I still prefer a native Linux boot with "just in case" Windows option to WSL.
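If you're curious what the firmware actually registered, you can check from the Linux side; a sketch (the grub config path varies by distro):

    # list UEFI boot entries; loaders from both ESPs should appear
    efibootmgr -v
    # ask grub to re-scan, picking up the Windows loader on the other ESP
    sudo os-prober
    sudo grub-mkconfig -o /boot/grub/grub.cfg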
I don't think people are using WSL to avoid problems with dual booting. Dual-booting has become about as simple as it can be, thanks to UEFI, but it's still not exactly fun to have to close all of your open apps to switch to another OS to run just one app.
Forced to work on Windows for the ++nth job, I was looking forward to WSL. Indeed, while it worked, it was magic. Sadly, I have had no end of bizarre bugs. The latest one almost crashed my whole desktop - as far as I can piece together, something crashed, leading to a core dump the size of my desktop's entire memory - half the machine's RAM. This in turn put WSL in a weird state - it would neither run nor uninstall. Googling found bug reports with similar experiences, no responses from Microsoft, and magic incantations that maybe worked for some people - but not for me.
It might be due to my corpo's particular setup etc. but for me 95% of the value of WSL would be the ability to run it on "corporate" Windows boxes. Alas.
I'm sure that feature is important for whatever work you're doing, but it's a feature I've _never_ desired, and WSL is missing plenty of features that are important for my work.
Hardware performance counters basically do not work in WSL2, which among other issues, makes it extremely difficult to use rr.
https://github.com/rr-debugger/rr/issues/2506#issuecomment-2...
Some people say they got it working, but I and many other users encounter esoteric blockers.
The Dozen driver is never at feature parity with native Linux Vulkan drivers, and that's always going to be the case.
GWSL is also a terrible X11 server that makes many very basic window management configurations impossible, and while I prefer VcXsrv, it has its own different terrible issues.
I can imagine that WSL2 looks attractive if all you want to do is run command line apps in multiple isolated environments, but it is miserable for anything graphical or interactive.
> I can imagine that WSL2 looks attractive if all you want to do is run command line apps in multiple isolated environments, but it is miserable for anything graphical or interactive.
Indeed, that's my case - I use the CLI mostly for ssh/curl/vim, ansible, Puppet and so on.
For GUI part, Windows is chosen and shines for me.
I think it really depends on what you do and whether the Linux side of it has hard dependencies on system packages. Personally, at work I much prefer working directly on my Linux workstation, and at home have even switched to using Linux for my gaming desktop. I really don't like the direction Windows has been trending for the past few years, and with the specter of a forced Windows 11 upgrade on the horizon I decided it's time to go all in. My system runs better and I can still play all my games. The jankiest thing I do is I have a mingw toolchain so I can compile some game mods into Windows DLLs to be loaded by Wine, but even that ended up being pretty seamless. Just install the toolchain and the project just compiled.
I don't understand. Docker/podman/distrobox/lxc all allow you to do the exact same thing without the virtual machine overhead. I think the real win of WSL is that it's the best of all worlds: you get to use Windows, with access to every game ever made plus all of the proprietary apps everyone needs to use, with all of the upside of having a full and complete Linux command-line experience.
You get all of Windows telemetry, vulnerabilities and backdoors, the always fun game of spot the new Advertising opportunity, AI “copilot” spyware I mean feature, updates that reset your machine at will, a terrible UAC model that encourages “just click OK already!”, and dependence on a company that has gone out of their way to prove how much of an unstoppable behemoth they are; and best of all you get to pay for the privileges above.
I know… every year is the year of the Linux desktop… but seriously the AI spyware included was enough to get me gone for good.
It's hard to pick the Windows feature I hate the most, but floating around at the top is Defender. It can't be disabled, at least not easily, and it demolishes IO performance. And Windows Update takes the computer hostage and takes ages to do anything, giving no feedback in the process; meanwhile APT can update to a new major version in like 5-10 minutes.
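For contrast, the whole Ubuntu flow being alluded to is basically (assuming stock Ubuntu):

    sudo apt update && sudo apt full-upgrade   # routine updates
    sudo do-release-upgrade                    # jump to the next major release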
You can set up local and limited user accounts under Windows. Many applications, including every development tool out there, don't need any admin permissions.
Spyware and adware is a government policy / regulation problem. Thanks to GDPR and DMA, using Windows in EU is significantly better experience (try setting a Windows desktop with an EU image). You can remove almost all of the apps including Edge and Copilot. There are no ads in the UI. Neither in Explorer nor in Start menu.
The current process to install Windows 11 with a local account... is to press SHIFT + F10 at a screen in the middle of install after the first reboot, enter OOBE\BYPASSNRO into the command prompt, and disconnect from any internet options, and/or ipconfig-disable your networking...
But guess what? Fuck You, because that is the old way of doing it now, and the new command is start ms-chx:localonly
Yes, you get Windows telemetry which enabled fixing bugs without a bug report, you get minimal ads in the start menu (if you're playing "spot the new advertising opportunity" I found it. It's in the start menu. You can stop playing now), AI "copilot" which isn't spyware just because you think it is, updates that ASK you nicely multiple times to update (I don't want to be ableist, if you suffer from a Christopher Nolan Memento-like disability where you don't remember the warnings, you might think it's "resetting at will", but I assure you, it isn't), a great UAC model that's a lot better than "just type your root password into this terminal already, and just hope the binary wasn't hijacked in some way to keylog you, because unlike UAC, there is no visual evidence that you're not getting hacked", and dependence on a company that SV_BubbleTime thinks "has gone out of their way to prove how much of an unstoppable behemoth they are" with no evidence or clarity so they must just be making FUD, and best of all the OS costs so little you can pay it in 8 hours of working as a software developer.
Are you a Windows user who is happy to have a good way to run Linux on Windows, or are you a Linux user trying to convince other Linux user that instead of using Linux, they should use Linux in a VM running on Windows?
I am a longtime Linux user, and I can't see a reason in the universe why I would want to access my Linux through a VM on Windows. That seems absolutely insane.
Gnome (a linux desktop environment) ships a "Boxes" app [0] that is very impressive. You can, with a few clicks, install one of a huge number of Linux distros in an auto-provisioned VM, enable hardware passthrough for USB devices and host 3D acceleration, and manage files with drag-and-drop from the host system. I also use it for Windows and MacOS VMs (don't tell Apple), but you need to provide your own images.
Look I get it. I’m forced to use Windows at work and I thank the lord WSL is a thing. But I would switch to Linux base in a heartbeat if I could. WSL is jank as fuck compared to just using Linux.
> WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously.
I'd venture to say this depends on which OS you're more comfortable with. I'm more comfortable with Linux, so I'd say it's easier/better/less janky to use Linux as a host OS.
> Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
Once you're a developer who's been burned by this enough times, you do this with containers or dedicated dev VMs. You do not develop on your host OS and stay sane.
I've yet to find anything comparable feature-wise on Linux - and they all come with the huge downside of having to roll your own cohesive settings widget ecosystem for basic everyday things like WiFi and Bluetooth connectivity. I run Cosmic Epoch on my old Macbook which is better, but again, feature-wise, it's just not comparable for serious work.
Thanks for your reply, but as a Linux user for over 20 years, all I take away from your post is that you haven't really tried, probably because the variety of distros vastly exceeds the two classic options of mac vs windows.
I understand the "roll your own" argument very well. In my time, I've experienced quite the variety of configs and dotfiles, but I'm not young anymore so I've settled with using Regolith which is an opinionated set of tools, including my favourite i3wm, on top of Ubuntu, and I simply use defaults for the most things.
Anyway, it's much easier to use Linux as a daily driver than it's ever been. The choice of distro is simply which package manager to use, and everything else just works, as long as it's in the package manager's inventory.
I haven't compiled my own computer's kernel in 6 years (but I still cross compile for rpi and other IoT), and I haven't used my dotfiles in 3 years, just defaults.
> Thanks for your reply, but as a Linux user for over 20 years, all I take away from your post is that you haven't really tried, probably because the variety of distros vastly exceeds the two classic options of mac vs windows.
A very big and very incorrect assumption. This reads like you asked the initial question without any actual curiosity behind it.
I think it depends a lot on what you're trying to do. I found that anything GPU-related was a nightmare of drivers and configuration which was a show-stopper for me. Now I just run arch/kde and that all works fine out of the box
Me too. Particularly after having to do Docker things a few years ago, destroying my productivity due to file system speed.
However, for those of us who went Linux many years ago and like our free open source software: in 2025, is it better to go back to the dark side - to run Windows and have things like a LAMP stack and terminals run within WSL?
I don't play games or run Adobe products, I use Google Docs, and I don't need lots of different Linux kernels. Hence, is it better to run Linux in Windows now? Genuinely asking.
As someone who occasionally does use WSL, I definitely think it's not better, no. But then I'm biased, because I know a lot more about using Linux than I do about using Windows, and WSL is still Windows.
> is it better to run Linux in Windows now? Genuinely asking.
It definitely is. Servicing takes ~1 minute per month to click "yeah, let's apply those updates and reboot". Peace of mind, with no worrying that external hardware won't work, or the monitor will have issues, or the laptop won't sleep, or the battery will discharge faster during a call due to lack of hardware acceleration, or noise cancellation won't work, or...
While I mostly agree with this sentiment, sidestepping the power management and sleep issues as well as better driver support and touchpad handling on some laptops makes it quite a bit better.
I've been installing Linux almost universally on "Windows computers" [sic] for the past two decades or more, per your characterization. Sometimes great, sometimes meh. Your point? I am simply illustrating that there's value in WSL over bare metal in some cases, not playing the whose-fault-is-it game.
Sic? You don't understand the argument at all then.
Buy computers that were designed for and ship with Linux, and with support you can call to get help. Modern hardware is far too complex to handle multiple OSes without a major effort. Assuming they even want to support anything but Windows, which most don't.
First, that's not the discussion at all. The question is does WSL have valid use cases and benefits over bare metal Linux. The answer is absolutely yes. For whatever reason you have the computer in front of you and you have the choice between the two modalities (many times you don't buy it, employer does, etc.)
Second, if everyone had your attitude, seeing PCs as "Windows computers" and stayed in their lanes in the 90s and 2000s, you would not have the option of three and a half supported "Linux computers" you are alluding to today. Viva hackers who see beyond the label.
WSL is better than no option, sure. It's not as good as Linux on Linux hardware.
The hackers sure. Reverse engineering takes a lot of skill and my hat's off to them.
Almost everyone here, though, is in neither camp. Most have the means and ability to buy a Linux computer if they so choose. But they don't, and then complain when Linux fails to run well on a system that has never had a team of dedicated system-integration engineers work on it.
I agree. Back in the day (10+ years ago), I used to argue with people about why I ran VMs instead of just partitioning the disk and booting up the OS I needed.
XAMPP did not work out of the box for me on Windows (skill issue on my part, I know), so my preferred setup was to run an Ubuntu Server VM (LAMP stack) and then develop whatever I had in a Windows IDE.
I could have done that under full Linux, I just did not want to. Then Vagrant came into existence, which I'd say was made for my use case (but I never came around to adopting it).
I'm really happy with my WSL2 setup. I stopped using VMware Workstation when WSL2 broke it, but WSL2 is exactly what I needed to match my use case.
> XAMPP did not work out of the box for me on Windows (skill issue on my part, I know), so my preferred setup was to run an Ubuntu Server VM (LAMP stack) and then develop whatever I had in a Windows IDE.
Why wouldn't you have just spent 5 minutes to get XAMPP working?
WSL gave me the push to switch from macOS to Windows. And I couldn't be happier, tbh. There was a lot lacking in my Hackintosh/Windows dual boot setup.
> Edit: for clarity, by "multiple OS" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
For this part, I just create systemd-nspawn containers.
Last time I wanted to test something in a very old version of WebKit, creating a Debian Jessie container takes a few minutes. Things run at native speed.
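A rough sketch of that workflow (Jessie is archived, so the mirror URL matters; paths follow the systemd-nspawn convention):

    # build a Debian Jessie rootfs, then boot it as a container
    sudo debootstrap jessie /var/lib/machines/jessie http://archive.debian.org/debian
    sudo systemd-nspawn -D /var/lib/machines/jessie      # chroot-like shell
    sudo systemd-nspawn -b -D /var/lib/machines/jessie   # full boot with its own init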
You use distrobox (https://distrobox.it/) and move on with your life. At work I use multiple versions of Ubuntu seamlessly without messing with VMs on a host fedora box without issue. That includes building things like .deb packages.
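For the curious, the day-to-day looks roughly like this (names and tags illustrative):

    distrobox create --name u22 --image ubuntu:22.04
    distrobox enter u22    # an Ubuntu 22.04 userland sharing your $HOME
    # inside: apt install what the project needs, build your .debs, etc.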
I'm with you - after years of messing with dualboot Linux, including (foolishly) running multiday Gentoo builds, WSL + Windows now gives me everything I want from Linux with zero friction.
In fact, I'm a little annoyed that I can't get a comparably smooth experience on my MacBook without spinning up a full QEMU VM. I know it's a bit hypocritical since, like most people, I run WSL2 (which is container/VM-based), not WSL1 (the original magic syscall translation vision).
Does anyone know why there's no lightweight solution on macOS - something like LXC plus a filesystem gadget - that would let me run stuff like "apt-get install chromium"?
>Native performance
>Tart is using Apple’s native Virtualization.Framework that was developed along with architecting the first M1 chip. This seamless integration between hardware and software ensures smooth performance without any drawbacks.
I think WSL is great but if your only goal is to run several Linux OSes, any hypervisor will do. I think Proxmox is better suited to your use-case (hosted on Linux).
I love WSL because it lets me have the best of Windows and Linux.
I like that wsl is a thing when I'm on a windows machine, but it can also serve as a reminder of the often unnecessary frictions that exist between operating systems.
When the answer to a "how do I do X on windows" question begins with "start WSL", my primary reaction is frustration because they're basically saying "there's not a good way to do that on Windows, so fire up a Linux VM".
Just to pick my most recent example, from today. I wanted to verify the signatures on some downloaded rpm files, and the rpm tools work on linux. I know, rpm files are native to a family of linux distros, so it's not surprising that the tools for retrieving and verifying their signatures don't work on windows but... it also seems reasonable to want a world where those tools can install and run on windows, straight from a PowerShell session, with no VM.
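For the record, once you're inside WSL - or any Linux box - the check itself is a one-liner or two (the key file name here is hypothetical):

    rpm --import ./RPM-GPG-KEY-vendor   # trust the vendor's signing key
    rpm --checksig ./package.rpm        # aka 'rpm -K'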
Multiply that by all the little utilities that can't be deployed across multiple operating systems, and it just seems like some incompatibility headaches are never really going to go away.
Jumping on the anti-WSL bandwagon: I just can't abide the loss of control on Windows. Will the next update ignore/reset/override my privacy settings? What Gordian knot must I cut to have a local-only account (thanks, Rufus!)? How do I turn off/uninstall a million things I don't want - Xbox Game Bar?!?
Linux or *BSD give so much more respect to the user, on windows you are the product! Stand up for yourself and your data!
Is it not the case that WSL2 is a VM, that it requires Hyper-V enablement, and that this turns your main Windows OS into effectively a type of privileged VM, since Hyper-V is a Type-1 bare-metal hypervisor?
This is not often discussed, so it took me a lot of digging a couple of years ago, but I'm still surprised it is never raised as a consequence / side effect / downside of WSL2. There are performance impacts to turning on Hyper-V, which may or may not be relevant to the user (e.g. if this is also their gaming machine, etc.)
> It's an absolute delight to use, out of the box, on a desktop or laptop, with no configuration required.
I have been using it since the beginning of WSL 1 with a very terminal heavy set up but it has some issues.
For example WSLg's clipboard sharing is buggy compared to VcXsrv. It doesn't handle pasting into Linux apps without introducing Windows CRs. I opened an issue for this https://github.com/microsoft/wslg/issues/1326 but it hasn't gotten a reply.
Also, systemd is still pretty sketchy. It takes over 2 minutes for systemd services to start and if you close a WSL 2 terminal for just a few minutes systemd will delay a new terminal from opening for quite some time. This basically means disabling systemd to use WSL 2 in your day to day.
Then there's this 6 year old issue with 1,000+ upvotes https://github.com/microsoft/WSL/issues/4699 around WSL not reclaiming disk space. It means you need to routinely shut everything down and compress your VM's disk or you'll run out of space.
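The usual workaround, for what it's worth (the VHDX path differs per distro install, and Optimize-VHD needs the Hyper-V PowerShell module):

    # from PowerShell, with the VM stopped
    wsl --shutdown
    Optimize-VHD -Path .\ext4.vhdx -Mode Full
    # without Hyper-V, diskpart can do it:
    #   select vdisk file="C:\...\ext4.vhdx"
    #   compact vdisk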
Beyond that it does work well, so I'm happy it exists.
The delay is related to starting WSL 2, not starting a systemd service btw.
Maybe it's specific to Windows 10 Pro, who knows. I'm using the latest WSL 2 from the MS app store.
I just know when I installed Docker directly into WSL 2, when I launched a terminal I could not run `docker info` and connect to the Docker daemon for 2 minutes. The culprit was the Docker service was not available. I was able to reproduce this on Arch and Ubuntu distros.
Separate to that systemd also delayed a terminal from opening for ~15 seconds (unrelated to Docker).
After ~10 minutes of the terminal being closed, both issues happened. They went away as soon as I disabled systemd.
First opening of my main WSL2 Ubuntu 22.04 instance takes roughly 20 seconds; subsequent new terminals open in ~1s. As this happens only once every 3 weeks or so, when Windows reboots for updates, I don't care much.
It takes me more time to fill passwords for ssh keys to agent anyways.
> Also, systemd is still pretty sketchy. It takes over 2 minutes for systemd services to start and if you close a WSL 2 terminal for just a few minutes systemd will delay a new terminal from opening for quite some time. This basically means disabling systemd to use WSL 2 in your day to day.
That doesn't sound good. I was planning to set up a Windows/WSL2 box, but this gives me second thoughts. Where can I read more about this?
It's still ok even without systemd. Technically systemd is disabled by default, you have to turn it on with systemd=true in /etc/wsl.conf.
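That is, this stanza in /etc/wsl.conf, followed by a `wsl --shutdown` from the Windows side so it takes effect:

    [boot]
    systemd=true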
I can't find a definitive source with an open ticket, but if you Google around for "WSL 2 systemd delay startup" you'll find assorted folks talking about it, with a number of different causes suggested.
I just went by my end results: there is a delay with systemd enabled and no delay with it disabled.
I'm also not sure about your question. Over the last 5 years, the average interruption time is ~5 minutes to apply an update, which happens roughly once every 3 weeks. Once or twice per year, release updates happen, and those take maybe 30 minutes of interruption (not totally sure here, as I usually grab my coffee and cigarettes and go read news on the balcony, which can easily take ~1h for me).
So for me, updates practically don't affect my workflow at all.
Still somewhat janky. I use it on my work machine (since it at least seems a bit faster than using VirtualBox) and regularly run into issues where npm won't build my project due to the existence of symlinks [1,2]. wslg windows also don't yet have first-party support from the windowing system [3]. I also remember having trouble setting up self-signed certs and getting SSL working.
Now if only they could do Windows 12 by taking baby steps in yearly releases of Windows 11.1, 11.2, etc.
Iterating on improvements, polishing screens and designs they haven't touched in the past 30 years, improving ARM support, etc. And STOP adding ads to the OS.
And the Surface Laptop continues to push hardware quality forward - speakers, touchpad, screen, motherboard, etc.
It is really good but honestly would prefer something a little more like:
- Linux that works great on a laptop / does the right thing when closing the lid
- Linux that doesn't have worse battery life than Windows / macOS
- Seamlessly runs Windows when you need to run something (e.g. click on Excel)
- Isn't necessarily free (prefer quality over low price in this situation)
Windows of course has many of these traits and WSL is a pretty good compromise, but I would prefer to boot into Linux and use Windows only when necessary (since my need for it is less common).
Install Proxmox or TrueNAS on a bare-metal desktop to experience the true power of multiple operating systems running simultaneously. On most days, I am running multiple VMs with these OSes in parallel: Windows Server 2025, Windows 11 Pro, and these flavours of Linux - TrueNAS/Debian, Ubuntu, Manjaro, Zorin OS. I also have a dozen or more lightweight containers running, some with LXC on the bare-metal host and others with Docker inside the TrueNAS VM.
This setup automatically backs up my data and is resilient to disk failures. It’s the ultimate form of power and bliss.
I like WSL for this single reason too - it gives me space to run isolated experiments without touching my primary OS. So if that's what windows users get out of it, cool.
You can do the same thing with many other technologies on most other operating systems. I've used, in chronological order: FreeBSD jails, VMs, Cloud-hosted VMs, Docker, K8s, and Nix flakes. WSL is probably somewhere in around K8s.
My point is, we've had the ability to run "subsystems" for decades, by different names, on every OS. WSL is cool but quite late to the game, far from being "more powerful than linux".
I used to agree with this for WSL1. Syscall translation gave solid performance, decent FS integration, and interop within WSL with windows executables. I really liked it.
WSL2 has been such a pain. You're basically managing a VM with VMWare Tools somewhat more integrated. I gave up on WSL2 after a few months and went back to booting my arch installation most of the time. Now I'm on a mac for the first time in a long time because windows has gotten so bad.
This is doubly sad because the NT kernel is so well designed to host multiple OSes due to the OS/2 stuff decades ago. All wasted.
Perhaps "more powerful" is also a factor of who is the computer user. For example, Linux is not as "powerful" if the computer user is someone who knows little about how to use it.
For a person who will not invest the time to learn, e.g., how to avoid or minimise dependencies, indeed something like Windows with WSL may appear "more powerful".
The point of this comment is that "power" comes from learning and know-how as much as if not more than simply from choice of operating system. That said, some choices may ultimately spell the difference between limitations or possibilities.
I share your sentiments. Makes testing my builds against Windows, Ubuntu 22, Ubuntu 24, etc. a breeze. It pretty much 'just works' and I can take it to go on my laptop. Even though I do most of my work in Linux, Windows is a convenient 'compatibility layer'. I was skeptical at first when my friend suggested I try this, but daily usage has won me over.
The development experience is relatively cumbersome compared to using a native Linux distribution and containerizing application dependencies where needed.
Last time I used it they kept hogging some common keyboard shortcuts for whatever Windows stuff even though the VM-window was focused. Did they stop that?
> and to fight with my computer about who owns it less.
This is a great way of saying it and expresses the uneasy feeling windows has given me recently. I use Linux machines but I have 1 windows machine in my home as a media PC; and for the last several years windows has made me feel like I don’t own that computer but I’m just lucky to be along for the ride. Ramming ads on the task bar and start menu, forcing updates on me, forcing me to make a Microsoft account before I can login (or just having a dark UI pattern so I can’t figure out how to avoid it, for the pedantic).
With Linux I feel like the machine is a turing complete wonderbox of assistance and possibility, with windows it feels like Microsoft have forced their way into my home and are obnoxiously telling me they know best, while condescendingly telling me I’m lucky to be here at all. It’s a very different feeling.
Yeah, "Weather and More" is such a joke. I like the idea of Weather on my lock screen in theory, and I sometimes miss Windows 8's great support for Lock Screen live data, but I have huge problems with almost everything else in the "and More" (news, no thanks, ads, definitely no thanks, tips, maybe not). Thankfully it is still really easy to turn off "Weather and More", but I wish they'd give us a "Weather and Nothing Else". (Same reason one of the first things I do is disable the "Widgets" display on the taskbar in Windows 11. Weather is great, everything else I don't want and/or actively hate.)
Yeah this is what pisses me off the most about windows. Telemetry that can't be turned off normally. Ads everywhere. Microsoft deciding when I must restart for updates. Microsoft trying to manage my behaviour telling me to try new features. Screw that. My computer is my own and must do what I choose.
This feature thing is really one of their strategies. At work they send us "adoption managers" that run reports to check whether people use feature xyz enough and set up stupid comms campaigns to push them to do so.
I really hate that. I decide how I use my computer. Not a vendor.
You're right, it is incredibly nice. Just the other day I got a Windows-only developer to install and use the POSIX/*NIX toolkit we use for development/deployment. In 30 minutes he was editing and deploying left and right with our normal open source stack. No messing around with Cygwin or MSYS or anything, it all just worked in Ubuntu on WSL. It's fantastic.
Using WSL on Win11. I would prefer Linux, but I never got used to OpenOffice/GIMP/... and need to use PowerPoint / Affinity. But WSL mostly works, and I've added some tools and config to make it useful with WezTerm.
> Edit: for clarity, by "multiple OS" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
You can run multiple Linux distributions in chroots or containers, such as Docker containers. I have shown people how to build packages for Ubuntu 22.04 on Ubuntu 20.04, for example.
This is what tools like toolbx or distrobox solve. You can have easy to use containers with libs from any distro with a few commands, using podman or docker as the backend.
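For anyone curious, the distrobox flow is only a couple of commands. A minimal sketch, assuming podman or docker is already installed as the backend:

    # one container per distro; $HOME is shared with the host by default
    distrobox create --name u22 --image ubuntu:22.04
    distrobox create --name u24 --image ubuntu:24.04
    distrobox enter u22    # drops you into an Ubuntu 22.04 shell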
WSL is massively slower than Linux. Not just the 10% or so for VM, but probably 50-90% slower for disk access. It takes many times longer to start tmux. It has update bugs that crash open terminals and that's not even part of the regular windows forced-update fiasco. In short, it's garbage. It's one of the primary reasons I moved back to Linux for my daily driver.
It's a... VM? Like the Linux VMs running on Linux computers in the cloud?
Sorry but not sorry, it's not easier to run than on linux. It requires the Windows store to work, and to use Hyper-V (which breaks VMware workstation, among other things).
It's in a better package, to be sure, but it's not "easier to run multiple OS on the same computer". It's easier to use multiple OSes (no SSH, GUI forwarding, etc), as long as all those OSes are Linux flavors supported by WSL.
The files `wsl --install` fetches, including and especially the distro files, still originate from the Store's CDN, so the truly paranoid who distrust the Store (including some corporate environments) and just entirely block Store CDN access at the DNS and/or firewall level still break WSL installs.
You're likely right, I haven't used it in ages. Though I recall that at one point you had to get distributions from the Store, but it may have been that long ago that it was still being called "Bash for Windows".
As of 24H2, you can just run "wsl --install" from the command line and it'll do all the necessary setup to get you up and running, including installation of Hyper-V components if needed.
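The whole flow, as the current flags work, is roughly:

    wsl --install              # enables required Windows features, installs the default Ubuntu
    wsl --list --online        # browse the other available distros
    wsl --install -d Debian    # install a specific one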
It's a bit more than just some candy, there's substantial glue on both the Linux/Windows sides to get Plan9, WSLG, and the other components to work.
That said, the kernel they distribute is open source and you're not limited to just the distros they're working with directly. There are a number of third-party distributions (e.g. there's no Arch from Arch or Microsoft, but there's a completely compatible third-party package that gives you Arch in WSL2).
The main complaint was the market place TOS that gave Microsoft a free-pass on any trademarked assets. The new WSL2 installation way avoids all of this.
Along with the glibc hacks needed by WSL1.
(I was part of the discussion and also very adamant about this not happening)
I'm old enough to remember that before docker there was chroot. It's fairly easy to put lots of different user mode portions of Linux distros into directories and chroot into them from the same kernel. It seems a bit like what you're asking for.
There's also debootstrap, which is useful for this technique; it's packaged on Ubuntu too and can bootstrap both Debian and Ubuntu releases.
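A sketch of that technique; the suite name and target directory are just examples:

    # bootstrap an Ubuntu 22.04 (jammy) userland into a directory, then chroot in
    sudo apt install debootstrap
    sudo debootstrap jammy /srv/jammy http://archive.ubuntu.com/ubuntu
    sudo chroot /srv/jammy /bin/bash   # apt, dpkg-buildpackage etc. now run against 22.04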
My only big gripe with WSL right now is GUI applications. wslg is not good, and the only good experience is when applications have a good remote development UX such as vscode.
Another, smaller, gripe is networking. Because of how WSL is networked, I've run into edge-case issues with connecting to networked applications running in WSL from Windows.
I use WSL, but I'm actively looking for a way to move away from it. The only thing holding me back are languages like Ruby or Python, which are designed to work in a Unix-like environment. I briefly considered forking Ruby and stripping out all of the Unix-isms but in the end I gave up and just installed Linux (WSL).
docker is pretty easy to use on linux (even rootless docker isn't particularly painful) and KVM using QEMU is also pretty easy for running Windows things. I used WSL quite a bit but ultimately have switched back to running Ubuntu as my main.
Here's the main difference between making Windows vs Linux the main OS from my POV: Windows is a lot of work and only the corporate editions can be converted into not-a-hot-mess-of-distractions (supposedly). Out of the box Linux doesn't have all of the bullshit that you have to spend time ripping out of Windows. You can easily re-install Linux to get the "powerwash" effect. But if you powerwash Windows you have to go back and undo all the default bullshit again.
Having said that Windows+WSL is a very nice lifeline if you're stuck in Windows-land. It's a much better combo than MacOS.
WSL gives you no support for USB devices, which is a massive pain for embedded development when IT forces you to use Windows. Also, this might just be specific to my setup, but WSL networking is very finicky with my company's VPN, and breaks completely if the VPN ever drops out, requiring a full reboot.
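For what it's worth, there's a partial workaround via the open-source usbipd-win project, which forwards USB devices over USB/IP into the WSL VM. Per its current CLI (run the bind/attach steps from an elevated prompt on the Windows side):

    usbipd list                       # find the device's BUSID
    usbipd bind --busid 4-2           # share the device (one-time, per device)
    usbipd attach --wsl --busid 4-2   # forward it into the running WSL distro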
There are always going to be niche cases. In general, USB storage devices are slow to transfer data anyway, so you are better off copying the files directly from the Windows-mounted location.
For me it was slow, full of compatibility issues, and glitchy. Some simple packages wouldn't even install in the official Ubuntu WSL distro. To be honest I don't know what the use case for this is, other than to run some one-off Linux thing once in a while without having to use another box.
I use WSL2 to handle Linux (and Windows cross-) compilation regularly, along with running a number of native tools that are specific to Linux.
I've never had any issues with that, even to the point that I've been able to run MAME natively from Linux and have it show up like any other windowed app.
Windows 10 with WSL(2) is/was peak Windows for me. You could build stuff and edit MS Office documents in the same place. Sadly, it wasn't meant to last. I have no intention of giving W11 a try, not yet decided what I'll be using come this fall.
I use it as my daily driver. It completely changed the way I work. Am I curious whether something will compile? Open a terminal and type make. The files are all already there. You can even run graphics apps. It's wonderful.
I'll second you, WSL makes Windows a first class experience because now I can seamlessly have Linux and Windows apps in one laptop. Yes, I could run VMWare Workstation or HyperV, etc, but this is just better integrated.
As of a couple of years ago the integration was not that great and I switched to just using a full-fledged VM instead. For example, trying to use binaries in WSL from within Visual Studio or vice versa was not great.
I heart WSL. Years ago I was going to switch to MAC OS to have a more unix like experience/workflow. Then WSL came out and I stayed because Linux is the environment I spend most of my time in.
I agree it is a convenient way to run multiple Linux VMs, but it comes with the drawback of having to use Windows, which is a major impediment to anything I may want to do with my computer.
You can run multiple linux distros on linux just fine via KVM/QEMU, there is nothing special WSL offers except that it is a must if you're doomed to use windows.
I used to love WSL when I had a Windows machine because I used lots of docker containers, but now that I am in a Mac with Apple Silicon, there is no going back.
qemu on Linux solves a bunch of these problems as well. But yeah, UX-wise WSL is pretty good at solving the problem of “provide Windows devs a POSIX environment”.
QEMU is nothing like WSL UX-wise. The UX on Windows is: double-click GIMP and a window for GIMP opens. With QEMU, a new window opens for the guest's WM, input focus interactions are awkward, you probably have to log in to the VM, and it cannot easily be set up to automatically open the app you want.
I tried it and found it to be such an abomination. I can’t understand why any self respecting software developer would use Windows with a bastard linux like WSL instead of just using actual Linux. Feels like a massive skill issue.
I'm not the biggest fan of WSL2, but it's definitely good enough for people to like it. it's worked well enough for me in the past, but the last time I used it, there were problems with mDNS and BPF that it just made more sense for me to boot into leenucks.
But you're definitely not crazy for liking it. And people should chill out instead of downvoting for someone who just says what works for them.
I haven't tried Win11 and probably won't unless my employer forces me to. But if Win11+WSL2 works for you, more power to you.
Windows treats you like a baby. You cannot learn the internals of it and it forces decisions on you. With Windows, the computer that you paid for is not yours.
I won't downvote you, but I will die on the other hill - the one over there that has a guy sitting down with his arms folded, sporting an angry face every time someone says something positive about WSL. There's at least three of us on that hill. And we're not going anywhere.
Real talk. And anybody who argues is taking a heavy dose of copium to justify their use of Linux and the ensuing compatibility issues that entails. Let them have their sense of superiority :' )
I'll second this, and I'm someone who ran a certain alternative OS to Linux before Linux was viable instead of run Windows, worked as a developer of Win16 and Win32 apps early in my career which gave me a deep love-hate of the platform, couldn't stand Microsoft's monopoly tactics back in the 1990s and 2000s, and remain ever-sceptical of Microsoft's open source and Linux initiatives...
... but WSL is an excellent piece of work. It's really easy to deploy apps on. Frankly, it can be easier to do a deployment there than on a Linux or macOS system, for example for the reasons detailed above.
You can run multiple OSes simultaneously on Linux itself - Linux can run VMs just fine. I.e. Linux guests on Linux host and so on. Take a look for example at virt-manager (libvirt / qemu + kvm).
And WSL is a limited VM using Hyper-V anyway. If you want to run a VM, you can just as well run a proper one which isn't limited and runs a full-blown distro with a sane configuration.
So WSL is definitely not more powerful than normal Linux.
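To make the virt-manager/libvirt route concrete, here's a hedged sketch with virt-install; the ISO filename, sizes, and osinfo id are placeholders to adapt:

    # create and boot a Linux guest on a Linux host via libvirt/KVM
    virt-install \
      --name dev-vm \
      --memory 4096 --vcpus 2 \
      --disk size=20 \
      --cdrom ubuntu-22.04-live-server-amd64.iso \
      --osinfo ubuntu22.04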
For WSL 1, I kinda agree. It was basically the Posix Subsystem re-implemented and improved. Technically amazing, and running parallel to Windows without virtualization. Too bad it had so many performance issues.
But WSL2 is just a VM, no more, no less. You can do the same with VMware Workstation or similar tools, where you even get a nice accelerated virtual GPU.
> WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously.
This is why you pay karma tax. This statement is so clearly representative of a falsity.
My linux can run multiple linuxes as well without VM overhead. Something Windows can’t do. Furthermore WINE allows me to forgo running any vm to run windows applications.
I developed on WSL for 3 years and consistently the biggest issue was the lack of ability to use tooling across the shared OSes.
Your karma depleting statements are biased, unfounded, and it shows as you do not really provide counter evidence. That’s why you lose karma.
Except Wine can't cover all of Windows (partly through Windows' own fault); I can't run UWP apps, for example. Windows is not a good operating system, but if you need it, WSL creates a far more intuitive working environment. So even if you can run multiple Linux OSes on Linux, you can't run Windows as easily as you can run Linux on Windows. So the OP's statement is not incorrect.
I've never seen a good UWP app. My biggest issue with Wine is that it can't run anything that needs a driver. That means any hardware with garbage Windows-only control software (hello Roboteq) needs a proper VM.
I totally agree and will join you on the hill. I used Linux exclusively at my job for two years straight and now do the same job from Windows 11 with WSL 2 on the same physical ThinkPad T41 laptop. Windows gets the basics right more than Linux did (sleep states, display, printing). And as the OP notes, it makes it easy to run multiple distributions and never fear that something I install or reconfigure within the WSL2 terminal will screw up my host. Having a different OS improves isolation in this regard - not at a technical level, but against me making mistakes and entering commands in the wrong place, since Windows does not accept Linux commands. JetBrains and VSCode both have great support for WSL2.
How would a 3% layoff in a big company affect anything unless they want to specifically axe some project? It’s just lubrication for the machine. 3% is less than nothing compared to the bloat in any bigco and let me tell you Microsoft’s reputation is not the leanest of the bunch.
They're not uniform across every team and project. Certain projects can be hit very hard while others are not. Outside looking in, all we can really do is speculate.
Sure we can speculate that 3% is not news. Again, it’s a one way conclusion: I concede if they want to axe a project deliberately, that could show up in the layoff, but projects won’t incidentally get impacted because of a 3%. The causal relationship would be the opposite.
If you mean stack ranking, the hard 20/70/10 bucketing was in force >15 years ago, but even then it didn't mean that those 10% automatically get fired.
It's really hard to cut actual bloat when running layoffs, because the more you work the less time you have to do politics and save your ass, so the less productive type of people tend to be pretty resilient to layoffs.
Have you worked at any of these large companies? It’s really easy actually (practically, not emotionally). It’s usually very obvious and there’s consensus who the bottom 10% are. Politics would affect promotions much more than layoff.
You believe what you want to believe. That’s the lie of the century. Every single layoff is performance based to some degree. Sure you want to consolidate a couple orgs or shut down a project or an office and you lump that together with your performance based stuff.
(Also I was responding to a more generic comment saying doing layoff is bad and makes org more political.)
You’re being sarcastic but it is for sentimental reasons (for the immediate manager and team who doesn’t want to make the hard choices and do the work) as well as the empire building reasons (managers’ universal dick measuring contest is org size [1]).
[1]: the real debate is not “who’s my lowest performer” for each manager. It is about why I should cut rather than my sibling manager. If you force everyone to cut one person they all know who it will be.
It's funny because in this response you are arguing exactly the same thing as I was in my first comment: team sizes are always defined by political reasons (at manager's level, I didn't mention that above because I thought that was obvious, but here we are).
The duds who are the best at telling stories about how important their project is are the ones who can get the budget to keep their team growing, and they are also the ones most likely to defend their interests in the event of a layoff. Because, as you noted yourself, it is never about every individual manager selecting their lowest performers and laying them off, and much more about individual managers (at all levels) defending their own perimeter.
And in practice, being good at this type of games isn't a good proxy for knowing which managers are good at fostering an efficient team under them.
The point I am making is it does not matter if you are cutting 3%. Sure you might end up taking out a third of the bottom 0-10% instead of 0-3% but what difference does it make? It won't be a material political concern for your 50+ percentile employee base.
It does, however, make a difference on the promotion side.
> Sure you might end up taking out a third of the bottom 0-10% instead of 0-3% but what difference does it make?
That's not how it works! You'd have entire projects or department being sacked, with many otherwise very competent people being laid off, and projects deemed strategic being completely immune from layoff.
And even inside departments or projects, the people best seen by management will be safe, and the people more focused on actual work will be more at risk.
The harsh truth is that an organization simply has no way to even know who the "bottom 10% performance-wise" are. (And management assessments tend to correlate negatively with actual performance.)
Mac IS the state of the art in developer experience. The only annoyance was virtualisation on ARM, but with UTM/Multipass/VirtualBox available now, it is the best.
If you're running too many containers, though, a Linux box would be preferable.
I still can't believe how people use Windows as their main system, with all the extremely invasive telemetry and bogus "AI" features that hog a LOT of resources at idle.
I'm not the person you're responding to, but i see their 'ok' reply as being valid.
I, too, use Windows for audio recording: I rather suspect anyone that does knows about what's available, both for Mac and Linux.
And, have chosen [reasons, amongst many, being: cost, availability, trust, familiarity...etc, etc] 'not those paths'.
For now.
That's fine. But this is still a place to discuss things no? Also it wasn't even his comment I replied to...
If someone disagrees or agrees with my comment they should feel free to state their points or just ignore it. Maybe he has good points that speak against Mac
I am forced to use Windows at work. Surprisingly many large enterprises use Windows, mostly because of their dependency on Microsoft Office and Exchange. I'm really happy that WSL exists so I have to deal as little with Windows as possible.
At home I still need to have a native Windows laptop because of one application that I use a lot (a backgammon analyser) that runs natively on Windows and is heavily cpu driven. I could run it in a VM but the performance penalty is just too heavy.
I play video games that require an anti-cheat, so there's that. But honestly, it's fairly easy to deal with that. You can use the Windows IoT LTSC version and use one of those trusted debloaters. I haven't seen any AI features or bloat in a very long time.
I am not that proficient; I tried it three times. The first hurdle is finding a distro - all that research about which ones come better pre-configured and which would be less buggy on your hardware can be a pain.
The thing that attracted me to Linux is the file system and customization. I just wanted to daily drive it, not really for any work. But bugs are just a reality using most DEs available.
In my case it was even performance-related once: I spent a whole day trying to find out why Kubuntu was slower than Windows on my laptop. It ended up being one line in some config file that forced battery-saving performance, and when I hit the same issue months later after reinstalling the system, I couldn't find the post online again.
Believe it or not, it's not all sunshine and rainbows, I just realized I use Windows more and more in my dual boot system, so I gave up on using Linux after that.
It is definitely in win11. Might be less on win10. Maybe try using something like simplewall [0], which shows prompts for every network request that phones home
Few people know what Linux is. Most only know that there are "macs" and "pc" and haven't used a personal computer privately at all since they got their first ipad in 2016.
Some people don't know that computers can be fast. Others modify their system to remove/neutralize all this crap. There are even tools to automate that.
It’s not even that difficult to manually remove these from Windows. It’s like a handful of configs. It’s way easier to do that than to make (probably) any Linux distro work with my current and previous setups. Which, btw, I could never achieve even with a considerable amount of tinkering.
That's the only part I care about dang. I still use WSL1 and have done a number of interesting hacks to cross the ABI and tunnel windows into "Linux" userspace and I'd like to make that easier/more direct
Not a Windows user, but I think WSL is great. I see a lot of Windows user criticising Linux for... essentially not looking like Windows. "Linux Desktop will never reach mass adoption unless it [something that boils down to 'looks more like Windows']".
The thing is: I consider myself a real Linux user, and I don't want it to look like Windows. And I hate it when Windows people try to push Linux there, just because they want a free-with-no-ads version of Windows.
In that sense, if WSL can keep Windows users on Windows such that they don't come and bother me on Linux, I'm happy :-).
Not a Windows user, but I hate WSL. It looks like Microsoft realized they would lose a generation of developers to Linux, so they implemented Linux inside their OS. Now people won't see the joys of recompiling the kernel :)
I stopped seeing the joys of recompiling the kernel (and the consequent server reboots, which could easily take 10 minutes, and that's without IPMI/KVM) around 2009-2010.
WSL isn’t Linux implemented in Windows. WSL 1 was, but it is not the good version of WSL that most use.
WSL 2 is a special purpose VM which has ties into Windows in a few key ways to make interoperability easier. You can run a program on Windows and pipe its output to a Linux program for example. Windows and WSL can trade system RAM back and forth as needed. Networking between the two is very smooth.
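The interop really is that direct. Inside a WSL shell, Windows executables on the PATH just run, and pipes cross the boundary in both directions; for example:

    # Windows -> Linux, from a WSL shell
    tasklist.exe | grep -i code
    # Linux -> Windows, from PowerShell or cmd:
    #   wsl ls -la | findstr .txt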
You can recompile the kernel for WSL all you want, and many do. Microsoft make their changes public as required by the GPL. You can use your own kernel without anything from Microsoft. You can easily create your own WSL distributions, customized to your hearts content.
It’s more than the sum of its parts, really. Feels that way to me, anyway.
People just want to bash Windows left and right. But no other OS in history has been this mature at handling GUIs and providing flexibility, customisation, etc.
Before I say anything, Windows 11 is bad.
I remember playing with Win98 and XP. I would modify many, many registry settings and mod binary files to do something with games, and you could access all sorts of weird hardware which only had drivers for Windows!
Windows 98-7 were best for learning stuff about computers (inner workings etc).
I remember trying to hard-delete the System32 folder to remove viruses (on XP); it deleted lots of files and the system continued to run!
WSL1 got my hopes up that we were on the path to Windows supporting the Linux user-space API, but then it was cancelled and replaced with a virtual-machine-based solution that I didn't need WSL2 to implement myself (with more flexibility and capabilities).
I'd much prefer a proper compatibility layer that converts Linux system calls to their equivalent Windows calls, with those calls exposed from the Windows kernel itself.
That way I could just run Linux applications, bash, zsh and development tools directly on top of Windows without needing any "remote development tools" in my IDE or whatever.
Something closer to MSYS2/git bash/busybox for win - but where tools can ignore their Windows-specific stuff like the filepath separator.
Buying Apple hardware with the intent on running anything but what Apple wants you to run is setting yourself up for a battle, including trying to use non-Apple hardware with the hardware you purchased. It's why I'm not spending any personal money on Apple hardware.
Could've been worse. At least they're not locking you out of your device like on iPhones and iPads. They don't stop you from running Asahi, they just aren't interested in helping anyone run Asahi.
Microsoft, on the other hand, sells laptops that actively prevent you from running Linux on them. Things get a little blurry once you hit the tablet form factor (Surface devices run on amd64, but are they really that different from an iPad?) where both companies suck equally, though Microsoft also sells tablets that will run Linux once someone bothers to write drivers for them.
Apple might not be releasing documentation on their peripherals, but they went out of their way in making it possible in the first place.
Apple could just have gone and do a straight port of the iOS boot procedure to their ARM Mac lineup... and we'd have been thoroughly screwed, given how long ago the latest untethered bootrom exploit was.
Or they could have pulled a Qualcomm, Samsung et al and just randomly change implementation details between each revision to make life for alt-os implementers hell (which is why so many Android BSP dumps are the way they are, with zero hope of ever getting anything upstream). Instead, to the best of my knowledge the UART on the M series SoCs dates back right to the very first iPod.
The fact that the Asahi Linux people were able to create a GPU driver that surpasses Apple's own in conformance tests [1], despite not having any kind of documentation at all is telling enough - and not just of the pure genius of everyone involved.
Macs are almost universally seen as developer computers. If you are going to be developer friendly, then you need to do things that are developer friendly. Asahi project is 80% reverse engineering stuff.
Parallels also has a commercial offering that does some nice GUI-level integration with both Windows and Linux VMs.
My understanding is that these are both built on top of some Apple API, and Parallels actually collaborates with Apple on making it work for their use case. So it's not the first-class support that you get from Microsoft with WSL, but it's still pretty good.
Eh, I have a Mac but end up SSHing into some Linux machine pretty often. There are too many differences between the two unless I'm using something like Python or JS. Docker helps too, but that's Linux.
Also, it's really annoying that macOS switched to zsh. It's not a drop-in for bash. Yeah you can change it back to bash, but then any Mac-specific help/docs assume zsh because defaults matter. Pretty fundamental thing to have issues with.
Apple has gone out of their way to build first party virtualization APIs in their OS to boot a Linux VM directly by specifying kernel and initrd on disk. That would be a direct point of comparison to WSL, not Asahi. What are you talking about?
You can’t? Just install UTM for a full VM one-click install (easier than wsl --install and two reboots) or any number of docker thingies that people build for the Mac.
I don't know why your experience was poor. At least under Apple Virtualization for ARM64 Linux, the performance has been great. Perhaps as the other commenter suspects you might be running x86 Linux under software emulation?
In any case, I've run bare metal Asahi on M1 (and M1 Pro) and they work amazingly well too. Installation was quite straightforward too.
Maybe just skill issue, but I was using it for developing a large rust project. Compilation time was way worse than native and memory was a problem since I was allocating half of total to the vm and I only had 16 gb.
Also the network would cut out and I would have to restart the vm periodically.
Just using a linux laptop is way better but then I don’t have a nice touchpad, excellent battery life etc.
I’ve done this numerous times and it’s never been onerous and everything has worked flawlessly. It’s also not slower than native if you’re running an ARM build of Linux.
Almost all relevant x86 distros have arm64 builds these days as well and once you enable Rosetta 2 you will be able to run x86 binaries/docker containers on them, but the Linux kernel remains arm64.
Otherwise, it is just using qemu interpreter to emulate x86 in software.
Biggest thing is that I don't want to get stuck rebuilding software from source because the package maintainer didn't make an arm64 binary.
Rosetta2 on my host OS enables the guest OS to run x86 binaries... that's interesting, I'll try it too, but I'd be surprised if it's truly hassle-free. At the very least would have to reconfigure apt-get to allow x86 binaries. Then idk about dynamically-linked x86 libs (I'm not a Linux expert).
I'm sure you can make apt work in a multilib world, but the mainstream way it generally works well is you stick to distro's arm64 packages (pretty comprehensive; arm64 is not some esoteric arch) for the most part and they work just great and you use/build docker containers that might be x86 and that is supported by `--platform` switch once you get basic stuff configured.
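Concretely, once the x86 pieces are configured in the arm64 guest, the container side is just (image tag as an example):

    # run an amd64 container on the arm64 guest; x86-64 ELFs get handed to the binfmt interpreter
    docker run --platform linux/amd64 --rm ubuntu:24.04 uname -m   # prints x86_64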
I suspect if your use case is more esoteric, it's likely not going to be worth the time. I'd just SSH to a second linux box.
To correct your statement on one key thing: Rosetta2 in this case is not running on host OS. Apple provides a Linux version of Rosetta 2 which runs inside your VM and is registered as a binfmt interpreter for ELF x86 binaries[1]. This is similar to how `wine` or `java` or `mono` can execute their respective binaries directly.
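Roughly, the flow is a hedged sketch like the one below. The virtiofs tag and mount path follow Apple's docs as I remember them, the magic/mask are the standard x86-64 ELF header bytes, and the binfmt-support package is assumed installed in the guest:

    # inside the Linux VM: expose Rosetta via virtiofs, then register it for x86-64 ELF binaries
    sudo mkdir -p /media/rosetta
    sudo mount -t virtiofs rosetta /media/rosetta
    sudo /usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
         --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
         --mask  "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
         --credentials yes --preserve no --fix-binary yes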
It's easier to dual boot Asahi than Windows. Secure boot and disk partitioning are two examples of roadblocks that are streamlined in the Asahi installation, but quite difficult on Windows
Let's be honest, nobody earnestly expected them to care about running native Linux in the first place. You knew what you got into when you bought the Mac.
Apple implementing iBoot is table stakes. They should have gone the extra mile, actually, and implemented UEFI the same way Intel did; but that would have made it too easy to support Apple Silicon on Linux. Sticking to their proprietary bootloader forces reverse-engineers to painstakingly guess at DeviceTree drivers until a bare-minimum boot environment is possible.
If Apple hadn't opened iBoot in some way then I don't know how they would handle a secure reinstall process. If that's "to actually help them" then they very clearly didn't try too hard. Without driver code or UEFI support they're basically telling the community to pound sand, and modern Asahi efforts reflect that.
Wow. In 2009, when it looked like Microsoft was the most closed company of all time, I was telling people at work that they should port Windows to the Linux kernel. What happened over the next 15 years, I don't think people would have believed if you had told them back then. Things have changed... a lot. Now granted, this isn't what I said they should do, but you know, eventually they might see the light.
Never see anything Microsoft does in the direction of open source as “they have seen the light”. It’s a trap. Claiming open source friendliness is the bait, Windows is the trap itself.
Yeah I remember when they bought Github and my coworker was telling me how they've turned a new leaf and want to support foss... nope, they wanted to train an AI on all the code there.
I personally don’t use it, pretty much just cause I’m comfortable with my current development environment, and nothing has spurred me to migrate in a while. I’ve been vaguely suspicious to see Microsoft rapidly gain such a huge market share with VS Code, but I don’t know any specific criticisms about it.
Sounds like the argument is that while it's technically open source, trickiness with the licenses makes it basically impossible to legally fork it into usable software. That sounds plausible to me; I'm no lawyer.
But isn’t Cursor a wildly successful VS Code fork, done legally? (I assume if it were in violation of licenses, Microsoft would have already destroyed them.) Seems like a glaring exception to this argument.
I'm not being sarcastic or funny when I ask this. Why isn't this called the Linux Subsystem for Windows? It seems like a Linux subsystem running on Windows. If it were the other way around (i.e., a Windows subsystem for Linux), I'd think that Linux would be the primary OS, and something like WINE would be the subsystem.
I think it's supposed to be read as "the Windows subsystem for [running] Linux [applications]". Back in the old days there used be a thing called Services For UNIX (SFU), which was a bunch of Windows tools and services that provided UNIX-like behavior. Then came Subsystem for UNIX Applications (SUA). And now it's WSL.
WSL1 was hobbled by needing to calculate Unix Permission numbers and hardlink counts for every file. On Windows, you need to create a handle to the file to get those things. That's a file open on every file whenever you list a directory.
Maybe someone will finally build my dream: a WSL distro that I can also dual-boot natively. I'd love to switch between bare-metal Windows with WSL and bare-metal Linux with virtualized Windows at my leisure!
Parallels on Mac did this in reverse a decade ago. You could dual boot windows and MacOS, or you could boot into your windows OS while running MacOS and access both file systems properly.
It's not about bugs, it's that users can do basically whatever they want in their WSL2 guest VMs and most endpoint security software has little control over it or visibility into it. It's a natural little pocket of freedom for users, which is great but undermines efforts to lock those systems down (for good or ill).
QEMU has win64 builds, and the guest OS can access SAMBA/NFS/SSHFS host shares. Getting a guest hypervisor to work is soft-locked on Home-licensed Windows, so options are often limited to Windows guests on Linux hosts.
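A hedged one-liner for that route (the disk image name is a placeholder; assumes the Windows Hypervisor Platform feature is enabled so the whpx accelerator is available):

    qemu-system-x86_64.exe -accel whpx -m 4096 -smp 2 -drive file=debian.qcow2,format=qcow2 -nic user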
In general, the utilities on posix systems heavily rely on a standardized permission and path structure fundamentally different than windows registry paradigms.
Even something as simple as curl... while incredibly useful on windows, also opens a scripting ecosystem that side-channels Microsoft signing protections etc.
Linux VM disk image files can be very small (some are just a few MB), simply copied to a physical drive/host later given there is no DRM/Key locks, and avoids mixing utf8 with windows codepage.
Mixing ecosystems creates a Polyglot, and that is going to have problems for sure... given neither ecosystems wants the cost of supporting the other.
Best method, use cross platform application ports supported on every platform. In my opinion, Windows should only be used for games and industry specific commercial software support. For robust system privacy there are better options now, that aren't booby-trapped for new users. =3
WSL is a stupid idea. Microsoft should just stop developing and maintaining its Windows kernel and build a Windows compatibility layer on top of Linux.
A "Windows Subsystem" is a concept that dates back to the original Windows NT line of operating systems. Historically, there've been a number of supported "Windows Subsystems", essentially APIs for the kernel. In Windows NT 3.1, there were multiple subsystems: Win32, POSIX, and OS/2, plus a separate one specifically for security.
While WSL2 isn't implemented as an architectural sub-system (it uses a VM instead), WSL1 was far closer to the original architecture, providing a Linux compatible API for the Windows kernel.
I think it's because WSL refers to the Windows subsystem that allows you to run Linux, not to the Linux system itself. You still have to download and install Linux on top of it, or at least you did the last time I used it a few years ago.
There may also be some trademark law precedent that forces this naming convention. Even on the google play store, if you have 3rd party apps for something, it's always "App for X", the name cannot be "X app".
I think you always can. In the past you may lose some features / have some bugs. For recent kernel versions (>= 6.6) the only patches WSL kernels have is dxgkrnl + some hacky fixes for clock sync. Others are all in upstream already. So you'll just lose WSLg / CUDA passthrough and nothing else now.
Of course, there might be some regressions. They are usually only fixed (upstream) after WSL kernel gets upgraded and it starts to repro in WSL.
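For reference, pointing WSL2 at a self-built kernel is a one-line config (the path below is an example):

    # %UserProfile%\.wslconfig
    [wsl2]
    kernel=C:\\Users\\me\\vmlinux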
Their kernel modifications and patches are public, and some of them have been upstreamed long ago. You'll need to compile your own to get the benefit, but I don't see why you wouldn't be able to use your kernel of choice.
Of course, if you want the native integration WSL offers, you'll need to upgrade the Linux driver/daemon side to support whatever kernel you prefer to run if it's not supported already. Microsoft only supports a few specific kernels, but the code is out there for the Linux side so you can port the code to any OS, really.
With some work, this could even open up possibilities like running *BSD as a WSL backend.
A version that tracks the underlying distro better, or even closer to mainline. The current WSL2 kernel is 6.6, while upstream is at 6.12/6.15. Debian Trixie will be 6.12.
strace shows that the sleep program uses clock_nanosleep, which is theoretically "passive." However, if the host suspends and then wakes up after the sleep period should have ended, it continues as if it were "active."
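This is easy to reproduce; the comment line below shows the kind of output coreutils' sleep typically produces:

    strace -e trace=clock_nanosleep sleep 2
    # clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=2, tv_nsec=0}, NULL) = 0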
No. WSL2 is a Linux VM. It doesn't expose Windows API internals or implementation details. It uses normal, already well-documented public ones. Wine and ReactOS can already use the publicly available documentation and they are still behind on many such APIs' implementation. Windows is a big OS. It takes serious man power to implement many things.
Microsoft doesn’t like open source software. This is cosplay.
Microsoft releases the important parts of VS Code under proprietary licenses. Microsoft doesn’t release the source code for Windows or Office or the github.com web app under free software licenses.
Don’t get it twisted. This is marketing, nothing more.
WSL is amazing. Nothing short of it. My laptop's SSD controller died at a conference. I bought a 200-dollar netbook running Windows, installed WSL, downloaded the mdadm packages, and was able to open the encrypted drive with cryptsetup, mount the ext4 partition, then chroot into it; then my home drive worked like it did on my old laptop.
I did this in about 20 minutes, with the help of chatgpt.
In the end I was able to keep working through the trip and provide some demos to clients which landed us some big deals.
Copying files between Windows and WSL is EXTREMELY slow. I really wanted to give Windows a chance but the slowness completely destroyed that chance, along with the lack of hardware acceleration for GUI applications.
A lot of people here are saying nice things about having a dev environment on WSL. Honest question: how do you deal with those minor but insufferable Windows quirks like 0d0a line endings, selective Unicode support, byte-order marks and so on?
While right now I enjoy the privilege to develop on Linux, things may change.
The worlds don't really cross. If I'm using WSL to develop software using a Linux toolchain I'm not using any other Windows tools, other than VS Code, in that environment. I could but I just don't find the need. I could literally be remoted into an actual physical Linux box and the experience would be nearly identical.
Occasionally I'll use File Explorer to manage my Linux file system or Linux tools on the Windows file system but really not the degree in which any quirks would be an issue.
Maybe you missed the point - in WSL, you are in a Linux/Unix-based environment. So your Vim or other editors and tools just work like on regular Linux; the Windows part can stay invisible and uninvolved until needed.
Other tools, like the VS Code IDE, have special handling (an extension) to work _inside_ WSL and only keep the GUI frontend on the Windows side (very close to how it works over SSH).
On the other hand, I quite often use "explorer.exe ." inside WSL to open File Explorer and jiggle around with files downloaded/created/modified (say with sed) in WSL and it works fine too.
Or I use the MarkText markdown editor on a folder inside WSL that is some git repo, and I add docs/instructions there.
I've been using WSL since about 2017 on insider builds - WSL1 for occasional cases and WSL2 as a daily driver for ~5 years. It's been nice for me, and no need for Linux on the desktop.
> Honest question: how do you deal with those minor but insufferable Windows quirks like 0d0a line endings, selective Unicode support, byte-order marks and so on?
exactly why I use WSL, lf-only line endings, UTF-8, everything a basic debian bookworm iso can provide, plus docker with GPU Access
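And for repos that do get touched from both sides, a common convention (not WSL-specific) keeps CRLF from creeping in:

    git config --global core.autocrlf input         # commit LF; don't rewrite on checkout
    printf '* text=auto eol=lf\n' > .gitattributes  # pin it per-repo as well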
While I had to, I enjoyed using WSL1 on Windows. It was disappointing to find WSL2 has no user upside; it just discards the benefits of WSL1 in favor of the simpler implementation.
Shame for all of the people who worked hard on WSL1 only to be virtualized into nonexistence.
Anybody know what the deal is with neither Oracle nor Microsoft trying to make it possible for VirtualBox and WSL2 to coexist without severe performance impact? What the heck is the issue that neither side knows how to solve? Or is there a deliberate business decision not to solve it?
It's because WSL2 is using HyperV behind the scenes, and HyperV is a Type 1 (Native Hypervisor), running directly on top of hardware.
When you activate it, it also makes your host Windows OS virtualized as well, albeit with native access to some components like the GPU.
That's why other Windows hypervisors (VirtualBox, VMware Workstation) experience one issue or another when WSL2 is activated: more abstraction is happening and more things can go wrong.
That makes no sense. Are you actually familiar with the technical issues or are you hand-waving? WSL2 itself is a Linux VM running in top of Hyper-V. Heck, as far as I know other Hyper-V VMs run fine alongside WSL2 too. Why can't a VirtualBox Linux VM do the same?
That doesn't in any way explain why VirtualBox couldn't be made to run on top of Hyper-V. You might as well tell me Linux apps can't be made to run on Windows because Windows isn't Linux.
> Anybody know what the deal is with neither Oracle nor Microsoft trying to make it possible for VirtualBox and WSL2 to coexist without severe performance impact? What the heck is the issue that neither side knows how to solve? Or is there a deliberate business decision not to solve it?
Oh, I thought your parent post was asking for a general overview of why VirtualBox has a severe performance impact when WSL2 is activated. I posted the reason - multiple abstractions conflicting with each other - and there you go.
> Why VirtualBox couldn't be made to run on top of Hyper-V. You might as well tell me Linux apps can't be made to run on Windows because Windows isn't Linux
AFAIK it's already possible but still experimental in VirtualBox; it's also a hard issue to solve with tiny ROI, I suppose. And why would they spend time fixing a slowness that only impacts a small userbase like you?
> AFAIK it's already possible but still experimental in VirtualBox; it's also a hard issue to solve with tiny ROI, I suppose. And why would they spend time fixing a slowness that only impacts a small userbase like you?
It seems like you're just making guesses and don't actually know the answer? The reason I asked wasn't that I couldn't make the same guesses; it was that I had read online that there are technical obstacles here that (for reasons I don't understand, hence the question) they've had a hard time overcoming. i.e. "tiny RoI" or "small userbase" don't fully explain what I've read.
I was hoping someone would actually know the answer, not just make guesses I could have made myself.
I despise Windows 11 so much, but have to use it. I have a 24/7 box with Ubuntu running a couple of Linux and Windows VMs and that's the way I like it. I don't touch the Ubuntu host except for when I need to reconfigure it.
All development is done on Windows laptop via SSH to those VMs. When I tried using Ubuntu via WSL, something didn't feel right. There were some oddities, probably with the filesystem integration, which bothered me enough to stop doing this.
Nevertheless, I think it's really great what they now did.
Now all that's missing is for them to do it the other way around: create a 100% Windows-compatible Wine alternative.
NixOS! [1] You can keep the entire system config in a single git repo. For me, it's far easier to work with than, let's say, Ubuntu. But beware: it has a steep learning curve.
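The day-to-day loop is short; a sketch (the repo URL and host attribute name are hypothetical):

    # rebuild the running system straight from the flake in the config repo
    git clone https://example.org/me/nixos-config && cd nixos-config
    sudo nixos-rebuild switch --flake .#myhost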
WSL is amazing if you work for a non-tech company that is a Windows house but want to do development in Linux. It's seamless (at least to my middling ability) with VS Code.
It's one of those things that make sense if you understand the details.
Windows NT itself has had an architecture supporting environment subsystems for things like Win32, POSIX, OS/2 for decades. See https://en.wikipedia.org/wiki/Architecture_of_Windows_NT for more details. Later, it made it relatively easy to host Linux and Android too.
You can imagine they commonly called these things "subsystem for XYZ". So "Windows subsystem for Linux" made sense for those in the know.
Does sound weird outside of the world of NT though
That would not be a sound strategy. Microsoft chose to make the OS a commodity, but not services and platforms, as part of the strategy of commoditizing complements like developer tooling.
IANAL, but how is this license different from, say, the older BSD license - thought that was "have fun, do what you want, post a notice"? It doesn't say anything regarding ownership of changes, nor how to add copyright for such changes... Does this mean that MS is looking to own changes, or will there be a string of extra copyright notices for each (significant?) change?
The MIT license scrunches the first two clauses of the 3-clause BSD license into a single clause, and omits the third clause (the nonendorsement clause, which is already generally implied). As a practical matter, most of the basic "simple" open source licenses are functionally identical.
But who owns the copyright to changes, and how is it recorded? I'm just suspicious as to how large companies who sell/rent software deal with open-source, free stuff...
That's not covered by the license; that's covered by the CLA (Contributor License Agreement), and in the absence of one (I don't know if there is one or not for this repository), the author retains copyright to their code as usual.
WSL in combination with the enshittification of Windows was the thing that finally convinced me to switch from Windows as a main driver to Kubuntu/Linux.
KDE Plasma is IMO the best graphical desktop environment at the moment, macOS included.
OT but the name irks me; Windows subsystem for Linux makes it sound like some sort of official Wine layer. It's a Linux subsystem for Windows if anything.
It makes it sound like Microsoft is giving some capability to Linux whereas it's the other way around.
> I still hope to see a true "Windows Subsystem for Linux" by Microsoft, or Windows becoming a Linux distribution itself and dropping the NT kernel to legacy.
> Windows is currently overloaded with features and lacks a package manager to only get what you need...
NT is a better consumer kernel than Linux. It can survive many driver crashes that Linux cannot. Why should Microsoft drop a better kernel for a worse one?
Is this a Wayland issue? This works fine for me on X. But yes, progress goes backwards in Linux. I had hope for the Linux desktop around 2005-2010, since then it only got worse.
If the Xorg server managing your $DISPLAY goes away, your X apps will also crash. Wayland combines the server and the parts that draw your window decorations into the same process.
Under Windows, everything including the GPU driver can crash, and as long as it didn't take the kernel with it (causing a BSOD), your applications can keep running.
I can restart window manager and compositor just fine in X. Also it is not generally true that X apps crash when the server goes away. This is a limitation of some client libraries, but I wrote X apps myself that could survive this (or even move their display to a new server). It is of course sad that popular client libraries never got this functionality under Linux, but this is a problem of having wrong development priorities.
Can you expand on this? I've used Windows 10 for 2-3 years when it came out and I remember BSODs being hell.
Now I only experienced something close to that when I set up multiseat on single PC with AMD and Nvidia GPUs and one of them decided to fall asleep. Or when I undervolt GPU too much.
Of course that depends on the component and the access level. RAM chip broken? Tough luck. A deep kernel driver accessing random memory like CrowdStrike; you'll still crash. One needs an almost microkernel-like separation for preventing such issues.
People that comment things like this probably have their heart in the right place, but they do not understand just how aggressive Microsoft is about backwards compatibility.
The only way to get this compatibility in Linux would be to port those features all over to Linux and if that happened the entire planet would implode because everyone would say “I knew it! Embrace Extend Extinguish!” At the same time.
I agree. For years I supported some bespoke manufacturing software that was written in the 80s and abandoned in the late 90s. In the installer, there were checks to see what version of DOS was running. Shit ran just fine on XP through W10 and server 2016. We had to rig up some dummy COM ports, but beyond that, it just fuckin worked.
IBM marketed "OS/2 for Windows" which made it sound like a compatibility layer to make Windows behave like OS/2. In truth it was the OS/2 operating system with drivers and conversion tools that made it easier for people who were used to Windows.
Untrue. OS/2 for Windows leveraged the user's existing copy of Windows for OS/2's Windows-compatibility function, instead of relying on a bundled copy of Windows like the "full" OS/2 version did.
OS/2 basically ran a copy of Windows (either the existing one or the bundled one) to execute Windows programs side by side with OS/2 (and DOS) software.
It was previously called the Windows Subsystem for Android before it pivoted. It had a spiritual predecessor called Windows Services for UNIX. I doubt the name had been chosen for the reasons you say, considering the history.
That said, to address the grandparent comment’s point, it probably should be read as “Windows Subsystem for Linux (Applications)”.
That's not what I say, that's what the former PM Lead of WSL said. To be fair, Windows Services for UNIX was just Unix services for Windows. Probably the same logic applied there back then: they couldn't name it with a leading trademark (Unix), so they went with what was available.
It was called Project Astoria previously. Microsoft releasing the Windows Subsystem for Android for Windows 11 is news to me. I thought that they had killed that in 2016.
Astoria and WSA are different things. Sort of. WSL and WSA both use the approach that was proven by Astoria. That approach was possible since the NT kernel was created, but no one within Microsoft had ever used that feature outside of tiny pieces of experimentation prior to Astoria. Dave Cutler built in subsystem support from the beginning, and the Windows NT kernel itself is a subsystem of the root kernel, if I am remembering a video from Dave Plummer correctly.
Anyway, Astoria was an internal product which management ultimately killed, and some of the technology behind it later became WSL and much later, WSA. WSA's inital supported OS was Windows 11.
Microsoft being Microsoft, they artificially handicapped WSA at the outset by limiting the Android apps it could run to the Amazon App Store, because that's obviously the most popular Android app store where most apps are published. [rolls eyes] I don't think sideloading was possible. [rolls eyes again]
I don't work for Microsoft and I never have; I learned all of this from watching Windows Weekly back when it was happening, and from a few videos by Dave Plummer on YouTube.
I wonder if companies open-source stuff mainly as part of a bigger strategy which primarily benefits them. Could it be a way to get access to a pool of free, contributing talent?
You mean like StarOffice being open sourced as OpenOffice to attempt to undermine Microsoft Office revenue a couple of decades ago? To quote Bugs Bunny, "Myeah, could be..."
why would companies not do things that benefit them? and if it's meant pessimistically, let me take you back to a much worse time when Microsoft didn't open source anything
Why was this flagged? This isn't even a secret, a lot of SaaS companies will open source parts of their offerings to increase adoption, making the money back when larger orgs now want to use it, and are willing to pay for enterprise support plans to get the service straight from the horse's mouth.
I think it's a fair exchange too, even as an individual I pay for plenty of smaller open-source SaaS services—even if they're more expensive than proprietary competitors—for the very reason that I could always selfhost it without interruption if SHTF and the provider goes under.
Would really be curious to hear the reason why, from an internal perspective.
I've seen a number of theories online that boil down to young tech enthusiasts in the 2000's/early-2010's getting hands-on experience with open source projects and ecosystems since they're more accessible than enterprise tech that's typically gated behind paywalls, then translating into what they use when they enter the working world (where some naturally end up at M$).
This somewhat seems to track, as longtime M$ employees from the Ballmer-era still often hold stigmas against open source projects (Dave's garage, and similar), but it seems the current iteration of employees hold much more favorable views.
But who knows, perhaps it's all one long-winded goal from M$ of embracing, extending, and ultimately extinguishing.
The same reason Rome didn’t fall. It simply turned into the Church.
MS isn’t battling software mfgs because they have the lock on hardware direction and operating systems so strongly that they can direct without having to hold the territory themselves.
First impressions matter most.
With Microsoft having either Windows NT with proper UNIX support, or real UNIX with Xenix, there would be no need for Linux, regardless of it being free beer.
Whatever computer people would be getting at the local shopping mall computer store already had UNIX support.
Let's also not forget that UNIX and C won over the competing timesharing OSes exactly because AT&T wasn't allowed to sell it in the first place. There was no Linux in those days, and had AT&T not sued BSD, hardly anyone would have paid attention to Linux. Yet another what-if.
IBM z/OS is officially a Unix (a very weird Unix which uses EBCDIC): it passed the test suite (an old but still valid version, which makes the certification somewhat outdated) and IBM paid the fee to The Open Group, so officially it is a Unix. (They also recently added a partial emulation of the Linux namespace syscalls (clone/unshare/etc.) in order to port K8S to z/OS, but that's not part of the Unix standard.)
If Microsoft had wanted, Windows could have officially been Unix too: they could have licensed the test suite, run it under their POSIX/SFU/SUA subsystem, fixed the failures, and paid the fee, and then Windows would be a Unix. They never did, not (as far as I'm aware) for any technical reason, but simply because, as a matter of business strategy, they decided not to invest in this.
NT underlies the majority of M365 and many of the major Azure services. Most F500s in the US will have at the very least an Active Directory deployment, if not other ancillary services.
IIS and SQL Server (Win) boxes are fairly typical, still.
I am not suggesting NT is dead on servers at all. I am positing that it would be dead had it not been for owning the majority of desktops. Those use cases are primarily driven as ancillary services to the Windows desktop[1], and where they have wider applicability, like .NET and SQL Server, they have been progressively unshackled from Windows. The realm of standalone server products was bulldozed by Linux; NT wouldn't have stood a chance either.
[1]: In fact, Active Directory was specifically targeted by the EU antitrust lawsuit against Microsoft.
For all large corps, users sit at 1990s-style desktop computers that run Win10/11 and use Microsoft Office, including Outlook that connects to an Exchange server running on Windows Server. I'm not here to defend Microsoft operating systems (I much prefer Linux), but they are so deeply embedded. It might be decades before that changes at large corps.
WSL 1 works fine. I much prefer it over 2 because I only run windows in a VM and nested virtualization support isn't all there.
Also feels a lot less intrusive for light terminal work.
That would not be unique; it's what BSD has done for Linux compatibility basically forever.
BSD and Linux are in the same bucket, so that doesn't count, not any more than MacOS compatibility with Linux. Windows is the odd one out.
I don't think it is fair to brush it off under "same bucket; doesn't count." The syscalls are still different and there's quite a bit of nuance. The lines you're drawing are out of superficial convenience and quite arbitrary. In fact, I'd argue macOS/Darwin/XNU is really Mach at its core (virtual memory subsystem, process management and IPC), with the BSD syscalls simply an emulated service on Mach, which is quite different from traditional UNIX. The fact that as a user you think of macOS as much more similar to Linux is not really reflective of what happens under the hood. Likewise, NT has very little to do with the Win32 API in its fundamentals, yet Win2k feels the same to the user as WinME; under your framing, you'd same-bucket those.
> Likewise, NT has very little to do with the Win32 API in its fundamentals, yet Win2k feels the same to the user as WinME; under your framing, you'd same-bucket those.
I probably would, in this context. Well, maybe not WinME, because that was a dumpster fire. But any Windows coming down from the NT line, which is what's relevant in the past 20 years, sure. Same bucket.
Solaris did as well.
The essential problem was that critical Windows APIs like CreateProcess and the NTFS file system were far too slow to be used in UNIX-like ways. If you tried to run git or build things in WSL1 - a key use case - it was way slower than doing so on native or VM Linux.
Performance was one problem, but IMHO the biggest was that mmap semantics were inherited from the NT side and made a lot of applications crash (an mmap could only be as large as the file's current size, as on Windows, while Linux/BSD semantics allow a mapping larger than the file, usable without constant remapping as the file grows).
They didn't prioritize fixing it until a late stage, barely before WSL 2 came out. Sometimes I do wonder if they made a premature decision to move to WSL 2, since quite a lot of basic applications/runtimes were crashing for lack of this fix (naturally, a lot of other newer Linux APIs like io_uring would probably have turned it into an API-chasing treadmill that they just wanted to circumvent).
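(To make those semantics concrete, here's a minimal C sketch of the Linux/BSD behavior described above; the path and sizes are illustrative. The mapping is created larger than the file, and after ftruncate() grows the file, the new pages are writable through the same mapping, no remapping needed. Per the comments above, this is the pattern early WSL1 rejected.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/tmp/grow.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        /* Map 1 MiB even though the file is only 4 KiB. On Linux/BSD the
           call succeeds; touching pages past EOF would SIGBUS until the
           file grows, but no remapping is ever required. */
        size_t reserve = 1 << 20;
        char *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Grow the file: pages of the existing mapping become usable. */
        if (ftruncate(fd, 64 * 4096) < 0) { perror("ftruncate"); return 1; }
        strcpy(p + 10 * 4096, "written through the original mapping");
        printf("%s\n", p + 10 * 4096);

        munmap(p, reserve);
        close(fd);
        return 0;
    }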
> (an mmap could only be as large as the file's current size, as on Windows, while Linux/BSD semantics allow a mapping larger than the file, usable without constant remapping as the file grows)
I thought you could do it using ntdll functions, no?
https://www.jeremyong.com/winapi/io/2024/11/03/windows-memor...
Good to know. Still, the obscurity of this function (or its semantics) left WSL1 incompatible for a long time. (Also, skimming it, the article touches on 0-sized mappings being an issue?)
Regardless, this left WSL1 with fatal incompatibilities for a long time; IIRC basic stuff like the RPM system, or something similarly fundamental for some distros/languages, relied on it. And once WSL2 existed, people just seem to have moved over.
Win32 APIs like CreateProcess suck because they have to spend so much time setting up the stuff that allows Win32's application model to mimic that of 16-bit Windows, which was cooperatively multitasked. The NT kernel is much faster at creating processes when it doesn't need to worry about that stuff.
As for NTFS: it's not NTFS specifically, it's the way the I/O system is designed in the NT kernel. Imagine any call from outside that layer transitioning through a stack of filter drivers before actually reaching the implementation. Very powerful stuff, but also very bad for performance.
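(If you want to put a number on the process-creation cost being described, a rough Win32 C sketch along these lines will do; the child command and iteration count are arbitrary, and it times the whole spawn+exit round trip, not CreateProcess alone.)

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        const int runs = 50;
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        for (int i = 0; i < runs; i++) {
            char cmd[] = "cmd.exe /c exit";   /* trivial child process */
            STARTUPINFOA si = { sizeof(si) };
            PROCESS_INFORMATION pi;
            if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                                CREATE_NO_WINDOW, NULL, NULL, &si, &pi)) {
                fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
                return 1;
            }
            /* Wait for the child to exit before timing the next spawn. */
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
        QueryPerformanceCounter(&t1);
        printf("avg spawn+exit: %.2f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart / runs);
        return 0;
    }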
Hm. I used Git on WSL1 for many years, with medium sized repos hosted on a Windows drive, and it worked great. When I moved to WSL2 Git became a whole lot slower - it now takes about 5-8 secs to execute 'git status' where before it was instant.
Are your git repos on NTFS? Under WSL1 it's the Linux filesystem that's slow, and under WSL2 it's NTFS access.
Yes, exactly, this is well known. So the parent post seems incorrect.
Windows actually created a new process type for this: Pico processes[1]. This allows WSL1 to perform quite a bit better than Cygwin on something like Windows XP.
1. https://learn.microsoft.com/en-us/archive/blogs/wsl/pico-pro...
I know -- I was super excited to see WSL1 and wished it had worked out. When NT started, there was the OS/2 personality, and back at that time I was excited to see NT as the OS to end all OSes (by running them all as a personality).
But WSL2 is freaking incredible, I'm super excited to see this and just wish the rest of windows would move to a Linux kernel and support bash natively everywhere. I was never a fan of powershell, sh/dash/ash/bash seem fine
>But WSL2 is freaking incredible
It's good. But if/when you start using it as your main work platform, nagging issues start cropping up.
The native Linux filesystem inside it cannot actually reclaim space. This isn't very noticeable if you aren't doing intensive things in it, or if you are using it as a throwaway test bed. But if you are really using it, you have to do things like zero out a bunch of space on the WSL disk and then compact it from outside, in the Windows OS.
Using space from your NTFS partition / drive isn't very usable: the performance is horrible, and you can't do things like put your docker graph root in there, as it is incompatible. It also doesn't respect capitalization or permissions, and I've had to troubleshoot very subtle bugs because of that.
Another issue is raw network and device access: it basically isn't possible. Some of these things are likely beyond the intended use of WSL2, in its defense. Just be aware before you start heavily investing your workflow in it. For these use cases a traditional dual boot will work far better and save you much frustration.
Or just go straight to Hyper-V, without all the WSL stuff.
Why not just use Linux then?
The whole point of Windows right now is having a kernel that a) does not shove the GPL down the device manufacturer's throat and b) cares about driver API stability, so that drivers actually work without manufacturer or maintainer intervention on every kernel upgrade.
People like to talk like GPL is evil, but it's underpinning more of the world than many people see.
And thanks to no ABI/API stability guarantees, Linux can innovate and doesn't care about what others might say. Considering Linux is developed mostly by companies today, the standard upkeep of a driver is not a burden unless you want to shove planned obsolescence down the throats of the consumers (cough Win11 TPM requirements cough).
The obvious answer: you can't. I work in a constrained environment with an IT department that provides the hardware and (most of) the software I develop on. I agree with all the WSL cheering here; it integrates almost seamlessly.
But you're asking the wrong question. It should be "why not use MacOS?" if you need a stable UI with UNIX underneath :).
That's another sound option, but for a person who doesn't like Homebrew stuffing /usr/local with tons of things, a lightweight Linux VM becomes mandatory at some point on macOS, too.
Other than that, with macOS plus some tools (Fileduck, Forklift, Tower, Kaleidoscope, to name a few), you can be 99% there.
Homebrew on arm64 installs to /opt/homebrew.
Oh. They changed it at last? This is good news. Thanks for letting me know.
Yup absolutely.
I use macos as my daily driver, but any real work on it happens on a linux container or VM. Using one of {cursor, vscode, windsurf} with a devcontainer is a much better approach for me.
Current macOS is going in the Windows direction with some of its architecture choices (default software that can't be uninstalled, settings panel mess, meaningless updates, ...).
Sure, but consider that some people might not be able to just make that choice in any given context.
I was working as a freelancer where a lot of my job meant interfacing with files other people made in software that only runs reliably on Windows or Mac (and I tried regularly).
So WSL provided me with a way to run Linux stuff without having to run a fat VM or dual boot. In fact my experience with WSL is probably why I run Linux as my daily driver OS in academia now, since here the context differs and a switch to Linux was possible.
Whether a thing is useful is always dependent on the person and the context. WSL can absolutely be a gateway drug to Linux for those who haven't managed to get their feet wet just yet.
I completely agree with you. WSL2 can be useful for many scenarios at its current form.
We tend to forget that "Horses for Courses" and "Your Mileage May Vary" applies way broader than we think.
> I was never a fan of powershell, sh/dash/ash/bash seem fine
It depends on what you're doing. PowerShell is incredible for Windows sysadmin, and the way it pipes objects between commands rather than text makes it really easy to compose pretty advanced operations.
However, if I'm doing text manipulation, wrangling logs, etc, then yes, absolutely I'm going to use a *nix scripting language.
I sometimes say, tongue in cheek slightly, that the best Linux desktop is Windows.
For anyone curious (as I was), the basic difference is that WSL1 implemented the Linux syscall table directly, whereas WSL2 actually runs Linux on top of some virtual drivers (hypervisor).
WSL 2 runs a full Linux kernel under Hyper-V. There are some out-of-tree or staging drivers included in Microsoft's Linux kernel derivative and they publish their kernel sources at https://github.com/microsoft/WSL2-Linux-Kernel.
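(For a concrete picture of the interface in question, a Linux program ultimately boils down to raw syscalls like the one in this minimal, Linux-only C sketch. Under WSL1 that trap was serviced by NT-side drivers translating it to NT primitives; under WSL2 it lands in a genuine Linux kernel running in the VM.)

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* Bypass libc's wrapper and hit the syscall table directly.
           This is the surface WSL1 had to reimplement call by call. */
        long pid = syscall(SYS_getpid);
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }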
i routinely upgrade my WSL2 kernel. Now on 6.6.87.1. Personally, I love WSL2.
Note that in recent versions of Windows, typically the bulk of Windows now runs under a hypervisor (i.e., "in a VM") as well: https://learn.microsoft.com/en-us/windows-hardware/design/de...
I had the same experience. Even installing linux is easier for me now. And with new spyware features of windows, there is really no incentive to use it
Could have written the exact same sentence when Vista came out. I still wonder when it's finally enough for the poor souls still stuck in windows
It’s finally enough for me at least. I’m skipping windows 11 and going to Linux instead.
When we die off I guess lol.
I've been using windows since I was 6 or 7. I currently work in a Mac environment and hate it. I worked in a linux one for 5 years. Nothing feels like the first language you learned I guess?
My home computer is windows and it'll be that way until windows stops existing.
Edit: when I say we I mean the people still on windows.
I have a video of me typing things into Microsoft Word at ~3-4 years old. I still hate Windows with a passion now (Mac too, tbh).
Definitely not for me. Was in Windows between 95 and XP, never looked back. Same for my first programming languages, glad I am not stuck still doing PHP and Java.
Switched my main Linux and desktop environment multiple times as well.
Honestly accurate for a dev work machine.
For a gamer... still not quite, but very close.
For the corps... it's a legacy issue, but that may slip away as a side effect of Trump destroying global soft power and making it a hard sell to remain on a US-led platform purely on opsec concerns; the spyware issue will add more weight to that.
I truly believe that if AAA titles didn't release exclusively for Windows, no one would have a good reason to use Windows besides inertia.
Businesses would. The problem with that is you have decision makers in said businesses who don't know any better, so Microsoft-all-the-things gets pushed down the line. Offices are all trapped on Windows 10/11 and using Teams/Outlook with Exchange/Entra/Azure chugging along in all its misconfigured glory. Heck, half the MSPs I work side-by-side with seem to only offer support on Windows machines.
It gets worse. When we go to the manufacturing side of the building, there's a high chance they're still using Windows 7. Yeah, still! And IT or Controls has no idea what to do with it since, well, it's still working. Is it secure? They don't know because the team is comprised of kids who memorized the CompTIA exams and use Windows 11 at home.
Trying to get the business world to switch to Linux with all that in mind is an impossible task. It's the same as asking an American city to rip out all its old infrastructure at once and replace it with new instead of patching the old. The cost and knowledge required for such a task is unthinkable, to them. Believe me, I've tried.
Microsoft was quite brilliant in the way that they shoehorned their way into the fabric of the way we do business, not just in the US, but on a global scale.
I would be very happy with Windows 7 on the manufacturing side; lots of CNCs that are still in use and supported by manufacturers are still on Windows 98.
I left some room for myself with "a good reason" :)
When a company forces you to use something out of inertia, it's probably not for a good reason.
Actually, regarding the "global scale": I'm not really sure it's true. I think MS has influence mostly in the US. Many EU and Asian companies I worked with were using OSX/Linux.
I'm in the EU and (nearly) every company runs windows (on desktop). Especially in larger organizations (there's plenty of windows servers still).
Yeah, I totally agree with what's being said here. It's a tough pill to swallow when you realize just how entrenched Microsoft is in the business world, and how difficult it would be to get everyone to make the switch to Linux.
I mean, think about it - most companies are still stuck on Windows 10 or 11, and they're using all those Microsoft services like Teams, Outlook, and Exchange. It's like they're trapped in this Microsoft ecosystem, and it's gonna take a lot more than just a few people saying "hey, let's switch to Linux" to get them out of it.
And don't even get me started on the IT departments in these places. A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing. They're using Windows 11 at home, but they have no idea how to deal with all the outdated Windows 7 machines that are still being used in manufacturing.
Microsoft, on the other hand, has been really smart about this. They've managed to get their products and services woven into the fabric of how we do business on a global scale. It's gonna take a lot more than just a few open-source projects to change that.
They're "trapped" because there is no answer to the Exchange/Outlook combo for business purposes and it's very inexpensive for the value it provides. There are of course alternatives to Teams until you pair Teams with SharePoint/OneDrive/Copilot/Exchange/3rd party market.
> A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing.
Well, this is true throughout IT, even for those who went to college for CS or IT-based degrees. People want to make money, and IT has been a safe haven so far to do so.
> They're "trapped" because there is no answer to the Exchange/Outlook combo for business purposes and it's very inexpensive for the value it provides. There are of course alternatives to Teams until you pair Teams with SharePoint/OneDrive/Copilot/Exchange/3rd party market.
Yep, it's mostly this. Especially for businesses under 300 users: you get Exchange, EntraID, Defender EDR, InTune (MDM), plus Teams/SharePoint/OneDrive/Copilot, all integrated, for $22/user/month. For a little extra you get a halfway decent PBX for VoIP too.
If you tried to piece all that together yourself with different services, then integrate them to the same level, it's going to cost a hell of a lot more than that.
Microsoft is smart too, as none of that requires Windows either. Even if these companies switched to Linux or macOS en masse, they'd still be using Microsoft.
Plus, there's still no competitor to Excel for business types. We might be able to use Google Sheets to great effect, but the finance departments at the behemoths can't. The world runs on Excel, like it or not.
> A lot of them are just kids who memorized some CompTIA exams and don't really know what they're doing.
This is true for all fields, not just tech/IT. Competent Windows sysadmin work nowadays isn't all that different from managing macOS endpoints or Linux. Everything can be scripted/automated with PowerShell, or just via the Graph API for 365 stuff. You can effectively manage a Windows environment and never touch a GUI if you don't want to.
Microsoft usually isn't the best at anything, but what they excel at is being "good enough" and checking boxes.
For larger orgs and enterprises, it is Active Directory/Entra. That is the true Microsoft killer app and lock-in driver. There is no comparable Linux solution that I am aware of.
ChatGPT response
Keep AI accusations to yourself, it's very rude when you get it wrong.
He re-wrote the comment he was replying to. It was either AI or just pointless.
I think you're underestimating how many businesses rely on Excel alone.
You're saying it like there is no alternative and you can't just open and edit the same Excel files in LibreOffice Calc, Google Sheets, or Numbers without any problem whatsoever.
I'll say it too!
There's no serious alternative to Excel for those who rely on its advanced features.
You can't just edit Excel files in Libre Office Calc, Google Sheets, or Numbers without any problem whatsoever.
Can you give me an example of such advanced features? I really don't understand what outstanding feature they packed into this "Excel" that has no alternative.
If the only problem is migrating from XLSX to some other format, I'm sure this is trivial and some tooling must be available.
There are complex reports that every European-regulated finance entity needs to submit to their regulator. They are always complicated, but they are only sometimes well-specified. The formats evolve over time.
There is a cottage industry of fintech firms that issue their clients with a generator for each of these reports. These generators will be (a) an excel template file and (b) an excel macro file.
The regulators are not technically sophisticated, but the federated technology solution allows each to own its regional turf, so this is the model rather than centralised systems.
If the regulator makes a mess of receiving one of your reports, they will probably suggest that you screwed up. But if you are using the same excel-generator as a lot of other firms, they will be getting the same feedback from other firms. If you did make a mistake, you can seek help from consulting firms who do not understand the underlying format, but know the excel templates.
There are people whose day-to-day work is updating and synchronising the sheets to internal documentation. It gets worse every year.
Sometimes the formats are defined as XBRL documents. Even then, in practice it is excel but one step removed. On the positive side - if you run a linux desktop you have decent odds to avoid these projects, due to the excel connection.
The problem is not the "advanced features" within Excel but how they are used. If an Excel sheet is basically just a front for a Visual Basic program, it doesn't easily open anywhere else.
Likewise, Google's JavaScript API doesn't work in OpenOffice or whatever extra layers the others have.
However, I'm not sure when or why I last encountered such software myself, but my dad is a Visual Basic guy and has made a lot of these weird sheets for internal business stuff.
So the Visual Basic (lol) macros seem to be the only real thing retaining all the people on Excel, interesting...
If Microsoft removed it, the financial services industry would crumble.
To be honest, I will not be upset about this.
Hope you never want credit, insurance, mortgages, etc then.
VBA is the famous example, but Power Query deserves a shout out. I use it to make tables that pull their data from other tables with custom transformation logic.
Google Sheets didn't even support tables until fairly recently.
LibreOffice still doesn't have tables! Not to mention the new(ish) functions in Excel, like LET and LAMBDA.
Power Query the language is nice; I kinda like it. I've read that the UI and engine work quite well in Power BI, but I haven't used it.
The Excel engine is way too slow though. Apparently they're two entirely separate implementations, for some architectural reason, not exactly sure why.
Excel's Power Query editor on the other hand, is an affront to every god from every religion ever. Calling it an "advanced editor", while lacking even the most basic functionality, is just further proof of their heresy.
> Can you give me an example of such advanced features?
macros, vba, onedrive/sharepoint/office integration
I think you highly underestimate the Microsoft Office ecosystem and the tight integration in enterprises.
> I'm sure this is trivial [...].
nope.
You didn't really mention any real feature besides Visual Basic, which clearly has alternatives in other spreadsheet apps. You have to run your VBA through a converter script and then fix incompatibilities in your macros, but again, for a Visual Basic guy that is trivial... The rest of the things you mentioned are a good old `rsync`, repackaged.
But you're right, they surely added a bunch of smaller stuff to keep everything connected, and I'm kind of underestimating it since I never used that ecosystem but heard rumors and complaints from other people who had to use it :)
Please don't make us link the infamous Dropbox HN comment ;)
I'm not dismissing OneDrive here, but I wanted to say monsieur was cheating when he mentioned OneDrive/SharePoint as real features of the Excel application: they are not directly related to the essence of spreadsheet editing and can be substituted with any solution which does the job, even Dropbox itself.
>There's no serious alternative to Excel for those who rely on its advanced features.
this is just silly, it really means "There's no serious alternative to Excel for those who rely on exclusive Visual Basic macros"
> I'm not dismissing OneDrive here, but I wanted to say monsieur was cheating when he mentioned OneDrive/SharePoint as real features of the Excel application: they are not directly related to the essence of spreadsheet editing and can be substituted with any solution which does the job, even Dropbox itself.
Not true. Sharepoint and OneDrive are key enablers for real time collaboration. It lets multiple people work on the same file at the same time using native desktop applications. Dropbox has tried to bolt stuff like that on, but it is janky as heck. OpenOffice, etc can't integrate with Excel for real time collaboration (honestly, I'm not sure they support any level of real time collab with anything). Google Sheets won't integrate with Excel for real time. Google is great for collaboration, but sticking everything in Google's cloud system isn't dramatically better than being stuck on Microsoft's stuff. Also Google Sheets just doesn't work as well as Excel.
SharePoint/OneDrive Lists can be directly edited in Excel. The Power platform can directly access/manipulate/transform Excel files in the cloud or on-prem via the Power BI Gateway.
You don't seem to have much of a familiarity with this ecosystem. If you're curious, I'd suggest hunting down these things on learn.microsoft.com, but to dismiss them is only showing your lack of understanding.
So you do all this work, retrain other users, spend a not-so-trivial amount of time and money and risk breaking stuff, all for not paying $22 monthly per user?
I get it, it would be a technically better solution, remove Microsoft lock-in etc, but the cost-benefit analysis isn’t that good in this case.
> There's no serious alternative to Excel for those who rely on its advanced features.
Which is 5% of its users probably.
and 90% of that 5% are the CFOs. As Scooby Doo would say, "Rotsa ruck!"
Every advanced feature of MS Office is used by a different 5% of users. https://web.archive.org/web/20080316101025/http://blogs.msdn... (The whole series is worth reading: https://web.archive.org/web/20080316101025/http://blogs.msdn...)
--- start quote ---
The percentage difference in usage between the #100 command ("Accept Change") and the #400 command ("Reset Picture") is about the same in difference between #1 and #11 ("Change Font Size")
--- end quote ---
The commands you mentioned seem irrelevant here. I never use any advanced features, i.e. those not available in LibreOffice or incompatible with MS Word, and I don't know anybody who does.
"I", "I don't know"
vs.
--- start quote ---
How much data have we collected?
- About 1.3 billion sessions since we shipped Office 2003 (each session contains all the data points over a certain fixed time period.)
- Over 352 million command bar clicks in Word over the last 90 days.
https://web.archive.org/web/20080324235838/http://blogs.msdn...
--- end quote ---
I wish there were more recent studies on this, but they would paint the same picture
Not only is it about lack of features on the open source side, it's about workflow.
Sure, Photoshop and GIMP both edit pictures, but the workflow is so different that professional users of Photoshop aren't going to switch just because it's FOSS.
The market is getting more diverse (mobile, Steam Deck-alikes, laptops, consoles, etc.), but I guess if you want to quickly earn the most money on your (huge) development investment, you'd better try to take the biggest piece of the pie first.
Personally I don't really believe in AAA (or Ubisoft's AAAA) titles that much anymore. Strange exclusivity for some console or device may bring some money early on, but I have plenty of games in my Steam library that could run perfectly on many platforms. And most AAA games drop heavily in price after a few months, Nintendo being the sole exception.
AAA and AAAA games became (expensive) gateways to microtransaction-based money extraction applications, in my opinion.
I enjoy older, smaller games disproportionately more when compared to big titles which require much more resources and time. Yes they look nice, yes they use every documented and undocumented feature of my GPU, yes "it's so fluffy", but it is not enjoyable, especially with microtransactions shoved down your throat.
If we're talking FPS, give me any Half-Life (and Portal) title and I'm good. Gameplay first, unique art direction, good story, and a well-built universe which is almost palpable with lore.
If we're talking RTS, the C&C series, Dune Emperor, Supreme Commander and StarCraft are enough.
I have an ARM Mac, and it's the most painful machine you can own as someone who likes games... Supreme Commander FAF is what I miss the most; unfortunately it's unplayable online due to floating-point calculation differences between ARM and x64 which are apparently untranslatable.
I hear you. I don't like that ecosystem as well.
I have more than 2000 games on Steam and i love my Steam Deck which i got for pretty cheap. It's a very fun game system and you can tinker a lot with it. Upgrading (bigger disk capacity) is very easy.
Just bought Black Mesa for two bucks. Works almost flawlessly. Ten-year-old game, but much fun to be had. Most games I buy on the very, very cheap. Bought Skyrim a couple of weeks ago for five bucks.
Sure, I click on the free Thursday game on the Epic Games store, but I hate that interface with great passion.
You underestimate how many companies use Microsoft Business Central for various things...
But I also believe there's a lot of specialized software for laboratories etc. that runs on Windows only.
Adobe Photoshop? Microsoft Excel?
So many companies use Windows Server because they don't have anyone who knows Linux.
Curious, if you don't mind answering: do you mainly use Ubuntu or NixOS, and which one do you like more ATM?
Regarding Steam, do you install it with the distro-provided package or through Flatpak?
What is the spec of your machine that you do Linux gaming on? I've noticed a notable performance penalty (around 10%, even higher on GPU heavy games) when running games with Proton, which is mainly why I haven't dropped Windows yet.
I try to use Debian, since it's a bit older (read: more stable) than Ubuntu, and I've found that if something compiles and runs on Debian it'll run on Ubuntu and others, but the inverse is not true.
It looks like Nvidia suffers more from the Windows/Proton difference, while for AMD the difference is close to zero.
Source: https://www.youtube.com/watch?v=4LI-1Zdk-Ys
I quite like CachyOS currently. I see no performance penalty (but I also have only a 75 Hz monitor and I haven't tested VR games all that much yet). Currently I'm playing through Kingdom Come Deliverance 2 on ultra with no issues.
CachyOS provides packages for Steam, handles nvidia drivers for you and they even provide their own builds of proton and wine, allegedly compiled with flags for modern hardware + some patches (not sure how much they help though - before Cachy I used Pop OS and also had no problems with performance).
Cachy is based on Arch though, so unless you're ready for your system to potentially break with an update, maybe use something more stable (again, I quite liked Pop OS; it was extremely stable for me).
I've been using Arch for 1-3 years now, and as far as I can remember the only time my system "broke" was when the pacman lock got stuck somehow. Aside from that, it's pretty stable in general.
Good to know! It's my first Arch-based distro so I'm a bit wary for now
> I've noticed a notable performance penalty (around 10%, even higher on GPU heavy games) when running games with Proton, which is mainly why I haven't dropped Windows yet.
I don't mean to dismiss your comment at all, but I'm surprised that such a low overhead would be the primary reason holding you back from switching. The difference between, say, 100 FPS and 91 FPS seems so negligible in my mind that it would be pretty near the bottom on the list of reasons not to switch to Linux.
If you don't have an adaptive-sync (variable refresh rate) monitor and everything set up to use it, and you don't like screen tearing (so you enable vsync), overrunning the frame budget (e.g. 16 ms for 60 Hz) can mean dropping down to half the frame rate.
But I'm hunting for reasons here. A gaming setup should be using adaptive sync so those concerns mostly go away. But there may be problems with Linux support.
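(The halving math from the vsync case above, as a tiny C sketch with illustrative numbers: a frame that misses the 60 Hz budget even slightly has to wait for the next vblank, so the effective rate drops to 30 FPS.)

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double hz = 60.0;
        double budget_ms = 1000.0 / hz;      /* ~16.7 ms per frame */
        double render_ms = 17.0;             /* just over budget   */
        /* With vsync on, presentation waits for the next vblank: */
        double intervals = ceil(render_ms / budget_ms);   /* 2    */
        printf("effective fps: %.1f\n", hz / intervals);  /* 30.0 */
        return 0;
    }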
Don't get me wrong, what I meant is that I only use Windows for games that run poorly for me; I use Linux as my daily driver.
Regarding fps, it's around a 15 fps difference, and it's bad in my case because I have a potato machine.
I think Linux has actually come a long way. Recently I dual-booted Fedora with Windows, and Fedora was easily my main choice except for gaming. Unfortunately, when updating from 41 to 42 there was clearly an issue with the GPU not having drivers for acceleration or CUDA; updating the drivers bricked the OS immediately, and while I could recover, I spent hours and hours on this and could never get the GPU drivers installed again without bricking it. Ultimately I realised how at the mercy of drivers Linux is. I do hope things improve in the next few years, as Windows is dismal to work on these days.
I just had a problem with Windows and Nvidia drivers/CUDA not working properly on a two year old Windows 11 install. I had to reinstall the operating system after days of troubleshooting and attempting different things to get it operational again. It can happen on there as well.
Just curious, which games gave you problems?
Unfortunately, many of the more popular multiplayer games with anti-cheat tend to treat "works on Linux" as a bug rather than a feature. E.g. Easy Anti-Cheat and Unreal Engine both support Linux natively, but Epic still doesn't want to allow it for their own game, Fortnite. https://x.com/TimSweeneyEpic/status/1490565925648715781
There are even games like Infinity Nikki with anti-cheat that allows the Steam Deck but specifically detects and blocks desktop Linux. You have to wonder if that gets them any real security since the method they use to detect the Deck is probably spoofable.
There is more nuance to the "anti-cheat systems support Linux" argument than "it supports it but they won't use it". Turning on Linux support does weaken the security posture of the anti-cheat system, so it is really a question of whether the security posture the game ends up with, once this platform support is enabled, still meets the business requirements. It's no surprise that games with high MTX revenue do not turn it on, as I imagine that weaker security posture would be their biggest concern.
One of the boons of console hardware is also the strict execution environment that is presented on the system. While this of course doesn't prevent all cheating behavior in online games, a large selling point of it as a platform to publishers is not only the market segment available, but the security aspects of the runtime environment.
Agreed, that's the angle Tim Sweeney argues in the linked comment as well.
Really hope Valve's server-side anti-cheat will be a success and more competitive games will move over to it.
I'm not familiar with what new changes Valve has been working on in the anti-cheat space but historically most major anti-cheat systems, such as Easy Anti-Cheat, already have long included a server-side anti-cheat component. The catch rate (and overall accuracy) with both is just always going to be higher than only going with one approach.
Nowhere does it say they don't want to.
I think you're hitting on ideal vs. constrained wants (or, at least, that's how I've always referred to them). That is: what they want to be able to allow in itself vs. what they want to allow given the trade-offs with other wants.
E.g. "I'm going to go to the beach all day" and "I'm going to keep my job" are both likely the results of ideal type wants whereas "I'm going to go to my job today and then the beach tonight" would likely be the result of a constrained want.
For the curious, the protondb front page gives a pretty good overview of the state of Linux gaming:
https://www.protondb.com/
Scrolling to Medals: 50% of all 25,000+ games tracked by the site are playable, either working perfectly or mostly (Platinum or Gold ratings). Another 20% can be alright under specific circumstances, and with compromises (Silver rating).
AoE2:DE has a gold rating, but multiplayer doesn't work at all, and it's not even due to anticheat.
Did this change recently? I haven't played in a month or so but it's been working great for around a year now for me.
It's been waffling back and forth but always had a "gold" rating even when I verified it was broken. I haven't tried recently (haven't really played video games in years), but there's a comment from 5 days ago saying it's broken again.
At some point, Proton users reported success using some patch, then that stopped working, then there was a different patch... A lot of user reports say "thumbs up" then have a comment explaining how it goes out-of-sync unless you fiddle with it, so it's hard to trust.
Seems the root of the problem is this game's picky netcode, which is similar to the original 1998 game I played as a kid. If your game state diverges from the other players' at all, it goes oos and ends the game for everyone. And yes this happened often enough that people had an abbreviation for it.
I worked on this problem for a bit. What's going on is the game relies on the OS-provided C runtime libraries ("msvcrt"-style things) to do its math. Wine's implementation of these libraries does not match Windows's perfectly. If all players are using the same implementation, then they will agree, and there are no problems, so people think it is working. But if a player on Wine tries to play against a player on Windows, they will fall out of sync because the math errors eventually add up.
That was as far as I was able to take it. Another much more skilled dev at CW dug in a lot deeper and wrote a blog post about it[1], but as far as I know the problem remains unsolved.
[1] https://www.codeweavers.com/blog/rbernon/2022/9/12/ucrtcring...
Oh interesting, I always wondered what the underlying issue was and why downloading some obscure looking dll solves it.
For a practical solution, just using the Windows DLLs seems to work fine. Without them, AoE2:DE goes out of sync immediately; with them, I've played hour-long games.
Oh wow, thanks for sharing. I knew it was an oos but didn't think a math lib specifically was the issue.
I remember it being interesting to work on. It's been years, but if I remember right, there is some way to convince the game to dump a log of unit positions during a multiplayer match, possibly as part of its desync handling. I enabled that on both Win & Linux hosts, ran a match between the machines until they desynced, and diff'd the game's own logs, then confirmed from the Wine logs that the faulty values were coming from CRT-related math functions. It's always fun when you get to use a game's own debug utils to track down a problem.
Anyway it'd be great if the game devs included their own math libraries instead of relying on the OS's. That would fix the problem quite nicely.
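(To sketch what "include their own math libraries" could look like, here's a hypothetical C replacement for sinf() that ships inside the game binary. With fixed coefficients, a fixed evaluation order, and FP contraction disabled (e.g. -ffp-contract=off or /fp:strict so the compiler doesn't fuse into FMAs), IEEE-754 gives bit-identical results on every platform, instead of whatever the local CRT returns. The coefficients are standard minimax-style ones for [-pi, pi]; treat this as an illustration, not the actual fix discussed in the blog post.)

    #include <stdio.h>

    /* Polynomial approximation of sin(x), valid on [-pi, pi]. Compiled
       into the game itself, every platform runs these exact operations
       in this exact order, so lockstep peers stay in sync. */
    static float det_sinf(float x) {
        float x2 = x * x;
        float r = -2.3889859e-08f;          /* x^11 term */
        r = r * x2 + 2.7525562e-06f;        /* x^9  term */
        r = r * x2 - 1.9840874e-04f;        /* x^7  term */
        r = r * x2 + 8.3333310e-03f;        /* x^5  term */
        r = r * x2 - 1.6666667e-01f;        /* x^3  term */
        return x + x * x2 * r;
    }

    int main(void) {
        /* Same bits on Windows, Wine, and Linux, regardless of the CRT. */
        printf("%.9f\n", det_sinf(1.0f));   /* ~0.8414710 */
        return 0;
    }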
Do you know if the code from the blogpost you mentioned is publicly available? I don't think I could find it but I'd love to give it a try
I don't, he did that work after I left the job. You could email Rémi or hop into one of the Wine dev channels to start up a conversation.
It's been several months since I played but getting ucrtbase.dll always worked for me and it was the only thing I ever had to do for the game. You need to redownload it after every update because it gets wiped though.
Oos can still happen, but as you said it can also happen on Windows; hard to blame Wine for that.
Since gold means "works as good as Windows with workarounds" I think that's a correct rating.
I can only testify to oos being common in the Mac version of the original game, and I've heard it happening in the og Windows game. In DE under Windows, I've never seen it happen, so I'd be concerned if you're still seeing it occasionally.
Also, "gold" should mean that it works by default, not that you have to patch in a DLL. The only place the site even says "playable with tweaks" is in a tooltip if you hover over the gold symbol, right above a separate list of details that doesn't mention tweaks. I didn't even know until now.
I've got it from here: https://gitlab.winehq.org/winehq/appdb/-/wikis/Rating-Defini...
We can argue all day over what a rating means, but if it would work without a tweak I'd say it should be rated platinum. (The only other thing I know is missing is Xbox live login, but I don't really care about that)
Yeah there's a lot of random issues with the different games. In case user experience is the main goal, I always recommend going with the main supported ways, which in this case would be Windows 11. I personally try things first on my Linux, but I always keep a backup Windows just in case.
Overwatch is the big one - lots of random issues with it. But basically any game with Denuvo DRM is extremely high risk, resulting in either a ban or the game not running at all.
Denuvo counts each proton version as a unique activation, might help you avoid this issue going forward
Can you remember any particular problems in Overwatch? I've been down that road, so there's a chance I might have some info that you would find useful.
One problem that was unsolved last time I checked: Saving highlight videos. It used to work if you told Overwatch to use webm format instead of mp4, but Blizzard broke that somewhere along the line, possibly in the transition to Overwatch 2. (I worked around this with OBS Studio and its replay buffer feature.)
When I ran a two-month experiment: Hogwarts Legacy and Anno 1800.
The former ran slowly at low settings, with occasional complete slowdowns into single-digit frame rates. On the same laptop in Windows 10, it ran at medium settings with easily twice the frame rate, no issues.
The latter wouldn't connect to multiplayer, and would occasionally just crash out.
(Comment written from memory, but I enshrined my experiment here: https://retorch.com/blog/linux-mint.htm )
For me, Red Dead Redemption 1 via Proton does not work on Pop_OS + NVIDIA.
In general you want to avoid Nvidia if you want to play games on Linux, but maybe things will get better.
Isn't Pop_OS shipping ancient components at this point, due to their hare-brained idea of creating their own DE and pinning their next release to it?
RDR is now working fine on Pop_OS with Proton 10.x.
I play RDR1 via Proton on openSUSE + AMD and I get better frame rates than on Windows.
https://areweanticheatyet.com/
Anything "denied" won't work ever unless they change their minds. Anything "broken" is...well...broken.
Escape from Tarkov and GTA V (online).
Why would anyone run malware on purpose on the same machine they use for development/work?
i think everyone tried that. gpu (games etc) are the only thing holding windows relevant at this point.
i have some 2012 projects where the makefiles also build in msvc. never again.
then 2015 projects with build paths for cygwin. never again.
then some 2019 projects with build scripts making choices to work on msys2/git-bash-for-windows. never again.
now we can build on WSL with just some small changes to an env file, because we run a psql container in a different way under wsl... let's see how long we endure before saying never again.
It's the other way around. You can do very few productive things with Windows other than software development. Almost all other professional software assume Windows.
> You can do very few productive things with Windows other than software development.
I guess you meant Linux here
Ah you're right. I can't edit it.
For consumers. A load of professional software still exists only for Windows, particularly as you go more niche.
It always infuriates me when people say Windows is all about games. Techies are so detached from reality they forget that people have creative hobbies and have to use industrial-grade software. Doing creative hobbies on Linux is an act of sadomasochism. And on top of that, Linux and MacOS cannot run software from 3 years ago while Windows can run software from 35 years ago. And on top of that, Linux is completely unusable to Japanese/Chinese speakers due to how hard it is to input the moon runes. And on top of that, Wayland breaks the least painful setup that you could have earlier. And on top of that, the Wayland people showed a middle finger to all the people who need accessibility features.
No, Windows is not about games. Windows is about being objectively the most stable pile of garbage there is.
A fair comment, but the argument I'd make against that is that a lot of those creative tools are moving to the web. I personally work for Figma and have seen that first hand. UI/UX design was entirely OSX/Windows-centric for the last 40 years, and now it's platform agnostic. Even video editors are just at the nascent stage of looking at the web as an editing surface.
Totally hear you though for things like CNC milling software that's meant to stay static for the lifetime of the mill - that's not going anywhere.
Software moving to the web is not a win for Linux, it's a loss for everyone.
No, it's definitely a win for Linux. I get it. I've dabbled in software minimalism. I love native dev. I know the web "sucks." But the range of mainstream software available for Linux has exploded now that software is moving to the web (including Electron) and I can't see how that's a bad thing from the perspective of a Linux user. Of course I'd rather open a web browser to run an app than change my entire operating system to run an app.
Would you like to also own and have control of the data you store in these web-based platforms?
If I'm already compromising by using non-free software, does it matter that much? How do I know what a native app is sending back in its telemetry?
By using non-free software, you're compromising on politics that don't really affect anything directly, not unless a great many others suddenly embrace the ideas behind Free Software.
The compromise of using SaaS in the cloud in lieu of regular, native software affects both you and society directly.
It's the only truly portable platform, and there's no way we can force another into existence.
It doesn't have to be slow and bad, that's just a ""skill issue"" (poor prioritization by the companies making it).
And this is why Wine/Proton are so good: they're implementing the only de facto stable API that exists.
not a single EA game works on Wine/proton
The Command & Conquer collection worked quite well out of the box.
On ProtonDB: Split Fiction is Platinum, Sims 4 is Gold, and most F1 games work, with the exception of 2014 and 24.
Personally not a big consumer of EA titles, but Star Wars Squadrons ran great for me.
Yeah, I really like my Mac, but third-party software isn't its strong suit. It's hilarious how often Apple will wholesale break like half the software in existence.
>And on top of that, Linux is completely unusable to Japanese/Chinese speakers due to how hard it is to input the moon runes
How do Deepin and such solve this?
Linux on HN is always an example of https://xkcd.com/2501/
How many months can you use a Linux desktop to do daily externally mandated processes and not drop down to a bash shell at some point?
Average consumers and users do not want to use the unix utilities that Linux people love so much. Hell, developers barely want to use classic unix utilities to solve problems.
Users do not know what a "mount point" is. Users do not want a case sensitive file system. Users do not want an OOM killer that solves a poor design choice by randomly culling important applications at high utilization.
Users do not care for something that was designed in the 60s before we understood things like interface design and refuses to update or improve due to some weird insistence on unix purity.
Users do not care about ABI stability. They care about using the apps they need to use. That means your platform has to be very easy to support, Linux is not at all easy to support, and at least part of that is a weird entitlement Linux users feel and demonstrate in your support queue.
Hilariously, users DO WANT a centralized app repository for most day-to-day apps! Linux has had this forever, though with mediocre ergonomics, and it was way too easy for an average computer user to manage to nuke their system, as Linus Sebastian found out with very unfortunate timing. Linux never managed to turn this potential victory into anything meaningful, because you often had to drop into a bash shell to fix, undo, or modify an install!
For me it's Adobe Phuckushop. But yeah, always that one thing holding one back from swapping
> gpu (games etc) are the only thing holding windows relevant at this point.
I actually switched to Linux full-time when Starfield wouldn't run on Windows but worked in Proton. We are now in a world where Valve provides a more stable Windows API than Microsoft does. The only limitation now is anti-cheat, but that's a political problem, not a technical one.
I was excited about it too, even just having a tmux and using it for grepping and file copying. Then after a year or two on windows, my computer started slowing down. Tale as old as time. I'm not surprised, and some of the issues aren't ms' fault, but nevertheless I see CPU spikes to 100 with several browser tabs open, or the drawing tablet driver goes to 100% cpu usage even though I never even use it. The UX shouldn't degrade like a mechanical system.
Except if you're on Nvidia...
No.
Their GTX-series cards all used proprietary blobs that required unmanageable device-specific interfaces.
Starting with the RTX series, they still have proprietary blobs, but instead of device-specific interfaces they all use a shared public interface, which makes compatibility and performance much better.
It's not across the board, but there are instances of gaming benchmarks showing more performance under Linux than Windows.
I'd trade half my GPU performance for the NVIDIA drivers not freezing my system on wake-up. The new half-open ones arguably made it worse; it consistently freezes now.
If you're using DisplayPort, try switching to HDMI. (Really.) For me it made the freezes much shorter. It's a bug in their driver related to the connected monitor(s).
That didn't occur to me! I'll give it a try, although I suspect that will break VRR for my setup.
Then why are you using NVIDIA? The AMD open-source driver stack is very mature by now
I had switched back to Windows after years of issues with Linux drivers, I needed a new PC, and I needed CUDA for college and tinkering.
Now, it's been barely a couple of months since I reinstalled Ubuntu, and a couple of weeks since I found out the latest release runs even worse, so this is new to me. I don't plan to use Windows at home ever again, so I could sell my GPU and buy AMD, but so far I'm simply disappointed.
Ugh, that sucks. It makes sense. I'm somewhat optimistic that as the open-sourcing effort continues, more and more of NVIDIA's driver stack will be open-source and it will see significant improvements, too.
Am currently on nvidia and have no issues with their proprietary drivers. While they aren't following the linux ethos, the software runs just fine.
Have they fixed the drivers on wayland yet?
I'm using 4070 Ti with open kernel module on Wayland.
It's MOSTLY painless. Some GNOME extensions seem to randomly hang everything on startup (I'm currently investigating which ones, I believe Dash to Dock and/or Unite are to blame) and there's a weird issue with VR when streaming via ALVR: SteamVR launches, but games crash unless I disable the second monitor (no such issues with WiVRn, so not entirely sure if it's a driver problem or not)
Besides that in my daily driving I saw no other issues.
I’ve been running on Wayland with nvidia drivers for around a year. No issues for development work. Haven’t tried gaming.
Been using Nvidia+Wayland for years now, even on an optimus laptop.
I'm convinced that many of these people saying Nvidia has serious issues on Linux must be (by no fault of their own) going by habit and downloading the driver installer .bin from the Nvidia website and trying to install drivers that way. So yes, if you do that, you're going to have issues.
Learn to do things the way your distro does them (use a package manager) and most problems go away.
Ubuntu-packaged NVIDIA drivers freeze my entire system on wake-up. The switch to Wayland and the new half-open drivers made it worse.
I feel I'm in the same boat. For several months I've been thinking my GPU was on its way out (it's a pretty old 2080 now). My desktop freezes randomly. I can log into it remotely but all the usb devices stop working and the screen goes blank. l took a good look at the logs and noticed a bunch of pageflip timeouts followed by usb disconnections. I later discovered the Nvidia forums seem to have many recent complaints (with similar logs) especially around their latest drivers and Plasma + Wayland compatibility.
I'll take Linux seriously when I can play Starcraft 2 and Fortnite on it
StarCraft 2 definitely works on Linux, with a relatively simple act of adding it to Steam as a non-Steam title, and then letting the Proton layer do its thing.
And this is coming from a very Linux-hesitant newbie who mostly uses Windows.
I have not tried Fortnite.
https://www.reddit.com/r/linux_gaming/comments/ppgk04/starcr...
Starcraft 2 worked well for years.
Fortnite doesn't work because Tim Sweeney doesn't want it to work: both BattlEye and EAC can work on Linux, Epic just chooses not to enable that functionality.
I played starcraft2 13 years ago on Linux, the wizard installer worked just fine.
I would do it the other way round: use Windows in a virtual machine from Linux. If you are in Windows and have the urge to use Linux, do the proper switch once and for all. You will never look back. I haven't in almost 15 years.
Given what Windows has become and already discussed here on HN I would even hesitate to run it in a virtual machine.
Edit: more than 15 years.
Except that if you require anything GPU-related (like gaming, Adobe suite apps, etc.), you'll need a secondary GPU to pass through to the VM, which is not something that everyone has.
So, if you don't have a secondary GPU, you'll need to live without graphics acceleration in the VM... so for a lot of people the "oh you just need to use a VM!" solution is not feasible, because most of the software that people want to use that does not run under WINE do require graphics acceleration.
I tried running Photoshop under a VM, but the performance of the QEMU QXL driver is bad, and VirGL does not support Windows guests yet.
VMWare and VirtualBox do have better graphics drivers that do support Windows. I tried using VMWare and the performance was "ok", but still not near the performance of Photoshop on "bare metal".
People throw around the ideas of VMs or WINE like it's trivial. It's really not.
On Linux it's quite trivial. KVM is part of the kernel. Installing libvirt and virt-manager makes it really easy to create VMs.
I'd say even passing through a GPU is not that hard these days though maybe that depends on hardware configuration more.
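To make that concrete, here's a minimal sketch using the libvirt Python bindings, which drive the same API that virt-manager wraps in a GUI. The VM name and qcow2 disk path are hypothetical (create the disk with qemu-img first), libvirtd must be running, and libvirt fills in sensible defaults for anything the XML leaves out:

    import libvirt  # pip install libvirt-python

    # Minimal domain definition: 2 vCPUs, 4 GiB RAM, one virtio disk.
    # The name and disk path are hypothetical placeholders.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>testvm</name>
      <memory unit='GiB'>4</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/testvm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the system libvirtd
    dom = conn.defineXML(DOMAIN_XML)       # register the VM persistently
    dom.create()                           # boot it
    print(dom.name(), "active:", bool(dom.isActive()))
    conn.close()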
“On Linux it’s quite trivial…” giving big
“For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.”[1] vibes.
Convenience features in software are huge; even if a system is well designed, a system that abstracts it all away and does it for you is easier, and most new users want that, so it often wins. Worse is better, etc.
[1] https://news.ycombinator.com/item?id=9224
The comment you linked is one of the most misunderstood comments on this site, which makes sense because it's one of the most cited comments on this site.
https://news.ycombinator.com/item?id=23229275
This probably isn't even the best dang comment about the situation, it's just the one I could find quickly.
Perhaps I should have put a larger explanation around it; I am mocking neither sureglymop nor BrandonM, but we can still learn lessons from hindsight.
Sure, it’s trivial to set the switch in the BIOS for virtualisation and download a couple of libraries, but people like computers doing things for us; we like abstractions, even if they sacrifice flexibility, because they facilitate whatever real-world application we are attempting.
I think power users of any technology will generally overvalue things that 80% to 95% of the user base simply don’t care about.
I admit that having touched Windows twice in the last 5 years I wouldn’t know but I would be willing to wager that WSL has very few drawbacks or shortcomings in the minds of most of its users.
Also sometimes the harder approach is also not as capable as some people make it out to be, and there are some unsolved caveats.
I don't see what's misunderstood about it, but also it's not right to make fun of the user for it.
Because it's only silly sounding because of hindsight. With today's context of file sync applications being a huge industry, that comment seems silly. But that was the prevailing opinion at the time. Check out this blog post: https://www.joelonsoftware.com/2008/05/01/architecture-astro...
>Jeez, we’ve had that forever. When did the first sync web sites start coming out? 1999? There were a million versions. xdrive, mydrive, idrive, youdrive, wealldrive for ice cream. Nobody cared then and nobody cares now, because synchronizing files is just not a killer application. I’m sorry. It seems like it should be. But it’s not.
That's just what a lot of competent people thought back then. It seems hilariously out of touch now.
But it wasn't my opinion at the time, and I didn't hear from those people. I was in middle school, kids were commonly frustrated syncing their homework to/from a flash drive, family members wanted to sync photos, and everyone wanted something like this.
Before Dropbox, the closest thing we had was "the dropbox," a default network-shared write-only folder on Mac. Of course you could port-forward to a computer at home that never sleeps, but I knew that wasn't a common solution. I started using Dropbox the same month it came out.
I'm happy for you :)
The future is rarely made by people who are comfortable with the status quo. That’s the only thing we can get from this.
Even the described FTP-based Dropbox replacement is easier than getting a VM to work properly with DRM'd software and/or GPU acceleration.
Really? With GNOME Boxes it's pretty straightforward. I hear KDE is getting an equivalent soon, too.
You can do GPU passthrough in a Gnome box, as in, your VM can see the host's GPU (let's say Nvidia) and it works exactly the same as on the host? Or another metric is if you can run Photoshop in a VM with full hardware acceleration. I haven't tried Gnome box in particular, but this isn't what I'm seeing when I search.
Ah, yeah, seems like I was mistaken and maybe Red Hat's virt-manager was what I was thinking of.
virt-manager is a bit more involved than GNOME's Boxes, I'm not sure I could recommend it to someone that doesn't know what they're doing.
Yeah, reading your original comment I was about to go off until I saw GPU pass through with DRM software. Highly cursed.
Yep, regular VMs where you basically only care about the CPU and RAM are easy, provided nothing in the VM is trying to not run in a VM. USB and network emulation used to be jagged edges, but that was fixed. VirtualBox was my go-to. It never had great GPU support, but the rest was easy.
I'm pretty sure there are solutions to assign an entire GPU to a VM, which ofc is only useful if you have multiple. But those are specialized.
Not even close. I mentioned a software package that literally offers a full gui for all your virtualization needs.. how is that comparable to the things mentioned in that comment?
That really depends on what you want to run. Dipping into a Linux laptop lately (Mint), I've found there are things, old things (think 1996-1999), that somehow "just work" out of the box on Windows 10, but configuring them to work under WINE is a huge PITA, coming with loads of caveats, workarounds, and silent crashes.
The silent crashes get me. Also, running one exe spawns a lot of wine/wineserver/wine-preloader processes.
I'm hoping that IOMMU capability will be included in consumer graphics cards soon, which would help with this. IIRC there are rumors of upcoming Intel and AMD cards including it.
Tried doing 3d modeling in a Windows VM - couldn't get acceleration to pass through.
What 3D modelling were you doing that couldn't be done on linux?
Fusion360 doesn't work on Linux. Or at least I tried multiple times and couldn't get it to work
Really? I recall installing it 3 years ago, and aside from some oddities with popups, it worked just fine. I think it was this script [0]. I don't know if they broke it, I switched to OpenSCAD, which meets my needs.
[0] https://github.com/cryinkfly/Autodesk-Fusion-360-for-Linux
I needed to use Rhino 3D specifically because it had an environmental simulation plugin.
Mostly having software better than FreeCAD, AKA everything that exists on Windows and macOS.
AMD has SRIOV on the roadmap for consumer GPUs, which hopefully makes things easier in the future for GPU-accelerated VMs.
https://www.phoronix.com/news/AMD-GIM-Open-Source
Windows can run GPU accelerated Windows VMs with paravirtualization. But I have no use case for two Windows machines sharing a GPU.
There is also native context for VirtIO, but for now Windows support is still not planned.
Also note some brave soul implemented 3D support on KVM for Windows. Still in the works and WinUI apps crash for some reason.
Quite a lot of people have both integrated Intel graphics and a discrete AMD/NVidia card.
Sadly I'm not one of those people because I have a desktop with an AMD Ryzen 7 5800X3D, which does not have an integrated graphics card.
However now that AMD is including integrated GPUs on every AM5 consumer CPU (if I'm not mistaken?), maybe VMs with passthrough will be more common, without requiring people to spend a lot of money buying a secondary GPU.
Yes, my Ryzen 7600 has an integrated GPU enabled. AMD's iGPUs are really impressive and powerful, but I do not have any idea what to do with it and despite that I moved to an Nvidia GPU (after 20 years of fanboyism) specifically because I was tired of AMD drivers being terrible on Windows, I STILL have to deal with AMD drivers because of that damn iGPU.
I could disable it I guess. It could provide 0.05% faster rendering if I ever get back into blender.
Anything GPU related isn't great in WSL either.
True, but I don't have the need to run applications that require GPU under WSL, while I do need to run applications that require the GPU under my current host OS. (and those applications do not run under Linux)
I completely gave up on WINEing Adobe software but I didn't know about the second GPU thing, I thought it was totally impossible. Thank you!
I will do anything to avoid Windows but I miss Premiere.
I don’t know why there aren’t full-fledged computers in a GPU-sized package. Just run Windows on your GPU, Linux on your main CPU. There are some challenges to overcome, but I think it would be nice to be able to extend your ARM PC with an x86 expansion, or extend your x86 PC with an ARM extension. Ditto for graphics, or other hardware accelerators.
There are computers that size, but I guess you mean with a male PCIe plug on them?
If the card is running its own OS, what's the benefit of combining them that way? A high speed networking link will get you similar results and is flexible and cheap.
If the card isn't running its own OS, it's much easier to put all the CPU cores in the same socket. And the demand for both x86 and Arm cores at the same time is not very high.
Yes, with pci-e fingers on the ‘motherboard’ of the daughter computer. Like a pci-e carrier for the RPI compute.
Good point about high speed networking. I guess that’s a lot more straightforward.
You may be interested in SmartNICs/DPUs. They're essentially NICs with an on-board full computer. NVIDIA makes an ARM DPU line, and you can pick up the older gen BlueField 2's on eBay for about $400.
> full fledged computers in a GPU sized package
.. isn't this just a laptop or a NUC? Isn't there a massive disadvantage in having to share a case or god forbid a PCIe bus with another computer?
There is ongoing work on supporting paravirtualized GPUs with Windows drivers. This is not hardware-based GPU virtualization, and it supports Vulkan in the host and guest not just OpenGL; the host-based side is already supported within QEMU.
> I would do it the other way round: use Windows in a virtual machine from Linux.
Every Windows thread on HN is a reminder of the stark divide between people who need to use Windows for productivity apps and those who don’t.
The apps I need a Windows machine for are not the kind that virtualize nicely. Anything GPU related means Windows has to become the base OS for me.
If you’re running an occasional light tool you can get away with Windows in a VM, but it’s a no-go for things like CAD or games.
Windows in a VM with a passed-through GPU is really nice. Although still pretty niche, these days it's easier than it used to be. It also works with a single GPU, e.g. on a laptop.
I personally have a desktop PC with an AMD GPU and then another Nvidia GPU that I pass through to Windows guests. I have a hook that changes the display output and switches the inputs using evdev.
It's really nice if you have two separate GPUs in your computer?
Most computers do. All laptops and most desktops have an integrated GPU on the CPU.
That's the first GPU. Do you have sources showing most have more than one?
He’s right. Laptops have integrated graphics, but all mid-tier and higher laptops also have a dedicated GPU. Desktops are similar, though my guess is a lot of business desktops have only the integrated graphics.
As I mentioned, can be done with a single GPU as well, just makes it a bit more complicated to set up.
I prefer to just have two (or three) GPUs than have Windows as the base OS.
If you can GPU passthrough (it's quite simple to set up), this is not a large issue. You're right that Linux is sorely lacking in native creative software though!
> who need to use Windows for productivity apps and those who don’t.
LibreOffice has gotten quite good over the years, including decent(ish) MSO file format interoperability, and Thunderbird seems to support Exchange Server.
So, I suppose things like MS Project or MS Visio may not have decent counterparts (maybe, I don't really know), but otherwise, it seems like you don't need-need to use Windows for productivity apps.
Last I looked, Thunderbird used Exchange Web Services to connect to Office365, which Microsoft is getting rid of: https://techcommunity.microsoft.com/blog/exchange/retirement... (I point out Office365 since the vast majority of "Exchange" users are on 365)
It also only supports email, not calendaring/contacts.
That being said, Office365 Web Client is pretty good at this point and someone who doesn't live in Office all day can probably get along fine with it.
Counterpoint: things like the Valve Index for VR simply don't behave well in this environment no matter how much I've worked on getting it there.
I'm not a novice either, $dayjob has me working on the lowest levels of Linux on a daily basis. I did linux from scratch on a Pentium 2 when I was 12. All that to say yes I happen to agree but edge cases are out there. The blanket statement doesn't apply for all use cases
IMO this is the real blindspot: it's VR support, not Photoshop, or MS Office, or CAD tools (all of which I've got running fine via Wine). I'm guessing the intersection between VR users and Wine users must be really small and I suspect it's because of this that support is so lacking.
And it's even worse with the Vive Pro 2 by HTC, which needs a special Windows tool to use all its capabilities...
I would have switched over to Linux if it weren't for that one.
I used Linux as my daily driver for years, before finally switching back to Windows, and then to the Mac. I got tired of things like wine breaking on apps, I got tired of the half-assed replacements for software available on Windows, like GIMP compared to Photoshop. I got tired of the ugly desktop that inevitably occurs once you start needing to mix QT and GTK based apps. Linux is not a panacea.
I love how subjective these things are.
I hate the half-assed commercialised approach to software on both Mac and Windows, where you download 50MB+ of Electron bullshit for what's a bash two-liner with default tools on Linux.
Mostly on Windows: when I installed 5+ tools from untrustworthy-looking websites (which they all look like if you aren't used to that), it felt like my computer was likely forever busted with some scamware. But there is no dd, no proper editor, no removing adware and "news" without these tools.
On Windows, if you want to configure something, it's like going into a computer museum where you start in the Metro era and end up in UIs straight out of Win 95. That's better on Mac, but the UI is depressing (in my opinion) and I always had the feeling my Mac wouldn't need to run that hot if it didn't draw shadows, mirroring, and weird effects I haven't asked for.
That said, Linux is not a panacea.
Running Windows from a ZFS partition with its own dedicated GPU, viewed through looking-glass on the Linux host at 1440p@120Hz, has been super useful.
I set it up originally for gaming, but nowadays I install a lot of disposable software there.
I use Linux guest VMs too (a la Qubes), but sadly there's no guest support for looking-glass on Linux. Native rendering speeds in VMs are something that's hard to let go of.
The big difference is hardware access.
I used to do VFIO with hardware passthrough so I could have linux but still run windows software like CAD that takes advantage of the gfx card. That was a pain to set up and use.
The other way, it's very simple. WSL2 can run ML tasks with just a tiny bit of overhead in moving the data to the card.
PyTorch and most other ML stuff have native Windows ports.
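True. If you're weighing native Windows against WSL2 for ML work, a quick sanity check that the PyTorch build you installed actually sees the GPU is worth running first; this is a generic sketch, nothing WSL-specific:

    import torch

    # Confirm the build is CUDA-enabled and the driver is visible.
    print(torch.__version__, "CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))
        x = torch.randn(2048, 2048, device="cuda")
        torch.cuda.synchronize()  # wait for the GPU work to finish
        print("matmul checksum:", (x @ x).sum().item())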
Related: https://www.microsoft.com/en-us/evalcenter/evaluate-windows-... (edit: not https://developer.microsoft.com/en-us/windows/downloads/virt...)
> We currently package our virtual machines for four different virtualization software options: Hyper-V (Gen2), Parallels, VirtualBox, and VMware. These virtual machines contain an evaluation version of Windows that expires on the date posted. If the evaluation period expires, the desktop background will turn black, you will see a persistent desktop notification indicating that the system is not genuine, and the PC will shut down every hour.
Edit: Oops, dead link -- the dev tools evaluation VM hasn't been released for 6+ months. But they do offer Windows evaluations ISO's after registration.
That's how I do it. I don't see the draw for Windows as the main OS, especially with Windows 10+ being dumbed down beyond belief and having seconds of lag to do anything at all. Seems even from this thread that people just want the convenience of a gaming rig in the same box as their work (which is a security issue because games are full of remote code execution vulnerabilities).
It's funny, more than any productivity app (though I do have a few of those), the Directory Opus [1] Explorer replacement is one of the things that I've yet to find a viable replacement for on both Linux and macOS. Unparalleled customisability, scriptable actions, outstanding performance (thumbnailing 10,000 images in a folder never causes slowdown), incredible search and "huh, why doesn't anyone else do this" features everywhere. I use my file explorer a lot so the friction is felt daily.
I'm using Forklift [2] on my mac at work, but it's a pale imitation of what a file explorer can truly be. I did some searching for Linux but it's all pretty pedestrian.
[1]: https://www.gpsoft.com.au/ [2]: https://binarynights.com/
I feel like every conversation about this is the bell curve/midwit meme[1], with the middle being the argument over “Windows VM on Linux” and “Linux VM on windows”, and the edges being “own multiple computers”.
[1] https://knowyourmeme.com/memes/iq-bell-curve-midwit
Right! Use Linux, because it is your preference [1]. It doesn't cause harm to others (side effects: incompatibility and vendor lock-in, due to the mass-effect).
We need to remember why Microsoft offers WSL. Microsoft wants to prevent users (i.e. developers) from migrating to Linux. It is the old approach: Embrace, Extend, and Extinguish [2].
Monopolies are made by users and politics, because we don't consider vendor lock-in and the mass-effect. I wish for strong regulation of all information technology. We saw the wonderful effects of regulation with AT&T {UNIX, C, open source, open documentation}, and then a mistake was made: the company was split up, which in hindsight was a complete failure.
I've considered it, but there are two Windows features I need that sound like they'd require some time investment to set up correctly on linux.
1. I use UWF on Windows (Education Edition). All disk writes to C:/ are ephemeral. On every single reboot, all changes are discarded and my PC is back to the exact same state as when I first set it up. I do keep a separate partition for documents that need persistence. (A sketch of enabling this follows below.)
2. Miracast for screen mirroring.
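(For point 1, the feature is the Unified Write Filter. A rough sketch of what enabling it looks like, driven from Python for illustration: uwfmgr.exe is the built-in CLI on Education/Enterprise editions, it needs an elevated prompt, the UWF feature must be installed, and nothing takes effect until the next reboot.)

    import subprocess

    def uwf(*args):
        # uwfmgr.exe is the built-in Unified Write Filter CLI;
        # this must run from an elevated prompt.
        subprocess.run(["uwfmgr.exe", *args], check=True)

    uwf("filter", "enable")         # turn the write filter on
    uwf("volume", "protect", "C:")  # make all writes to C: ephemeral
    # After the next reboot, changes to C: are discarded on every restart.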
as for 1. if you ever have some free time on your hands, and want to take declarative configs to the next level, you can check out Impermanence for NixOS: https://news.ycombinator.com/item?id=37218289
and 2...hm I know i've done Miracast before with GNOME Network Displays https://flathub.org/apps/org.gnome.NetworkDisplays
(1) can be done with little to no configuration on default live images of like, every single distribution.
That's sort of what Wine does. That's how I run the occasional Windows program on Linux.
That works pretty well except for gaming. A lot of games detect if they are running in a VM and refuse to let you play, as an anti-cheat measure.
I always have Windows on Parallels on a Mac, too – unfortunately VirtualBox for arm64 Mac isn't quite there yet.
I think the biggest problem with VirtualBox on arm64 is that it is only for arm64 guests, unlike qemu-system-x86_64, which colima et al use and which allows booting up "normal" guest OSes.
Also, VBoxManage was created by someone who firmly subscribes to the "git UX is awesome" school of thought :-(
It is slowly improving (albeit with some egregious bugs, like losing EFI data on export) but TBH even their x86 product pales in comparison to Parallels or VMWare Fusion, in terms of machine performance.
If you're in a corporate environment, you often don't have a choice wrt Windows as your primary desktop OS.
You haven't used Photoshop for 15 years?
Okay. Then you have a Mac. Then you need to run Linux in a VM anyway because, similar to Windows, macOS is also a dumpster fire. Then why bother? You are going to have a Linux VM anyway. I usually just sync my VM disk between all my laptops & desktops, no matter what host OS it runs.
WSL 2 is one of the biggest reasons I'm able to be productive as a blind software developer. With it I'm able to enjoy the best desktop screen reader accessibility (Windows and NVDA) as well as the best developer tools (Linux). I hate Microsoft's AI and ads force-feeding as much as anyone else but trust me, you'd do the same if you were in my shoes. Screen reader accessibility on Mac Os is stagnating even faster than the os itself and even though Linux / Gnome accessibility is being worked on, it's still ready only for enthusiasts who don't mind their systems being in a constant state of somewhat broken, as illustrated by this series of blog posts from just a few weeks ago: https://fireborn.mataroa.blog/blog/i-want-to-love-linux-it-d...
>Screen reader accessibility on Mac Os is stagnating
Apocryphally, a lot of this was apparently developed at the direct insistence of Steve Jobs who had some run ins with very angry visually impaired people who struggled to use the early iphone/ipad.
That said, my source for this is one of the men who claims to have spoken to Mr Jobs personally, a visually impaired man who had lied to me on several fronts and was extremely abusive. However, I couldn't find anyone inside Apple management or legal who would deny his claim. And he seemed to have been given the expectation that he could call the Apple CEO at any time.
Thanks for pointing this out. I'm not visually impaired but even so the graphics and presentation features on Windows seem noticeably better than the competition.
This is so awesome to hear!!
I've been using WSL on and off for Linux development for the last few years.
When it works, it's great! When it doesn't....oh man it sucks. It has been non-stop networking and VPN problems, XServer issues, window scaling issues, hardware accelerated graphics not working, etc. this whole time. I've spent more time trying to fix WSL issues than actually developing software. It's never gotten better.
It's fast. It's powerful. But using it as a daily driver is very painful in my experience. I avoid it as much as possible and do most of my work in MSYS2 instead. Sure, it's much slower. But at least it works consistently and has for years.
I'm still waiting for the day I need to install WSL, but so far git-bash is working just fine.
I use WSL as daily drive for dev. Never had any issues. Love it. I use it from VS Code.
I still can't use usb-serial devices from within wsl2.
It was possible under wsl1, but wsl1 is an entirely different thing.
"never had any issues" is a meaningless statement. I "never had any issues" with infinite things I never tried to do in the first place.
There is a GUI program to automate this process [1].
I have been using WSL to develop Firmware in Zephyr, no problem so far.
[1]: https://blog.golioth.io/usb-support-in-wsl2-now-with-a-gui/
Not available in Win10 until recently, and broken and fixed even more recently... but thank you for the heads up. It seems this is finally a thing.
I will have to see if it actually works in my case. The devices are intolerant of timing. Even using usb-serial instead of legacy hardware, let alone the ip stack, can be a problem unless using real ftdi adapters.
Basically virtualizing rs-232's hardware flow control into usb packets was always technically invalid since the beginning, but if the host is overwhelmingly fast enough, you get away with it, usually, and by now all new serial devices have adapted to expect the behavior of usb-serial adapters since that's what everyone has. For that reason, new devices generally tolerate even worse timing and you can even get away with going over ip. But the fact is the timing is garbage by that point and not everything works.
Still, I'm sure it's working well enough for most things or else there would be more reports that it doesn't work.
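If you want to put numbers on the timing degradation, a rough round-trip probe with pyserial shows the jitter; this sketch assumes a hypothetical device on /dev/ttyUSB0 that echoes each byte back. Running it against the same adapter natively and then through a forwarding layer should make the worst-case difference obvious:

    import time
    import statistics
    import serial  # pip install pyserial

    # Measure write -> echo round-trip latency 100 times.
    port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
    samples = []
    for _ in range(100):
        t0 = time.perf_counter()
        port.write(b"\x55")
        port.read(1)  # block until the device echoes the byte back
        samples.append((time.perf_counter() - t0) * 1000.0)
    port.close()
    print(f"median {statistics.median(samples):.2f} ms,"
          f" worst {max(samples):.2f} ms")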
Since WSL2 is basically a VM now, I guess we can pass the USB device through to the VM and skip the whole IP stack. The latency is still there, but it's much better than usbipd.
These days I don't even use a WSL distro directly, but I do use it as a backend for my Docker Desktop.
I've tried WSLg a couple of times, and all I ran was something like xclock to ensure it works. I literally have 0 interest in running GUI Linux apps, so for me it's all smooth sailing.
I think I'm still on a beta version because I'm afraid to update it and break all the stuff I have working.
The beta version actually updates more often than the release group. I use the beta so I get the updates sooner. It's been rock stable for me for YEARS.
I use it all the time but then I've never run a GUI application in it.
Every time I praise WSL on hn I pay the karma tax but I will die on this hill. WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously. It's as powerful as Linux with some janky custom local docker wrappers for device support, local storage mapping, and network mapping. Except it's not janky at all. It's an absolute delight to use, out of the box, on a desktop or laptop, with no configuration required.
Edit: for clarity, by "multiple OS" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
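As a small illustration of the multi-distro point: assuming hypothetical Ubuntu-22.04 and Ubuntu-24.04 installs (use whatever names wsl.exe --list reports on your machine), you can drive both side by side from one script:

    import subprocess

    # Run the same command in two installed WSL distros.
    for distro in ["Ubuntu-22.04", "Ubuntu-24.04"]:
        out = subprocess.run(
            ["wsl.exe", "-d", distro, "--", "lsb_release", "-ds"],
            capture_output=True, text=True, check=True,
        )
        print(distro, "->", out.stdout.strip())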
You can accomplish the same with Distrobox on Linux, but there's definitely something to be said about having the best of both worlds by running Windows + WSL.
I honestly think Microsoft could win back some mind share from Apple if they:
* Put out a version of Windows without all the crap. Call it Dev edition or something, and turn off or down the telemetry, preinstalled stuff, ads, and Copilot.
* Put some effort into silicon to get us hardware with no compromises, like the MacBooks.
I'm on Mac now, and I jump back and forth between Mac laptop and a Linux desktop. I actually prefer Windows + WSL, but ideologically I can't use it. It has potential - PowerToys is fantastic, WSL is great, I actually like PowerShell as a scripting language and the entire new PC set up can now be done with PowerShell + Winget DSC. But, I just can't tolerate the user hostile behavior from Microsoft, nor the stop the world updates that take entirely too long. They should probably do what macOS and Silverblue, etc. do and move to an immutable/read-only base and deploy image based updates instead of whatever janky patching they do now.
Plus, I can't get a laptop that's on par with my M4 Pro. The Surface Laptop 7 (the arm one) comes close, but still not good enough.
I'm not saying it's a perfect solution, but with Windows 11 Pro and group policy I was able to disable all of the annoying stuff, and because it is group policy it has persisted through several years of updates. It is annoying you have to do this, and it does take some time to get set up right. But it's a solution.
That said I'd pay for a dev edition as you described it, that would be fantastic.
You can make your own clean version, legally, with this file. https://schneegans.de/windows/unattend-generator.
I get that customers and most people don't know about it, but it's kind of ridiculous that techy people in a tech forum don't know how to do it.
> it's kind of ridiculous that techy people in a tech forum don't know how to do it.
Why? HN has traditionally always largely been a macOS and Linux crowd. Why do we have to care about fixing an OS that is broken out of the box (that most of us don't use anyway)?
Because someone cannot make informed comments about the "other" party unless they have a reasonably deep knowledge of it, too.
Far too many Linux users, especially, make fun of Windows, and if you dig a bit you see that most of their complaints are things that are solved with 5 minutes of googling. Some complaints are philosophical, and those I agree with, but even in that case, I'd be curious how consistent they are with their philosophy when, for example, Linux desktop environments do weird things.
Summarizing a bit: Linux users with years or decades of experience tinkering as sysadmins with Linux frequently make junior-level user complaints about Windows usage, often based on outdated information about it.
I say this as someone who has been using both Linux and Windows for a few decades now and has a fairly decent level of sysadmin skills on both.
I didn't know about this. My knowledge of Windows is very limited. I use it every day for work, but it's managed by our IT and Security departments. It's locked down. You cannot use external drives. You can't install applications yourself and you can't run un-approved applications. So, I learned over the years to never touch anything that already hasn't been approved, even settings. If you want to apply for something to be approved, you can submit a written justification co-signed by your manager. My manager has never rejected anything I requested, but it's a huge hassle. Most of us just don't bother, even developers.
This seems pretty useful, thanks! I had certainly never heard of it.
Thanks for this! I didn’t know this tool existed
There is no flavor of Windows 11 that is acceptable. Even the UI itself is a disaster. A cornucopia of libraries and paradigms from React Native to legacy APIs as if an interdimensional wave function of bad ideas had collapsed into an OS, but with ads.
Windows LTSC already exists, but Microsoft, in all their wisdom, restricts it to enterprise licensees only, and seems to actively discourage using it as a desktop OS. The first problem is of course fixable with some KMS server shenanigans, but the second can be kinda painful when it comes to keeping drivers up-to-date, installing apps that rely on features LTSC excludes (and doesn't provide an easy way to install manually), etc.
I've often said that if Microsoft had just iterated on Windows 2000 forever I'd probably still be a full-time Windows user. If Microsoft had maintained an LTSC-like Windows variant that was installable from the normal retail installation media and with a normal retail product key (at the very least Pro, but ideally Home), that also likely would have kept me on Windows full-time instead of switching to Linux as my daily driver.
I use Windows 11 IoT Enterprise LTSC, which as far as I'm aware has all the features that Pro has (plus the IoT Enterprise stuff) and zero bloat. I switched to it from my already de-bloated 11 Pro installation (because it removes some telemetry you're normally unable to disable) and have had 0 issues with it. I can't say I activated it using a normal retail product key, however, there are easy solutions to that.
Ya, I totally get that. The way I view it is that Windows is a glorified driver support layer and any actual work I do is almost exclusively in the Linux container.
When I used to have free time it was great for games too
> I can't get a laptop that's on par with my M4 Pro.
This is the only reason I have not requested a windows laptop from my company. WSL is better for docker development in basically every way than a mac can be (disclaimer: haven't tried orbstack yet, heard good things, but my base assumption is it can't be better than WSL2) except it is literally impossible to get hardware as good as the M3 or M4 for any other OS than macOS.
I replaced my m1 with a snapdragon laptop running Win11 and upgraded that to pro. For what I do with it, it runs great with very long battery times, for less than Apple quoted to repair the m1. I don't use the copilot features and haven't seen any ads so far, except maybe for office during setup.
Outside the US and countries of similar income level, Windows is doing quite alright in mindshare, and will keep doing so unless Apple stops pretending to be the computer version of audiophile gear.
I on the other hand cannot get an affordable Mac that has the same GPU, disk space and memory size as my workstation class laptop.
(Used 15ys OSX, now Win11)
The biggest difference between OSX and Windows is that Apple adds (some say steals) functionality from the competition and from open source. They make it neat. On Windows, to have something working, you need WezTerm, Everything for search, Windhawk for a vertical taskbar on the right, PowerToys for an app starter, Folder Size for disk management, etc. If you spend a lot of time, Win11 can be OK to work with.
If Powerpoint and Affinity would work on Linux, I'd use Linux though.
Oh running Ice to wrangle the menu bar app icons or Rectangle to properly manage windows ('cause Apple screwed that one up) must be unnecessary.
Each OS is going to have extension applications to improve on the OOTB experience. This is an invalid argument to choosing one over the other.
Maybe just for your specific preferences. Terminal is plenty fine. Vertical taskbar on the right is straight up user preference. PowerToys for an app starter? Like Alfred? The start search does a decent enough job of that. Folder Size is nice, but enumerating all files is very taxing.
>Windhawk for a vertical taskbar on the right
Huh? Windows supports vertical taskbar.
It was removed in Win11, when they rewrote the taskbar to pretend that it's macOS dock (icons centered by default). Today your only options are horizontal taskbar along the top or the bottom edge, and icons aligned left or center.
Last time I checked, Windows 11 lost this capability and 3p solutions like Windhawk are needed. I'd be very happy if they brought this back though, feel free to share a link to some info about how to do it natively.
https://github.com/valinet/ExplorerPatcher
That was my impression too.
To the tech savvy, there is essentially only one advantage to running Windows, and that is the ability to run Windows-only software. In all technical respects - control, performance, flexibility - it is inferior to the alternatives. Don't confuse vendor lockin with technology.
I find it dismaying that people on Hacker News willingly submit to incredibly user-hostile behavior from Microsoft and call it "the best of both worlds". Presumably a nontrivial proportion here are building the next generation of software products - and if we don't even respect ourselves, how likely is it that we will respect our users?
"I find it dismaying that people on Hacker News willingly submit to incredibly user-hostile behavior from Microsoft"
And I find it funny that the crowd that spends whole days implementing user-hostile features in yet another SaaS crapware has so much to say about Microsoft's bad behavior.
There is an additional reason: Some (many?) people simply prefer the Windows UI conventions (once you remove all the enshittifications post Windows 7).
I'm not aware of any particular UI convention that's in Windows that isn't available in, say, Plasma. Day to day usage is extremely similar, and where they diverge it's usually because 1) Plasma has a feature that Windows doesn't, or 2) someone at Microsoft opted for senseless change for change's sake - a toy interface is layered over a functional one, often (but not always) grudgingly allowing access to the old behavior with extra steps, in a tacit admission of no-confidence. This behavior is pervasive - the "new control panel", the new context menu ("show more options" to get to the original, an extra click that yields a menu with many of the same options but in a different order with different icons), and best of all moving the "Start button" to the center - a change which more than any other exemplifies the silliness, because it 1) at best achieves nothing, and 2) flies in the face of the original UI research based on Fitts's Law that informed 30 years of Windows UI tradition.
I honestly can't imagine anyone preferring all that. </rant>
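For reference, the Fitts's Law model the parent alludes to says the time to hit a target grows with distance and shrinks with target width:

    % Fitts's law: D = distance to target, W = target width along the
    % motion axis, a and b are empirically fitted constants.
    T = a + b \log_2\!\left(\frac{D}{W} + 1\right)

A target pinned to a screen corner has effectively infinite width, since the cursor cannot overshoot it, which is why the corner-anchored Start button was so cheap to hit and why centering it gives that advantage up.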
I don't think Microsoft losing the mind share has anything to do with software. Macbooks are winning the laptop war because of superior hardware.
Only on countries where people earn salaries big enough to pay for the Apple hardware tax.
What Apple hardware tax? The MacBook Air is the best value laptop there is. If the latest version is out of the budget, you can buy older generations used. Even an M1 Air would be better than any Windows laptop at a comparable price point.
Yeah, because only being able to afford used stuff is such a great place to be.
Better than buying a new but crap product for sure?
70% of the world doesn't think it is crap.
https://gs.statcounter.com/os-market-share/desktop/worldwide
Superior hardware with terrible software. Also, they straight up artificially limit their hardware so they don't cannibalize their sales, which is slightly understandable, but they do it in the dumbest ways. My SO's MacBook Air can only do one external monitor, even though it has the same specs as her work Pro. Oh, and good luck actually getting that external display to work; I swear only like 50% of USB-C docks work on the platform.
> Superior hardware with terrible software.
Funny how that was the other way around just a few years ago. Macs had inferior hardware, but they were supposed to have better software. At least that's what the Mac users claimed.
I fell for that, years ago. No the software wasn't superior either. I remember having to manually install codecs, which on linux had been a problem many many years before but had been solved already.
> My SO's MacBook Air can only do one external monitor
The MacBook Air M4 supports two external displays now (with the lid open):
https://support.apple.com/guide/macbook-air/use-an-external-...
> My SO's MacBook Air can only do one external monitor, even though it has the same specs as her work Pro.
The MacBook Pro with the non-Pro/Max chip (i.e. MacBook Pro M3) has the same limitations as the corresponding MacBook Air (i.e. MacBook Air M3).
>Macbooks are winning the laptop war because of superior hardware.
No. This is just you repeating marketing.
No Nvidia chip = B tier at best.
I have a $700 Asus with a 3060 that is better. Go ahead and scale up to a $2000 computer with an Nvidia chip and its so obviously better, there is nothing to debate.
No one cares about performance per watt, its like someone ran a 5k race, came in 3rd and said "Well at least I burned fewer calories than the winner!"
> No Nvidia chip = B tier at best.
Nvidia chip = 45 minutes of battery life
Not the one you were talking to, but I'm a dev that does not need extensive battery life. All my dev computers are desktops.
You know they can be turned on or off depending on need right?
Yes, but a few problems:
1. Turning them on/off à la Bumblebee isn't a solved problem. It's buggy, especially on not-Windows. Even on Windows, it's going to be buggy, especially in regards to sleep.
2. You obviously lose the advantage of a nvidia GPU that way. If you have to always have it off to get decent battery life, which you do, then it's kind of moot. If you turn it on for your 30 minute workload then there goes 70% of your battery.
And you can never ever plug it into the power grid because?
You can, I just think it's inconvenient so I favor laptops with better battery. Besides, I almost never find myself being on the go and needing a dedicated GPU.
If you're never on the go you don't even need a laptop to be fair…
Well, I'll have to strongly disagree. You want a laptop whose battery life is not 1 hour at best. That wasn't a thing in Windows/Linux laptops until the M1 came along with arm64. 6 hours of intense work? Good luck with that.
Not only that, but being able to run very intensive work (Pro Audio, Development...) seamlessly is an absolute pleasure.
Its screen is one of the best screens out there.
The trackpad (and some keyboards) are an absolute pleasure.
The robustness of the laptop is amazing.
I don't care about the marketing of Apple, I don't buy anything new they launch, and I condemn all of their obscure pricing techniques for the tech they sell. But my M1 is rocking like the first day, after four years of daily use. That's something my Windows laptops have never delivered to me.
Apple has done a lot of things wrong, and I will not buy another Apple laptop in the future, but I don't want Nvidia on a Laptop, I want it to be portable, powerful and durable.
That is changing now, and it's amazing. I want my laptop to be mine, and to be able to install any OS I like. New laptops with arm64 and Intel Lake CPUs are promising, but we're not there yet, at least not that I have experienced.
Each to their own for sure, and for you, the nvidia requisite is important. For me it's not about brands, but usability for my work and hobbies.
I can do 6 hours of work on my 10-year-old ThinkPad… It's nothing special really.
Please tell me which laptop.
Also, is it powerful enough to run a development environment (docker compose/k3s with db & cache, intellij/vscode, etc.) without having issues?
Genuine questions, I am no fanboy of anything
I have a Thinkpad T560 with only 8GB. I develop using docker and I use kate with python3-pylsp for completion. And of course the occasional zoom/teams.
Instead of slack I normally use localslackirc, so that alone probably saves a ton of battery rather than using the electron one.
When I compile a lot I still manage to get half a day on battery. If I want to save power I just ssh to a server and do everything there :)
edit: that model has also hotswap battery so if you really really need more battery life you can buy a spare.
The 6 hours of real work battery that Apple manages with ARM is genuinely impressive, and finally I think shifted the landscape to take ARM seriously as a CPU for consumers.
But it's just not that big a deal. Sure, I COULD spend a day working without power, but it's 2025 and USB-C power delivery is a mature spec. My desk has power. My work desk has power. My living room has power. My bedroom has power. The coffee shop has power. Airplanes have power. My fucking CAR has power.
Where are you working that you need a full 6 hours of hard working power without occasional access to a power outlet and a battery bank won't meet your needs?
I would be satisfied with 2 hours of hard working battery, which is what Ryzen powered Windows laptops deliver. My girlfriend uses her $800 mid range Ryzen laptop to play games and other power hungry things off charger every single day. It's also what work laptops other than Macs have always provided. Sure, my Thinkpad from 2012 needed a giant tumor of a battery to provide that, but it was always an available option, and you could swap it out for a tiny battery if you really wanted to slim it down.
Never an option in apple land. Battery not good enough? Fuck you, too bad.
> *You* want a laptop whose battery life is not 1 hour at best.
But why?
I mean I can see why some want that. But why would I or most or devs in general want that? I very rarely code on laptop, and almost never when not at a desk.
Why would I need an Nvidia chip in my laptop?
For some groundbreaking Artificial Intelligence work, obviously.
In reality, he probably just wants to play CS2 :D
This would be fantastic. But Microsoft doesn't have to do this. Their users are captives.
Some of them are.
But the increasing market share of Macs and even Linux these days, plus the ever-increasing number of OSS initiatives from Microsoft, points to Microsoft knowing that a lot fewer of their users are as captive as they were in the 90's, for example.
More specifically: a lot fewer developers are as captive as they were in the 90's. And while normal users vastly outnumber developers, Microsoft has figured out that those normal users ain't inclined to stick around if those developers jump ship and stop developing for Windows.
In other words, specifically those of a former Microsoft CEO (who understood the problem but not the solution):
DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS ... YES
Even for regular users, a big chunk of regular users are looking at other platforms:
- "creatives" have always been a core Apple market and they've grown, so that market has grown; plus, since Windows is globally less dominant, a lot of "Photoshop/video editing software/3D modeling + Windows" folks are now on Macs
- gamers now have Proton + Steam on Linux + SteamOS so quite a few more of them are on Linux now, especially since Valve is pushing in this direction to keep Microsoft honest
- large number of regular office workers have iPhones, especially as you move towards the top of the hierarchy, and are far more tempted than they would have been in the past to try or use a Mac
- in many schools there are now Chromebooks instead of Windows laptops; this is primarily a US thing, but it does pop up in some other places, too
Windows is sort of stable but probably still bleeding users slowly.
There's a dedicated settings page for quickly setting popular dev settings such as showing extensions and full paths. Getting rid of the rest just involves tweaking a few other settings like don't show tips or welcome screen. I also hide the weather and news widget because it's tabloid rubbish but many people seem to love it.
> nor the stop the world updates that take entirely too long
Interestingly enough, beyond release upgrades, which happen maybe once a year, all (or maybe 99%) of updates take ~5 minutes of interruption for me, including the needed reboot. I really wonder how others manage to have "entirely too long" updates.
5 minutes is too long. My Debian systems never demand that I update them. When I update them, it never even takes two minutes.
That can't be helped. I go for a smoke and when I come back the system is already upgraded.
I've not been using Debian setups lately, but on Ubuntu, an alert about need-to-reboot packages after the daily unattended upgrades run happens almost every month. I'm fairly sure Debian is on a similar schedule here.
> a version of windows without all the crap
LTSC is a version like that
> "Microsoft doesn't make any release from the Long-Term Servicing Channel available for regular consumers. The company only makes it available to volume licensing customers, typically large organizations and enterprises. This means that individual users cannot purchase or download Windows 11 LTSC from Microsoft's website."
https://www.windowscentral.com/software-apps/windows-11/what...
Just use mas
> without all the crap
as far as MS are concerned, that crap is their business.
Or, possibly, that crap is the multitude of little software empires built by the management layer now in control.
"More powerful than Linux" is silly. It's a VM. The most useful thing is that it does a bunch of convenience features for you. I am not suggesting that it is not extremely convenient, but it's not somehow more powerful than just using Linux.
You know what's even more convenient than a VM? Not needing a VM and still having the exact same functionality. And you don't need a bunch of janky wrapper scripts; there's more than one tool that gives you essentially the same thing. I have used both Distrobox and toolbx to quickly drop into an Ubuntu or Fedora shell. It's pretty handy on NixOS if I want to test building some software in a more typical Linux environment. As a bonus, you get working hardware acceleration, graphical applications work out of the box, there is no I/O tax for going over a 9p bridge because there is no 9p bridge, and there are no weird memory balloon issues to deal with because there is no VM and there is no guest kernel.
I get that WSL is revolutionary for Windows users, but I'm sorry, the reason why there's no WSL is because on Linux we don't need to use VMs to use Linux. It's that simple...
Yeah, if you are working with Linux only, it's better to go full Linux.
WSL2 is really handy when you want to run other software though. For example, I use Solidworks, so I need to run windows. Forscan for Ford vehicles also has to run under Windows. Having WSL2 means that I can just have one laptop and run any software that I want.
My development is mainly Windows and I prefer a Linux host with Windows VM guests. The experience is more stable and I can revert to a snapshot when a Windows or Microsoft product update breaks something, or a new test configuration does. It also allows me to back up and retain multiple QA environments that are rarely used, like a client's Oracle DB. It is nice being able to save the VM state at the end of the week and shut it all down so you can start the next right where you left off. You cannot do that when your development environment is the bare metal OS. Windows has known issues with waking a sleeping laptop.
I too think it would definitely be more stable with a Linux host and Windows VM guests, but I can see the other way around being more convenient for getting commercial support. Though with the VMware licensing changes, I think what is by default easier for commercial support options may be changing too.
Can you share more details of how you make that work well? What hypervisor, what backup/replication, for instance? I can only imagine that being a world of irritation.
It's been a few years since I used it, but Virtualbox (free) had perfectly good suspend/restore functionality, and the suspended VM state was just a file.
I use virt-manager and suspend/restore for the same feature, without using an Oracle product (with all the side effects that brings).
I use libvirt/kvm/qemu. It works fine to do all the things mentioned like snapshots.
> Windows has known issues of waking a sleeping laptop.
Doesn't Linux as well?
I'm on Lenovo Yoga 6, Gentoo, 6.12 kernel, 4.20 Xfce. Sleeps works perfect. Same on my Asus+AMD desktop. I've not had sleep related issues for years. And last time I did, it was an out-of-tree Wifi driver causing the whole mess.
I'm on Ubuntu 25.04, 128GB RAM, pcie 5 SSD, NVIDIA 5080, 9950X3D.
I discovered over the weekend that only 1 monitor works over HDMI, DisplayPort not working, tried different drivers. Suspend takes a good 5 minutes, and on resume, the UI is either torn or things barely display.
I might buy a Windows license, especially if I can't get multi-screen to work.
Be pragmatic, use the binaries provided by nvidia and not the ones provided by Ubuntu.
Or use Suse, only distro that manages that well. Forget PopOS. Really, either binaries or Suse.
If someone else here is entrenched on Arch, do this: https://github.com/Frogging-Family/nvidia-all
If on Fedora, just use the binaries... trust me.
Hope this helps someone.
Try a lower version of the Nvidia driver. The newer version was causing me and folk I work with a lot of problems.
This has been a pain point for us and our development process… not all versions of Nvidia drivers are the same… even released ones. You have to find a “good” version and keep to it, and then selectively upgrade… at least this has been the case the last 5 years, folks shout out if they have had different experiences.
Side note: our main use case is using cuda for image processing.
In my experience Ubuntu has the worst issues with displays of any distro.
To be fair I stay away from NVIDIA to, I would probably run a separate headless box for those GPU workloads if I needed to
Yeah, Ubuntu used to be the distro that "just worked" while nowadays that crown has passed to Fedora.
> In my experience Ubuntu has the worst issues with displays of any distro.
In my experience, it has zero issues. I use nvidia binary build. I have since 2006 through various nvidia GPU's.
Install Pop_OS! for better OOTB NVIDIA support.
Make sure your device is compatible with WSL this way; it's very fragile and prone to breaking.
Ahhh, the famous "Works on my machine!" stamp of truth.
"Works on my machine!" is stupid when it comes to software running under an OS, because a userland program that is correct shouldn't work any differently from box to box. (Exceptions you already know notwithstanding.) It is very different when it comes to an operating system.
I know people here hate this, but if you want a good Linux experience, you need to start by picking the right hardware. Hardware support is far and away the number one issue with having a good Linux experience anymore. It's, unfortunately, very possible to even set out to pick good hardware and get burnt for various reasons, like people misrepresenting how well a given device works, or perhaps just simply very similar SKUs having vastly different hardware/support. Still, I'm not saying you have to buy something from a vendor like System76 that specifically caters to Linux. You could also choose a machine that just happens to have good Linux support by happenstance, or a vendor that explicitly supports Linux as an option. I'm running a Framework Laptop 16 and it works just fine, no sleep issues. As far as I know, the sole errata that exists for this laptop is that Panel Self Refresh is broken in the AMDGPU driver. It sorta works, but it's a bit buggy, causing occasional screen artifacts. NixOS with nixos-hardware disables it for me using the kernel cmdline argument amdgpu.dcdebugmask=0x10. That's about it. The fingerprint reader is a little fidgety, and Linux could do a better job at laptop audio out of the box, but generally speaking the hardware works day in and day out. It's not held together with duct tape.
I don't usually bother checking to see if a given motherboard will work under Linux before buying it, since desktop motherboards tend to be much better about actually running Linux well. For laptops, Arch wiki often has useful information for a given laptop. For example, here's the Arch wiki regarding the Framework 16:
https://wiki.archlinux.org/title/Framework_Laptop_16
It's fair to blame Linux for the faults it actually has, which are definitely numerous. But let's be fair here, if you just pick a given random device, there is a good chance it will have some issues.
I recall having a sleep issue with Linux 15 years ago; I think it's been fixed long ago, except maybe on some very new hardware, or if you install the wrong Linux on an M-series Mac you could have issues with sleep.
I had these issues with Windows, but with Linux Mint it works perfectly.
Not if you don't buy Windows hardware and slap Linux on it.
Unfortunately, most (almost all) hardware is Windows hardware. So far, System76 is the only one that I've had actually work.
The less coupled software is to hardware, the less likely it is tested in that hardware and the higher likelihood of bugs. Linux can run fine but arbitrary Linux distros may not. This is not the fault of hardware makers.
> The less coupled software is to hardware, the less likely it is tested in that hardware and the higher likelihood of bugs.
Yes, exactly! There are whole teams inside Dell etc. dealing with this. The term is "system integration." If you're doing this on your own, without support or chip info, you are going to (potentially) have a very, very bad time.
> This is not the fault of hardware makers.
It is if they ship Linux on their hardware.
This is why you have to buy a computer that was built for Linux, that ships with Linux, and with support that you can call.
Tell me how it's not their fault?
Hardware support is more than just kernel support. Additionally, not every kernel release works well for every piece of hardware. Each distro is unique and ensuring the correct software is used together to support the hardware can be difficult when you are not involved in the distro. This is why vertical integration between the distro and hardware leads to higher quality.
Firmware also plays a huge role these days (fan curves, ACPI, power management, etc.)
But saying it can vary largely by distro is overstating it by a lot. Mostly, distro issues are going to be based on how old their kernels are.
But definitely, modern hardware is much too complex to just slap Linux on Windows hardware (or vice versa).
I have Linux on MacBooks from 6 different years. They all work flawlessly. I also have a Lenovo that works well.
Sorry you have had such bad luck.
System76 seems janky though if you use anything but PopOS
I run Gentoo on all but one of my system76 boxen, and have not seen any jank
I am running ChromeOS with Debian 'slapped on it' and that also experiences sleep-related issues.
Big fan of Linux, but saying that Linux works on system76 while they have a tiny sliver of the Linux market share seems like a nonstarter.
ChromeOS, where sleep presumably worked, is also Linux. You just exchanged a working Linux for a distro with more bugs. The fact that you're able to do that is pretty cool.
That's not to detract from the larger point here though. It's pretty funny that all of the replies in this thread identify different causes and suggest different fixes for the same symptom. Matches my experience learning Linux very well.
Turns out you get what you pay for.
You can either get hardware that works or you can deal with breakage.
You can force it to behave on Linux ;)
> My development is mainly Windows and I prefer Linux host with Windows VM guests
I've tried this in the past but I was unable to get the debugger to work from within a VM.
Has this improved, or is there a trick, or are you just going without a debugger?
In the same spirit of "it depends", there are other options that may work for people with different Linux/Windows balance points:
* Wine is surprisingly good these days for a lot of software. If you only have an app or two that need Windows it is probably worth trying Wine to see if it meets your needs.
* Similarly, if gaming is your thing Valve has made enormous strides in getting the majority of games to work flawlessly on Linux.
* If neither of the above are good enough, dual booting is nearly painless these days, with easy setup and fast boot times across both OSes. I have grub set to boot Linux by default but give me a few seconds to pick Windows instead if I need to do one of the few things that I actually use Windows for.
Which you go for really depends on your ratio of Linux to Windows usage and whether you regularly need to mix the two.
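For what it's worth, that boot-Linux-by-default-with-a-short-menu setup is just a couple of lines in /etc/default/grub; a sketch, assuming a Debian/Ubuntu-style distro (the config-regeneration command varies elsewhere):

    GRUB_DEFAULT=0    # index of the default menu entry; 0 is the first one, usually Linux
    GRUB_TIMEOUT=5    # seconds to show the menu before booting the default

    # apply the change
    sudo update-grub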
And you also can just run a windows VM when needed for a few apps if that works for your use case.
I'm struggling to find an option for running x86 Windows software on MacOS/Apple Silicon performantly. (LiDAR point cloud processing.)
The possibilities seem endless and kinda confusing: Windows on ARM vs. Rosetta and Wine, and I think there are some other options which use macOS's included virtualization frameworks.
Have you tried CloudCompare? Native Mac ARM support.
https://www.cloudcompare.org/
(Edit: just so you know, the UI is a bit weird, there is a bit of a learning curve. But the app behaves in a very sane manner, with every step the previous state is maintained and a new node is created. It takes time to get used to it, but you'll learn to appreciate it.
May your cloud have oriented normals, and your samples be uniformly distributed. Godspeed!)
Have you tried to install Windows 11 ARM under UTM on Mac? UTM is a kind of open source Parallels. Then you'll run x86 software using Windows' variant of Rosetta. Probably slower than Rosetta but perhaps good enough.
In case others were similarly confused, I thought that UTM was commercial but it is Apache 2 https://github.com/utmapp/UTM/blob/v4.6.5/LICENSE
I wanted to play around with Windows 11 for a while now. It boots in UTM just to the degree that I can confirm my suspicions that Windows 11 sucks compared to Windows 10, but is not otherwise usable. (MacBook Air M3, slightly outdated macOS)
That’s interesting; I’d expect something techie like that to have good Linux programs.
Try Whisky: https://github.com/Whisky-App/Whisky
> Forscan for Ford vehicles also has to run under Windows.
I've successfully run it with WINE. Though, my Forscan executable was 3 years old or so, and that may have changed since, but I doubt it.
The thing about WINE is that it's not necessarily solid enough to rely on at work. You never know when the next software upgrade will break something that used to work.
That's always true, of course. But, compared to other options, relying on WINE increases the chances of it happening by an amount that someone could be forgiven for thinking isn't acceptable.
In my mind, I almost feel like the opposite is true. Wine is getting better and better, especially with the amount of resources that Valve is putting into it.
If you want a stable, repeatable way to wrangle a Windows tool: Wine is it. It's easy to deploy and repeat, requires no licenses, and has consistent behavior every time (unless you upgrade your Wine version or something). Great integration with Linux. No Windows Updates are going to come in and wreck your systems. No licensing, no IT issues, no active directory requirements, no forced reboots.
You can fix this issue by using a Wine "bottle manager" like... Bottles. This allows you to easily manage multiple instances of Wine installations (like having multiple Windows installations) with better and easier-to-use tooling around it. More importantly, it also allows you to select across many system-agnostic versions of Wine that won't be upgraded automatically, reducing the possibility of something you rely on breaking.
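Under the hood this is mostly Wine's prefix mechanism, which you can use even without a manager; a sketch, with the prefix path and program names hypothetical:

    # each prefix is its own self-contained fake Windows install
    WINEPREFIX="$HOME/prefixes/forscan" wine FORScanSetup.exe

    # later, run the app out of that same isolated prefix
    WINEPREFIX="$HOME/prefixes/forscan" wine 'C:\Program Files (x86)\FORScan\FORScan.exe'

What a manager like Bottles adds on top is pinning a specific Wine build per bottle, so a system-wide Wine upgrade can't silently change behavior.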
Or pony up for CodeWeavers. Their code goes into WINE, and they are (the?) major WINE devs. They've had bottles for years, if not decades now.
I used to, a long time ago, but even back then I was getting more value out of q4wine (a defunct project now) than from CodeWeavers' stuff. Granted, I was perhaps too "enthusiast", using git versions of Wine with staging patches and my own patches rolled in, so q4wine's (and I guess now Bottles') more DIY approach won me over.
That all said, I haven't tried CodeWeavers in almost 10 years so it might have improved a lot.
No, if wine itself breaks a bottle won't save you.
When I hear cases of using Wine etc as a substitute, I can't help but think of the "We have McDonald's at home" meme!
Wine is fantastic, but it is fantastic in the sense of being an amazing piece of technology. It's really lacking bits that would make it a great product.
It's possible to see what Wine as a great product would look like. No offense to crossover because they do good work, but Valve's Steam Play shows what you can really do with Wine if you focus on delivering a product using Wine.
Steam offers two main things:
- It pins the version of Wine, providing a unified stable runtime. Apps don't just break with Wine updates, they're tested with specific Proton versions. You can manually override this and 9 times out of 10 it's totally fine. Often times it's better. But, if you want it to work 10 out of 10 times, you have to do what Valve does here.
- It manages the wineserver (the lifecycle of the running Wine instance) and wine prefix for you.
The latter is an interesting bit to me. I think desktop environments should in fact integrate with Wine. I think they should show a tray icon or something when a Wineserver is running and offer options like killing the wineserver or spawning task manager. (I actually experimented with a standalone program to do this.[1]) Wine processes should show up nested under a wineserver in system process views, with an option to go to the wineprefix, and there should be graphical tools to manage wine prefixes.
To be fair, some of that has existed forever in some forms, but it never really felt that great. I think to feel good, it needs to feel like it's all a part of the desktop system, like Wine can really integrate into GNOME and KDE as a first-class thing. Really it'd be nice if Wine could optionally expose a D-Bus interface to make it so that desktop environments could nicely integrate with it without needing to do very nasty things, but Wine really likes to just be as C/POSIX/XDG as possible so I have no idea if something like that would have a snowball's chance in hell of working either on the Wine or desktop environment side.
Still, it bums me out a bit.
One pet peeve of mine regarding using Wine on Linux is that EXE icons didn't work out of the box on Dolphin in NixOS; I found that the old EXE thumb creator in kio-extras was a bit gnarly and involved shelling out to an old weird C program that wasn't all that fast and parsing the command line output. NixOS was missing the runtime dependency, but I decided it'd be better to just write a new EXE parser to extract the icon, and thankfully KDE accepted this approach, so now KDE has its own PE/NE parser. Thumb creators are not sandboxed on KDE yet, so enable it at your own risk; it should be disabled by default but available if you have kio-extras installed. (Sidenote: I don't know anything about icons in OS/2 LX executables, but I think it'd be cool to make those work, too.) The next pet peeve I had is that over network shares, most EXE files I had wouldn't get icons... It's because of the file size limit for remote thumbnails. If you bump the limit up really high, you'll get EXE thumbnails, but at the cost of downloading every single EXE, every single time you browse a remote folder. Yes, no caching, due to another bug. The next KDE Frameworks version fixes most of this: other people sorted out multiple PreviewJob issues with caching on remote files, and I finally merged an MR that makes KIO use kio-fuse when available to spawn thumb creators instead of always copying to a temporary file. With these improvements combined, not just EXE thumbnails but also video thumbnails work great on remote shares, provided you have kio-fuse running. There's still no mechanism to bypass the file size limit even if both the thumbcreator and kio-fuse remote can handle reading only a small portion of the file, but maybe some day. (This would require more work. Some kio slaves, like for example the mtp one, could support partially reading files but don't, because it's complicated. Others can't, but there's no way for a kio-fuse client to know that. Meanwhile, thumb creators may sometimes be able to produce a thumbnail without reading most of the file and sometimes not, so it feels like you would need a way to bail out if it turns out you need to read a lot of data. Complicated...)
I could've left most of that detail out, but I want to keep the giant textwall. To me this little bit of polish actually matters. If you browse an SMB share on Linux you should see icons for the EXE files just like on Windows, without any need to configure anything. If you don't have that, then right from the very first double-click the first experience is a bad one. That sucks.
Linux has thousands of these papercuts everywhere and easily hundreds for Wine alone. They seem small, but when you try to fix them it's not actually that easy; you can make a quick hack, but what if we want to do things right, and make a robust integration? Not as easy. But if you don't do that work, you get where we're at today, where users just expect and somewhat tolerate mediocre user experience. I think we can do better, but it takes a lot more people doing some ultimately very boring groundwork. And the payoff is not something that feels amazing, it's the opposite: it's something boring, where the user never really has any hesitation because they already know it will work and never even think about the idea that it might not. Once you can get users into that mode you know you've done something right.
Thanks for coming to my TED talk. Next time you have a minor pet peeve on Linux, please try to file a bug. The maintainers may not care, and maybe there won't be anyone to work on it, and maybe it would be hard to coordinate a fix across multiple projects. But honestly, I think a huge component of the problem is literally complacency. Most of us Linux users have dealt with desktop Linux forever and don't even register the workarounds we do (any more than Windows or Mac users do, albeit they probably have a lot fewer of them). To get to a better state, we've gotta confront those workarounds and attack them at the source.
[1]: https://github.com/jchv/winemon just an experiment though.
If you (or whoever is reading this) want(s) a more refined Wine, I highly recommend CodeWeavers. Their work gets folded back into open source WINE, no less.
> To get to a better state, we've gotta confront those workarounds and attack them at the source.
To my eye, the biggest problem with Linux is that so few are willing to pony up for its support. From hardware to software.
Buy Linux computers and donate to the projects you use!
That's true, but even when money is donated, it needs to be directed somewhere. And one big problem, IMO, is that polish and UX issues are not usually the highest priority to sort out; many would rather focus on higher impact. That's all well and good and there's plenty of high impact work that needs to be done (we need more funding on accessibility, for example.) But if there's always bigger fires to put out, it's going to be rather hard to ever find time to do anything about the random smaller issues. I think the best thing anyone can do about the smaller issues is having more individual people reporting and working on them.
If you're at work, it's probably a Windows shop: use Windows. At home you can chance a bad update, and you probably also have access to Windows. You can always use a VM; Wine is great in some cases, as is WSL. Neither meets every use case.
Same with Windows upgrades nowadays, really; there's a ton of software that just stopped working.
They named it “Forscan?” They really named it that, not thinking it could sound close to something else entirely unrelated?
Surely you don't think the executives at Ford expect us to Power Stroke without FORScan?
Ford’s own software is called FDRS.
Forscan was developed independently by some Russian gentlemen, probably with plenty of reference to FDRS/IDS internals.
Volkswagen's equivalent is VAG-COM
Why bring Wine into a VM discussion? Just run Windows in a VM too. Problem solved, without getting into whining about Wine not being better than Windows itself.
I work in embedded systems. In that space, it's pretty common to need some vendor-provided tool that's Windows-only. I often need to automate that tool, maybe as part of a CI/CD pipeline or something.
If I were to do it with a Windows VM, I'd need to:
If I do it with Wine instead, all I need to do is:

Did you know that Forscan works flawlessly under Wine if you're not using Bluetooth?
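To make the Wine route concrete, in a CI job it can be as small as this sketch (the tool name and flags are hypothetical; xvfb-run just gives the tool a dummy display):

    # one-time: initialize a throwaway prefix for the vendor tool
    WINEPREFIX="$PWD/.wineci" wine wineboot
    # per-build: run the Windows-only tool headlessly
    WINEPREFIX="$PWD/.wineci" xvfb-run -a wine vendor-flasher.exe /in build/firmware.hex /out build/firmware.bin

No VM image to license, snapshot, or keep patched.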
I'm sure with enough tinkering I could get Solidworks to run. The thing is, I don't want to spend time tinkering, I want to spend time doing. WSL2 gives me the optimal solution for all of that + dev.
I really want to like Windows 11, and I enjoy using WSL, but Microsoft treats me too much like an adversary for me to tolerate it as a daily driver. Only a complete scumbag of a product manager would think pushing Candy Crush ads is a good idea.
I’ve got an airgapped Toughbook that I use for the few Windows apps I really need to talk to strange hardware.
I suggest looking into Windows LTSC. It has solved most of the annoyances for me.
You don't need LTSC, you just need Windows Pro versions.
Lots of people bitch and moan about Windows problems that only exist because they buy the cheaper "Home" or whatever license and complain that Microsoft made different product decisions for average users than for people who have bought the explicitly labeled "power user" version.
Remember, the average computer user IS a hostile entity to Microsoft. They will delete System32 and then cry that Windows is so bad! They will turn off all antivirus software and bitch about Windows being insecure. They refuse to update and then get pwned and complain. They blame Microsoft for all the BSODs that were caused by Nvidia's drivers during the Vista era. They will follow a step by step procedure in some random forum from ten years ago that tells them to turn off their entire swap file despite running with lots of RAM and spinning rust and then bitch that Windows is slow.
Don't expect Microsoft to not deal with morons using their software. Buy the Pro versions if you don't want the version meant for morons.
I’m on Enterprise.
I shouldn’t need to spend this much time and energy turning off AI rubbish, bypassing cloud features, or knobbling telemetry and ads because some shitbag at Microsoft decided this was a good way of getting a promotion.
My computer is supposed to work for me, not the other way around.
Windows is only free if you don't value your time, it seems :-)
You do need to get Win 11 pro to be able to disable all of those features.
I run Windows in a VM where I need windows. It’s so much easier to fix a broken Linux installation than a broken Windows installation.
> For example, I use Solidworks, so I need to run windows.
Right. One of the things a lot of people don't get is the extent to which multidisciplinary workflows require Windows. This is particularly true of web-centric software engineers who simply do not have any exposure to the rest of the engineering universe.
Years ago this was the reason we had to drop using Raspberry Pi's little embedded microcontroller. The company is Linux-centric to such an extent that they simply could not comprehend how telling someone "Just switch to Linux" is in a range between impossible and nonsensical. They were, effectively, asking people to upend their PLM process just for the sake of using a little $0.50 part. You would have to do things like store entire OS images and configurations just to be able to reconstruct and maintain a design iteration from a few years ago.
WSL2 is pretty good. We still haven't fully integrated this into PLM workflows though. That said, what we've done on our machines was to install a separate SSD for WSL2. With that in place, backing up and maintaining Linux distributions, or distributions created in support of a project, is much, much easier. This effectively isolates WSL2 distributions from Windows. I can clone that drive and move it from a Windows 10 machine to a Windows 11 machine and life is good.
For AI workflows with NVIDIA GPUs, WSL2 is less than ideal. I don't know if things have changed in this domain since I last looked. Our conclusion from a while back was that, if you have to do AI with the usual toolchains, you need to be on a machine running Linux natively rather than a VM running under Windows. It would be fantastic if this changed and one could run AI workflows on WSL2 without CUDA and other toolchain issues. Like I said, I have not checked in probably a year; maybe things are better now?
EDIT: The other reality is that one can have a nice powerful Linux machine next to the Windows box and simply SSH into it to work. Most good IDE's these days support remote development as well. If you are doing something serious, this is probably the best setup. This is what we do.
My coworkers stubbornly try to use WSL instead of Linux directly. They constantly run into corner cases and waste time working around them compared to just using Linux. Some tooling detects that it is running on Windows, and some detects that it is running on Linux. In practice, it's the worst of both worlds.
Saying that running full Linux avoids wasting time on fiddly workarounds kinda blows my mind.
Full hardware support is still not a given, and Windows emulation is still needed in so many cases (e.g. games, specialized software, etc.).
Until I can choose any machine based on form factor and specs alone and just run Linux on it, WSL will be the best version of Linux it can run.
> Full hardware support is still not a given
What might your workload be? The only things that aren't working on Linux on day 1 are GPUs, and that's mostly because of kernel/distro timing (we haven't had a GPU release without mainline kernel support in years).
I am into small and portable, decently powerful, high DPI laptops (battery be damned), ideally with touch support. And this category just gets no love in the linux world.
I was holding out hope for the Framework 12", but they cheaped out on the screen to target the student market, with no upgrade option at this point.
Or the wireless chipset that your corporate laptop happens to have. Or Bluetooth. Or it won't suspend properly.
Or a way worse touchpad experience. No swiping gestures. No smooth scrolling. Fn buttons not working. Or any of a million other issues. I have never been able to install Linux on a laptop and get things working within a weekend. And then I revert, because I need my computer.
Run wayland instead of xorg… Also get better laptops.
> better laptops
The absolute best built laptops on the market right now don't come with Linux support...
If you're thinking of Apple… as a former Apple owner and current ThinkPad owner, the build quality of Apple is severely overrated. Please come back with comments that are not just shilling.
Buy a System76
That was kind of my point: we're still at a stage where checking a list of supported laptops and vendors is pretty much mandatory.
This is totally laptop vendors' fault, but that doesn't change the fact of the matter.
PS: it would be fine if there were a few good options in all categories. Right now I see nothing comparable to an Asus Z13 but with first-class Linux support, for instance.
In the case of some "compatibility" subsystem, it's absolutely true. It's complexity that requires fiddly workarounds.
Just use Linux.
> Full hardware support is still not a given,
If you're not buying your hardware from a vendor you can call and get support with Linux from, you're going to have a hard time.
> Full hardware support is still not a given
I bought an iPhone and then got angry it didn't run Android.
Why would your primary work device be running an OS not supported by the device vendor? That's just bizarre.
I use Linux as my primary OS, and while Proton/Steam are pretty good now I'm still rebooting into (unactivated) Windows for some games. It's fine. It's also the only thing I use Windows for.
On an unrelated note, I'm frankly confused about who wants Apple's janky OS, because I've been forced to use it for work and it is very annoying.
What modern hardware isn't supported by Linux? I haven't had driver problems in probably over a decade. I don't even target Linux for my builds, it just works. Same with the pile of random laptops I've installed it on. Wifi out of the box etc.
> What modern hardware isn't supported by Linux?
Fingerprint sensors and IR login cameras that are pre-installed on many laptops, and have Windows-only drivers.
As an end-user (yes, I'm an engineer too, but from the perspective of the OS and driver developers I am an end-user) I don't care who is in charge of getting the device to work on an OS—I only care whether it works or not. And these devices don't work, on Linux. So, they are broken.
My fingerprint scanner works, but I don't use it because typing my password is faster.
Yeah, those are weird, since a huge chunk of the drivers are userland.
What detects that it is running on Windows, out of interest?
I use WSL extensively, with lots of languages, and I’ve never had anything do that.
It’s running in a VM, so that would be some kind of weird VM escape?
It's easy, it's right there in uname -r.
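e.g. a stock WSL2 kernel reports something like this (exact version varies):

    $ uname -r
    5.15.167.4-microsoft-standard-WSL2

So anything that greps the kernel release string for "microsoft" will conclude it's on Windows.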
I would love to hear about these edge cases and which tooling fails to detect that it was launched from Linux.
Sounds a lot like a picnic problem but you didn’t give nearly enough details.
Yesterday, they tried to get a Python library that built a native library using Meson to work. They were working under WSL, but somehow, Meson was attempting to use the MSVC toolchain and failing.
And they were using pip/uv whatever from linux, the linux version.
One of the most common issues is calling a windows executable from within wsl… it’s a “convenience” feature that takes about 2 seconds to disable in the wsl config but causes these kinds of weird bugs
If on WSL2, they need an

    [interop]
    appendWindowsPath=false

section in /etc/wsl.conf. Then everything will go flawlessly.
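One caveat: wsl.conf is only read when the distro starts, so the change takes effect after a restart of the VM, e.g. from a Windows prompt:

    wsl --shutdown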
> "More powerful than Linux" is silly. It's a VM.
I don't think it's silly. Sure, it's a VM, but it's so nice that I barely reboot into Linux. You get the best of both worlds with WSL.
For me, the best part of running Linux as the base OS is not having to deal with Windows.
No ridiculous start menu spam; a sane, non-bloated operating system (imagine being able to update user space libraries without a reboot, due to being able to delete files that other processes still have opened!); being able to back up my data at the file level without relying on weird block-level imaging shenanigans and so much more.
How is inverting the host/guest relationship an improvement on that?
> For me, the best part of running Linux as the base OS is not having to deal with Windows.
This is correct, but let's not pretend that linux is perfect. 99% of linux _for me_ is my terminal environment. WSL delivers on that _for me_.
I don't see any start menu spam because I rarely use it; when I do, I type what I'm looking for before my eyes even move to look at that start menu.
Oh, and I can play Destiny 2 and other games without shenanigans. Also, I don't need to figure out why Slack wants to open links in Chromium but Discord in Firefox (I do have to deal with Edge asking to be the default browser, but IMO it's less annoying).
Oh and multi-monitor with multiple DPI values works out of the box without looking up how to handle it in one of the frameworks this app uses.
> when I do, I type what I'm looking for before my eyes even move to look at that start menu.
That's a /s, right? When I start typing immediately after the windows button, the initial letters are lost, the results are bad either way, and most turn into just web suggestions rather than things named exactly like the input.
> That's a /s, right? When I start typing immediately after the windows button, the initial letters are lost, the results are bad either way, and most turn into just web suggestions rather than things named exactly like the input.
No, I rarely have issues with search in start menu.
Turn off web suggestions then?
I did. They come back after some of the updates.
> imagine being able to update user space libraries without a reboot
That's... a very weird criticism to level at Windows, considering that the advice I've seen for Linux is to reboot if you update glibc (which is very much a user space library).
Why? It directly results in almost every Windows update requiring a reboot to apply, compared to usually only an application restart or at most desktop logout/login on Linux.
Having to constantly reboot my computer, or risk missing important security patches, was very annoying to me on Windows.
I've never had to reboot after updating glibc in years of using Linux, as far as I can remember.
You got some moderately bad advice.
Running programs will continue to use the libc version that was on disk when they started. They won't even know glibc was upgraded. If something is broken before rebooting, it'll stay broken after.
This is not true. Different programs on the same system that interoperate and use different versions of the same shared library can absolutely cause issues.
For a trivial change to glibc, it won't cause issues. But there's a lot of shared libraries and lots of different kinds of changes in different kinds of libraries that can happen.
I still haven't nailed down whether it was due to a shared library update, but just the other day, after running upgrades, I was unable to su or sudo / authenticate as a user until after rebooting.
It does happen, but it's pretty rare compared to Windows in my experience, where inconvenience is essentially guaranteed.
Firefox on Linux did not really enjoy being updated while running, as far as I remember; Chrome was fine with it, but only since it does some extra work to bypass the problem via its "zygote process": https://chromium.googlesource.com/chromium/src/+/main/docs/l...
Works fine with Nix and Guix since it doesn't replace JS or other shared config files in-place to perform updates
The only time I need to reboot my Linux Mint is when the Linux kernel is updated. I understand why.
I responded "This is not true" to a sibling comment about this same topic, but about "shared libraries", which is the opposite problem (multiple programs could load the same shared library and try to interact).
This is absolutely not true for Linux kernel updating. While you won't be using the new kernel before rebooting, there's 0 risk in not rebooting, because there's exactly 1 version of the kernel running on the machine -- it's loaded into memory when your computer starts.
There's of course rare exceptions, like when a dynamically linked library you just installed depends on a minimum specific version of the Linux kernel you also just installed, but this is extremely rare in Linux land, as backwards compatibility of programs with older kernels is generally a given. "We do not break userspace"
One problem with not rebooting after a kernel update is drivers. They aren't all built in.
Most distros leave the current running kernel and boot into the new one next time.
Some, like Arch, overwrite the kernel on an update, so modules can’t be loaded. It is a shock the first time you plug in a USB drive and nothing happens.
Good point, thanks for the insight!
I have a theory that 99.9% of preferring Windows or Linux comes down to "do ads in the start menu trigger my OCD".
It runs much deeper than that for me.
Windows at its core just does not seem like a serious operating system to me. Whenever there are two ways to do something, its developers seem to have picked the non-reasonable one compared to Unix – and doing that for decades adds up.
But yes, first impressions undoubtedly matter too.
I have no idea what Windows does with the various network services, but my Pi-hole gets rate-limited when it connects to the network--there's just constant DNS lookups to countless MS domains, far beyond what could reasonably be expected for a barebones install.
This isn't even a corpo-sloptop with Qualys and Zscaler and crap running, just a basic Windows box I rarely boot. It's deeply offensive to me.
When you compare things at the API level, NT is generally superior to POSIX - just look at what a mess fork() is, for one example, or fd reuse, or async I/O.
Want to talk about how each process has to implement their own custom escaping and splitting of the command line string?
That's much more complicated and error prone than fork.
The C runtime will do that for you, and it has been a standard OS component since Win10.
But also, no, it's not worse than fork. Fork literally breaks every threaded app.
> standard OS component since Win10.
So, basically yesterday, and not default like how it is with execve, and you can never know if the command you're trying to call implements it the same way or does a different escaping.
Care to explain how fork "breaks" threaded apps? You can't mix them for doing multiprocessing, but it's fine if you use one model or the other.
Win10 has been around for literally a decade now. So much so that it's going out of support.
fork() breaks threaded apps by forking the state of all threads, including any locks (such as e.g. the global heap lock!) that any given thread might hold at that moment. In practice this means that you have to choose either fork or threads for your process. And this extends to libraries - if the library that you need happens to spawn a background thread for any reason, no more fork for you. On macOS this means that many system APIs are unusable. Nor is any of this hypothetical - it's a footgun that people run into regularly (just google for "fork deadlock") even in higher level languages such as Python.
How long has fork() existed? Is it less than 10 year? Is it much much more?
> just google for "fork deadlock"
I did, results were completely unrelated to what you're talking about.
Anyway, libraries spawning hidden threads… I bet they don't even bother to use reentrant functions? I mean… OK, they are written by clueless developers. There are lots and lots of them; they exist on Windows too. What's your point?
It is not the standard in Windows land to run processes by handing them fifty commandline arguments. Simple as that. Win32 apps have strong support for selecting multiple files to pass to the app from within the file select dialog, as long as you follow the documentation.
It's like complaining that Unix is hard to use because I can't just drop a dll into a folder to hook functionality like I can on Windows. It's a radically different design following different ideologies and you can't magically expect everything to transfer over perfectly. If you want to do that on Linux land, you learn about LD_PRELOAD or hook system calls.
If you want to build powerful, interoperable modules that can pipe into each other and compose on the command line, PowerShell has existed since 2006. IMO, passing well-formed objects from module to module is RADICALLY better than passing around text strings that you have to parse or mangle or fuck with if you want actual composability. PowerShell's equivalent of ls doesn't have to go looking at whether it is being called by an actual terminal or by an app pipe, for example, in order to support weird quirks. PowerShell's support for Windows internals and functionality is also just radically better than mucking around in "everything is a file" pseudo-folders that are a hacky way to represent important parts of the operating system, or calling IOCTLs.
I also think the way Windows OS handles scheduled tasks and operations is better than cron.
I also think Windows Event logging is better than something like dmesg, but that's preference.
Also, EVERYTHING in Windows land is designed around remote administration. Both the scheduled tasks and event logging systems are transparently and magically functional from other machines if you have your AD set up right. Is there anything in Linux land like AD?
> Win32 apps have strong support for selecting multiple files to pass to the app from within the file select dialog
The problem is when you want to click a file on your file manager and you want it to open in the associated application. Because the file manager can only hope the associated application parses the escapes the same way it generates them. Otherwise it's file not found :)
I'm not going to bother to reply point by point since you completely missed the point in the first few words.
I have used Windows for years, and I loved it. I never understood why Linux and Mac users kept bashing on it. I just didn't know any better.
These days I'm avoiding booting into Windows unless I really have no choice. The ridiculousness of it is simply limitless. I would open a folder with a bunch of files in it, and Explorer shows me a progress bar for nearly a minute. Why? What the heck is it doing? I just want to see the list of files; I'm not even doing anything crazy. Why the heck does not a single other file navigator do that--not on Linux, not on Mac, darn, even the specialized apps built for Windows work fine, but the built-in thing just doesn't? What gives? I would close the window and re-open the exact same folder, not even three minutes later, and it shows the progress bar again. "WTF? Can't you fucker just cache it? Da fuk you doing?"
Or I would install an app. And seconds after installing it I would try to search for it in the Start menu, and guess what? Windows instead opens Edge and searches the web for it. wat? Why the heck can't I remove that Edge BS once and for all? Nope, not really possible. wat?
Or like why can't I ever rebind Win+L? I can disable it, but I can't rebind it; there's just no way. Is it trying to operate my computer, or does the 'S' in 'OS' stand for "soul"?
Or for whatever reason it can't even get the time right. Every single time I boot into it, my clock time is wrong. I have to manually re-sync it. It just doesn't do it, even with the location enabled. Stupid ass bitch.
And don't even let me rant about those pesky updates.
I dunno, I just cannot not hate Windows anymore. Even when I need to boot in it "for just a few minutes", it always ends up taking more time for some absolute fiddlesticks made of bullcrap. Screw Windows! Especially the 11 one.
> Or for whatever reason it can't even get the time right. Every single time I boot into it, my clock time is wrong.
Dual booting will do that, because Linux and Windows treat the system clock differently: Windows sets the hardware clock to local time, while Linux keeps it in UTC and applies the timezone offset in software.
The most reliable fix is to get Windows to use UTC for the hardware clock, which is usually the default on Linux. (It's more reliable because it means the hardware clock doesn't need to be adjusted when DST begins or ends, so there's no need for the OSs to cooperate on that.)
https://wiki.archlinux.org/title/System_time#UTC_in_Microsof...
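Per that wiki page, it's one registry value, set from an elevated Windows prompt, after which Windows reads the hardware clock as UTC:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_QWORD /d 1 /f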
That flag has been broken for at least several Windows versions, unfortunately. A shame, given that that's the only sane way of using the RTC in the presence of DST or time zone shifts...
That's exactly the type of Windows-ism I'm talking about. Two options (use UTC or the local time), and Windows chose to pick the nonsensical one.
Yeah, well, I use NTFS in Linux. It somehow knows how to treat the partitions, even though it can't fix the issues when they arise (which almost never happens); there's no chkdsk for Linux. So I just don't understand why Windows can't automatically sync the clock (as it is explicitly set to do) when it boots. Why does one have to get creative to fix the darn clock? If I can't even trust the OS to manage the time correctly, what can I trust it with, if anything at all?
Windows syncs the clock to time.windows.com OOTB. This can be changed to any time provider.
https://learn.microsoft.com/en-us/windows-server/networking/...
I have the same issue and don’t dual boot.
I loved Windows XP and Windows 7. They were a bit brittle regarding malware, but I was using a lot of pirated software at the time, so that may have been me. Win 8 was bad UX-wise, but 8.1 resolved a lot of the issues. But since then, I have barely touched Windows.
I want an OS, not an entertainment center, meaning I want to launch programs, organize my files, and connect to other computers. Anything that hinders those is bad. I moved from macOS for the same reason, as they are trying to make those difficult too.
> I want an OS, not an entertainment center
Exactomundo! I'm a software developer, not a florist. I don't care about all those animations, transitions, dancing emojis, styled sliding notifications, windings and dingleberries. If I want to rebind a fucking key I should be able to. If I want to replace the entire desktop with a tiling manager of my choosing — that should be possible. And definitely, absolutely, in no way, should just about any kind of app, especially a web-browser, be shoved in my face. "Edge is not that bad", they would say. And would be completely missing the whole point.
Are you one of those guys that fiddles with registry settings and decrapifiers? To me, it sounds like you turned off file indexing. I turn it off when doing audio recording and yeah, that slows down file browsing.
> fiddles with registry settings
nope, that's with a pristine, freshly installed Windows Pro instance.
The reason varies by the decade. Microsoft has a tendency to fix one thing, then break another.
That said, a distaste for advertising goes beyond OCD. Advertisers frequently have questionable ethics, ranging from intruding upon people's privacy (in the many senses of the word) to manipulating people. It is simply something that many of us would rather do without.
I would say in my case it’s less about OCD and more about, inexplicably, dignity.
Advertising triggers a lot more than OCD in me outside of my start menu. On my machine, where I spend most of my waking hours, it was certainly the last straw for me.
But there's also the thing where Microsoft stops supporting older machines, creating a massive pile of insecure boxes and normie-generated e-waste; and the thing where it dials home constantly; and the thing where they try and force their browser on you, and the expensive and predatory software ecosystem, and the insane bloat, and the requiring a Microsoft account just to use my own computer. Oh yeah, and I gotta pay for this crap?!
I went full Linux back when Windows 11 came out and will only use it if a job requires. Utterly disgusting software.
Seems sorta not cool toward people with OCD to use their condition for rhetorical effect.
Take a chill pill.
What makes you think I’m not chill already? You engaged in a slightly rude trope, and I provided a very mild push back, at least from my point of view the stakes are all correctly low.
But you still get the worst of the Windows world, which is more than many are willing to deal with. I used Windows for years as my main gaming OS, but then they announced W11 as the only way forward. Switching to Linux on the desktop was like a breath of fresh air. I'll leave it at that.
If I were to run an OS on a VM it's gonna be windows, not Linux
> You get the best of both worlds with WSL.
You obviously don't. Maybe WSL is the best compromise for people who need both Windows and Linux.
But it's ridiculous to think that WSL is better than just Linux for people who don't need Windows at all. And that's kind of what the author of this thread seems to imply.
I think that case could be made. For example for people who have a laptop that is not well supported by linux. With WSL they get linux and can use all of their hardware.
If it’s impossible to massage Linux into working well with your laptop – sure. But you’re missing out so much, like, well, not having to deal with Windows.
Similarly powerful would be totally fine; "more powerful" really is silly. Personally, I couldn't make a lot of my workflows work very well with WSL2. Some of the stuff I run is very memory-intensive, and the behavior is pretty bad for this in WSL2. Their Wayland compositor was also pretty buggy and unpolished last I used it, and I was never able to get hardware acceleration working right even with the special drivers installed, but hopefully they've made some progress on that front.
Having Windows and Linux in the same desktop the way that WSL2 does obviously means that it does add a lot of value, but what you get in the box isn't exactly the same as the thing running natively. Rather than a strict superset or strict subset, it's a bit more like a Venn diagram of strengths.
By default wsl2 grabs half of the memory, but that's adjustable. The biggest pain point I have is to run servers inside wsl that serve to non-localhost (localhost works auto-magically).
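For reference, the memory cap (and a few other VM knobs) live in %UserProfile%\.wslconfig on the Windows side; a minimal sketch:

    [wsl2]
    memory=8GB      # cap the VM instead of letting it claim half of the RAM
    processors=4

Newer WSL releases also offer networkingMode=mirrored in the same section, which is supposed to help with the serving-to-non-localhost pain point, though I haven't verified that myself.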
I am surprised you had such problems with wsl2 graphics acceleration. That just worked for me, including CUDA accelerated workloads on the linux side.
Technically it's not a VM; it's a subsystem, the same way Win32, Win64, POSIX, OS/2, etc. are.
It's a feature of the NT-family of kernels where you can create many environments sharing the same underlying executive and HAL.
It's a quite interesting way to build an OS: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT
As everyone said, WSL2 is actually a virtual machine, and it is what most people are actually using now. That said, I feel the need to chime in and say I actually love WSL1 and I love Windows NT the kernel. It bums me out all the time that we probably won't get major portions of the NT kernel, even an out-of-date version, in some open source form.
I like Linux, and I use Linux as my daily desktop, but it's not because I think Linux or even UNIX is really that elegant. If I had to pick a favorite design it would be Windows NT for sure, even with all its warts. That said, the company behind Windows NT really likes to pile a lot of shit I hate on top of that pretty neat OS design, and now it's full of dubious practices. Automatic "malware submission" on by default, sending apps you download and compile yourself to Microsoft and even executing them in a VM. Forced updates with versions that expire. Unbelievable volumes of network traffic, exfiltrating untold amounts of data from your local machine to Microsoft. Ads and unwanted news all over the UI. Increasing insistence in using a Microsoft account. I could go on and on.
From a technical standpoint I do not think the Linux OS design is superior. I think Linux has some amazing tools and APIs. dmabufs are sweet. Namespaces and cgroups are cool. BPF and its various integrations are borderline insane. But at its core... it's kinda ugly. These things don't all compose nicely, and the kernel is an enormous, hard-to-tame beast. Windows NT has its design warts too, all over, like the amount of involvement the kernel has in the GUI for historical reasons, and the enormous syscall surface area, and untold amounts of legacy cruft. But all in all, I think the core of what they made is really cool, the subsystems concept is super cool, and it is an OS design that has stood up well to time. I also think the PE format is better than ELF, and that it is literally better for the capabilities it doesn't have w.r.t. symbols. Sure it's ugly, in part due to the COFF lineage, but it's functionally very well done IMO.
I feel the need to say this because I think I probably came off as a hater, and tbh I'm not even a hater of WSL2. It's not as cool as WSL1 and subsystems and pico processes, but it's very practical and the 9p bridge works way better than it has any right to.
Thanks for pointing this out.
Put another way: Worse is Better
It used to be. They moved to a VM.
Turns out that it's easier to emulate a CPU than syscalls. The CPU churns a lot less, too, which means that once things start working things tend to keep working.
> Turns out that it's easier to emulate a CPU than syscalls
I don't think WSL2 supports CPU emulation. It might not even support (or at least rely on) driver emulation, though Hyper-V itself does.
WSL 2 is actually virtualized despite the name
WSL1 was a subsystem. WSL2 is mostly a VM.
They had to give that up because it was too slow, I think for IO. Unfortunate.
It's complicated. WSL1 is much faster at accessing the drives mounted in Windows, but much slower at accessing its own emulated drive.
If you have control over where you put your git repo, WSL2 will hit max speed. If you want it shared between OSes, WSL2 will be slower.
It also didn't have working fsync, and corrupted SQLite databases. I think that's more important.
You're thinking of the POSIX personality of Windows NT of old. This was based on Interix and has been deprecated about two decades ago and is now buried so deep that it couldn't be revived.
The new WSL1 uses kernel call translation, like Wine in reverse and WSL2 runs a full blown Linux kernel in a Hyper-V VM. To my knowledge neither of these share anything with the aforementioned POSIX subsystem.
I mean... Wine does the same thing in the other direction, but Microsoft refuses to release docs for all of its internal APIs. They ship WSL by relying on Linux's openness, while refusing the same openness themselves.
Then they discontinue WSL1 and just do a VM instead because... reasons. I really don’t understand how MSFT works on the inside.
A big one of those reasons was Docker. Docker was still fairly niche when WSL was released in 2016, but demand for it grew rapidly, and I don't think there was any realistic way they could have made it work on the NT kernel.
The integration between Windows and the WSL VM is far deeper than a typical VM hypervisor.
You cannot claim with a straight face that Virtualbox is easier to use.
It's deeper but let's not overblow it.
I think the two fairly deep integrations are Windows' ability to navigate WSL's filesystem and WSLg's fairly good ability to serve up GUIs.
The filesystem navigation is something that AFAIK can't easily be replicated. wslg, however, is something that other VMs have and can do. It's a bit of a pain, but doable.
What makes WSL nice is the fact that it feels pretty close to being a native terminal that can launch native applications.
I do wish that WSL1 had been taken further. My biggest gripe with WSL is the fact that it is a VM and thus takes a large memory footprint. It'd be nice if the WSL1 approach had panned out and we instead had a nice clean compatibility wrapper over winapi for Linux applications.
> The filesystem navigation is something that AFAIK can't easily be replicated.
The filesystem navigation getting partially open sourced is one of the more interesting parts being open sourced per this announcement. The Plan9 file server that serves files from Windows into Linux is included in the new open source dump. (The Windows filesystem driver that runs a Plan9 client on the Windows side to get files from Linux is not in the open source expansion.)
It's still fascinating that the whole thing is Plan9-based, given the OS never really succeeded, but apparently its network file system is a really good inter-compatibility file communication layer between Linux and Windows.
> I do wish that WSL1 was taken further.
WSL1 survives and there's still a chance it will see more work eventually, as the tides shift. I think the biggest thing that blocked WSL1 from more success was lack of partners and user interest in Windows Subsystem for Android apps. That still remains a potentially good idea for Windows if it had been allowed "real" access to Google Play Services and App Store, rather than second rate copy of Amazon's copy of Google Play Services and Fire App Store. An actual Google partnership seems doomed given one of the reasons to get Windows Subsystem for Android competitive was fear of ChromeOS, but Google still loves to talk about how "Open" Android is despite the Google Play Services moat and that still sounds like something that a court with enough fortitude could challenge (even if it is probably unlikely to happen).
> The integration between Windows and the WSL VM is far deeper than a typical VM hypervisor.
Sure, but I never claimed otherwise.
> You cannot claim with a straight face that Virtualbox is easier to use.
I also didn't claim that. I wasn't comparing WSL to other virtualization solutions.
WSL2 is cool. Linux doesn't have a tool like WSL2 that manages Linux virtual machines.
The catch-22 is that it doesn't need one. If you want to drop a shell into a virtual environment, Linux can do that six ways to Sunday with no hardware VM in sight, using the myriad of namespacing technologies available.
So while you don't have WSL2 on Linux, you don't need it. If you just want a ubuntu2204 shell or something, and you want it to magically work, you don't need a huge thing with tons of integration like WSL2. A standalone program can provide all of the functionality.
I have a feeling people might actually be legitimately skeptical. Let me prove this out. I am on NixOS, on a machine that does not have distrobox. It's not even installed, and I don't really have to install it since it's just a simple standalone program. I will do:
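Something like the following; the exact image tag is an assumption on my part:

    distrobox create --image fedora:latest --name fedora
    distrobox enter fedora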
Here's what happened: no steps omitted. I can install software, including desktop software, including things that need hardware acceleration (yep, even on NixOS where everything is weird) and just run them. There's nothing to configure at all.

That's just Fedora. WSL can run a lot of distros, including Ubuntu. Of course, you can do the same thing with Distrobox. Is it hard? Let's find out by using Ubuntu 22.04 instead, with console output omitted:
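Again, assuming the obvious image tag:

    distrobox create --image ubuntu:22.04 --name ubuntu-22
    distrobox enter ubuntu-22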
To be completely, 100% fair: running an old version of Ubuntu like this does actually have one downside: it triggers OpenGL software rendering for me, because the OpenGL drivers in Ubuntu 22.04 are too old to support my relatively new RX 9070 XT. You'd need to install or copy in newer drivers to make it work. There are in fact ways to do that (Ubuntu has no shortage of repos just for getting more up-to-date drivers, and they work inside Distrobox pretty much the same way they work on real hardware). Amusingly, this problem doesn't impact NVIDIA, since you can just tell Distrobox to copy in the NVIDIA driver verbatim with the --nvidia flag. (One of the few major points in favor of proprietary drivers, I suppose.)

On the other hand, even trying pretty hard (and using special drivers) I could never get hardware acceleration for OpenGL working inside of WSL2, so it could be worse.
That aside, everything works. More complex applications (e.g. file browsers, Krita, Blender) work just fine and you get your normal home folder mapped in just like you'd expect.
Distrobox seems a lot like WSL to me. You can run many different Linux distros, each well integrated into the host system.
Except that Distrobox does not require a VM of course as the host kernel is Linux.
Yes, yes I can. Also does most of everything. WSL has severe issues with hardware translation.
> I get that WSL is revolutionary for Windows users
It is... I'm working these days on bringing a legacy Windows-only application into the 21st century.
We are throwing a WSL container behind it and relying on the huge ecosystem of server software available for Linux to add functionality.
Yes that stuff could run directly on windows, but you'd be a lot more limited in what's supported. Even for some restricted values of supported. And you'd have to reinvent the wheel for a few parts.
And if they think that this version of Linux "isn't janky" but regular Linux is, then idk what to say.
With WSL you can use “Linux the good parts” (command line tools, efficient-enough paradigms for fork() servers) and completely avoid X Windows, the Wayland death spiral, 100 revisions of Gnome and KDE that not so much reinvent the wheel but instead show us why the wheel is not square or triangular…
It's all opinion of course, but IMO Windows is the most clumsy and unintuitive desktop experience out there. We're all just used to the jank upon jank that we think it's intuitive.
KDE is much more cohesive, stable, and has significantly more features.
>the Wayland death spiral
That sounds like Wayland getting worse, but it's actually been slowly improving and it's pretty good now. Only took a decade+ to get there.
Mir was good from year one.
Judging from what happened to X11, that means wayland will be deprecated very soon. /s
Not unlike Win10 vs 11.
It blows my mind that people can complain about the direction KDE is going when trying to paint a picture about how it's so much nicer to use Windows. I know the boiling frog experiment is fake, but just checking: are you sure the water isn't getting a little uncomfortably warm in the Windows pool right now?
I know you're saying you don't have to use it, but for anyone who didn't know: WSL2 does ship with its own Wayland compositor. And it does have some weird bugs.
After having used i3 and Sway, Windows is surprisingly bad at handling windows for an OS called Windows.
It requires a bit of work to set up to your liking, of course, but hey, at least you have the option to set it up to your liking.
Agreed. I used tiling WMs for a long while (ion3, XMonad) and it was such a productivity boost.
Then I was forced to use a Mac for work, so I was using a floating WM again. On my personal machine, ion3 went away and I never fully got around to migrating to i3.
By the time I got enough free time to really work on my personal setup, it had accumulated two huge monitors and was a different machine. I found I was pretty happy just scattering windows around everywhere. Especially with a trackball's cursor throw. This was pretty surprising to me at first.
Anyway this is just my little personal anecdote. If I go back to a Linux install I'll definitely have to check out i3 again. Thanks for reminding me :)
Compiling and testing cross-platform software for Linux lately (Ubuntu and similar)... You can't even launch an application or script without the CLI. Bad UX, IMO. For these decisions, there are always reasons, a justification, something about security. I don't buy it.
> You can't even launch an application or script without CLI.
Care to elaborate? I'm not sure I understand what you're saying here.
I compile my program using WSL, or Linux native. It won't launch; not marked executable. So, into the CLI: chmod +x. OK. It's a compiled binary program, so semantically I don't see the purpose of this. Probably another use case bleeding into this. (I think there's a GUI way too.) Still can't double-click it. Nothing to launch from the right-click menu. After doing some research, it appears you used to be able to do it (Ubuntu/Gnome[?]), but it was removed at some point. Can launch from the CLI.
I make a .desktop file and a shell script to move it to the right place. Double-click the shell file: it opens a text editor. Search the right-click menu; still no way. To the CLI we go: chmod +x, and launch it from the CLI. Then, after adding the Desktop icon, I can launch it.
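For reference, the .desktop file is only a few lines; something like this (the name and path are made up for illustration), dropped into ~/.local/share/applications/:

    [Desktop Entry]
    Type=Application
    Name=MyApp
    Exec=/home/me/bin/myapp
    Terminal=false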
On windows, you just double click the identified-through-file-extension executable file. This, like most things in Linux, implies the UX is designed for workflows I don't use as a PC user. Likely servers?
This sounds very weird to me. Any sane build toolchain should produce a runnable executable that already has +x. What did you use to compile it?
Removing double-click to run an executable binary certainly sounds like something either Gnome or Ubuntu would do, but thankfully that's not the only option in town. In KDE I believe the same exact Windows workflow would just work.
>Any sane build toolchain should produce a runnable executable that already has +x. What did you use to compile it?
`cargo build --release`
Good to know KDE doesn't do that!
Even stranger, then. Just to make sure I'm not missing something, I just tried this on my Mac:
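Presumably something along these lines (the crate name is illustrative):

    cargo new hello && cd hello
    cargo build --release
    ls -l target/release/hello
    # -rwxr-xr-x ... target/release/hello  <- cargo marks the binary executable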
Are you sure it's not because the package in question does some kind of weird custom build steps?
Might have got lost in translation when I moved it from WSL to a Windows-made zip file. I think that workflow nukes permissions.
Yeah, the typical way programs are run is via a .desktop file that gets installed. The reason nobody cares is that running random executables that have a GUI is a pretty rare use case on Linux desktops. We don't have wizards or .msi installers; we just install using the package manager, and then the program shows up where it needs to.
If you're on KDE, you can right-click the start menu and add the application. Also, right-click menu should give you a run option.
Just FYI, you may also enjoy systemd-nspawn, managed with machinectl (the systemd-machined service). It's essentially the same thing as toolbx but it handles the system bus much more sanely, and you can see everything running inside the guest from the host's systemctl.
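A rough sketch of that workflow, assuming a Debian rootfs under /var/lib/machines (the machine name "deb" is arbitrary):

    # build a minimal rootfs and boot it as an nspawn machine
    sudo debootstrap stable /var/lib/machines/deb https://deb.debian.org/debian
    sudo machinectl start deb
    machinectl shell deb           # get a shell inside the guest
    systemctl -M deb list-units    # inspect the guest's services from the host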
This is very much YMMV thing. There is no objectively best platform. There are different users and requirements.
I’ve been a software developer for 20 years and in _my_ opinion Windows is the best platform for professional software development. I only drop into Linux when I need some of the excellent POSIX tools, but my whole work ergonomics is built around Windows shortcuts and Visual Studio.
I’ve been forced to use Mac for the past 1.5y but would prefer not to.
Why would Windows be superior for me? Because that’s where the users are (for the work stuff I did before this latest gig). I started in real time graphics and then spent over a decade in CAD for AEC (developing components for various offerings including SketchUp). The most critical thing for the stuff I did was the need to develop on the same platform as users run the software - C++ is only theoretically platform independent.
Windows APIs are shit, for sure, for the most part.
But still, from this pov, WSL was and will be the best Linux for me as well.
YMMV.
I fully agree with you - "YMMV" is the one true take. Visual Studio has never been particularly attractive to me, my whole workflow is filled with POSIX tools, and my code mostly runs on Docker and Linux servers. Windows is just another thing to worry about for me, be it having to deal with the subtle quirks of WSL not running on raw metal or having to deal with running UNIX-first tooling (or finding alternatives) on Windows. If it wasn't for our work provided machines being Windows by default, and at home, being into VR gaming and audio production (mostly commercial plugins), I'd completely ditch Windows in a heartbeat.
It's a VM plus some usability automation. Can't ignore the usability benefits.
If Windows provided easier access to hardware, especially USB, from WSL it would be nice. In fact, if WSL enumerated devices and dealt with them as native Linux does, even better.
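For what it's worth, the usbipd-win project gets you part of the way there today, though it's per-device and manual (the bus ID below is a placeholder):

    # on the Windows side, from an elevated prompt
    usbipd list
    usbipd bind --busid 4-2
    usbipd attach --wsl --busid 4-2
    # inside WSL, the device should now show up
    lsusb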
Windows has a lot of useful software that is not available on Linux.
So, for me, Windows + WSL is more productive than just using Linux. The UI is still better on Windows (basic utilities like File Explorer and config management are better on Windows). No remoting software beats RDP: when I remote into a Windows workstation through RDP, I can't tell the difference, while VNC is always janky. And of course there is Word/Excel/Illustrator, which is simply not available on Linux.
File Explorer is better on Windows? How? I tried Windows 11 for the first time a month ago and it takes several seconds for File Explorer to open, it's asynchronously loading like 3 different UI frameworks as random elements pop in with no consistency, there are two different right-click menus because they couldn't figure out how to make the new one have all the functionality of the old one so they decided to just keep the old one behind "Show More Options", and it's constantly pushing OneDrive in your face. I'm offended that this is what they thought was good enough to ship to a billion users.
The File Explorer on Windows 11 is the worst experience ever. Windows 7's was snappy as hell; I don't know what they did to damage it that badly. I use XYplorer, which is written in Visual Basic (so a 32-bit application), but it is so much faster than the native Explorer (and is packed with features).
> No Remoting Software beats RDP. When I remote to a Windows workstation through RDP, I can't tell the difference. VNC is always janky
Any recent distro running Gnome or KDE has built-in support for connecting and hosting an RDP session. This used to be a pain point, you don't need to use VNC anymore.
It's actually worse on Windows, since you need to pony up for a Pro license to get RDP hosting support...
> The UI is still better on Windows(basic utilities like File Explorer and Config Management is better on Windows).
5 years ago, we would be comparing old GNOME 3 or KDE Plasma 5 on X11 and Windows 10. I would be forced to agree. The Windows UI was better in many ways at that point.
Today we have KDE Plasma 6.3 on Wayland and Windows 11. This is an entirely different ball game. It's hard to explain. Wayland feels like it has taken an eternity to lift off, like well over a decade, but now things change dramatically on the scale of months. A few months ago HDR basically didn't work anywhere. Right now it's right in front of me and it works great. You can configure color profiles, SDR applications don't break ever, and you even get emulated brightness. Display scaling? Multiple monitors with different scale factors? What about one monitor at 150% and another at 175% scale factor? What about seamlessly dragging windows between displays with different scale factors? Yes, Yes, Yes, and Yes. No `xrandr` commands. You configure it in the GUI. I am dead serious.
File Explorer? That's the application that has two context menus, right? I think at this point Windows users might actually be better off installing KDE's Dolphin file manager in Windows for the sake of their own productivity. If I had the option to use Windows File Explorer on KDE I would impolitely decline. I have not encountered any advertising built into my file explorer. I do not have an annoying OneDrive item in the menu on the left. I have a file tree, a list of Places, and some remote file shares. When I right click it does not freeze, instead it tends to show the context menu right away. And no, I'm not impressed by Tabs and Dark Mode, because we've had that on Linux file managers for so long that some people reading this were probably born after it was already supported.
Windows still has the edge in some areas, but it just isn't what it used to be. The Linux UI is no longer a toy.
> When I remote to a Windows workstation through RDP, I can't tell the difference. VNC is always janky.
I don't really blame you if you don't believe me, but I, just now, went into System Settings, went to the Remote Desktop setting, and clicked a toggle box, at which point an RDP server spawned. Yes, RDP, not VNC, not something else. I just logged into it using Reminna.
Not everything on Linux is seamless and simple like this, but in this case it really is. I'm not omitting a bunch of confusing troubleshooting steps here, you really can do this on a modern Linux setup, with your mouse cursor. Only one hand required.
> Of course there is Word/Excel/Illustrator which is simply not available on Linux
True, but if you want to use Linux and you're held back by needing some specific software, maybe it's not the end of the world. You have many options today. You can install VirtualBox and run your spreadsheets in there. You can use Office 365 in a browser. You can run Crossover[1] and emulate it. You can use an office alternative, like possibly WPS Office. You can dual boot. You can go the crazy route and set up a KVM GPU passthrough virtual machine, for actually native performance without needing to reboot.
The point I'm making here is not "Look, Linux is better now! Everyone go use it and get disappointed ASAP!" If you are happy with Windows, there's literally no point in going and setting yourself up for disappointment. Most people who use Linux do so because they are very much not happy with Windows. I'm sure you can tell that I am not. However, in trying to temper the unending optimism of Linux nerds, sometimes people go too far the other way and represent Linux as being in far worse of a state than it actually is. It really isn't that bad.
The worst thing about modern Linux is, IMO, getting it to work well on your hardware. Once you have that part figured out, I think modern Linux is a pretty good experience, and I highly recommend people give it a shot if they're curious. I think Bazzite is a really nice distro to throw on a random spare computer just to see what modern Linux is actually capable of. It's not the absolute most cutting edge, but it gives you a nice blend of fairly up-to-date software and a fairly modern rpm-ostree base system for better stability and robustness, and it's pretty user-friendly. And if you don't like it, you can easily get a full refund!
[1]: https://www.codeweavers.com/compatibility/crossover/microsof...
> You can use an office alternative, like possibly WPS Office.
Or ONLYOFFICE, which is FOSS (and what I use personally). Or LibreOffice (also free/libre software, of course). I don’t miss MS Office one bit, the compatibility is nothing short of excellent nowadays, and the speed and UX both surpass it.
There are specialized software packages that are Windows-only, of course, but at least office programs ain’t it.
The last time I deployed Linux servers on bare metal was about 2010.
Apparently Linux VMs on other people's computers is very much appreciated.
I definitely prefer working in Linux.
But having Windows tightly integrated when needed is nice.
If only I could replace the Windows shell with a Linux DE...
Is it a VM? It seems to be much faster than most VMs I've used.
Literally built on top of MS's Hyper-V.
IDK how many VMs you've used, but there has been a lot of work specifically with x86 to make VMs nearly as fast as native. If you interact with cloud services everything you do is likely on a VM.
It's handy if you have other services that are Windows-based, though. And, being a VM, it's fairly convenient to have multiple versions and to back up.
So, how do you run Windows on Linux the way WSL runs Linux on Windows?
The methods I know of are qemu/Wine/Proxmox/VirtualBox.
But he was acting as if Linux didn't need VMs ;)
Linux doesn't need VMs, people need VMs. If you spend most of your time in Windows-exclusive apps and use WSL2 on occasion, then you already know what you want, why are you worried about arguing about it on the Internet?
For many software engineers, a lot of our work is Linux, and it wouldn't be atypical to spend most of the time doing Linux development. I work on Linux and deploy to Linux, it's just a no-brainer to run Linux, too, aside from the fact that I simply loathe using modern Windows to begin with.
(Outside of that, frankly, most people period live inside of the web browser, Slack, Discord, and/or Steam, none of which are Windows-exclusive.)
My point isn't that Linux is better than Windows, it's that WSL2 isn't better than literally running Linux. If you need to do Linux things, it is worse than Linux at basically all of them.
Steam by itself is irrelevant, what matters is whether the game you want to play runs on Linux.
For anything that is PvP multiplayer, this is very much not a given because of how pervasive kernel-level anti-cheat solutions are today.
You still have to go and make sure that what you want is there and works, but it's not a bad bet. With a few major omissions aside, there is a pretty big library of supported games.
> For anything that is PvP multiplayer, this is very much not a given because of how pervasive kernel-level anti-cheat solutions are today.
To be fair, though, you probably still have a better shot of being able to play the games you want to under Linux than macOS and that doesn't seem to be that bad of an issue for Mac users. (I mean, I'm sure many of them game on PC anyways, but even that considered macOS has greater marketshare than Linux, so that's a lot of people either able to deal with it or have two computers.)
Speaking as a Mac user, it's really bad. Much worse than Linux/SteamOS actually. Not only most games just aren't there, many games that are advertised as Mac-compatible are actually broken because they haven't been updated for a long time, and macOS is not particularly ABI-stable when it comes to GUI. Sometimes they just don't support hi-DPI, so you can play it but forget about 4K. But sometimes it just straight up won't start.
I do indeed have two computers with a KVM setup largely for this reason, with a secondary Windows box relegated to gaming console role.
Fair point. I know it was rough when Apple made the break away from 32-bit.
Still, the point is that you can make it work if you want to make it work. Off the top of my head:
- Two computers, completely separate. Maybe a desktop and a laptop.
- Two computers, one desk and a KVM like you suggest.
- Two computers, one desk. No proper KVM, just set up remote desktop and game streaming.
- (on Linux) KVM with GPU passthrough, or GPU passthrough with frame relay. One computer, one desk.
- Game streaming services, for more casual and occasional uses.
- Ordinary virtualization with emulated GPU. Not usually great for multimedia, but still.
- And of course, Steam Play/Heroic Launcher/WINE. Not as applicable on macOS, but I know CodeWeavers does a lot to keep macOS well-supported with Crossover. With the aforementioned limitations, of course.
Obviously two computers has a downside, managing two boxen is harder than one, and you will pay more for the privilege. On the other hand, it gives you "the real thing" whenever you need it. With some monitors having basic KVM functionality built-in, especially over USB-C, and a variety of mini PCs that have enough muscle to game, it's not really the least practical approach.
I suspect for a lot of us here there is a reasonable option if we really don't want to compromise on our choice of primary desktop OS.
I heard 2025 was the year of Linux on the desktop!
> You know what's even more convenient than a VM? Not needing a VM and still having the exact same functionality.
Exactly.
Your comment that you can do Linux things on Linux missed the point entirely.
Where is the reverse WSL on Linux, where Windows is deeply embedded and you have all the Windows features in your hands?
You can use Wine/Crossover, which is cool, but even now the number of software products it supports is tiny. Steam has a lot of games.
You can run a virtual machine with Windows on it. That is identical to what you can do on Windows with Linux.
WSL2 is a virtual machine with unique tooling around it that makes it easier to use and integrates well with Windows.
Windows supports Linux because the latter is open source, it's a lot easier than the reverse.
Linux, on the other hand, barely supports Windows because the latter is closed. And not just closed: Windows issues component updates which specifically check whether they are running under Wine and stop working, being actively hostile to a potential Linux host.
The two are not equivalent, nobody in the Linux kernel team is actively sabotaging WSL, whereas Microsoft is actively sabotaging wine.
> whereas Microsoft is actively sabotaging wine
Do you have a link to where I can read more about this? My understanding is that Microsoft saw Wine as inconsequential to their business, even offloading the Mono runtime to them [1] when they dropped support for it.
[1] https://www.mono-project.com/
> Until 2020, Microsoft had not made any public statements about Wine. However, the Windows Update online service will block updates to Microsoft applications running in Wine. On 16 February 2005, Ivan Leo Puoti discovered that Microsoft had started checking the Windows Registry for the Wine configuration key and would block the Windows Update for any component.[125] As Puoti noted: "It's also the first time Microsoft acknowledges the existence of Wine."
https://en.m.wikipedia.org/wiki/Wine_(software)
This. Microsoft needs to open source Windows. End of story.
Microsoft seems to be taking an outside-in, "component at a time" approach to open sourcing Windows: Terminal, Notepad, Paint, Calculator, the new Edit.com replacement, a lot of WSL now, etc.
This approach has been fascinating so far, but yeah, not "exciting" from the "what crazy things can I do with Windows, like put it in a toaster" side of things.
It would be great to see at least a little bit more "middle-out" from Windows open source efforts. A minimal build of the NT kernel and some core Windows components has been "free as in beer" for a while for hobby projects with small screens, if you really want to try a very minimal "toaster build" (there's some interesting RPi stuff out there), but the path to commercialization is rough after that point, and the "small screens" thing is a bit of a weird line in the sand (though understandable given Microsoft's position of power on the desktop, and sort of the tablet, but not the phone).
The NT Kernel is one of the most interesting microkernels left in active use [0], especially given how many processor architectures it has supported over decades and how many it still supports (even the ones that Windows isn't very commercially successful on today). It could be a wealth of power to research and academia if it were open source, even if Microsoft didn't open source any of the Windows Subsystems. It would be academically interesting to see what sort of cool/weird/strange Subsystems people would build if NT were open source. I suppose Microsoft still fears it would be commercially interesting, too.
[0] Some offense, I suppose, to XNU here. Apple's kernel is often called a microkernel for its roots in the Mach kernel, but it has rebuilt some monoliths on top of that over the years (Wikipedia more kindly calls it a "hybrid kernel"), and Mach itself is still very Unix-flavored. NT's "object oriented" approach is rather unique today, with its VMS heritage: a deeply alternate path from POSIX/Unix/Linux(/BSD).
I doubt it will happen: large projects that aren't open source from the outset and are decades old can contain licensed or patented code, and Microsoft would have to verify line by line that they can open source it.
Wait long enough and it will happen; the question is just "how long". (Microsoft has open-sourced OSes and languages from the 1980s.) Some days it seems like Microsoft is more interested in Azure, Copilot and GAME PASS, and Windows is an afterthought.
I would certainly love it if Microsoft stopped trying to sell Windows and just open sourced it. I think Windows is a much more pleasant desktop operating system than Linux, minus all the ads and mandatory bloat Microsoft has put in lately. But if Windows was open source the community could just take that out.
I really don't see it happening any time in the next decade at least, though. While Windows might not be Microsoft's biggest focus any more it's still a huge income stream for them. They won't just give that up.
I preferred WSL to running linux directly even though I had no need for any windows only software. Not having to spend time configuring my computer to make basic things work like suspend/wake on lid down/up, battery life, hardware acceleration for video playback on the browser, display scaling on external monitor and so on was reason enough.
I use Windows with wsl for work, and Linux and MacOS at home. Windows is a mess, it blows my mind that people pay for it. Sleep has worked less reliably on my work machine than my Fedora Thinkpad, and my Fedora machine is more responsive in pretty much every way despite having modest specs in comparison. Things just randomly stop working on Windows in a way that just doesn't happen on other OSes. It's garbage.
All this usually works out of the box now, especially if you pick your hardware accordingly.
That was certainly not the case ~2 years ago, the last time I installed linux on a laptop.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget and for each laptop searched "<laptop name> linux reddit" on google and filtered for results <1 year old. Each laptop's reports included some or other bug.
https://www.reddit.com/r/linuxhardware/comments/1hfqptw/linu...
https://www.reddit.com/r/linuxhardware/comments/1esntt3/leno...
https://www.reddit.com/r/linuxhardware/comments/1j3983j/hp_o...
https://www.reddit.com/r/linuxhardware/comments/1k1nsm8/audi...
The laptop with the best reported linux support seemed to be Thinkpad P14s but even there users reported tweaking some config to get fans to run silently and to make the speakers sound acceptable.
https://www.reddit.com/r/thinkpad/comments/1c81rw4/thinkpad_...
You are going to find issues for any computer for any OS by looking things up like this.
And yeah, it's best to wait a bit for new models, as support is sorted out, if the manufacturer doesn't support Linux itself. Or pick a manufacturer that sells laptops with Linux preinstalled. That makes the comparison with a laptop with Windows preinstalled fair.
> You are going to find issues for any computer for any OS by looking things up like this
I wasn't cherry-picking things. I literally searched for laptops available in my budget in my country and looked up what was the linux support like for those laptops as reported by people on reddit.
> Or pick a manufacturer that sells laptops with Linux preinstalled
I suppose you are talking about System76, Tuxedo etc. These manufacturers don't ship to my country. Even if I am able to get it shipped, how am I supposed to get warranty?
You weren't cherry picking but the search query you used would lead to issue reports.
HP, Dell and Lenovo also sell Linux laptops on which Linux runs well.
I sympathize with the more limited availability and budget restrictions, but comparisons must be fair: compare a preinstalled Windows and a preinstalled linux, or at least a linux installed on hardware whose manufacturer bothered to work on Linux support.
When the manufacturer did their homework, Linux doesn't have the issues listed earlier. I've seen several laptops of these three brands work flawlessly on Linux and it's been like this for a decade.
I certainly choose my laptops with Linux on mind and I know just picking random models would probably lead me to little issues here and there, and I don't want to deal with this. Although I have installed Linux on random laptops for other people and fortunately haven't run into issues.
As a buyer, how am I supposed to know which manufacturer did their homework and on which laptops?
> it's been like this for a decade
Again, depends on the definition of "flawlessly". Afaik, support for hardware accelerated videoplayback on browsers was broken across the board only three years ago.
> As a buyer, how am I supposed to know which manufacturer did their homework and on which laptops?
Your first option is to buy a laptop with Linux preinstalled from one of the many manufacturers that provide this. This requires no particular knowledge or time. Admittedly, this may lead you to more expensive options; entry-grade laptops won't be an option.
Your second best bet is to read tech reviews. Admittedly this requires time and knowledge, but often enough people turn to their tech literate acquaintance for advice when they want to buy hardware.
> Afaik, support for hardware accelerated videoplayback on browsers was broken across the board only three years ago.
Yes indeed, that's something we didn't have. I agree it sucks. Now, all the OSes have flaws that the others don't have, and it's not like the videos didn't play; in practice it was an issue if you wanted to watch 4K videos for hours on battery. Playing regular videos worked, and you can always lower the quality if your situation doesn't allow the higher qualities. Often enough, you could also get the video and play it outside the browser. I know, not ideal, but also way less annoying than the laptop not suspending when you close the lid because of a glitch or something like that.
> You first option is to buy a laptop with linux preinstalled
I have earnestly tried for >20 minutes trying to find such a laptop with any reputed manufacturer in my country (India) and come up empty-handed. Please suggest any that you can find. Even with Thinkpads, the only options are "Windows" or "No Operating System".
>Your second best bet is to read tech reviews.
Which tech reviews specifically point out linux support?
>Playing regular videos worked, and you can always lower the quality if your situation doesn't allow the higher qualities
The issue was never about whether playing the video worked. CPU video decoding uses much more energy and leads to your laptop running hot and draining battery life.
Can we at least agree to reduce the timeframe for things working flawlessly to "less than two years" instead of "a decade"? Yes you were able to go to the toilet downstairs but the toilet upstairs was definitely broken.
"Thinkpad linux" with region set to India on DDG yields many results, including https://www.lenovo.com/us/en/d/linux-laptops-desktops/
If buying with Linux is not an option at your place, you can always buy one of the many models found with this search without OS and install it yourself. Most thinkpads should be all right. Most elitebooks should do. Dell laptops sold with Ubuntu somewhere on the planet should do. I'm afraid I can't help nore, you'll have to do your search. Finding out which laptops are sold with Linux somewhere should not be rocket science. I don't buy laptops very often, I tend to keep my computers for a healthy amount of time, I can't say what it's like in India in 2025.
> Can we at least agree to reduce the timeframe for things working flawlessly to "less than two years" instead of "a decade"? Yes you were able to go to the toilet downstairs but the toilet upstairs was definitely broken.
No. I understand that it can be a dealbreaker for some, but that's a minor issue for me on laptops, even unplugged, and I do watch a lot of videos (for environmental reasons I tend to avoid watching videos in very high resolutions anyway, so software rendering is a bummer but not a blocker). There are still things that don't work, like Photoshop or MS Office, so you could say that it's still not flawless, still, that doesn't affect me.
>many results, including https://www.lenovo.com/us/en/d/linux-laptops-desktops/
Many results, including a US-specific page of the Lenovo website.
>If buying with Linux is not an option at your place, you can always buy one of the many models found with this search without OS and install it yourself.
>Finding out which laptops are sold with Linux somewhere should not be rocket science.
It should not. Given the amount of time I have already spent on trying to find one, it is fair to say that there are none easily available in India, at least in the consumer laptop market.
> I understand that it can be a dealbreaker for some, but that's a minor issue for me on laptops
Stockholm syndrome.
> Stockholm syndrome.
Stockholm Syndrome was bullshit made up on the spot to cover for the inability of the person making it up to defend their position with facts or logic, and...that fits most metaphorical uses quite well, too, though its not usually the message the metaphor is intended to communicate.
> Many results, including a US-specific page of the Lenovo website.
Are you failing to see that this US-specific page gives you a long list of models you can consider elsewhere?
> Stockholm syndrome.
Yeah, no. It just appears I have different needs than you and value different tradeoffs. It appears that the incredible comfort Linux brings me offsets the minor inconvenience software rendered browser video playback causes me.
I'm done with this discussion; we've been quite far away from the kind of interesting discussions I come to HN for for a few comments now.
On Windows, I don't have to pick my hardware accordingly.
I have to onboard a lot of students to work on our research. The software is all linux (of course), and mostly distribution-agnostic. Can't be too old, that's it.
If a student comes with a random laptop, I install WSL on it, mostly Ubuntu, then apt install <curated list of packages>. Done. Linux laptops are OK too, I think, but so far I've only had one student with one. macOS used to be easy, but gets harder with every release, and every new OS version breaks something (mainly CERN's ROOT) and people have to wait until it's fixed.
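The Windows path really is about two commands; the distro name is just an example, and the package list is whatever the project needs:

    # elevated prompt on the student's laptop
    wsl --install -d Ubuntu-22.04
    # then, inside the new distro
    sudo apt update && sudo apt install -y <curated list of packages>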
> On Windows, I don't have to pick my hardware accordingly.
Fair enough. I think the best way to run Linux, if you want to be sure you won't have to tweak stuff, is to buy hardware with Linux preinstalled. That your choice is more limited is a different matter from "linux can't suspend".
Comparing a preinstalled Windows with Linux installed on a random laptop whose manufacturer can't be bothered to support it is a bit unfair.
Linux on a laptop where the manufacturer did their work runs well.
Yes, machines with Linux preinstalled normally work quite well. But it's still a downside of choosing Linux that the choice of laptops is so much smaller. Similar to the downside of Mac OS that you are locked in to pricey-but-well-built laptops, or the downside of Windows that "it runs Windows" doesn't mean the hardware is not bottom-of-the-barrel crap with a vendor who doesn't care about Linux compatibility. WSL allows to run a sane development environment even then :)
100% agree
> You can use Wine/Crosseover, which is cool, but even now the number of software products it supports is tiny. Steam has a lot of games.
This isn't really the case, and hasn't been for some years now, especially since Valve started investing heavily in Wine. The quality of Wine these days is absolutely stunning, to the point that some software runs better under Wine than it does on Win11. Then there's the breadth of support, which has moved the experience from there being a slight chance of something running on Wine to it now being surprising when something doesn't.
This is really the case and has been for years now.
This is a list of software I run or have run and that I keep re-checking every 6 months or so.
Most of them simply don't work; some run but are unstable, with limited features.
https://www.codeweavers.com/compatibility/crossover/microsof... https://www.codeweavers.com/compatibility/crossover/adobe-cr... https://www.codeweavers.com/compatibility/crossover/corel-pa... https://www.codeweavers.com/compatibility/crossover/corel-pa... https://www.codeweavers.com/compatibility/crossover/visual-s... https://www.codeweavers.com/compatibility/crossover/microsof... https://www.codeweavers.com/compatibility/crossover/affinity... https://www.codeweavers.com/compatibility/crossover/affinity... https://www.codeweavers.com/compatibility/crossover/affinity... https://www.codeweavers.com/compatibility/crossover/snagit13 https://www.codeweavers.com/compatibility/crossover/evernote...
> Where is the reverse WSL on Linux, where Windows is deeply embedded and you have all the Windows features in your hands?
https://github.com/Fmstrat/winapps
Enjoy.
I was actually looking for something like this.
> You know what's even more convenient than a VM? Not needing a VM and still having the exact same functionality
I mean, this is basically heresy now.
Most code is virtualised, or sandboxed, or in a VM, or in a Docker container, or several of the above at the same time.
The important bit, though, is that Docker containers are not VMs or sandboxes; they're "just" a combination of technologies that give you an isolated userland, mostly using Linux namespaces. If you're running a Linux host you already have namespaces, so you can just use them directly. Distrobox gives you basically the same sort of experience as WSL2, except it has none of the weird parts of running a VM, because there is no VM.
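You can poke at the same primitives directly. A minimal demonstration with util-linux's unshare: a shell in fresh PID and mount namespaces, with /proc remounted so it only sees its own process tree, and no hypervisor anywhere:

    sudo unshare --fork --pid --mount-proc bash
    ps aux   # inside: just this bash and ps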
> WSL is more powerful than Linux
This is the kind of statement that makes you pay the karma tax. WSL is great, I use it on a day to day basis. I also use Linux on a day to day basis. And as great as WSL is, for running Linux software on supported hardware, Linux beats WSL hands down. And I mean, of course it does, do you expect a VM to beat native? In the same way that Windows software runs better on Windows. (with a few exceptions on both sides).
Compared to Linux, WSL I/O is slow, graphics is slow and a bit janky, I sometimes get crashes, memory management is suboptimal, networking has some quirks, etc... These problems are typical of VMs as it is hard for the host and guest OS to coordinate resource use. If you have an overpowered computer with plenty of RAM, and are mostly just using the command line, and don't do anything unusual with your network, then sure it may be "better" than Linux. But the truth is that it really depends on your situation.
Do you believe the 600+ people with the same problem here: https://github.com/microsoft/WSL/issues/4197
I knew which issue this was before I clicked it. Oh hey, there's me commenting in the issue a year ago!
WSL 1 had fast IO but couldn't support all features.
WSL 2 supports all features but has famously slow IO.
Example:
1. Shell into WSL
2. Clone a repo
3. Make a bunch of changes to the repo with a program within WSL
4. Run git status (should finish in less than a second)
5. Open repo from a Windows IDE
6. Run git status. This makes windows change each file's permissions, ownership, etc... so it can access the files as git status recursively travels through every file and folder
7. Go for coffee
8. Go for lunch
9. Git status finished after 35 minutes.
10. Close IDE
11. Shell back into WSL
12. Make a change in WSL
13. Run git status from within WSL
14. Wait another 35 minutes as Windows restores each file's ownership and permissions one by one
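You don't even need an IDE to see the cliff. From inside WSL2, compare a repo on the VM's native ext4 disk against one on the 9P-backed Windows mount (the paths are placeholders):

    cd ~/src/myrepo && time git status       # ext4 inside the VM: fast
    cd /mnt/c/src/myrepo && time git status  # 9P share of the Windows drive: slow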
------------------------------------
The IO overhead is so bad that Microsoft built two new products just to get around it:
1. VSCode WSL remote-client architecture.
VSCode acts as a server within WSL and a client within Windows. Connect both VSCode instances (through a proxy/tunnel if needed) and the server can perform the client's file I/O ops on its behalf, rather than letting an application on Windows try to interact with any of WSL's file systems.
2. Windows DevDrive
Basically, set aside a virtual disk/partition and set it up as a different file system (ReFS) that doesn't use Windows file permissions and ownership, doesn't decrypt-then-decompress each file input, doesn't compress-then-encrypt each file output, and doesn't virus scan files on usage.
TL;DR Store the files on a network drive and hope race-condition ops from both WSL and Windows don't corrupt any files.
Well, WSL is Linux. It's really just a VM of it (since WSL2; WSL1 actually ran on the Windows kernel, which was pretty cool).
The big drawback to WSL to me is the slow filesystem access because NTFS sucks. And having to deal with Windows in the first place.
PS: I wouldn't worry about your karma. It's just a number :P
NTFS is not the problem.
The problem is Windows IO filters and whatnot, Microsoft Defender trying to lazily intercept every file operation, and if you're crossing between windows and Linux land, possibly 9pfs network shares.
WSL2's own disk is just a VM image and fairly fast - you're just accessing a single file with some special optimizations. Usually far, far more responsive than anything done by windows itself. Don't do your work in your network-shared windows home folder.
>The problem is Windows IO filters
Not the biggest of those issues: in a big project, 'find' and 'git status' under WSL2 are still >100 times slower on a Windows Dev Drive (which avoids those filters) than they are under WSL1 on a Dev Drive.
WSL1 on regular NTFS with Defender disabled is about 4x slower than WSL1 on a Dev Drive, so that stuff does cause some of it, but WSL2 feels hopelessly slow. And WSL2 can't share memory as well, or take as much advantage of the filesystem cache (doubling it if you use the Windows drive in both places, I think, unless the network-drive representation of it doesn't get cached on the WSL2 side).
WSL2, in my testing, is orders of magnitude faster at file-heavy operations than anything outside WSL, Dev Drive or not. We have an R&D department that's using WSL2 and jumping through the hoops of forwarding hardware because it's night and day compared to trying under Windows on the same machine. It provided other benefits too, but the sheer performance was the main selling point.
WSL2 does not take less advantage of filesystem caches. Linux's block cache is perfectly capable. Hyper-V is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image. Memory also balloons, and can dynamically grow and shrink depending on memory pressure.
Linux VMs are something Microsoft has poured a lot of money into optimizing, as that's what the vast majority of Azure is. Cramming more out of a single machine, and therefore more things into a single machine, directly correlates with profits, so that's a heavy investment.
I wonder why you're seeing different results. I have no experience with WSL1, and looking into a proprietary legacy solution with known issues and limited features would be a purely academic exercise that I'm not sure is worth it.
(I personally don't use Windows, but I work with departments whose parent companies enforce it on their networks.)
> Linux's block cache is perfectly capable. HyperV is a semi-serious hypervisor, so it should be using a direct I/O abstraction for writing to the disk image.
Files on the WSL2 disk image work great. They're complaining about accessing files that aren't on the disk image, where everything is relayed over a 9P network filesystem and not a block device. That's the part that gets really slow in WSL2, much slower than WSL1's nearly-native access.
> Memory is also balloning, and can dynamically grow and shrink depending on memory pressure.
In my experience this works pretty badly.
> a proprietary legacy solution with known issues and limited features
Well at least at the launch of WSL2 they said WSL1 wasn't legacy, I'm not sure if that has changed.
But either way you're using a highly proprietary system, and both WSL1 and WSL2 have significant known issues and limited features, neither one clearly better than the other.
> WSL2 does not take less advantage of filesystem caches.
My understanding is that when you access files on the Windows drive, the Linux VM in WSL2 caches them in its own memory, and the Windows side caches them in its own: now you have double the memory usage for disk cache wherever files are active on both sides, taking much less advantage of the cache than with WSL1, where Windows serves as the sole cache for Windows drives.
I'm only comparing working on Windows filesystems that can be accessed by both. My use case is developing on large Windows game projects, where the game needs the files fast when running, and WSL needs the files fast when searching code, using git, etc. WSL1 was usable on plain NTFS, and is now much closer to ext4 with a Dev Drive. WSL2 I couldn't make fast.
You could potentially keep the Windows files on a network drive served from the WSL2 side, living in native ext4, but then you get the double filesystem-caching issue, you might slow a game editor launch on the Windows side by way too much, your files are inaccessible during upgrades, and you always have to have RAM dedicated to a running WSL2 just to be able to read your files. MS Store versions of WSL2 will even auto-upgrade while running and randomly make that drive unavailable.
Running WSL2 on Dev Drive means that you're effectively doing network I/O (to localhost); of course it's slow. It's also very pointless since your WSL2 FS is already a separate VHD.
Not pointless if you are working on a windows project but using unix tools to search code, do commits, etc. WSL2 just isn't usable for it in large projects. git status can take 5 minutes on unreal engine.
If you just need the stock Unix command line tools, MSYS2 will give you them at native speed, no VM needed, no funky path mappings etc.
WSL is for when you actually need it to be Linux.
I use it, I am required to use Windows, and it’s a huge improvement over doing Data Science on native Windows, but the terrible filesystem access ruins what otherwise would be a seamless experience.
It’s fine for running small models but when you get to large training sets that don’t fit in RAM it becomes miserable.
There is a line where the convenience of training or developing locally gives way to a larger on demand cloud VM, but on WSL the line is much closer.
Slow IO is why I still use wsl1.
This. WSL was SO much more interesting in v1 times.
I liked the networking in WSL1 more too
Corporate networking is why I still use WSL1 (I didn't spend enough time to check why it doesn't work with WSL2; zScaler could be the culprit, maybe).
However, it’s not perfect; for example, I hit this bug when trying to run node a few days ago: https://github.com/microsoft/WSL/issues/8219#issuecomment-10... And I don’t think they’re fixing bugs in WSL1 anymore.
I still use WSL1 also because VMware runs dreadfully slowly with any kind of Hyper-V enabled: with Hyper-V on, VMware has to run on top of it, so you get a Type-2 hypervisor running under a Type-1, and the lag and performance are untenable.
>The big drawback to WSL to me is the slow filesystem access because NTFS sucks
That's if you are crossing between the VM and the host. If you stay within the space allocated to the VM, it's pretty fast.
Is it really an NTFS issue?
The culprit would be the Plan 9 bits (think of SMB or NFS, but... wilder? Why are they using 9P again?)
I'm guessing they use plan9 because distros already ship support for it, and it's super simple compared to NFS? It doesn't seem like CIFS/NFS would be any faster, and they introduce a lot more complexity.
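Easy to check from inside a WSL2 distro: the Windows drives are 9p network mounts, while the distro's own root is an ext4 filesystem on a VHD-backed block device:

    mount -t 9p     # shows /mnt/c and friends
    mount -t ext4   # shows the distro's root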
Where are you experiencing filesystem slowness? I've been using WSL in some advanced configurations (building Win32 apps by cross-compiling from Linux CLANG and dropping the .exe into a Windows folder, copying large files from Linux->Windows and vice versa, automating Linux with .BAT files, etc.) and I haven't seen this slowness at all.
> NTFS sucks
Watch https://www.youtube.com/watch?v=qbKGw8MQ0i8 please.
While I can see the subtle distinction you're trying to draw people's attention to (NTFS is not the problem, filesystem operations generally on Windows are the problem) I have to say it seems like a distinction without a difference in real terms. They made a range of changes that seem to produce more complicated code everywhere because the overhead of various filesystem tasks are substantially higher on this OS vs every other OS.
But in the end they had to get the OS vendor to bless their process name anyway, just so the OS would stop doing things that tank the performance for everybody else doing something similar but who haven't opened a direct line up with the OS vendor and got their process name on a list.
This seems like a pain point for the vendor to fix, rather than everybody shipping software to their OS
I find it to be incredibly janky. Pretty much every time my computer sleeps (so every morning, at least) I have to restart it because somehow the VM-host networking gets screwed up and VS Code connections into the VM stop working. You also can't just put things in your Windows user directory, because the filesystem driver is so slow that git commands will take multiple seconds, so now you have two home directories to keep track of. There were also some extremely arcane things I had to fix when setting it up, involving host DNS and VPN adapter priority not getting propagated into the VM, so networking was completely broken. IIRC the time would also stop matching the host's after a sleep and get extremely far out of sync, though I haven't run into that for a while since now I have to reboot Windows constantly anyway.
I don't have a need to run multiple OSes though. All of my tools are Linux based, and in companies that don't let people run Linux, the actual tools of the trade are almost all in a Linux VM because it's the only reasonable way to use them, and everything else is cross-platform. The outer OS just creates needless issues so that you now need to be a power user with two operating systems and their weird interactions.
> somehow the VM-host networking gets screwed up
> extremely arcane things I had to fix when setting it up involving host DNS and VPN adapter priority not getting propagated into the VM so networking was completely broken
Are you sure you set up the VPN properly? Messing around with Linux configs is a good way to end up with "somehow" bugs like that.
I don't know how it's set up. That's kind of my point though. I now have to be an expert in both Linux and Windows to debug this stuff, which is a waste of my time as someone whose job it is to develop (server, i.e. Linux) software. I had exactly zero issues when I was using Fedora. At one point my company made all of the Linux users move off (we do now have an IT-supported Linux image, but I haven't found the time to re-set up my laptop and don't fully trust that it will work without a bunch of trouble/IT back-and-forth because they also made Windows users start using passkeys), and since then I've seen way more issues with Windows than Linux (e.g. one day my start menu just stopped reacting to me clicking on programs), in addition to things like ads on the lock screen and popups for some Xbox pass thing that I had to turn off, which is just insane in a "professional" OS. A lot of days I end up having to hold down the power button to reboot because it just locks up entirely.
OSX was a bit janky with docker filesystem slowness, homebrew being the generally recommended package manager despite being awful (why do I sometimes tap a cask and sometimes pour a bottle? Don't tell me; I don't care. Just make it be "install". Also, don't take "install" as a cue to go update all of my other programs with incompatible versions without asking), annoying 1+ second animations that you can't turn off that make it so the only reasonable way to use your computer is to never maximize a window (with no tiling support of course), and completely broken external monitor support (text is completely illegible IIRC), but Windows takes jank to another level.
By contrast, I never encounter the issues people complain about on Linux. Bluetooth works fine. Wifi works fine. nVidia GPUs and games work fine. Containers are easy to use because they're natively part of the OS. I prefer Linux exactly because I stopped enjoying "tinkering" with my computer like 10 years ago, and I want it to just quietly work without drawing attention to itself (and because Windows 8 and the flat themes that followed were hideous and I was never going to downgrade to that from Windows 7).
That's odd. I have none of these problems. Sleep doesn't interrupt the VM. And I regularly use the git CLI through WSL on projects living within Windows user directories. Both work fine.
FWIW, you can run a VPN (e.g. tailscale) in WSL2. I have WSL2 start up on boot and I can remotely ssh to WSL2 without logging into Windows at all.
I also have tailscale running on Windows itself and they don't conflict.
I think you might want to give more context.
I use linux. I don't need WSL at all. Not at work nor at home.
So you praise WSL because you use Windows as your main system? Then yes, it's great. It definitely makes the Windows experience a lot better.
OpenSSH for Windows was also a game changer. Honestly, I have no clue why it took Microsoft so long.
Openssh should have been a game changer but they made a classic openssh porting bug (not reading all bytes from the channel on close) and have now been sat on the fix in “prerelease” for years. I prodded the VP over the group about the issue and they repeatedly made excuses about how the team is too small and getting updates over to the windows team is too hard. That was multiple windows releases ago. Over on GitHub if you look up git receive pack errors being frequent clone problems for windows users you’ll find constant reports ever since the git distribution stopped using its own ssh. I know a bunch of good people at Microsoft, but this leadership is incapable of operating in a user centric manner and shouldn’t be trusted with embedded OSS forks.
I'm a simple man, if I open the shell and `ssh foo@bar.com` doesn't work, I don't use that computer. Idk if Windows has fixed that yet or why it's so hard for them. Also couldn't even find the shell on a Chromebook.
PuTTY is no longer necessary? That would be a wild upgrade in usability for the work laptop; I shall go try it.
openssh has been an optional windows component for... almost a decade now? including the server, so you can ssh into powershell as easily as into any unix-like. (last time I set it up there was some fiddling with file permissions required for key auth to work, but it does work.)
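For reference, the documented setup from an elevated PowerShell (the capability name is Microsoft's):

    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    Start-Service sshd
    Set-Service -Name sshd -StartupType 'Automatic'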
OpenSSH on Windows is great for the odd connection and SFTP session, but I still feel strongly that any serious usage should just stick with PuTTY and WinSCP. The GUI capabilities these provide are what Windows users are used to. The only benefit of built-in SSH is if you're working with some minimal image stuff, like Windows Server Core or Tiny11. IMHO.
IIRC (it's been a while) I used the server with vscode remote ssh extension.
IMO the interesting part is openssh-ing into Windows.
I feel old, but it's only 6 years, not a decade :P
I guess 'before covid' and 'decade ago' is the same in my mind ;) I might have been using a preview build back then, too
I dislike using putty, I use the ssh client from WSL. Just feels .. better. And bash/fish history helps.
https://xkcd.com/963/
On the other hand sometimes the GUI on WSL decides to break and you have to restart the whole thing.
Aged like fine milk
Running a Linux VM on Windows is nicer than just booting into Linux? That's quite a take. Windows is so user-hostile these days that I feel bad for those who have to deal with it. Calling it delightful must be symptomatic of some sort of Stockholm syndrome.
> symptomatic of some sort of Stockholm syndrome
I have since moved to macbooks for the hardware, but until not too long ago WSL was my linux "distro" of choice because I didn't want to spend time configuring my computer to make basic things work like suspend/wake on lid down/up, battery life, hardware acceleration for video playback on the browser, display scaling on external monitor and so on.
Who deals with this? All this is fine out of the box on a modern Linux distro.
That was certainly not the case ~2 years ago, the last time I installed linux on a laptop.
It also doesn't appear to be the case even now. I searched for laptops available in my country that fit my budget and for each laptop searched "<laptop name> linux reddit" on google and filtered for results <1 year old. Each laptop's reports included some or other bug.
https://www.reddit.com/r/linuxhardware/comments/1hfqptw/linu...
https://www.reddit.com/r/linuxhardware/comments/1esntt3/leno...
https://www.reddit.com/r/linuxhardware/comments/1j3983j/hp_o...
https://www.reddit.com/r/linuxhardware/comments/1k1nsm8/audi...
The laptop with the best reported linux support seemed to be Thinkpad P14s but even there users reported tweaking some config to get fans to run silently and to make the speakers sound acceptable.
https://www.reddit.com/r/thinkpad/comments/1c81rw4/thinkpad_...
> linux
Which Linux? Each distro is essentially a different operating system.
I thought you said everything should work seamlessly on any modern distro.
Not all distros that exist in the current year are "modern". Mint for example, still ships with X11 and old forks of Gnome. Lots of people are running Arch with weird components that don't work well for whatever reason. And so on...
Modern means systemd, pipewire, Wayland, Gnome, an up to date kernel, etc... So the current Ubuntu and Fedora releases.
I've had 100% working laptops for 15 years now. Because I always run the newest Ubuntu.
I run Ubuntu and suspend is pretty much a nightmare, to the point that I just gave up pretending it exists. These are Dell computers sold with supposed Ubuntu support. Closing the lid and putting the laptop in a backpack is inevitably an invitation for a hot laptop or an empty battery when you pull it out a few hours later (for the record: Windows isn't any better at this in my experience, so WSL never solved that problem either).
With previous laptops (all ThinkPads) I used to be able to get everything to work (Debian), but it did take effort and finding the correct resources. Unfortunately, all the old documentation about this stuff is pre-systemd and pre-UEFI, and it's not exactly straightforward anymore.
Google "Dell suspend issues". It's just their computers, it doesn't work any better on Windows. My wife has had 2 Dell laptops now, neither suspended properly ever (and she only runs Windows). According to the internet, this is a Dell problem. One of her laptops also had the Wifi card break within 4 hours of use, brand new. But she likes the "design" and is stubborn.
Google harder. It's a general Windows problem. Microsoft can't even get it to work on their own Surface devices. Show me a Windows laptop that suspends properly and I'll show you a liar.
Well there you go. Meanwhile Linux suspend does work more often than not in my experience. I've had a ThinkPad, Acer and MSI laptop with working suspend on Linux.
Other than an up to date kernel, your list of what "modern" means is entirely wrong. The rest of the entries are polarizing freedesktop-isms. There's nothing out of date about, e.g., KDE Plasma.
Afaict, all the reporters used the newest available Ubuntu/Fedora/Arch.
I read all the links; most of the problems weren't bugs (fan runs loud? Fans run under Windows as well... Only modern suspend? Literally created for Windows...). From all those links the only thing that was a bug was a kernel regression, and 4 of the 5 distros he listed weren't ones I listed.
Maybe I was too positive on Fedora (I was going by its reputation, I use Ubuntu for work). Ubuntu is solid.
Issues reported:
Link 1: screen only updating every 2 seconds, visual glitches. Link 2: brightness reset to full on screen unlock, fans turning on when charging. Link 3: bluetooth troubles, speakers can't be muted if the headphone jack is on mute. Link 4: audio quality and low volume, wifi not coming back after sleeping. Link 5: fans being too loud, poor sound quality.
Either your Stockholm syndrome is affecting your reading comprehension or you just take bugs like these as part of the normal "working perfectly" linux experience.
Aren't these issues almost always kernel-related?
Nothing works out of the box with Linux. They may "seem" to work out of the box but you realize how many little tweaks go into making a laptop/consumer device work fully when you work as an embedded dev. It is quite difficult to get to the same power consumption levels and same exact hardware / software driver capabilities under Linux. There are simply no APIs for many things. So the entire driver has to live in userspace using some ioctls to write random stuff to memory or it cannot exist. There are also algorithms that the hardware manufacturer wants to keep closed.
Note that NVIDIA drivers didn't get better because they're more open source now. They aren't. GPUs are now entire independent computers with their own little operating system, and significant parts of the driver now run on that computer.
Yes, the manufacturers may allocate some people to deal with it and with the corrosiveness of the kernel community. But why? Intel and AMD use that as a marketing and sales strategy. If the hardware manufacturer is the best one there is, where is the profit in supporting Linux? Even Thinkpads don't have 100% support for all the little sensors and PMICs.
The HiDPI issue hasn't been completely solved yet. Bluetooth is still quite unreliable. MIPI support should be the best due to the number of devices, until you realize everybody did their own shitty external driver and there are no common good drivers for MIPI cameras, so your webcam doesn't work. The USB stack is still dodgy. Microsoft in the 90s had a cart of random hardware populating the USB tree completely, and they just kept plugging and unplugging it against the NT kernel until it didn't break anymore, for love's sake. Who did that level of testing with Linux?
This is why you buy computers designed for Linux, with Linux preinstalled, and with support that you can call to get help if there is an issue.
Then you cannot claim that Linux works out of the box. It doesn't if you need to select hardware for it. However, I already know that, since I have actually used Linux for 15 years, first on the consumer side as a normal user, and now as an embedded Linux developer. The underlying architecture of GNU/Linux distros is heavily server-biased, which is often the polar opposite of a consumer system.
Except for Apple (and maybe Framework), all laptops are designed by contract original design manufacturers (ODMs) in Taiwan, Korea and China. Your usual Linux laptop OEMs like System76 and Tuxedo just buy better combinations of the whitelabel stuff. They are inferior to the actual big OEMs' designs, which contain more sophisticated sensors and power management and extra UEFI features. This includes business laptops like Dell Latitudes, HP Elitebooks and Lenovo Thinkpads. None of those manufacturers actually do Linux-based driver development. All the device development, manufacturing and testing is done under Windows and only for Windows. The laptops are booted into Windows, not Linux, for functional tests at the factory.
Linux is an afterthought for all OEMs. After the Windows parts are released and tested, the kernel changes for Linux are added. That is rudimentary support which doesn't include 100% of the featureset. Many drivers today have a quite proprietary user-space side. You'll get none of that from any laptop manufacturer. You may say you don't care about those and you're okay with a 10-20% power loss. That's not the definition of out-of-the-box for me.
> Then you cannot claim that Linux works out of the box. It doesn't if you need to select hardware for it
That is not what that means. At all.
> Your usual Linux laptop OEMs like System76 and Tuxedo just buy better combinations of the whitelabel stuff.
This is not what System76 do, actually.
> Many drivers today have quite proprietary user-space side. You'll get none of that from any laptop manufacturer.
Not with System76
> You may say you don't care about those and you're okay with 10 - 20% power loss.
I'm not. That's why I stopped buying Windows hardware and started buying Linux hardware!
Apple users' whole identity is based on thinking linux users do this daily.
You need new reasons to hate Linux, because all those issues were solved a while ago.
There is a reason why 1) people whose main environment is Linux feel (correctly) that these problems have been solved a long time ago, and 2) people whose main environment is not Linux but who try Linux occasionally feel (correctly) that these problems still occasionally crop up.
People whose main environment is Linux intentionally buy hardware that works flawlessly with Linux.
People who try Linux occasionally do it on whatever hardware they have, which still almost always works with Linux, but there are occasional issues with sketchy Windows-only hardware or insufficiently tested firmware or flaky wifi cards, and that is enough for there to be valid anecdotes in any given comments section with several people saying they tried it and it isn't perfect. Because "perfect" is a very high bar.
>People whose main environment is Linux intentionally buy hardware that works flawlessly with Linux.
Hm, recently I bought a random "gamer PC" for the beefier GPU (mainly to experiment with local LLMs), installed Linux on it, and everything just worked out of the box. I remember having tons of problems back in 2009 when I first tried Ubuntu, though. I have dual boot, just today I ran a few benchmarks with Qwen3. On Windows, token generation is 15% slower. Whenever I have to boot into Windows (mainly to let the kid play Roblox), everything feels about 30% slower and clunkier.
At work, we use Linux too - Dell laptops. The main irritating problem has been that on Linux, Dell's docking stations are often buggy with dual monitors (when switching, the screen will just freeze). The rest works flawlessly for me. It wasn't that long ago that my Windows (before I migrated to Linux) had BSODs every other day...
My random "gamer PC" won't even boot into any Linux live CD, so I can't install it at all.
Anecdotes are like that.
> people whose main environment is Linux feel (correctly) that these problems have been solved a long time ago
There is also the quiet part to this. People who religiously use Linux and think that it is the best OS that can ever be, don't realize how many little optimizations go into a consumer OS. They use outdated hardware. They use the lower end models of the peripherals (people still recommend 96 DPI screens just for this). They use limited capabilities of that hardware. They don't rely on deeply interactive user interfaces.
I own a 2011 thinkpad, a 2014 i7 desktop and a "brand new" 2024 zen5 desktop. They all work wonderfully and all functionality I paid for is working. I haven't had a single problem with the newest machine since I bought it other than doing the rigmarole to get accelerated video encoder/decoder to work on Fedora. Sucks but I can't complain.
The older machines I've owned since around 2014, and I remember the hardware support was fairly competent but far from perfect; graphics and multimedia performance was mediocre at best, with ZERO support for accelerated video encode/decode. Fast forward to around the last year or two, and linux on both of these machines is screaming fast (within those machines' capabilities...), graphics and multimedia are as good as you could get on windows (thanks wayland and pipewire!), and accelerated video decode/encode works great (still have to do the rigmarole in fedora, but it's ootb in manjaro).
Both the 2014 machine and the 2025 one sport a 4k display @120hz (no frame drops!) with no issues using 200% scaling for hi-dpi usage. Pretty much all of the apps are hi-dpi aware, with the exception of a few running on WINE, which until a few months ago wasn't hi-dpi aware (this feature is experimental and, among many other improvements in WINE, may take another year to mature and be 100% stable).
200% is just rendering the same pixels and then drawing them 4 times, and driving a single monitor at a single resolution is easy stuff. Would your HiDPI system work with one monitor at 125%, one at 100% and another at 150% scaling? This is when the font rendering gets fucked up and your hi-dpi native toolkits start blurring icons. That's my setup. Windows is perfectly capable of making this work. GTK wasn't able to do fractional scaling until recently and Qt has 100s of papercuts.
I got a Thinkpad just to run this setup under Linux in 2020. AMD didn't solve the problem in their driver until 2022, when I was finally able to drive all of them at 60 Hz.
No, 200% is rendering 4 pixels with "features" 2x larger in each axis. You may get 200% scaling as you said with some legacy apps that give zero fucks about dpi scaling but are still scaled through some mechanism to properly match other apps.
Fractional scaling has been a problem across all platforms, but I agree Linux has taken its time to get it right and still has some gotchas. You should try to avoid it on any platform honestly; you can sometimes get blurry apps even in Windows. AFAIK KDE is the first to get it right in these complex situations where you mix multiple monitors with different fractional scaling ratios and have legacy apps to boot. GNOME has had experimental fractional scaling for a while but it's still hidden behind a flag.
It also helps to not have nVidia trash on your old (and sometimes even new) computers if you want longevity. My old machines have intel and AMD graphics with full support from current kernel and mesa.
Linux is basically everyone's go to for older devices. Windows 10 will run like shit on a 10 year old laptop with 4GB RAM but latest Ubuntu is nice and snappy.
I have a 13 year old laptop that runs Windows 10. I cannot run Linux because neither nouveau nor Nvidia drivers support its GPU. It has 8 GiBs of RAM and it works perfectly for light browsing and document editing.
I don't need new reasons to hate Linux. Like I said, I have moved to macbooks as my personal computing device because of the better hardware.
> solved a while ago
Can not be the case because I was facing these issues less than a couple of years ago.
I was responding to the "Stockholm syndrome" comment specifically because there are a number of hardware and software problems (e.g. https://jayfax.neocities.org/mediocrity/gnome-has-no-thumbna...) with using linux as a desktop operating system that linux users have to find their way around, so I found the comment rather full of irony.
PS: I already know that the file-picker issue has been fixed. That does not take away from the fact that it was in fact broken for decades. It is only meant as an example.
> Can not be the case because I was facing these issues less than a couple of years ago
Just like with Mac and Windows, you choose the supported hardware, and everything is flawless.
If there's some set of fully Linux-capable laptops out there, it's a small subset of the Windows-capable ones.
And it's not clear what the Linux ones are. Like, our dept ordered officially Linux-supported Thinkpads for whoever wanted them, and turns out they still have unsolved Bluetooth audio problems. Those people use wired headphones now.
This is true. Until people pay reliably for Linux hardware instead of Windows, that will always be the case, just as it is for Mac.
Just like Mac, though, the key is to buy from a vendor that ships hardware designed for Linux, with Linux preinstalled, and with support for Linux.
Unlike Mac, though, Linux won't block you from installing it on Windows hardware, so it's not as obvious that you're on your own.
And what is supported hardware here? What even is "support"?
I'm writing this from Purism Librem 14, which works flawlessly, including suspend. There's also System76, Framework and more. See also: https://news.ycombinator.com/item?id=32964519.
As far as I can tell, Chromebooks are the only truly supported GNU/Linux laptops.
System76 is my go-to. There are others. You can even get some major vendors (Dell, Lenovo) to ship with Linux preinstalled, though I don't know if the firmware or chips diverge from the Windows variants.
Basically any thinkpad
There's no way, especially if you include Bluetooth in that list.
> Running a Linux VM on Windows is nicer than just booting into Linux
Indeed, it is. Having a stable system, not dealing with Linux on the desktop, and clear tradeoffs (like "just add another 16gb RAM stick in your laptop/desktop and you are golden") is great for peace of mind.
The average uptime on my laptops (note the plural) is ~3 weeks, until the next Windows Update is applied. I have no nostalgia for the days of using Linux on the desktop (~2003 student times, ~2008 giving it one more try, ~2015 as required by the dayjob).
Of course it also helps that I can tell people around me (often not tech guys, but smart enough to know basic concepts and run bash scripts provided to them) "yep, a machine with 32GB+ of RAM will work fine, choose any you like" - and it works.
I'm confused, in what world does running Linux require more RAM than Windows?
The suspend/hibernate on laptops isn't that great, but tbh I never had great results on windows either (macos is decent though).
And uptimes for desktop systems are similarly just limited by whenever there's a kernel update.
I meant the RAM overhead you need to budget for to run WSL2.
This is the opposite of what I've heard. Most often you hear of people installing Linux on old machines due to it performing better than Windows on low resources.
I'm talking about the more typical situation of dealing with new hardware - why on earth would I go with an outdated T480 that limits me when the T16 Gen 4 is around the corner? Or ARM-based laptops.
If for some reason I could never use a MacBook again, it wouldn't be easy to decide between Windows or Linux as the host OS on a laptop. Do I want something that's intentionally user-hostile or something that's unintentionally broken a lot?
I'd at least try Linux cause I abhor Microsoft, but idk if it'd work out.
Maybe it is both-sidesism but the motd you get by default on Ubuntu these days is as bad as any OS. (“Ubuntu Advantage” sounds about as good as https://prospect.org/health/2024-01-12-great-medicare-advant...)
At least the nags in Windows look like modern web-based UI (so far that ‘use Electron’ seems to be the post-Win 8 answer to ‘how to make Windows apps’) in contrast to MacOS which drove my wife crazy with nag dialogs that look like a 1999 refresh of what modal dialogs looked like on the classic Mac in 1984.
My acid test for WSL2 was to install the Linux version of Google Chrome in it, and then play Youtube videos fullscreen with that. It worked. Somehow WSL1 was the more impressive hack but how can you argue with what works? WSL2 works fine.
Also 1980s style X11 widgets on the Windows desktop in their own windows? Cool.
I have to say too, though, once you get the hang of the way an EFI system boots, it's really good for dual boot. I let the Linux installer mount the undersized existing one as /boot/orig_efi and made a new, bigger EFI system partition. Not only was the UEFI on that particular laptop fine with it, scanning both EFI system partitions for bootable stuff, but also, grub2 installed in the new one automatically included the Windows boot in the old one as a boot option.
Cool because nothing about how Windows boots is intercepted; you can just nuke the new partitions (or overwrite them with a new Linux installation). I still prefer a native Linux boot with "just in case" Windows option to WSL.
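If anyone wants to sanity-check a setup like that, both ESPs and the firmware boot entries are easy to inspect from Linux with a couple of read-only commands (output details obviously vary per machine):

lsblk -o NAME,SIZE,PARTTYPENAME,MOUNTPOINTS   # spot both EFI System partitions
sudo efibootmgr -v                            # list firmware boot entries (grub, Windows Boot Manager, ...)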
But not having to dual boot and just getting both worlds at the same time definitely beats switching back and forth.
I don't think people are using WSL to avoid problems with dual booting. Dual-booting has become about as simple as it can be, thanks to UEFI, but it's still not exactly fun to have to close all of your open apps to switch to another OS to run just one app.
You get much nicer window decorations if you use the wayland support instead of X11.
> You get much nicer window decorations if you use the wayland support instead of X11.
Wayland supports window managers?
Step it up a notch and see if Netflix works w/ its DRM.
Forced to work on Windows for the ++nth job, I was looking forward to WSL. Indeed, while it worked, it was magic. Sadly, I have had no end of bizarre bugs. The latest one almost crashed my whole desktop - as far as I can piece together, something crashed, leading to a core dump the size of my desktop's entire memory - half the machine's RAM. This in turn put WSL in a weird state - it would neither run nor uninstall. Googling found bug reports with similar experiences, no responses from Microsoft, and magic incantations that maybe worked for some people - but not for me.
It might be due to my corpo's particular setup etc. but for me 95% of the value of WSL would be the ability to run it on "corporate" Windows boxes. Alas.
I'm sure that feature is important for whatever work you're doing, but it's a feature I've _never_ desired, and WSL is missing plenty of features that are important for my work.
Hardware performance counters basically do not work in WSL2, which among other issues, makes it extremely difficult to use rr. https://github.com/rr-debugger/rr/issues/2506#issuecomment-2... Some people say they got it working, but I and many other users encounter esoteric blockers.
The Dozen driver is never at feature parity with native Linux Vulkan drivers, and that's always going to be the case.
By default, WSL security mitigations cause GCC trampolines to just not work, which partly motivated the opt-in alternative implementations of trampolines last year. https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=28d8c680aaea46...
GWSL is also a terrible X11 server that makes many very basic window management configurations impossible, and while I prefer VcXsrv, it has its own different terrible issues.
I can imagine that WSL2 looks attractive if all you want to do is run command line apps in multiple isolated environments, but it is miserable for anything graphical or interactive.
> I can imagine that WSL2 looks attractive if all you want to do is run command line apps in multiple isolated environments, but it is miserable for anything graphical or interactive.
Indeed, that's my case - using the CLI mostly for ssh/curl/ansible/vim, Puppet, and so on.
For the GUI part, Windows is my choice and it shines for me.
I think it really depends on what you do and whether the Linux side of it has hard dependencies on system packages. Personally, at work I much prefer working directly on my Linux workstation, and at home have even switched to using Linux for my gaming desktop. I really don't like the direction Windows has been trending for the past few years, and with the specter of a forced Windows 11 upgrade on the horizon I decided it's time to go all in. My system runs better and I can still play all my games. The jankiest thing I do is I have a mingw toolchain so I can compile some game mods into Windows DLLs to be loaded by Wine, but even that ended up being pretty seamless. Just install the toolchain and the project just compiled.
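For the curious, the mingw bit is less exotic than it sounds - roughly this on Debian/Ubuntu-flavored distros (the file names are just examples):

sudo apt-get install -y mingw-w64
x86_64-w64-mingw32-gcc -shared -o mymod.dll mymod.c   # builds a Windows DLL on Linux; Wine can load it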
I don't understand. Docker/podman/distrobox/lxc all allow you to do the exact same thing without the virtual machine overhead. I think the real win of WSL is that its a best of all worlds. You get to use Windows with access to every game ever made plus all of the proprietary apps everyone needs to use, with all of the upside of having a full and complete linux command line experience.
You get all of Windows telemetry, vulnerabilities and backdoors, the always fun game of spot the new Advertising opportunity, AI “copilot” spyware I mean feature, updates that reset your machine at will, a terrible UAC model that encourages “just click OK already!”, and dependence on a company that has gone out of their way to prove how much of an unstoppable behemoth they are; and best of all you get to pay for the privileges above.
I know… every year is the year of the Linux desktop… but seriously the AI spyware included was enough to get me gone for good.
It's hard to pick the Windows feature I hate the most, but floating around at the top is Defender. It can't be disabled, at least not easily, and it demolishes IO performance. And Windows update takes the computer hostage, and takes ages to do anything giving no feedback in the process, meanwhile APT can update to a new major version in like 5-10 minutes.
You can set up local and limited user accounts under Windows. Many applications, including every development tool out there, don't need any admin permissions.
Spyware and adware is a government policy / regulation problem. Thanks to GDPR and DMA, using Windows in the EU is a significantly better experience (try setting up a Windows desktop with an EU image). You can remove almost all of the apps, including Edge and Copilot. There are no ads in the UI. Neither in Explorer nor in the Start menu.
The current process to install Windows 11 with a local account… is to press SHIFT + F10 at a screen in the middle of install after the first reboot, enter OOBE\BYPASSNRO into the command prompt, and disconnect from any internet options, and/or disable your networking with ipconfig…
But guess what? Fuck You because that is the old way of doing it now, and now the new command is start ms-chx:localonly
This is a company that fucking hates you.
Or you just use Rufus to build the USB installation disk.
ventoy does it too
Yes, you get Windows telemetry which enabled fixing bugs without a bug report, you get minimal ads in the start menu (if you're playing "spot the new advertising opportunity" I found it. It's in the start menu. You can stop playing now), AI "copilot" which isn't spyware just because you think it is, updates that ASK you nicely multiple times to update (I don't want to be ableist, if you suffer from a Christopher Nolan Memento-like disability where you don't remember the warnings, you might think it's "resetting at will", but I assure you, it isn't), a great UAC model that's a lot better than "just type your root password into this terminal already, and just hope the binary wasn't hijacked in some way to keylog you, because unlike UAC, there is no visual evidence that you're not getting hacked", and dependence on a company that SV_BubbleTime thinks "has gone out of their way to prove how much of an unstoppable behemoth they are" with no evidence or clarity so they must just be making FUD, and best of all the OS costs so little you can pay it in 8 hours of working as a software developer.
I don't even care about privacy. Windows is too slow, nagging, and plastered with ads.
Stockholm’s my man.
Sunk cost, my man.
Good you diagnosed yourself
with the virtual machine overhead.
Because it's easier to set up a local dev environment in WSL than in any of those.
How is it easier to setup a linux dev environment in WSL than in https://containertoolbx.org/ or https://distrobox.it/ or just in Linux directly?
I meant if you're using Windows to begin with
> WSL is more powerful than Linux ...
Are you a Windows user who is happy to have a good way to run Linux on Windows, or are you a Linux user trying to convince other Linux user that instead of using Linux, they should use Linux in a VM running on Windows?
I am a longtime Linux user, and I can't see a reason in the universe why I would want to access my Linux through a VM on Windows. That seems absolutely insane.
Gnome (a linux desktop environment) ships a "Boxes" app [0] that is very impressive. You can, with a few clicks, install one of a huge number of Linux distros in an auto-provisioned VM, enable hardware passthrough for USB devices and host 3D acceleration, and manage files with drag-and-drop from the host system. I also use it for Windows and MacOS VMs (don't tell Apple), but you need to provide your own images.
[0]: https://apps.gnome.org/Boxes/
Look I get it. I’m forced to use Windows at work and I thank the lord WSL is a thing. But I would switch to Linux base in a heartbeat if I could. WSL is jank as fuck compared to just using Linux.
> WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously.
I'd venture to say this depends on which OS you're more comfortable with. I'm more comfortable with Linux, so I'd say it's easier/better/less janky to use Linux as a host OS.
> Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
Once you're a developer who's been burned by this enough times, you do this with containers or dedicated dev VMs. You do not develop on your host OS and stay sane.
I will also die on this hill - NixOS on WSL + Windows + komorebi[1] for tiling window management is peak productivity for me.
[1]: https://github.com/LGUG2Z/komorebi
Why not a Linux distro with i3wm, instead? What could possibly hold you back from upgrading?
I've yet to find anything comparable feature-wise on Linux - and they all come with the huge downside of having to roll your own cohesive settings widget ecosystem for basic everyday things like WiFi and Bluetooth connectivity. I run Cosmic Epoch on my old Macbook which is better, but again, feature-wise, it's just not comparable for serious work.
Thanks for your reply, but as a Linux user for over 20 years, all I take away from your post is that you haven't really tried, probably because the variety of distros vastly exceeds the two classic options of mac vs windows.
I understand the "roll your own" argument very well. In my time, I've experienced quite the variety of configs and dotfiles, but I'm not young anymore so I've settled with using Regolith which is an opinionated set of tools, including my favourite i3wm, on top of Ubuntu, and I simply use defaults for the most things.
Anyway, it's much easier to use Linux as a daily driver than it's ever been. The choice of distro is simply which package manager to use, and everything else just works, as long as it's in the package manager's inventory.
I haven't compiled my own computer's kernel in 6 years (but I still cross compile for rpi and other IoT), and I haven't used my dotfiles in 3 years, just defaults.
> Thanks for your reply, but as a Linux user for over 20 years, all I take away from your post is that you haven't really tried, probably because the variety of distros vastly exceeds the two classic options of mac vs windows.
A very big and very incorrect assumption. This reads like you asked the initial question without any actual curiosity behind it.
Thank you for the details!
> having to roll your own cohesive settings widget ecosystem
What gets you that on windows? The builtin stuff is far from cohesive.
I just run NixOS, but that feels like a respectable answer.
I think it depends a lot on what you're trying to do. I found that anything GPU-related was a nightmare of drivers and configuration which was a show-stopper for me. Now I just run arch/kde and that all works fine out of the box
Well, I'd still rather just use linux, but I take your meaning.
Me too. Particularly after having to do Docker things a few years ago, destroying my productivity due to file system speed.
However, for those of us who went Linux many years ago and like our free open source software: in 2025, is it better to go back to the dark side, to run Windows and have things like a LAMP stack and terminals running in WSL?
I don't play games or run Adobe products, I use Google Docs and I don't need lots of different Linux kernels. Hence, is it better to run Linux in Windows now? Genuinely asking.
As someone who occasionally does use WSL, I definitely think it's not better no. But I'm still biased, because I know a lot more about using linux than I do about using windows, and WSL is still windows.
for me,
> is it better to run Linux in Windows now? Genuinely asking.
definitely is. Servicing takes ~1 minute per month to click "yeah, let's apply those updates and reboot". Peace of mind, with no worrying that external hardware won't work, or the monitor will have issues, or the laptop won't sleep, or the battery will discharge faster during a call due to lack of hardware acceleration, or noise cancellation won't work, or ...
wsl2 is linux
*on bare metal
not on a shitty wrapper running on an ad-platform.
I would rather use Linux, outside of VM.
While I mostly agree with this sentiment, sidestepping the power management and sleep issues, as well as getting better driver support and touchpad handling on some laptops, makes it quite a bit better.
If you have sleep and power management issues, your hardware does not support Linux.
This is not a Linux issue, it's a "I bought a Windows computer, slapped Linux on it, and expected that to work" issue.
I've been installing Linux almost universally on "Windows computers" [sic] for the past two decades or more, per your characterization. Sometimes great, sometimes meh. Your point? I am simply illustrating there's a value for WSL over bare metal in some cases, not playing the whose fault it is game.
Sic? You don't understand the argument at all then.
Buy computers that were designed for and ship with Linux, and with support you can call to get help. Modern hardware is far too complex to handle multiple OSes without a major effort. Assuming they even want to support anything but Windows, which most don't.
Two things:
First, that's not the discussion at all. The question is does WSL have valid use cases and benefits over bare metal Linux. The answer is absolutely yes. For whatever reason you have the computer in front of you and you have the choice between the two modalities (many times you don't buy it, employer does, etc.)
Second, if everyone had your attitude, seeing PCs as "Windows computers" and stayed in their lanes in the 90s and 2000s, you would not have the option of three and a half supported "Linux computers" you are alluding to today. Viva hackers who see beyond the label.
WSL is better than no option, sure. It's not as good as Linux on Linux hardware.
The hackers sure. Reverse engineering takes a lot of skill and my hat's off to them.
Almost everyone here, though, are not in either camp. Most have the means and ability to buy a Linux computer if they so choose. But they don't and then complain when Linux fails to run well on a system that never has had a team of dedicated system integration work on it.
I agree. Back in the day (10+ years ago), I used to argue with people about why I ran VMs instead of just partitioning the disk and booting up the OS I needed.
XAMPP did not work out of the box for me on Windows (skill issue on my part, I know), so my preferred setup was to run a Ubuntu Server VM (LAMP stack) and then develop whatever I had in a Windows IDE.
I could have done that under full Linux, I just did not want to. Then Vagrant came into existence, which I'd say was made for my use case (but I never came around to adopting it).
I'm really happy with my WSL2 setup. I stopped using VMware Workstation when WSL2 broke it, but WSL2 is exactly what I needed to match my use case.
> XAMPP did not work out of the box with me on Windows (skill issue on my part, I know), so my preferred setup was to run a Ubuntu Server VM (LAMP stack) and then develop whatever I had on a Windows IDE.
Why wouldn't you have just spent 5 minutes to get XAMPP working?
It's really a skill issue on my part.
LAMP stack worked for me perfectly on Linux out of the box, whether Ubuntu Server or any RHEL-based distro (even with SELinux enabled!).
I spent some solid 8+ hours on that, deemed it uneconomical, and went the VM way.
> I stopped using VMware Workstation when WSL2 broke it
Is it still broken?
Nope, VMWare added the capability to work as a sort of nested hypervisor atop Hyper-V (which WSL2 and newer Windows security features depend on).
That being said, there is a performance impact.
WSL gave me the push to switch from macOS to Windows. And I couldn't be happier, tbh. There was a lot lacking in my Hackintosh/Windows dual boot setup.
> Edit: for clarity, by "multiple OS" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
For this part, I just create systemd-nspawn containers.
Last time I wanted to test something in a very old version of WebKit, creating a Debian Jessie container took a few minutes. Things run at native speed.
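In case it helps anyone, the whole flow is roughly this (jessie lives on archive.debian.org these days, and archived releases may need an extra flag or two):

sudo debootstrap jessie /var/lib/machines/jessie http://archive.debian.org/debian   # may need --no-check-gpg for archived suites
sudo systemd-nspawn -D /var/lib/machines/jessie    # chroot-style shell
sudo systemd-nspawn -bD /var/lib/machines/jessie   # or boot it with its own init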
You use distrobox (https://distrobox.it/) and move on with your life. At work I use multiple versions of Ubuntu seamlessly without messing with VMs on a host fedora box without issue. That includes building things like .deb packages.
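The whole dance is roughly this (the container name is arbitrary; your home directory is shared by default):

distrobox create --name u22 --image ubuntu:22.04
distrobox enter u22                                # drops you into an Ubuntu 22.04 shell
distrobox enter u22 -- dpkg-buildpackage -us -uc   # e.g. build a .deb from inside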
> Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
Have you tried lxd? It's far less janky than Docker (IMHO) to achieve what you describe. Docker is uniquely unsuited to your use case.
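A minimal sketch, assuming lxd is already initialized (the container names are made up):

lxc launch ubuntu:22.04 proj-a   # one container per project/OS version
lxc launch ubuntu:24.04 proj-b
lxc exec proj-a -- bash          # shell into one; the host stays untouched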
I love WSL, but you can do these things with Distrobox.
I'm with you - after years of messing with dualboot Linux, including (foolishly) running multiday Gentoo builds, WSL + Windows now gives me everything I want from Linux with zero friction.
In fact, I'm a little annoyed that I can't get a comparably smooth experience on my MacBook without spinning up a full QEMU VM. I know it's a bit hypocritical since, like most people, I run WSL2 (which is container/VM-based), not WSL1 (the original magic syscall translation vision).
Does anyone know why there's no lightweight solution on macOS - something like LXC plus a filesystem gadget - that would let me run stuff like "apt-get install chromium"?
Try https://tart.run/
> Native performance: Tart is using Apple's native Virtualization.Framework that was developed along with architecting the first M1 chip. This seamless integration between hardware and software ensures smooth performance without any drawbacks.
Is this close enough? https://github.com/lima-vm/lima
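In practice it gets pretty close; from memory, something like this (the default template is Ubuntu, and package names vary by release, so treat the details loosely):

brew install lima
limactl start default                           # creates an Ubuntu VM with your $HOME mounted
lima sudo apt-get install -y chromium-browser   # `lima` is shorthand for `limactl shell default`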
Perhaps someone is working on it with the Mac Hypervisor.
But Qemu (via UTM) starts up pretty quickly for me. No slower than WSL2 under Windows. My only issue is that it seems to drain power even when idle.
> WSL1 (the original magic syscall translation vision).
Actually, the OG "magic syscall translation" is Cygwin[0], which dates back to 1995[1].
[0] https://cygwin.com
[1] https://en.wikipedia.org/wiki/Cygwin
Edit: Fixed prose.
Absolutely! I remember playing and struggling with Cygwin back in the day… I meant original in the sense of the original vision for WSL.
I think WSL is great but if your only goal is to run several Linux OSes, any hypervisor will do. I think Proxmox is better suited to your use-case (hosted on Linux).
I love WSL because it lets me have the best of Windows and Linux.
I like that wsl is a thing when I'm on a windows machine, but it can also serve as a reminder of the often unnecessary frictions that exist between operating systems.
When the answer to a "how do I do X on windows" question begins with "start WSL", my primary reaction is frustration because they're basically saying "there's not a good way to do that on Windows, so fire up a Linux VM".
Just to pick my most recent example, from today: I wanted to verify the signatures on some downloaded rpm files, and the rpm tools only work on linux. I know, rpm files are native to a family of linux distros, so it's not surprising that the tools for retrieving and verifying their signatures don't work on windows, but... it also seems reasonable to want a world where those tools can install and run on windows, straight from a PowerShell session, with no VM.
Multiply that by all the little utilities that can't be deployed across multiple operating systems, and it just seems like some incompatibility headaches are never really going to go away.
Jumping on the anti-WSL bandwagon: I just can't abide the loss of control on Windows. Will the next update ignore/reset/override my privacy settings? What Gordian knot must I cut to have a local-only account (thanks Rufus!)? How do I turn off/uninstall a million things I don't want - Xbox Game Bar?!?
Linux or *BSD give so much more respect to the user, on windows you are the product! Stand up for yourself and your data!
Is it not the case that wsl2 is a vm; it requires hyperV enablement; and that turns your main windows OS into effectively a type of privileged vm, since hyperV is a type 1 bare metal hypervisor?
This is not often discussed, so it took me a lot of digging a couple of years ago, but I'm still surprised it's never brought up as a consequence / side effect / downside of WSL2. There are performance impacts to turning on Hyper-V, which may or may not be relevant to the user (e.g. if this is also their gaming machine, etc.)
> It's an absolute delight to use, out of the box, on a desktop or laptop, with no configuration required.
I have been using it since the beginning of WSL 1 with a very terminal heavy set up but it has some issues.
For example WSLg's clipboard sharing is buggy compared to VcXsrv. It doesn't handle pasting into Linux apps without introducing Windows CRs. I opened an issue for this https://github.com/microsoft/wslg/issues/1326 but it hasn't gotten a reply.
Also, systemd is still pretty sketchy. It takes over 2 minutes for systemd services to start and if you close a WSL 2 terminal for just a few minutes systemd will delay a new terminal from opening for quite some time. This basically means disabling systemd to use WSL 2 in your day to day.
Then there's this 6 year old issue with 1,000+ upvotes https://github.com/microsoft/WSL/issues/4699 around WSL not reclaiming disk space. It means you need to routinely shut everything down and compress your VM's disk or you'll run out of space.
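The usual workaround people pass around (and what I end up doing) is shutting WSL down and compacting the virtual disk by hand - something like this, where the vdisk path is a placeholder you have to find for your own distro:

wsl --shutdown
diskpart
# then at the DISKPART> prompt:
# select vdisk file="<path to your distro's ext4.vhdx>"
# compact vdisk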
Beyond that it does work well, so I'm happy it exists.
never had problems with systemd / 2-minute delays
not sure what would be the correct test here, but:
root@LP-T16:~# uname -rn
LP-T16 5.15.167.4-microsoft-standard-WSL2
root@LP-T16:~# time systemctl restart ssh
real 0m0.039s
user 0m0.008s
sys 0m0.001s
The delay is related to starting WSL 2, not starting a systemd service btw.
Maybe it's specific to Windows 10 Pro, who knows. I'm using the latest WSL 2 from the MS app store.
I just know when I installed Docker directly into WSL 2, when I launched a terminal I could not run `docker info` and connect to the Docker daemon for 2 minutes. The culprit was the Docker service was not available. I was able to reproduce this on Arch and Ubuntu distros.
Separate to that systemd also delayed a terminal from opening for ~15 seconds (unrelated to Docker).
After ~10 minutes of the terminal being closed, both issues happened. They went away as soon as I disabled systemd.
First opening of my main wsl2 Ubuntu 22.04 instance takes roughly 20 seconds, and subsequent new terminals open in ~1s. As it happens once every 3 weeks or so, when Windows reboots for updates, I don't care much.
It takes me more time to fill passwords for ssh keys to agent anyways.
Granted, I'm not using native docker inside.
> Also, systemd is still pretty sketchy. It takes over 2 minutes for systemd services to start and if you close a WSL 2 terminal for just a few minutes systemd will delay a new terminal from opening for quite some time. This basically means disabling systemd to use WSL 2 in your day to day.
That doesn't sound good. I was planning to set up a Windows/WSL2 box, but this gives me second thoughts. Where can I read more about this?
It's still ok even without systemd. Technically systemd is disabled by default, you have to turn it on with systemd=true in /etc/wsl.conf.
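For reference, the documented knob is a couple of lines in /etc/wsl.conf inside the distro, followed by a `wsl --shutdown` from the Windows side:

[boot]
systemd=true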
I can't find a definitive source with an open ticket, but if you Google around for "WSL 2 systemd delay startup" you'll find assorted folks talking about it, with a number of different explanations.
I just went by my end results of there is a delay with systemd enabled and no delay with it disabled.
You don't stress about Windows updates? Hard to believe it.
Yeah exactly ... I want Windows running in Linux, not the other way around, so I actually control the software and the updates!
I actually just tried WINE for the FIRST time (surprisingly, I have been out of the Windows world for so long)
https://www.winehq.org/
And as long as I installed the binaries from their repo, not Debian 12, it worked very well
Wine is an impressive project too. It's not a VM, which has upsides and downsides, but I was able to run GCC-TDM, Python 3, and git bash in it!
What do you mean by that?
As a reply to: You don't have to stress "do I update my OS?"
I'm also not sure on your question. Over the last 5 years, the average interruption is ~5 minutes to apply an update, which happens roughly once every 3 weeks or so. Once or twice per year, release updates happen and those take maybe 30 minutes of interruption (not totally sure here, as I usually grab my coffee and cigarettes and go read news on the balcony, which may easily take ~1h for me).
So for me, updates practically don't affect my workflow at all.
Congrats. I'm a Linux desktop user, and I've still had to waste hours nursing miscellaneously recalcitrant Windows updates that annoyed the people around me.
It doesn't work on any of my 3 Windows machines, all completely different hardware. Jank factor 100% for me. I wish I was seeing what you're seeing.
Still somewhat janky. I use it on my work machine (since it at least seems a bit faster than using VirtualBox) and regularly run into issues where npm won't build my project due to the existence of symlinks [1,2]. wslg windows also don't yet have first-party support from the windowing system [3]. I also remember having trouble setting up self-signed certs and getting SSL working.
1. https://stackoverflow.com/questions/57580420/wsl-using-a-wsl... 2. https://github.com/microsoft/WSL/issues/5118 3. https://github.com/microsoft/wslg/issues/22
Now if they could only do Windows 12 by taking baby steps in yearly releases of Windows 11.1, 11.2 etc.
Iterating on improvements, polishing screens and designs that they haven't touched in the past 30 years, improving ARM support, etc. And STOP adding ads to the OS.
And the Surface Laptop continues to push hardware quality forward: speakers, touchpad, screen, motherboard, etc.
It is really good but honestly I would prefer something a little more like:
- Linux that works great on a laptop / does the right thing when closing the lid
- Linux that doesn't have worse battery life than Windows / macOS
- Seamlessly runs Windows when you need to run something (e.g. click on Excel)
- Isn't necessarily free (prefer quality over low price in this situation)
Windows of course has many of these traits and WSL is a pretty good compromise, but I would prefer to boot into Linux and use Windows only when necessary (since my need for it is less common).
Install Proxmox or TrueNAS on a bare metal desktop to experience the true power of multiple operating systems running simultaneously. On most days, I am running multiple VMs with these OSes in parallel: Windows Server 2025, Windows 11 Pro, and these flavours of Linux - TrueNAS/Debian, Ubuntu, Manjaro, Zorin OS. I also have a dozen or more lightweight containers running, some with LXC on the bare metal host and others with Docker inside the TrueNAS VM.
This setup automatically backs up my data and is resilient to disk failures. It’s the ultimate form of power and bliss.
I like WSL for this single reason too - it gives me space to run isolated experiments without touching my primary OS. So if that's what windows users get out of it, cool.
You can do the same thing with many other technologies on most other operating systems. I've used, in chronological order: FreeBSD jails, VMs, Cloud-hosted VMs, Docker, K8s, and Nix flakes. WSL is probably somewhere in around K8s.
My point is, we've had the ability to run "subsystems" for decades, by different names, on every OS. WSL is cool but quite late to the game, far from being "more powerful than linux".
I used to agree with this for WSL1. Syscall translation gave solid performance, decent FS integration, and interop within WSL with windows executables. I really liked it.
WSL2 has been such a pain. You're basically managing a VM with VMWare Tools somewhat more integrated. I gave up on WSL2 after a few months and went back to booting my arch installation most of the time. Now I'm on a mac for the first time in a long time because windows has gotten so bad.
This is doubly sad because the NT kernel is so well designed to host multiple OSes due to the OS/2 stuff decades ago. All wasted.
Perhaps "more powerful" is also a factor of who is the computer user. For example, Linux is not as "powerful" if the computer user is someone who knows little about how to use it.
For a person who will not invest the time to learn, e.g., how to avoid or minimise dependencies, indeed something like Windows with WSL may appear "more powerful".
The point of this comment is that "power" comes from learning and know-how as much as if not more than simply from choice of operating system. That said, some choices may ultimately spell the difference between limitations or possibilities.
WSL is great if you're on Windows, but I wouldn't say it's more powerful than Linux. Distrobox on Linux covers your "multiple OS" use case quite well.
I share your sentiments. Makes testing my builds against windows, Ubuntu 22, Ubuntu 24, etc a breeze. It pretty much 'just works' and I can take it to go on my laptop. Even though I do most my work in Linux, Windows is a convenient 'compatibility layer'. I was skeptical at first when my friend suggested I try this, but daily usage has won me over.
I've lived in WSL for 3 years now, and have zero complaints. It has worked with no issues what so ever. In 2025, Windows is the best Linux UI.
The development experience is relatively cumbersome compared to using a native Linux distribution and containerizing application dependencies where needed.
Last time I used it they kept hogging some common keyboard shortcuts for whatever Windows stuff even though the VM-window was focused. Did they stop that?
And yet when I reboot my computer windows has shown me an entirely new place I can see ads - this week it was my lock screen.
So I left - I am willing to do more work to be spied on less, to be used as a product less, and to fight with my computer about who owns it less.
> and to fight with my computer about who owns it less.
This is a great way of saying it and expresses the uneasy feeling windows has given me recently. I use Linux machines but I have 1 windows machine in my home as a media PC; and for the last several years windows has made me feel like I don’t own that computer but I’m just lucky to be along for the ride. Ramming ads on the task bar and start menu, forcing updates on me, forcing me to make a Microsoft account before I can login (or just having a dark UI pattern so I can’t figure out how to avoid it, for the pedantic).
With Linux I feel like the machine is a turing complete wonderbox of assistance and possibility, with windows it feels like Microsoft have forced their way into my home and are obnoxiously telling me they know best, while condescendingly telling me I’m lucky to be here at all. It’s a very different feeling.
Yeah, "Weather and More" is such a joke. I like the idea of Weather on my lock screen in theory, and I sometimes miss Windows 8's great support for Lock Screen live data, but I have huge problems with almost everything else in the "and More" (news, no thanks, ads, definitely no thanks, tips, maybe not). Thankfully it is still really easy to turn off "Weather and More", but I wish they'd give us a "Weather and Nothing Else". (Same reason one of the first things I do is disable the "Widgets" display on the taskbar in Windows 11. Weather is great, everything else I don't want and/or actively hate.)
Yeah this is what pisses me off the most about windows. Telemetry that can't be turned off normally. Ads everywhere. Microsoft deciding when I must restart for updates. Microsoft trying to manage my behaviour telling me to try new features. Screw that. My computer is my own and must do what I choose.
This feature thing is really one of their strategies. At work they send us "adoption managers" that run reports to check whether people use feature xyz enough and set up stupid comms campaigns to push them to do so.
I really hate that. I decide how I use my computer. Not a vendor.
You're right, it is incredibly nice. Just the other day I got a Windows-only developer to install and use the POSIX/*NIX toolkit we use for development/deployment. In 30 minutes he was editing and deploying left and right with our normal open source stack. No messing around with Cygwin or MSYS or anything, it all just worked in Ubuntu on WSL. It's fantastic.
Using WSL on Win11. I would prefer Linux but I never got used to Open Office/Gimp/... and need to use PowerPoint / Affinity. But WSL mostly works, and added some tools and config to make it useful with WezTerm
https://www.amazingcto.com/upgrading-wsl-with-zsh-and-comman...
> Edit: for clarity, by "multiple OS" I mean multiple Linux versions. Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24. You don't have to stress "do I update my OS?"
You can run multiple Linux distributions in chroots or containers, such as docker containers. I have showed people how to build packages for Ubuntu 22.04 on Ubuntu 20.04 for example.
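e.g. something like this, with the bind mount pointed at whatever tree you're building (the image tag is just an example):

docker run --rm -it -v "$PWD":/src -w /src ubuntu:22.04 bash
# inside: apt-get update, install your build deps, and build against the 22.04 userland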
This is what tools like toolbx or distrobox solve. You can have easy to use containers with libs from any distro with a few commands, using podman or docker as the backend.
WSL is massively slower than Linux. Not just the ~10% or so of VM overhead, but probably 50-90% slower for disk access. It takes many times longer to start tmux. It has update bugs that crash open terminals, and that's not even counting the regular Windows forced-update fiasco. In short, it's garbage. It's one of the primary reasons I moved back to Linux as my daily driver.
It's a... VM? Like the Linux VMs running on Linux computers in the cloud?
Sorry but not sorry, it's not easier to run than on linux. It requires the Windows store to work, and to use Hyper-V (which breaks VMware workstation, among other things).
It's in a better package, to be sure, but it's not "easier to run multiple OS on the same computer". It's easier to use multiple OSes (no SSH, GUI forwarding, etc), as long as all those OSes are Linux flavors supported by WSL.
Want FreeBSD or Windows? Nope!
Does it really need the store? I thought you could just go "wsl --install" on the console.
The files that `wsl --install` downloads, including and especially the distro files, still originate from the Store's CDN, so the truly paranoid who distrust the Store (including some corporate environments) and just entirely block Store CDN access at the DNS and/or firewall level still break WSL installs.
There's a --web-download argument which helped with issues when I had limited access to the store.
You're likely right, I haven't used it in ages. Though I recall that at one point you had to get distributions from the Store, but it may have been that long ago that it was still being called "Bash for Windows".
As of 24H2, you can just run "wsl --install" from the command line and it'll do all the necessary setup to get you up and running, including installation of Hyper-V components if needed.
You don't need the store.
> Want FreeBSD or Windows? Nope!
Well, it is windows subsystem for Linux :) not windows subsystem for windows or FreeBSD for that matter :)
PS: I wonder if you can make your own image? After all, it's really just Hyper-V with some config candy.
It's a bit more than just some candy; there's substantial glue on both the Linux and Windows sides to get Plan9 file sharing, WSLg, and the other components to work.
That said, the kernel they distribute is open source and you're not limited to just the distros they're working with directly. There are a number of third-party ones (e.g. there's no Arch from Arch or Microsoft, but there's a completely compatible third-party package that gives you Arch in WSL2).
>e.g. there's no Arch from Arch or Microsoft, but there's a completely compatible third party package that gives you Arch in WSL2
No longer true since last month.
https://lists.archlinux.org/archives/list/arch-dev-public@li...
I'm shocked. They were adamant it wasn't going to happen for a long long time.
The main complaint was the marketplace TOS that gave Microsoft a free pass on any trademarked assets. The new WSL2 installation path avoids all of this.
Along with the glibc hacks needed by WSL1.
(I was part of the discussion and also very adamant about this not happening)
Haha yes, I was being cheeky :)
I'm pretty sure that with the opensourcing, we'll see freebsd or more exotic systems popping up quite quickly. Heck, macOS would be fun!
> Heck, macOS would be fun!
Especially in licensing! /sarcasm
That would make it even funnier in my book!
I'm old enough to remember that before docker there was chroot. It's fairly easy to put lots of different user mode portions of Linux distros into directories and chroot into them from the same kernel. It seems a bit like what you're asking for.
There's also debootstrap which is useful for this technique, not sure if it also works on Ubuntu.
debootstrap absolutely works in Ubuntu
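The basic recipe, for anyone who hasn't seen it (the suite and target directory are examples):

sudo debootstrap jammy /srv/jammy http://archive.ubuntu.com/ubuntu
sudo chroot /srv/jammy /bin/bash   # a 22.04 userland running on the host's kernel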
My only big gripe with WSL right now is GUI applications. wslg is not good, and the only good experience is when applications have a good remote development UX such as vscode.
Another, smaller, gripe is networking. Because of how WSL is networked, I've run into edge-case issues with connecting to networked applications running in WSL from Windows.
You need to make sure that they use Wayland. Running X11 apps is significantly slower in wslg. Native Wayland apps run much faster.
Run a rootless X server (XWin, Xming) on Windows, network the two (SSH tunnel), you have GUI Linux apps on Windows.
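A sketch of the plumbing, assuming the X server on the Windows side listens on display :0 (WSL2 needs the Windows host's IP rather than localhost):

export DISPLAY=localhost:0.0
xeyes &                     # quick smoke test
ssh -X user@linuxbox gimp   # or forward an app from a remote Linux box over SSH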
Lack of all packet types disqualified it for me. Is there any hope for nmap, etc?
I use WSL, but I'm actively looking for a way to move away from it. The only thing holding me back are languages like Ruby or Python, which are designed to work in a Unix-like environment. I briefly considered forking Ruby and stripping out all of the Unix-isms but in the end I gave up and just installed Linux (WSL).
docker is pretty easy to use on linux (even rootless docker isn't particularly painful) and KVM using QEMU is also pretty easy for running Windows things. I used WSL quite a bit but ultimately have switched back to running Ubuntu as my main.
Here's the main difference between making Windows vs Linux the main OS from my POV: Windows is a lot of work and only the corporate editions can be converted into not-a-hot-mess-of-distractions (supposedly). Out of the box Linux doesn't have all of the bullshit that you have to spend time ripping out of Windows. You can easily re-install Linux to get the "powerwash" effect. But if you powerwash Windows you have to go back and undo all the default bullshit again.
Having said that Windows+WSL is a very nice lifeline if you're stuck in Windows-land. It's a much better combo than MacOS.
WSL gives you no support for USB devices, which is a massive pain for embedded development when IT forces you to use Windows. Also, this might just be specific to my setup but WSL networking is very finicky with my company's VPN, and breaks completely if the VPN ever drops out requiring a full reboot.
WSL2 can forward USB devices
https://learn.microsoft.com/en-us/windows/wsl/connect-usb
I regularly run ADB through WSL2 using this.
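With recent usbipd-win versions the flow is roughly this, from an elevated Windows prompt (the busid is an example; take yours from the list output):

usbipd list                      # find the device's busid
usbipd bind --busid 2-3          # share the device (one-time)
usbipd attach --wsl --busid 2-3  # forward it into the running distro
# inside WSL, lsusb should now show it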
That doesn't work for mass storage devices without a custom kernel, and that's just too much hassle to bother with.
https://askubuntu.com/a/1533361
There are always going to be niche cases. In general USB storage devices are slow to transfer data anyway, so you're better off copying the files directly from the Windows-mounted location.
For me it was slow, full of compatibility issues, and glitchy. Some simple packages wouldn't even install in the official Ubuntu WSL distro. To be honest I don't know what the use case for this is, other than to run some one-off Linux thing once in a while without having to use another box.
How long ago did you try that?
I use WSL2 to handle Linux (and Windows cross-) compilation regularly, along with running a number of native tools that are specific to Linux.
I've never had any issues with that, even to the point that I've been able to run MAME natively from Linux and have it show up like any other windowed app.
I agree with your opinion on WSL. I pay a similar "tax" when I defend ChromeOS, and I won't stop doing it, just as you won't.
Linux on the Desktop is finally approaching, in more than one "shape", none of which is the shape some people expected/wanted.
Windows 10 with WSL(2) is/was peak Windows for me. You could build stuff and edit MS Office documents in the same place. Sadly, it wasn't meant to last. I have no intention of giving W11 a try, not yet decided what I'll be using come this fall.
> WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously.
I do that with KVM too, and each has their own kernel, not one shared kernel made and controlled by one vendor.
I use it as my daily driver. It completely changed the way I work. Am I curious whether something will compile? Open a terminal and type make. The files are all already there. You can even run graphics apps. It's wonderful.
I'll second you, WSL makes Windows a first class experience because now I can seamlessly have Linux and Windows apps in one laptop. Yes, I could run VMWare Workstation or HyperV, etc, but this is just better integrated.
As of a couple of years ago the integration was not that great and I switched to just using a full-fledged VM instead. For example, trying to use binaries in WSL from within Visual Studio or vice versa was not great.
I use Ubuntu 22 in an LXC container on Ubuntu 24 (because of Webex).
I also run other Linux instances with KVM.
I even run a Linux x86_64 executable on an ARM SBC using QEMU.
I just feel that Linux is so much more flexible than Windows.
I heart WSL. Years ago I was going to switch to macOS to have a more Unix-like experience/workflow. Then WSL came out and I stayed, because Linux is the environment I spend most of my time in.
I agree it is a convenient way to run multiple Linux VMs, but it comes with the drawback of having to use Windows, which is a major impediment to anything I may want to do with my computer.
You can run multiple linux distros on linux just fine via KVM/QEMU, there is nothing special WSL offers except that it is a must if you're doomed to use windows.
The power of linux with the professionalism of paid MSFT engineers
WSL sucks and I much prefer having a true VM in Hyper-V. WSL is full of weird behaviour and gotchas. Docker? No PID 1, a weird kernel, etc.
I used to love WSL when I had a Windows machine because I used lots of docker containers, but now that I am in a Mac with Apple Silicon, there is no going back.
qemu on Linux solves a bunch of these problems as well. But yeah, UX-wise WSL is pretty good at solving the problem of “provide Windows devs a POSIX environment”.
QEMU is nothing like WSL UX-wise. The UX on Windows is: double-click GIMP and a window for GIMP opens. With QEMU, a new window opens for the guest's window manager, input focus interactions are awkward, you probably have to log in to the VM, and it cannot easily be set up to automatically open the app you want.
ELI5 does it allow me to run windows programs in Linux?
no
I still have issues with the networking, but I agree. It's a fantastic system, and the only thing that shits me is that it could be a bit better.
>WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously
Is VMWare more powerful than Linux?
It’s a delight to use if you don’t mind your computer conducting 24/7 surveillance on you for a multinational corporation.
Previously, I had dual boot with ubuntu and windows. Sometime last year I just removed ubuntu, and haven't regretted it.
WSL works well enough.
If you want to "run multiple versions of Linux at once" and don't like plain Docker, maybe check out Podman Desktop.
Most people have little use for running multiple OSes, and that drops a lot when you just abandon Windows entirely.
On linux, I've been using lxc (now incus) for years to get different distros.
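e.g., spinning up and entering a different distro is two commands (the image alias and container name here are examples):

    incus launch images:debian/12 dev-deb   # fresh Debian 12 container
    incus exec dev-deb -- bash              # shell into it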
I want to know what limitations and tradeoffs I am embracing when using WSL vs booting Linux off a USB stick.
I agree with you. Maybe if it had AI shoehorned in, hn would be happy.
You can do the same with FreeBSD's Linuxulator. I run Arch Linux on FreeBSD, emulated.
WSL is so incredible. But support for it from 3rd party dev tools is so terrible.
I tried it and found it to be such an abomination. I can’t understand why any self respecting software developer would use Windows with a bastard linux like WSL instead of just using actual Linux. Feels like a massive skill issue.
it's for when your corpo provides you with a windows laptop for development. hope that helps with your understanding
We already have Distrobox that does same thing.
You can do the same on Linux. Distrobox exists.
I'm not the biggest fan of WSL2, but it's definitely good enough for people to like it. it's worked well enough for me in the past, but the last time I used it, there were problems with mDNS and BPF that it just made more sense for me to boot into leenucks.
But you're definitely not crazy for liking it. And people should chill out instead of downvoting for someone who just says what works for them.
I haven't tried Win11 and probably won't unless my employer forces me to. But if Win11+WSL2 works for you, more power to you.
It's not literally true that "Weasels Ripped My Flesh"[1] but WSL2 did rip the python support in QGIS by polluting my PATH with space characters.
[1] https://en.m.wikipedia.org/wiki/Weasels_Ripped_My_Flesh
Windows treats you like a baby. You cannot learn the internals of it and it forces decisions on you. With Windows, the computer that you paid for is not yours.
Have you used Distrobox?
I won't downvote you, but I will die on the other hill - the one over there with a guy sitting down, arms folded, sporting an angry face every time someone says something positive about WSL. There's at least three of us on that hill. And we're not going anywhere.
> Like if one project has a dependency on Ubuntu22 and another is easier with Ubuntu24.
Sounds like you could benefit from Qubes OS, which runs everything in VMs with a great UX. Including Windows.
Real talk. And anybody who argues is taking a heavy dose of copium to justify their use of Linux and the suite of compatibility issues that entails. Let them have their sense of superiority :' )
I'll second this, and I'm someone who ran a certain alternative OS to Linux before Linux was viable instead of running Windows, worked as a developer of Win16 and Win32 apps early in my career which gave me a deep love-hate of the platform, couldn't stand Microsoft's monopoly tactics back in the 1990s and 2000s, and remain ever-sceptical of Microsoft's open source and Linux initiatives...
... but WSL is an excellent piece of work. It's really easy to deploy apps on. Frankly, it can be easier to do a deployment there than on a Linux or macOS system, for the reasons detailed above.
You can run multiple OSes simultaneously on Linux itself - Linux can run VMs just fine. I.e. Linux guests on Linux host and so on. Take a look for example at virt-manager (libvirt / qemu + kvm).
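If you prefer the CLI, a sketch with virt-install (the package name, ISO path, and osinfo name are assumptions for a recent Ubuntu host):

    sudo apt install virt-manager    # pulls in libvirt, QEMU and the KVM bits
    virt-install --name ubuntu-guest \
        --memory 4096 --vcpus 2 \
        --disk size=20 \
        --cdrom ~/isos/ubuntu-24.04.iso \
        --osinfo ubuntu24.04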
And WSL is a limited VM using HyperV anyway. If you want to run a VM, you can as a well run a proper one which isn't limited and runs a full blown distro with sane configuration.
So WSL is definitely not more powerful than normal Linux.
This would be a great point if WSL didn't require running Windows
For WSL 1, I kinda agree. It was basically the POSIX subsystem re-implemented and improved. Technically amazing, and running parallel to Windows without virtualization. Too bad it had so many performance issues.
But WSL2 is just a VM, no more, no less. You can do the same with VMware Workstation or similar tools, where you even get a nice accelerated virtual GPU.
> Every time I praise WSL on hn I pay the karma tax
Hmm...
> WSL is more powerful than Linux
Oh.
Well I guess now you just need to add WSL support to wine.
... You know that you can run VMs, or full-OS containers on a Linux desktop right?
Or on a macOS Desktop. Bonus: doing so on either platform doesn't also mean your host OS is running under a hypervisor, as it does with WSL2.
Bigger bonus: you don't have to run fucking Windows.
> Bonus: doing so on either platform doesn't also mean your host OS is running under a hypervisor
Why do you think, technologically, this is some form of "bonus"?
Because it broke/put restrictions on the ability to run other hypervisors as the user.
Windows has run on a hypervisor by default since some Windows 11 version.
That just sounds like another reason not to use Windows at all honestly.
That may all very well be, but uuh, you're then forced to use Windows
> WSL is more powerful than Linux because of how easy it is to run multiple OS on the same computer simultaneously.
This is why you pay the karma tax. This statement is so clearly false.
My Linux can run multiple Linuxes as well, without VM overhead; something Windows can't do. Furthermore, Wine allows me to forgo running any VM to run Windows applications.
I developed on WSL for 3 years and consistently the biggest issue was the lack of ability to use tooling across the shared OSes.
Your karma-depleting statements are biased and unfounded, and it shows, as you do not really provide counter-evidence. That's why you lose karma.
Except Wine can't cover all of Windows (partly the fault of Windows itself); I can't run UWP apps, for example. Windows is not a good operating system, but if you need it, WSL creates a far more intuitive working environment. So even if you can run multiple Linux OSes on Linux, you can't run Windows as easily as you can run Linux on Windows. So the OP's statement is not incorrect.
There are virtual machines for Linux with seamless window integration, so upgrading to Linux is still recommended imo.
OP's statement remains incorrect, because their assumption is that the WSL experience can't be reproduced in Linux.
Still can't run everything. Especially apps or games that do VM detection.
Another thing: GUI integration is not as good as WSL's. You can't present Windows windows as Linux windows, whereas WSL does the reverse easily.
I've never seen a good UWP app. My biggest issue with Wine is that it can't run anything that needs a driver. That means any hardware with garbage Windows-only control software (hello Roboteq) needs a proper VM.
Is anything using UWP? It's a complete dead end.
[dead]
[dead]
I totally agree and will join you on the hill. I used Linux exclusively at my job for two years straight, and now I do the same job from Windows 11 with WSL2 on the same physical ThinkPad T41 laptop. Windows gets the basics right more often than Linux did (sleep states, display, printing). And as the OP notes, it makes it easy to run multiple distributions and never fear that something I install or reconfigure within the WSL2 terminal will screw up my host. Having a different OS improves isolation in this regard, not at a technical level but against me making mistakes and entering commands in the wrong place, since Windows does not accept Linux commands. JetBrains and VSCode both have great support for WSL2.
Given the layoffs round from last week, in a record earnings year, I wonder if this is a side effect of those layoffs.
How would a 3% layoff in a big company affect anything unless they want to specifically axe some project? It’s just lubrication for the machine. 3% is less than nothing compared to the bloat in any bigco and let me tell you Microsoft’s reputation is not the leanest of the bunch.
They're not uniform across every team and project. Certain projects can be hit very hard while others are not. Outside looking in, all we can really do is speculate.
Sure we can speculate that 3% is not news. Again, it’s a one way conclusion: I concede if they want to axe a project deliberately, that could show up in the layoff, but projects won’t incidentally get impacted because of a 3%. The causal relationship would be the opposite.
Didn’t Microsoft use to have annual 10% layoffs? Just culling the lowest performers every year.
If you mean stack ranking, the hard 20/70/10 bucketing was in force >15 years ago, but even then it didn't mean that those 10% automatically get fired.
It's really hard to cut actual bloat when running layoffs, because the more you work the less time you have to do politics and save your ass, so the less productive type of people tend to be pretty resilient to layoffs.
Have you worked at any of these large companies? It’s really easy actually (practically, not emotionally). It’s usually very obvious and there’s consensus who the bottom 10% are. Politics would affect promotions much more than layoff.
> It’s usually very obvious and there’s consensus who the bottom 10% are
But the latest layoffs were not performance based. Are you just confidently commenting without knowing about the event being discussed?
You believe what you want to believe. That’s the lie of the century. Every single layoff is performance based to some degree. Sure you want to consolidate a couple orgs or shut down a project or an office and you lump that together with your performance based stuff.
(Also I was responding to a more generic comment saying doing layoff is bad and makes org more political.)
> It’s usually very obvious and there’s consensus who the bottom 10% are.
Sigh, and companies keep them for sentimental reasons, I guess…
You’re being sarcastic but it is for sentimental reasons (for the immediate manager and team who doesn’t want to make the hard choices and do the work) as well as the empire building reasons (managers’ universal dick measuring contest is org size [1]).
[1]: the real debate is not “who’s my lowest performer” for each manager. It is about why I should cut rather than my sibling manager. If you force everyone to cut one person they all know who it will be.
It's funny because in this response you are arguing exactly the same thing as I was in my first comment: team sizes are always defined by political reasons (at manager's level, I didn't mention that above because I thought that was obvious, but here we are).
The duds who are the best at telling stories about how important their project is are the ones who can get the budget to keep their team growing, and they are also the ones most likely to defend their interests in the event of a layoff. Because, as you noted yourself, it is never about every individual manager selecting their lowest performers and laying them off, and much more about individual managers (at all levels) defending their own perimeter.
And in practice, being good at this type of games isn't a good proxy for knowing which managers are good at fostering an efficient team under them.
The point I am making is it does not matter if you are cutting 3%. Sure you might end up taking out a third of the bottom 0-10% instead of 0-3% but what difference does it make? It won't be a material political concern for your 50+ percentile employee base.
It does, however, make a difference on the promotion side.
> Sure you might end up taking out a third of the bottom 0-10% instead of 0-3% but what difference does it make?
That's not how it works! You'd have entire projects or departments being sacked, with many otherwise very competent people being laid off, and projects deemed strategic being completely immune from the layoff.
And even inside departments or projects, the people best seen by management will be safe, and the people more focused on actual work will be more at risk.
The harsh truth is that an organization simply has no way to even know who the “bottom 10% performance-wise” are. (And management assessment tend to correlate negatively with actual performance)
Can't help but be pessimistic about this or any news coming out of Build, given the circumstances.
>Given the layoffs round from last week, in a record earnings year, I wonder if this is a side effect of those layoffs.
Decisions, preparation, and execution to open-source such projects in big corporations do not happen within a week, two, or a month.
you could probably say the same about layoffs
But the knowledge about layoffs is at very high levels at the beginning
Managers learn about layoffs a day or two before engineers do.
Unless they're just flat out lying, no:
> This is the result of a multiyear effort to prepare for this
People lie in court under oath, so excuse my scepticism when key people across .NET, TypeScript, Python and AI frameworks have been let go.
WSL is a landmine of bad design. I lost all my data once, and that incident made me switch to a Mac.
Here's how you can lose all your data - and Microsoft engineers won’t care: https://github.com/microsoft/WSL/issues/8992 https://github.com/microsoft/WSL/issues/9830 https://github.com/microsoft/WSL/issues/9049#issuecomment-26...
I read your issue, and it's not so different from `sudo rm -rf /` as opposed to an actual design flaw.
`sudo rm -rf /` requires you to be a superuser or provide a password, whereas running `wsl --unregister` does not require elevated privileges.
I've hit real data loss bugs in WSL, as well— files disappearing, sometimes even rendering the WSL guest unbootable.
Mac IS the state of the art in developer experience. The only annoyance was virtualisation on ARM, but with UTM/Multipass/VirtualBox now available, it is the best. If you are running a lot of containers, though, a Linux box is preferable.
I still can't believe how people use Windows as their main system, with all the extremely invasive telemetry and bogus "AI" features that hog a LOT of resources at idle.
I mainly use a computer for:
- Work (heavy usage of Microsoft office apps)
- Audio / recording studio
- Some gaming
- Software development
Unfortunately for me, that's three uses where Windows excels, versus one for Linux.
Three out of four would also work on a Mac though
[flagged]
What's with this useless response?
I'm not the person you're responding to, but I see their 'ok' reply as valid. I, too, use Windows for audio recording: I rather suspect anyone who does knows what's available, both for Mac and for Linux, and has chosen [for reasons, among many: cost, availability, trust, familiarity, etc.] 'not those paths'. For now.
That's fine. But this is still a place to discuss things no? Also it wasn't even his comment I replied to...
If someone disagrees or agrees with my comment they should feel free to state their points or just ignore it. Maybe he has good points that speak against Mac
I am forced to use Windows at work. Surprisingly many large enterprises use Windows, mostly because of their dependency on Microsoft Office and Exchange. I'm really happy that WSL exists so I have to deal as little with Windows as possible.
At home I still need to have a native Windows laptop because of one application that I use a lot (a backgammon analyser) that runs natively on Windows and is heavily cpu driven. I could run it in a VM but the performance penalty is just too heavy.
I play video games that require an anti-cheat, so there's that. But honestly, it's fairly easy to deal with that. You can use the Windows IoT LTSC version and use one of those trusted debloaters. I haven't seen any AI features or bloat in a very long time.
I am not that proficient; I tried it three times. The first hurdle is finding a distro: doing all that research about which ones come better pre-configured and which would be less buggy on your hardware can be a pain.
The thing that attracted me to Linux is the file system and customization. I just wanted to daily drive it, not really for any work. But bugs are just a reality using most DEs available.
In my case it was once even related to performance: I spent a whole day trying to find out why Kubuntu was slower than Windows on my laptop. It ended up being one line in some config file that forced a battery-saving performance mode, and when I hit the same issue months later after reinstalling the system, I failed to find the post online again.
Believe it or not, it's not all sunshine and rainbows, I just realized I use Windows more and more in my dual boot system, so I gave up on using Linux after that.
What AI features hog a lot of resources on Windows?
Because 90% of software run only on Windows.
As a software dev I understand that having telemetry is a good thing. I don't believe it is "extremely invasive".
And I have no idea what AI features you mean (at least on Win10).
It is definitely in win11. Might be less on win10. Maybe try using something like simplewall [0], which shows prompts for every network request that phones home
[0] - https://github.com/henrypp/simplewall
Few people know what Linux is. Most only know that there are "macs" and "pc" and haven't used a personal computer privately at all since they got their first ipad in 2016.
Some people don't know that computers can be fast. Others modify their system to remove/neutralize all this crap. There are even tools to automate that.
It's not even that difficult to manually remove these from Windows. It's like a handful of configs. It's way easier to do that than to make (probably) any Linux distro work with my current and previous setups. Which, btw, I could never achieve even with a considerable amount of tinkering.
"Some people are just stupid"
But most are just ignorant. Ignorance, not being a crime, of course.
Blunt and yet so true
I use it only for gaming. No developer I know personally uses Windows; they are either on Mac or Linux.
And yet games run pretty fast on Windows.
[flagged]
No need to be so mean. And it's obviously not true, as not all Windows programs run on Linux.
Source: I switched nearly all my machines to linux over the past couple years.
Note that this doesn't include lxcore.sys, the kernel side driver that powers WSL 1.
(Also, I'm surprised that WSL 1 is still supported. It must be in maintenance mode though, right?)
That's the only part I care about dang. I still use WSL1 and have done a number of interesting hacks to cross the ABI and tunnel windows into "Linux" userspace and I'd like to make that easier/more direct
I'm very interested in knowing about your hacks! Would you mind sharing a bit more?
I'm also still using WSL1 and was hoping to be able to fix some of its quirks :(
No, both are still fully supported despite what the numbering may suggest.
Not a Windows user, but I think WSL is great. I see a lot of Windows users criticising Linux for... essentially not looking like Windows. "Linux Desktop will never reach mass adoption unless it [something that boils down to 'looks more like Windows']".
The thing is: I consider myself a real Linux user, and I don't want it to look like Windows. And I hate it when Windows people try to push Linux there, just because they want a free-with-no-ads version of Windows.
In that sense, if WSL can keep Windows users on Windows such that they don't come and bother me on Linux, I'm happy :-).
Not a Windows user, but I hate WSL. Looks like microsoft realizing they will lose a generation of developers to linux so they implemented linux inside their OS. Now people won't see the joys of recompiling kernel :)
I stopped seeing the joys of recompiling the kernel [and the consequent server reboots, which could easily take 10 minutes, and that's without IPMI/KVM] around 2009-2010.
Fortunately, Desktop users never need to recompile their kernel, it's really just a choice.
And I hope that you don't use WSL for servers :).
Nope, WSL is for operator machines desktops/laptops, pure Linux for servers.
WSL isn’t Linux implemented in Windows. WSL 1 was, but it is not the good version of WSL that most use.
WSL 2 is a special-purpose VM that ties into Windows in a few key ways to make interoperability easier. You can run a program on Windows and pipe its output to a Linux program, for example. Windows and WSL can trade system RAM back and forth as needed. Networking between the two is very smooth.
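The interop really does work in both directions; for example (file name is illustrative):

    # from PowerShell: pipe Windows output through a Linux tool
    Get-Content .\app.log | wsl grep -c ERROR
    # from inside WSL: pipe a Linux pipeline into a Windows program
    ls -la | clip.exe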
You can recompile the kernel for WSL all you want, and many do. Microsoft make their changes public as required by the GPL. You can use your own kernel without anything from Microsoft. You can easily create your own WSL distributions, customized to your hearts content.
It’s more than the sum of its parts, really. Feels that way to me, anyway.
People just want to bash Windows left and right. But no other OS in history has been this mature at handling GUIs and providing flexibility, customisation, etc.
Before I say anything, Windows 11 is bad.
I remember playing with Win98 and XP; I would modify many, many registry settings and mod binary files to do things with games, and you could access all sorts of weird hardware that only had drivers for Windows!
Windows 98-7 were best for learning stuff about computers (inner workings etc).
I remember, to remove viruses (XP), I tried to hard-delete the System32 folder; it deleted lots of files and the system continued to run!
WSL1 got my hopes up that we were on the path to Windows supporting the Linux user-space API, but then it was cancelled and replaced with a virtual-machine-based solution that I didn't need WSL2 for, since I could implement it myself (with more flexibility and capabilities).
I'd much prefer a proper compatibility layer that converts Linux system calls to their equivalent Windows calls, with those calls exposed from the Windows kernel itself.
That way I could just run Linux applications, bash, zsh and development tools directly on top of Windows without needing any "remote development tools" in my IDE or whatever.
Something closer to MSYS2/git bash/busybox for Windows - but where tools can ignore their Windows-specific stuff like the file path separator.
It's fine I guess
meanwhile Apple won't even make it easy to boot Asahi Linux on Apple Silicon.
Buying Apple hardware with the intent on running anything but what Apple wants you to run is setting yourself up for a battle, including trying to use non-Apple hardware with the hardware you purchased. It's why I'm not spending any personal money on Apple hardware.
Could've been worse. At least they're not locking you out of your device like on iPhones and iPads. They don't stop you from running Asahi, they just aren't interested in helping anyone run Asahi.
Microsoft, on the other hand, sells laptops that actively prevent you from running Linux on them. Things get a little blurry once you hit the tablet form factor (Surface devices run on amd64, but are they really that different from an iPad?) where both companies suck equally, though Microsoft also sells tablets that will run Linux once someone bothers to write drivers for them.
Obligatory: "Challenge accepted!"
https://www.youtube.com/watch?v=4iOi_iPNC50
Apple might not be releasing documentation on their peripherals, but they went out of their way to make it possible in the first place.
Apple could just have gone and done a straight port of the iOS boot procedure to their ARM Mac lineup... and we'd have been thoroughly screwed, given how long ago the latest untethered bootrom exploit was.
Or they could have pulled a Qualcomm, Samsung et al and just randomly change implementation details between each revision to make life for alt-os implementers hell (which is why so many Android BSP dumps are the way they are, with zero hope of ever getting anything upstream). Instead, to the best of my knowledge the UART on the M series SoCs dates back right to the very first iPod.
The fact that the Asahi Linux people were able to create a GPU driver that surpasses Apple's own in conformance tests [1], despite not having any kind of documentation at all is telling enough - and not just of the pure genius of everyone involved.
[1] https://appleinsider.com/articles/23/08/22/linux-for-apple-s...
Macs are almost universally seen as developer computers. If you are going to be developer friendly, then you need to do things that are developer friendly. Asahi project is 80% reverse engineering stuff.
On the macOS side, https://github.com/lima-vm/lima is the closest equivalent to WSL.
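A minimal session looks something like this (assuming Homebrew; the default instance is an Ubuntu VM with your home directory mounted):

    brew install lima
    limactl start default    # create and boot the VM
    lima uname -a            # shorthand for limactl shell default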
Parallels also has a commercial offering that does some nice GUI-level integration with both Windows and Linux VMs.
My understanding is that these are both built on top of some Apple API, and Parallels actually collaborates with Apple on making it work for their use case. So it's not the first-class support that you get from Microsoft with WSL, but it's still pretty good.
Nah, the closest thing to WSL on macOS is OrbStack.
Exactly the same experience as WSL: great out-of-the-box experience, easy to use, and they insist on using their own patched kernel.
Apple's opinion is probably that if you want to run a *NIX-like OS on their hardware, you should use MacOS.
Which is... not necessarily wrong.
Eh, I have a Mac but end up SSHing into some Linux machine pretty often. There are too many differences between the two unless I'm using something like Python or JS. Docker helps too, but that's Linux.
Also, it's really annoying that macOS switched to zsh. It's not a drop-in for bash. Yeah you can change it back to bash, but then any Mac-specific help/docs assume zsh because defaults matter. Pretty fundamental thing to have issues with.
You can also use zsh in Linux ;)
Yeah but it's pretty much assumed that you're using bash there
Apple has gone out of their way to build first party virtualization APIs in their OS to boot a Linux VM directly by specifying kernel and initrd on disk. That would be a direct point of comparison to WSL, not Asahi. What are you talking about?
[1]: https://developer.apple.com/documentation/virtualization/vzl...
P.S. They also specifically built Rosetta for Linux, to translate x64 Linux binaries to aarch64 so they can run inside Linux VMs on their machines.
I don't know about any of that, just that as a user, I cannot run Linux on my Mac easily.
You can't? Just install UTM for a full-VM one-click install (easier than `wsl --install` and two reboots) or any number of Docker thingies that people build for the Mac.
Hm, never heard of that one but I'll try it.
https://mac.getutm.app/
This is very far from smooth: it has network issues, runs much slower than native, etc.
It would be better to just have Linux, but that doesn't seem realistic given how flawlessly everything works in macOS: touchpad, sound, etc.
I don't know why your experience was poor. At least under Apple Virtualization for ARM64 Linux, the performance has been great. Perhaps as the other commenter suspects you might be running x86 Linux under software emulation?
In any case, I've run bare metal Asahi on M1 (and M1 Pro) and they work amazingly well too. Installation was quite straightforward too.
Maybe it's just a skill issue, but I was using it to develop a large Rust project. Compilation time was way worse than native, and memory was a problem since I was allocating half the total to the VM and I only had 16 GB.
Also the network would cut out and I would have to restart the vm periodically.
Just using a linux laptop is way better but then I don’t have a nice touchpad, excellent battery life etc.
I’ve done this numerous times and it’s never been onerous and everything has worked flawlessly. It’s also not slower than native if you’re running an ARM build of Linux.
Er wait, when I said Linux, I meant a common x86 distro. I'll try it anyway though.
Almost all relevant x86 distros have arm64 builds these days as well and once you enable Rosetta 2 you will be able to run x86 binaries/docker containers on them, but the Linux kernel remains arm64.
Otherwise, it is just using qemu interpreter to emulate x86 in software.
Biggest thing is that I don't want to get stuck rebuilding software from source because the package maintainer didn't make an arm64 binary.
Rosetta2 on my host OS enables the guest OS to run x86 binaries... that's interesting, I'll try it too, but I'd be surprised if it's truly hassle-free. At the very least would have to reconfigure apt-get to allow x86 binaries. Then idk about dynamically-linked x86 libs (I'm not a Linux expert).
I'm sure you can make apt work in a multilib world, but the mainstream way it generally works well is you stick to distro's arm64 packages (pretty comprehensive; arm64 is not some esoteric arch) for the most part and they work just great and you use/build docker containers that might be x86 and that is supported by `--platform` switch once you get basic stuff configured.
I suspect if your use case is more esoteric, it's likely not going to be worth the time. I'd just SSH to a second linux box.
To correct your statement on one key thing: Rosetta 2 in this case is not running on the host OS. Apple provides a Linux version of Rosetta 2 which runs inside your VM and is registered as a binfmt interpreter for ELF x86 binaries[1]. This is similar to how `wine` or `java` or `mono` can execute their respective binaries directly.
[1]: https://developer.apple.com/documentation/virtualization/run...
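The guest-side setup is small; a sketch per that doc (the virtiofs tag "rosetta" comes from the VM configuration, and the exact binfmt_misc registration line lives in the doc, so I've elided it here):

    # inside the Linux guest
    sudo mkdir -p /media/rosetta
    sudo mount -t virtiofs rosetta /media/rosetta
    # then register /media/rosetta/rosetta as the binfmt_misc handler
    # for x86-64 ELF binaries, using the exact magic/mask from the doc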
Thanks, that makes more sense. I do have an RPi which is also arm64 (still different from Apple's), and the mainstream things work ok at least.
It's easier to dual boot Asahi than Windows. Secure boot and disk partitioning are two examples of roadblocks that are streamlined in the Asahi installation, but quite difficult on Windows
Let's be honest, nobody earnestly expected them to care about running native Linux in the first place. You knew what you got into when you bought the Mac.
What? Apple made changes to actually help them.
https://news.ycombinator.com/item?id=29591578
Apple implementing iBoot is table stakes. They should have gone the extra mile, actually, and implemented UEFI the same way Intel did; but that would have made it too easy to support Apple Silicon on Linux. Sticking to their proprietary bootloader forces reverse-engineers to painstakingly guess at DeviceTree drivers until a bare-minimum boot environment is possible.
If Apple hadn't opened iBoot in some way then I don't know how they would handle a secure reinstall process. If that's "to actually help them" then they very clearly didn't try too hard. Without driver code or UEFI support they're basically telling the community to pound sand, and modern Asahi efforts reflect that.
I would love if the bug(s) with working on the windows filesystem from within wsl could now be fixed. https://github.com/microsoft/WSL/discussions/9412#discussion...
Microsoft too.
>Lxcore.sys, the kernel side driver that powers WSL 1
This isn't open source, and considering that this is probably what ties into/sets up WSL as a windows subsystem that's a bit of a bummer.
The rest is just a Virtual Machine for the most part, isn't it?
Wow. In 2009, when it looked like Microsoft was the most closed company of all time, I was telling people at work that they should port Windows to the Linux kernel. I don't think people would have believed what happened over the next 15 years if you'd told them back then. Things have changed... A LOT. Now granted, this isn't what I said they should do, but you know, eventually they might see the light.
Never see anything Microsoft does in the direction of open source as “they have seen the light”. It’s a trap. Claiming open source friendliness is the bait, Windows is the trap itself.
Yeah I remember when they bought Github and my coworker was telling me how they've turned a new leaf and want to support foss... nope, they wanted to train an AI on all the code there.
Yep, see VS Code etc.
This whole thread is basically frogs praising the cozy warming water in the pot.
What specifically about VS Code?
I personally don’t use it, pretty much just cause I’m comfortable with my current development environment, and nothing has spurred me to migrate in a while. I’ve been vaguely suspicious to see Microsoft rapidly gain such a huge market share with VS Code, but I don’t know any specific criticisms about it.
VS code is designed to fracture. https://ghuntley.com/fracture/
Sounds like the argument is while it’s technically open source, trickiness with the licenses makes it basically impossible to legally fork it into a usable software. That sounds plausible to me, I’m no lawyer.
But isn’t Cursor a wildly successful VS Code fork, done legally? (I assume if it were in violation of licenses, Microsoft would have already destroyed them.) Seems like a glaring exception to this argument.
The secret sauce in Cursor is *checks notes* ignoring Microsoft's license terms and hacking around Microsoft's countermeasures and attempts at blocking them. See e.g. https://news.ycombinator.com/item?id=43587420 https://news.ycombinator.com/item?id=43616838 etc.
I'm not being sarcastic or funny when I ask this. Why isn't this called the Linux subsystem for Windows? It seems like a Linux subsystem running on Windows. If it were the other way around (i.e., a Windows Subsystem for Linux), I'd think that Linux would be the primary OS, and something like Wine would be the subsystem.
I think it's supposed to be read as "the Windows subsystem for [running] Linux [applications]". Back in the old days there used be a thing called Services For UNIX (SFU), which was a bunch of Windows tools and services that provided UNIX-like behavior. Then came Subsystem for UNIX Applications (SUA). And now it's WSL.
Trademark law. They'd have to license the TM to have a product name that leads with another entity's trademark.
Management or marketing needs "Windows" to be the first word when people write articles about it..
Not only open source, but extremely well documented.
WSL1 is the good one, WSL2 just runs Linux simultaneously alongside Windows.
Nice but where is the code? Is it just very, very incomplete or a joke?
They do this sometimes during their Build conference. The code will probably show up after a keynote or session announcing it at the conference
Actually, it's there; the link from the blog just points to a branch without the code.
I never even realized that it was tivo-ized. Probably because I haven't been on windows since before WSL became a thing.
Not sure about the impact of WSL because I personally did not use it, but I do know a couple of friends who stopped spinning up a Kali VM because of WSL.
M$ contributing to open source is great, but I switched to Linux because I don't trust Windows, the OS. Not because of accessibility.
I have to use Windows as my main box after nearly 6 years of MacOS (and before that Mint) and WSL2 helps me keep my sanity.
WSL is the main reason I switched from Mac/Linux to Windows two years ago. Excited to see this move!
so they got you
WSL1 was hobbled by needing to calculate Unix Permission numbers and hardlink counts for every file. On Windows, you need to create a handle to the file to get those things. That's a file open on every file whenever you list a directory.
Does this mean Microsoft is abandoning it as end of life? It's hard to tell intent here.
Probably related to AI, given how closely integrated WSL and AI are.
Microsoft is always too little too late just in time to save the day!
Every time I read this product name I think that the words come in the wrong order.
(Windows subsystem) for (Linux)
A Windows Subsystem for running Linux
The title is misleading and ambiguous as to whether this applies to WSL1 or WSL2.
still yet to find out.
Maybe someone will finally build my dream: a WSL distro that I can also dual-boot natively. I'd love to switch between bare-metal Windows with WSL and bare-metal Linux with virtualized Windows at my leisure!
Parallels on Mac did this in reverse a decade ago. You could dual boot windows and MacOS, or you could boot into your windows OS while running MacOS and access both file systems properly.
Ok but MacOS is the worst of the 3 worlds. It can't run Linux or Windows apps
Not true: 86Box, while slow on new Apple ARM silicon, can boot Windows XP, and QEMU on Intel silicon will let you boot all three OSes at once.
MacOS has a lot of issues (mostly by Apple recent policy changes), but posix systems are more alike than different. =3
https://github.com/Moonif/MacBox
https://github.com/86Box/86Box/releases/tag/v4.2.1
When you use WSL2, Windows itself is running virtualized on Hyper-V.
At least with VirtualBox and VMWare it is possible (not actually WSL but still).
I wonder if this is in any way connected to devcontainers becoming more prominent in GitHub and VS Code?
WSL is OK: I can quickly open a drop-down terminal to run any command, and VS Code is well integrated with it. Too bad it's very slow.
I'd rather stay in Linux and use Windows if I really must. Can we have an LSW, then?
Virtual Machines exist.
Like Wine?
Our sec guy (who was mainly a Linux guy) was never happy to let people use WSL in corp due to security bugs.
Can anyone chime in - is this still a concern? Was it ever a concern?
WSL is an easy compliance trick when you want to run your own Postfix/Dovecot installation.
It's not about bugs, it's that users can do basically whatever they want in their WSL2 guest VMs and most endpoint security software has little control over it or visibility into it. It's a natural little pocket of freedom for users, which is great but undermines efforts to lock those systems down (for good or ill).
Depends. How do you feel about an OS running on your network that's not subject to your standard OS hardening, mandatory agent stack?
QEMU has win64 builds, and the guest OS can access Samba/NFS/SSHFS host shares. Getting the hypervisor to work is soft-locked on Home-licensed Windows, so options are often limited to Windows guests on Linux hosts.
In general, the utilities on POSIX systems rely heavily on a standardized permission and path structure fundamentally different from Windows' registry paradigms.
Even something as simple as curl... while incredibly useful on Windows, also opens a scripting ecosystem that side-channels Microsoft signing protections etc.
Linux VM disk image files can be very small (some are just a few MB) and can simply be copied to a physical drive/host later, given there are no DRM/key locks, and this avoids mixing UTF-8 with Windows code pages.
Mixing ecosystems creates a polyglot, and that is going to have problems for sure... given neither ecosystem wants the cost of supporting the other.
Best method, use cross platform application ports supported on every platform. In my opinion, Windows should only be used for games and industry specific commercial software support. For robust system privacy there are better options now, that aren't booby-trapped for new users. =3
WSL is a stupid idea. Microsoft should just stop developing and maintaining its Windows kernel and build a Windows compatibility layer on top of Linux.
WSL going open source is a huge win for devs. Can’t wait to see what the community builds with it!
Cool! This means I can fix mDNS now!
Why isn't it "Linux Subsystem for Windows" as it is a Linux subsystem running on a Windows os?
A "Windows Subsystem" is a concept that dates back to the original Windows NT line of operating systems. Historically, there've been a number of supported "Windows Subsystems", essentially APIs for the kernel. In Windows NT 3.1, there were multiple subsystems: Win32, POSIX, and OS/2, plus a separate one specifically for security.
https://en.wikipedia.org/wiki/Microsoft_POSIX_subsystem
While WSL2 isn't implemented as an architectural sub-system (it uses a VM instead), WSL1 was far closer to the original architecture, providing a Linux compatible API for the Windows kernel.
I think it's because WSL refers to the Windows subsystem that allows you to run Linux, not to the Linux system itself. You still have to download and install Linux on top of it, or at least you did the last time I used it a few years ago.
I always assumed it was because it was a Subsystem for Linux that allowed it to be run as a guest on a Windows host. But your version works too.
Microsoft is really terrible at naming things, that's for sure.
There may also be some trademark law precedent that forces this naming convention. Even on the google play store, if you have 3rd party apps for something, it's always "App for X", the name cannot be "X app".
It's hard to argue it's even a subsystem anymore. More like "Integrated Linux VM for Windows".
Can I use a vanilla kernel with it yet?
I think you always could. In the past you might lose some features / have some bugs. For recent kernel versions (>= 6.6), the only patches WSL kernels carry are dxgkrnl plus some hacky fixes for clock sync; the others are all upstream already. So you'd just lose WSLg / CUDA passthrough and nothing else now.
Of course, there might be some regressions. They are usually only fixed (upstream) after the WSL kernel gets upgraded and the issue starts to repro in WSL.
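Pointing WSL2 at a custom build is just one line of config (the path is an example):

    # %UserProfile%\.wslconfig on the Windows side
    [wsl2]
    kernel=C:\\Users\\me\\kernels\\bzImage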
Their kernel modifications and patches are public, and some of them have been upstreamed long ago. You'll need to compile your own to get the benefit, but I don't see why you wouldn't be able to use your kernel of choice.
Of course, if you want the native integration WSL offers, you'll need to upgrade the Linux driver/daemon side to support whatever kernel you prefer to run if it's not supported already. Microsoft only supports a few specific kernels, but the code is out there for the Linux side so you can port the code to any OS, really.
With some work, this could even open up possibilities like running *BSD as a WSL backend.
What does the native wsl kernel not offer that you need?
A version that tracks the underlying distro better, or even closer to mainline. The current WSL2 kernel is 6.6, while current kernels are 6.12 or 6.15; Debian Trixie will ship 6.12.
sleep?
strace shows that the sleep program uses clock_nanosleep, which is theoretically "passive." However, if the host suspends and then wakes up after the sleep period should have ended, it continues as if it were "active."
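Easy to check for yourself (output trimmed; the exact clock argument can vary with the coreutils version):

    $ strace -e trace=clock_nanosleep sleep 2
    clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=2, tv_nsec=0}, NULL) = 0
    +++ exited with 0 +++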
Is this good news for Wine or ReactOS (can they learn something to improve their projects)?
No. WSL2 is a Linux VM. It doesn't expose Windows API internals or implementation details; it uses normal, already well-documented public ones. Wine and ReactOS can already use the publicly available documentation, and they are still behind on many such APIs' implementations. Windows is a big OS; it takes serious manpower to implement many things.
Probably not. WSL2 is a kind of a VM, so it should be bypassing the Windows API.
What if this really is a long-haul embrace, extend, extinguish? I guess time will tell.
Microsoft doesn’t like open source software. This is cosplay.
Microsoft releases the important parts of VS Code under proprietary licenses. Microsoft doesn’t release the source code for Windows or Office or the github.com web app under free software licenses.
Don’t get it twisted. This is marketing, nothing more.
If you can do it better and use a very permissive open license, no one is stopping you.
Amazing, I briefly worked on WSL v1 in 2015! 10 years and going
WSL is amazing. Nothing short of it. My laptop's SSD controller died at a conference. I bought a 200-dollar netbook running Windows, installed WSL, downloaded the mdadm packages, opened the encrypted drive with cryptsetup, mounted the ext4 partition, then chrooted into it, and my home drive worked like it did on my old laptop.
I did this in about 20 minutes, with the help of chatgpt.
In the end I was able to keep working through the trip and provide some demos to clients which landed us some big deals.
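For the curious, the recovery boiled down to roughly this (device names are illustrative):

    sudo mdadm --assemble --scan          # bring up the RAID array
    sudo cryptsetup open /dev/md0 oldssd  # unlock the LUKS volume
    sudo mount /dev/mapper/oldssd /mnt    # mount the ext4 filesystem
    sudo chroot /mnt                      # step into the old install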
Go WSL!
Copying files between Windows and WSL is EXTREMELY slow. I really wanted to give Windows a chance but the slowness completely destroyed that chance, along with the lack of hardware acceleration for GUI applications.
See if the trick in this article helps out: https://pomeroy.me/2023/12/how-i-fixed-wsl-2-filesystem-perf...
A lot of people here are saying nice things about having a dev environment in WSL. Honest question: how do you deal with those minor but insufferable Windows quirks like CRLF (0d0a) line endings, selective Unicode support, byte-order marks, and so on?
While right now I enjoy the privilege to develop on Linux, things may change.
The worlds don't really cross. If I'm using WSL to develop software using a Linux toolchain I'm not using any other Windows tools, other than VS Code, in that environment. I could but I just don't find the need. I could literally be remoted into an actual physical Linux box and the experience would be nearly identical.
Occasionally I'll use File Explorer to manage my Linux file system or Linux tools on the Windows file system but really not the degree in which any quirks would be an issue.
Maybe you missed the point: in WSL, you are in a Linux/Unix-based environment. So Vim and your other editors and tools just work like in regular Linux; the Windows part can stay invisible and uninvolved until needed.
Other tools, like the VS Code IDE, have special handling (an extension) to work _inside_ WSL and keep only the GUI frontend on the Windows side (very close to how it works over SSH).
On the other hand, I quite often use "explorer.exe ." inside WSL to open File Explorer and jiggle around with files downloaded/created/modified (say with sed) in WSL and it works fine too.
Or I use the MarkText markdown editor on a folder inside WSL that is some git repo, and add docs/instructions there.
Well that gives some hope, unless they (I'm speculating about future possible employer) just disable WSL
I've been using WSL since about 2017 on insider builds (WSL1 for occasional cases, WSL2 as a daily driver for 5 years), so it works nicely for me and I have no need for Linux on the desktop.
For me, I just accept the quirks and move on with other things in life.
> Honest question: how do you deal with those minor but insufferable Windows quirks like CRLF (0d0a) line endings, selective Unicode support, byte-order marks, and so on?
Exactly why I use WSL: LF-only line endings, UTF-8, everything a basic Debian bookworm ISO can provide, plus Docker with GPU access.
How do you deal with the mac 0x0A line endings?
While I had to, I enjoyed using WSL1 on Windows. It was disappointing to find WSL2 has no user upside; it just discards the benefits of WSL1 in favor of the simpler implementation.
Shame for all of the people who worked hard on WSL1 only to be virtualized into nonexistence.
Anybody know what the deal is with neither Oracle nor Microsoft trying to make it possible for VirtualBox and WSL2 to coexist without severe performance impact? What the heck is the issue that neither side knows how to solve? Or is there a deliberate business decision not to solve it?
It's because WSL2 is using HyperV behind the scenes, and HyperV is a Type 1 (Native Hypervisor), running directly on top of hardware.
When you activate it, your host Windows OS becomes virtualized as well, albeit with native access to some components like the GPU.
That's why other Windows hypervisors (VirtualBox, VMware Workstation) experience one issue or another when WSL2 is activated: more abstraction is happening, and more things can go wrong.
That makes no sense. Are you actually familiar with the technical issues or are you hand-waving? WSL2 itself is a Linux VM running in top of Hyper-V. Heck, as far as I know other Hyper-V VMs run fine alongside WSL2 too. Why can't a VirtualBox Linux VM do the same?
Because VirtualBox and VMware Workstation are Type-2 hypervisors: they run on top of the host OS (Windows), not directly on hardware.
https://en.wikipedia.org/wiki/Hypervisor
That doesn't in any way explain why VirtualBox couldn't be made to run on top of Hyper-V. You might as well tell me Linux apps can't be made to run on Windows because Windows isn't Linux.
> Anybody know what the deal is with neither Oracle nor Microsoft trying to make it possible for VirtualBox and WSL2 to coexist without severe performance impact? What the heck is the issue that neither side knows how to solve? Or is there a deliberate business decision not to solve it?
Oh, I thought your parent post was asking for a general overview of why VirtualBox has a severe performance impact when WSL2 is activated. I posted the reason: multiple layers of abstraction conflicting with each other.
> Why VirtualBox couldn't be made to run on top of Hyper-V. You might as well tell me Linux apps can't be made to run on Windows because Windows isn't Linux
AFAIK it's already possible but still experimental in VirtualBox; it's also a hard issue to solve with tiny ROI, I suppose. And why would they spend time fixing a slowness that only impacts a small userbase like yours?
> AFAIK it's already possible but still experimental in VirtualBox; it's also a hard issue to solve with tiny ROI, I suppose. And why would they spend time fixing a slowness that only impacts a small userbase like yours?
It seems like you're just making guesses and don't actually know the answer? The reason I asked wasn't that I couldn't make the same guesses; it was that I had read online that there are technical obstacles here that (for reasons I don't understand, hence the question) they've had a hard time overcoming. i.e. "tiny RoI" or "small userbase" don't fully explain what I've read.
I was hoping someone would actually know the answer, not just make guesses I could have made myself.
I hope they'll do WSA next!
Is there fuzzing documentation?
I despise Windows 11 so much, but have to use it. I have a 24/7 box with Ubuntu running a couple of Linux and Windows VMs and that's the way I like it. I don't touch the Ubuntu host except for when I need to reconfigure it.
All development is done on Windows laptop via SSH to those VMs. When I tried using Ubuntu via WSL, something didn't feel right. There were some oddities, probably with the filesystem integration, which bothered me enough to stop doing this.
Nevertheless, I think it's really great what they now did.
Now all that's missing is for them to do it the other way around: create a 100% Windows-compatible Wine alternative.
What hypervisor are you using for the windows vms?
QEMU/KVM.
What distros are y'all using on WSL?
NixOS! [1] You can keep the entire system config in a single git repo. For me, it's far easier to work with than, let's say, Ubuntu. But beware, it has a steep learning curve.
[1]: https://github.com/nix-community/NixOS-WSL
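A minimal sketch of such a config (option names per the NixOS-WSL README; treat the specifics as assumptions for your release):

    # configuration.nix
    { pkgs, ... }: {
      wsl.enable = true;
      wsl.defaultUser = "nixos";
      environment.systemPackages = with pkgs; [ git vim ];
      system.stateVersion = "24.05";
    }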
Debian bookworm; just to get my work and hobbies done.
I still don’t understand the naming. It’s a Windows subsystem that runs in Linux? But it’s a way to run a Linux environment on Windows?
Podman offers the same experience natively but also supports VMs via podman machine.
Internal WSL maintainers must have been hit particularly hard by the quarterly layoffs.
Still named backwards.
WSL along with VS Code has been a godsend for Rails development.
And now for the reverse (Windows on Linux): https://github.com/winapps-org/winapps
That page has no mention of the actual license though.
MIT License: https://github.com/microsoft/WSL/blob/master/LICENSE
Are they ashamed, if they didn't mention it in the announcement?
I think they have been using that license for a while when releasing stuff. Windows Terminal, the new PowerShell, etc.
WSL is amazing if you work for a non-tech company that is a Windows house but you want to do development in Linux. It's seamless (at least to my middling ability) with VS Code.
Why is it not called "Linux subsystem for Windows"? Or is it just programmers being bad at naming things?
It's one of those things that make sense if you understand the details.
Windows NT itself has had an architecture supporting environment subsystems for things like Win32, POSIX, OS/2 for decades. See https://en.wikipedia.org/wiki/Architecture_of_Windows_NT for more details. Later, it made it relatively easy to host Linux and Android too.
You can imagine they commonly called these things "subsystem for XYZ". So "Windows subsystem for Linux" made sense for those in the know.
Does sound weird outside of the world of NT though
It's just HN commenters not knowing how to read
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcREVRSJ...
Cool! Now make Microsoft Office open source! I understand you won't, so at least release Linux versions!
That would not be a sound strategy. Microsoft chose to make the OS a commodity, but not services and platforms, as part of the strategy of commoditizing complements like developer tooling.
Commoditize your complements: https://gwern.net/complement
I remember way, way back when Internet Explorer first came out, there was talk of a Unix/Linux version coming soon.
https://en.wikipedia.org/wiki/Internet_Explorer_for_UNIX
1997 to 2001 (IE 4 to IE 5 SP1)
great news :-)
Now how about mainlining the kernel patches?
So we get a chance at a more current, Linux-distro-provided WSL kernel :-)? https://github.com/issues/created?issue=microsoft%7CWSL%7C11...
big news
Check the license and its details. This might be great, or it might be MS looking to get free help. Especially with dev layoffs.
MIT: https://github.com/microsoft/WSL/blob/master/LICENSE
IANAL, but how is this license different from, say, the older BSD license? I thought that was "have fun, do what you want, post a notice". It doesn't say anything regarding ownership of changes, nor how to add copyright for such changes... Does this mean that MS is looking to own changes, or will there be a string of extra copyright notices for each (significant?) change?
The MIT license scrunches the first two clauses of the 3-clause BSD license into a single clause, and omits the third clause (the nonendorsement clause, which is already generally implied). As a practical matter, most of the basic "simple" open source licenses are functionally identical.
But who owns the copyright to changes, and how is it recorded? I'm just suspicious about how large companies that sell/rent software deal with open-source, free stuff...
That's not covered by the license; that's covered by the CLA (Contributor License Agreement), and in the absence of one (I don't know if there is one or not for this repository), the author retains copyright to their code as usual.
You can answer those questions for yourself, it's all in the repo.
WSL, in combination with the enshittification of Windows, was the thing that finally convinced me to switch from Windows to Kubuntu/Linux as my daily driver.
KDE Plasma is IMO the best graphical desktop environment at the moment, macOS included.
Killer.
Now do NT.
Too much 3rd party code in Windows to make that feasible.
Buy them.
OT but the name irks me; Windows subsystem for Linux makes it sound like some sort of official Wine layer. It's a Linux subsystem for Windows if anything.
It makes it sound like Microsoft is giving some capability to Linux whereas it's the other way around.
Microsoft can't name a project leading with a trademark (Linux <something>), which is why it's called WSL.
Source: https://x.com/richturn_ms/status/1245481405947076610?s=19
If you want to see the thread
https://xcancel.com/richturn_ms/status/1245481405947076610?s...
Very interesting comment there:
“ I still hope to see a true "Windows Subsystem for Linux" by Microsoft or a windows becoming a linux distribution itself and dropping the NT kernel to legacy. Windows is currently overloaded with features and does lack a package manager to only get what you need...”
NT is a better consumer kernel than Linux. It can survive many driver crashes that Linux cannot. Why should Microsoft drop a better kernel for a worse one?
Meanwhile in Linux we cannot even restart compositor without killing all GUI apps.
I swear sometimes progress goes backwards..
Is this a Wayland issue? This works fine for me on X. But yes, progress goes backwards in Linux. I had hope for the Linux desktop around 2005-2010, since then it only got worse.
If the Xorg server managing your $DISPLAY goes away, your X apps will also crash. Wayland combines the server and the parts that draw your window decorations into a single process.
Under Windows, everything including the GPU driver can crash, and as long as it doesn't take the kernel with it (causing a BSOD), your applications can keep running.
I can restart the window manager and compositor just fine in X. Also, it is not generally true that X apps crash when the server goes away. This is a limitation of some client libraries, but I wrote X apps myself that could survive this (or even move their display to a new server); a minimal sketch of the trick follows. It is of course sad that popular client libraries never got this functionality under Linux, but this is a problem of having the wrong development priorities.
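For the curious, here is a minimal sketch of how such a survive-the-server trick can look in plain Xlib (not the commenter's actual code): install an I/O error handler, which Xlib calls when the connection dies and which must never return, and longjmp out of it back to a reconnect loop:

    #include <setjmp.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <X11/Xlib.h>

    static sigjmp_buf resurrect;

    /* Xlib calls this when the server connection dies. It must not
       return (Xlib would exit the process), so jump back out instead. */
    static int on_io_error(Display *dpy) {
        (void)dpy;
        siglongjmp(resurrect, 1);
        return 0; /* never reached */
    }

    int main(void) {
        XSetIOErrorHandler(on_io_error);
        for (;;) {
            if (sigsetjmp(resurrect, 1)) {
                /* The old Display is dead; never touch it again (its
                   memory is deliberately leaked). Real code would now
                   rebuild all server-side state: windows, GCs, pixmaps. */
                fprintf(stderr, "lost X server, reconnecting...\n");
                sleep(1);
            }
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy) { sleep(1); continue; }

            Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                             0, 0, 200, 100, 0, 0, 0);
            XMapWindow(dpy, win);
            XFlush(dpy);

            /* Any Xlib call after the server vanishes fires on_io_error,
               which lands us back at sigsetjmp above. */
            XEvent ev;
            for (;;) XNextEvent(dpy, &ev);
        }
    }

Build with "cc survive.c -lX11". Moving to a different server is the same idea, just with XOpenDisplay pointed at another display string.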
Can you expand on this? I used Windows 10 for 2-3 years when it came out, and I remember BSODs being hell.
Now I've only experienced something close to that when I set up multiseat on a single PC with AMD and Nvidia GPUs and one of them decided to fall asleep. Or when I undervolt the GPU too much.
Of course, that depends on the component and the access level. RAM chip broken? Tough luck. A deep kernel driver accessing random memory, like CrowdStrike? You'll still crash. One needs an almost microkernel-like separation to prevent such issues.
However, there are certain APIs like WDDM timeout detection and recovery: https://learn.microsoft.com/en-us/windows-hardware/drivers/d... . It is like a watchdog that'll restart the driver without BSOD'ing. You'll get a crash dump out of it too.
People who comment things like this probably have their heart in the right place, but they do not understand just how aggressive Microsoft is about backwards compatibility.
The only way to get this compatibility in Linux would be to port all those features over, and if that happened the entire planet would implode, because everyone would say "I knew it! Embrace, Extend, Extinguish!" at the same time.
I agree. For years I supported some bespoke manufacturing software that was written in the 80s and abandoned in the late 90s. In the installer, there were checks to see what version of DOS was running. Shit ran just fine on XP through W10 and server 2016. We had to rig up some dummy COM ports, but beyond that, it just fuckin worked.
IBM marketed "OS/2 for Windows" which made it sound like a compatibility layer to make Windows behave like OS/2. In truth it was the OS/2 operating system with drivers and conversion tools that made it easier for people who were used to Windows.
Untrue. OS/2 for Windows leveraged the user's existing copy of Windows for OS/2's compatibility function instead of relying on a bundled copy of Windows, like the "full" OS/2 version did.
OS/2 basically ran a copy of Windows (either the existing one or a bundled one) to execute Windows programs side by side with OS/2 (and DOS) software.
It was previously called the Windows Subsystem for Android before it pivoted. It had a spiritual predecessor called Windows Services for UNIX. I doubt the name had been chosen for the reasons you say, considering the history.
That said, to address the grandparent comment’s point, it probably should be read as “Windows Subsystem for Linux (Applications)”.
>for the reasons you say
That's not what I say, that's what the former PM Lead of WSL said. To be fair, Windows Services for UNIX was just Unix services for Windows. Probably the same logic applied there back then: they couldn't name it with a leading trademark (Unix), so they went with what was available.
WSA was a separate thing.
WSA and WSL both coexisted for a time.
Wikipedia states that WSL was made based on WSA.
Got a link? WSA didn’t come out until Windows 11 was released, and WSL predates Windows 11.
https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux#Hi...
It was called Project Astoria previously. Microsoft releasing the Windows Subsystem for Android for Windows 11 is news to me. I thought that they had killed that in 2016.
Astoria and WSA are different things. Sort of. WSL and WSA both use the approach that was proven by Astoria. That approach was possible since the NT kernel was created, but no one within Microsoft had ever used that feature outside of tiny pieces of experimentation prior to Astoria. Dave Cutler built in subsystem support from the beginning, and the Windows NT kernel itself is a subsystem of the root kernel, if I am remembering a video from Dave Plummer correctly.
Anyway, Astoria was an internal product which management ultimately killed, and some of the technology behind it later became WSL and, much later, WSA. WSA's initial supported OS was Windows 11.
Microsoft being Microsoft, they artificially handicapped WSA at the outset by limiting the Android apps it could run to the Amazon Appstore, because that's obviously the most popular Android app store where most apps are published. [rolls eyes] I don't think sideloading was possible. [rolls eyes again]
I don't work for Microsoft and I never have; I learned all of this from watching Windows Weekly back when it was happening, and from a few videos by Dave Plummer on YouTube.
Subsystem Linux for Windows.
GNU/Linux Subsystem for Windows
They should not presume to trademark something called a "Linux subsystem for Windows".
There's history here; there was an old thing called Windows Subsystem for Unix. Again, not what you'd expect from the name.
That was ”Windows Services for Unix”.
It's a “Windows subsystem” for running Linux, but yeah the naming is pretty confusing.
They tend to name things like that. For example in Azure you have "Azure Database for MySQL" and "Azure Cache for Redis"
I agree that it's a dumb name but I actually think it's an apt description of what it is.
We run Linux on top of Windows. Windows is the subsystem for the Linux environment.
Still a dumb name.
Windows' Subsystem for Linux :p.
The name is good if you understand what the subsystems are on Windows and how they work.
The NT kernel was always designed to host multiple subsystems. This Windows subsystem is for Linux. And that’s why it’s named like that.
And written in C#!
Right?
Right?…
microsoft open sourcing a lot of things lately
I wonder if companies open-source stuff mainly as part of a bigger strategy which primarily benefits them. Could it be a way to get access to a pool of free, contributing talent?
You mean like StarOffice being open sourced as OpenOffice to attempt to undermine Microsoft Office revenue a couple of decades ago? To quote Bugs Bunny, "Myeah, could be..."
Why would companies not do things that benefit them? And if it's meant pessimistically, let me take you back to a much worse time, when Microsoft didn't open source anything.
> let me take you back to a much worse time
Microsoft is not much better now.
I mean yeah, the money and growth these days is in pulling people into choosing their cloud/services platforms.
Why was this flagged? This isn't even a secret, a lot of SaaS companies will open source parts of their offerings to increase adoption, making the money back when larger orgs now want to use it, and are willing to pay for enterprise support plans to get the service straight from the horse's mouth.
I think it's a fair exchange too; even as an individual, I pay for plenty of smaller open-source SaaS services, even when they're more expensive than proprietary competitors, for the very reason that I could always self-host without interruption if SHTF and the provider goes under.
Would really be curious to hear the reason why, from an internal perspective.
I've seen a number of theories online that boil down to young tech enthusiasts in the 2000s/early 2010s getting hands-on experience with open source projects and ecosystems, since those are more accessible than enterprise tech that's typically gated behind paywalls, and then bringing that experience into what they use when they enter the working world (where some naturally end up at M$).
This somewhat seems to track, as longtime M$ employees from the Ballmer era often still hold stigmas against open source projects (Dave's Garage, and similar), but the current generation of employees seems to hold much more favorable views.
But who knows, perhaps it's all one long-winded goal from M$ of embracing, extending, and ultimately extinguishing.
>Would really be curious to hear the reason why,
My guess…
The same reason Rome didn’t fall. It simply turned into the Church.
MS isn't battling software manufacturers, because their lock on hardware direction and operating systems is so strong that they can direct things without having to hold the territory themselves.
- becomes open source under MS control
- three years later it's left in the hands of the powerful community that was built around it with MS help
- MS doesn't have to provide support and it's not their problem anymore
> microsoft open sourcing a lot of things lately
Yeah, MS-DOS 3, Winfile, and a castrated version of PowerToys. This all looks like extend-and-extinguish theater.
Couldn't they have saved millions of dollars if they'd open sourced it earlier?
No. To get something substantial to work you need to have some (if not most) development work done by people who are being paid.
In this case, who except Microsoft would have paid for the development?
WSL caused me to just install Ubuntu right over my Windows installation. That is how useful it was for me.