If you buy a matched OS + laptop in every other situation, why not also for Linux? Buy an XPS next time, or one from any other vendor (Lenovo?) with tested hardware, and just put an Ubuntu USB stick in.

"I bought the top result for 'wheel' on eBay and ruined my life trying to fit it to this aircraft engine, this is such a joke!" Shit has just worked for years if you want it to work.


I did this via a pilot program run by our IT department.

Years ago, when I wasn't employed full time, I used to do a lot of Linux work and send patches when stuff was broken, etc.

We used both XPSes and System 76 machines. A whole bunch of money was spent.

All the software dev tools are great.

The problem is the Linux machines running their batteries down when you try to do a WebEx in a conference room with a customer, whereas a Mac would barely drain the battery at all.

Or the Linux stuff has a major FUBAR issue with the specific VPN your company uses and that you need.

Or the Linux stuff has trouble with some security-related config on the specific WiFi setup in the building.

Or GNOME/Unity needing a reboot after you unplug external monitors and try to walk off to a meeting.

It's all that stuff. People who run stuff at home (like I did) have great flexibility to swap out the hardware and software that don't work well with Linux. It's much much harder in the corporate environment.

Both the XPSes and the System76s work great if you leave them permanently on a desk, but they don't work as replacements for MBPs in our environment.

I was making it work and hacking away at fixes for various broken things in Linux for a while. And then I realized it was killing my productivity, so I just took the Linux laptop back and asked for another MBP.


Contra opinion: the VM option reduced the service interface between Windows and Linux to a single kernel implementation and a few drivers, rather than every possible userspace program ever written. It's an amazing and obvious trade-off. My inner architecture astronaut appreciates all the ideas in this post, but I've been trying to kill that guy for over a decade now. The bottom line is that the WSLv1 design SUCKED precisely because it tried to cross-breed two extremely complex, semantically incompatible systems.

Example: Linux local filesystem performance mostly derives from the dentry cache. That cache keeps a parsed representation of whatever the filesystem knows appears on disk. The dentry cache is crucial to pretty much any filesystem system call that does not already involve an open file, and IIRC many that also do. Problem is, that same cache in WSL must be subverted because Linux is not the only thing that can mutate the NTFS filesystem - any Windows program could as well. This one fundamentally unfixable problem alone is probably 80% the reason WSL1 IO perf sucked - because the design absolutely required it.

The solutions are: rip out a core piece of kernel functionality (in the process basically taking over ownership and maintenance of ALL code anywhere in the kernel that assumes the existence of said cache), engineer something that is somehow better, and support it in perpetuity, including any semantic mismatches that turn up much later and were never designed for.

Or take the idea of merged ps output, where Windows binaries would show up in the Linux /proc. How would you even begin to implement that without confusing EVERY process management tool ever written that interfaces with /proc? What about /proc/.../environ? On UNIX that has no character set; on Windows it is Unicode.

A trillion problems like this made WSL1 a beautiful nightmare. Glad it was tried and from watching the tickets, that team bled heroically trying to make it all work, but ultimately, I'm also glad it's gone, because the replacement is infinitely easier to engineer, maintain, and use, and that team earns its quarterly bonus much more easily. Everyone wins, except the astronauts, but as experience teaches us they never do.


I agree with the sentiment, but some scenarios have become orders of magnitude more complicated on WSL2, like connecting to a daemon on Windows or vice versa. I understand clear-cut security boundaries and separate network interfaces, but it's extremely hard to get them running smoothly now. Everything was on localhost on WSL1.

Memory usage has also gone bonkers with the VM approach, causing unnecessary overhead for casual users who don't need performance.


> some scenarios have become orders of magnitude more complicated on WSL2 like connecting to a daemon on Windows or vice versa

This is about my only issue with WSL2, but it's a big one. Perhaps worse than this being a problem is that Microsoft don't seem to be providing any solutions - there are various GitHub issues about it, with non-Microsoft randoms being the ones providing workarounds (but these depend on your environment - I haven't found any way to get this working myself!).

WSL1 was very ambitious. AFAIK, the main reason for shelving its approach was filesystem performance. While it was indeed slower than native/VM, I personally never found it troublesome outside of artificial benchmarking. For the vast majority of use cases, WSL1 worked well, with Docker being the only real thing missing, IMO.


Running an npm install for an app of moderate complexity on WSL1 was such a painful experience that it turned me off of using the service for work dev entirely. Now with WSL2 it's much better, and I use it daily to the point where I have done away with my dual boot. (I still keep around a Linux notebook for edge cases.)

Obviously this is only re: fs perf, I can't speak to your other issues which do seem quite challenging. But I can definitely understand why they would have seen fs perf as a priority.


FYI to any WSL2 users: in its migrated configuration the WSL VM can request as much RAM as it needs, but has no means to free it back. There's an option to use .wslconfig to specify a limit.

https://github.com/microsoft/WSL/issues/4166
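
For reference, something like this in a ~/.wslconfig on the Windows side caps the VM (illustrative values; memory, processors and swap are the documented keys):

    [wsl2]
    memory=4GB
    processors=2
    swap=8GB

A "wsl --shutdown" afterwards is needed for the new limits to take effect.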


My problem was that the RAM was never released when I wasn't using it anymore, and it started to kill Windows itself. I had to hard-limit it via the link you provided.


Does "wsl --shutdown" work?


Yes


WSL1 hasn't gone away for those who prefer it. But we're talking about an extra gigabyte. It's still cheaper than a full VM.


It's even cheaper to only use what you need with msys2!

bash inside mintty gives me all I need, with minimal overhead.

Eventually, I'd like to see bash inside Windows Terminal made easily available - complete that with busybox, and you cover 90% of the Linux use cases without having to download anything (a bit like how starting Terminal.app offers most of what you need on macOS)

Add an option to install packages using msys2/pacman for the power users, and I believe most people would not waste time (or disk space, or ram) playing with WSL1 or WSL2 just to run the one thing they may need.


What makes you think the difference in overhead between Cygwin/msys2 and WSL 1 would really be so great? They are both just translation layers, after all.


WSL1 tries to do too much. WSL2 tries to do even more.

msys2 mostly cares about running your textmode software.

For example, someone talked about processes. Here's all that I see in msys2:

    # ps xwau
      PID    PPID    PGID     WINPID   TTY         UID    STIME COMMAND
      480     479     480      18796  pty1      197611 14:13:13 /usr/bin/bash
      479       1     479      16872  ?         197611 14:13:12 /usr/bin/mintty
     1118     480    1118      16496  pty1      197611 17:00:32 /usr/bin/ps

Most of the time, I don't need to access Windows processes - and if I do, data can be exchanged through a file.


WSL1 also doesn't show Windows processes in 'ps', only the WSL processes. For example:

    shawnz@ShawnsPC:/mnt/c/Users/shawn$ ps xwau
    USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root         1  0.0  0.0   8936   184 ?        Ssl  Nov17   0:00 /init
    root         6  0.0  0.0   8936    96 tty1     Ss   Nov17   0:00 /init
    shawnz       7  0.0  0.0  18588  2708 tty1     S    Nov17   0:00 -bash
    shawnz     252  0.0  0.0  18880  1984 tty1     R    18:04   0:00 ps xwau


But in your example, I can already see you have an init system and multiple users - neither of which will be needed in most cases.


The "init" process is just a stub used by WSL, it is not a "real" init system. And Cygwin also allows the creation of multiple users within the Cygwin environment, doesn't it? It's just a matter of populating /etc/passwd. If you want to use WSL1 with a single user then you could just set the default user to root.


The main draw of WSL2 for my team was IO performance. How is git on huge projects under msys2?


It's clear that they are not enthusiastic about delivering new features to WSL1. Consider how WSL2 now exclusively features CUDA support, Wayland support, loop mounts, Docker, etc.


Docker support is precisely the sort of feature that would have required significantly more work to support in WSL1 than WSL2.


Yes, admittedly loop mounts and Docker were an easy add for WSL 2 and would have been significantly more complicated for WSL 1.


Docker can also make use of Windows containers, and that is probably easier when everything is running on top of Hyper-V, as a type 1 hypervisor.


Docker works fine with WSL1. I don't think I can use WSL2 at work, as we're waiting on the next LTS version of Windows.


Docker can't be hosted on WSL1. You can use the Docker client on WSL1 to communicate with a Docker host running on WSL2 or Hyper-V, though (which is how Docker Desktop works)


I think WSL1 is there for transitional purposes and will be thrown away eventually. It was already incomplete: it didn't support forwarding any ICMP packets other than ping, so traceroute didn't work, for example. I don't expect it to stay an option for long.


At work we develop on Windows virtual machines, and WSL1 won't run on a VM. Fortunately WSL2 does, so at least I can enjoy Linux tooling I sorely missed.


Interesting, I would expect the opposite problem since WSL1 doesn't use virtualisation in any way. Do you have more details about the error you experienced?


Unfortunately I tried WSL1, it didn't work, and I left it at that for months. When WSL2 was out, it just worked, and by then I had forgotten what kind of error WSL1 gave me.

I can only recall it was a dialog box, not a crash, and that it had to do with virtualization.


In WSL 2 the nameserver is the Windows host OS. That's how you get the IP of the host if that's what you're talking about.

    export WINHOST=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2; exit;}')
Because the Linux VM in WSL 2 is a "different" computer, mostly, according to Windows, there may be firewall trouble, I suppose.


Windows itself is a different computer, because Hyper-V is a type 1 hypervisor.


> connecting to a daemon in Windows

That doesn't seem impossible to implement. Can't you connect to some port on your gateway to access the host? Even if you currently can't, it still seems pretty easy to implement in the grand scheme of things, compared to trying to make WSL1 work.

I'd imagine it's on some todo list somewhere at Microsoft.


I apologize if I'm missing something here, but if casual users don't care about performance, what's the overhead here?


Example: on WSL1 I could cheaply run Linux CS labwork, including valgrind checks etc., with fewer resources and much snappier than with VirtualBox VMs.

I'm not confident about diving into WSL2.


I've had surprisingly good results running that sort of thing under WSL2. Pure CPU workloads actually work pretty well, and in my experience valgrind has mostly been CPU intensive?


IIRC WSL 2 uses 50% of your memory in Windows, or 8 GB (whichever is smaller), by default.


I have 32 GB. Hardware is cheap for Windows; IMO that's one of its main perks vs Apple.


Is that an upper bound? My Ubuntu 20.04 console sessions on WSL2 consume a couple of tens of MB.


The individual processes inside the VM might not consume much, but the VM itself might reserve some GBs of memory. Try looking at how much memory the `vmmem` process uses in Task Manager.


Also:

https://www.reddit.com/r/bashonubuntuonwindows/comments/d8x7...

"So, I was copying a 8GB file to my Debian home folder using Windows Explorer, via $wsl network. The copy process made my CPU's fan runs like a jetplane, and it stop copying when at 95%, then I opened up Task Manager and saw this.

even after it finish copying, Vmmem process still hold 13GB RAM and not release any. Why? I have to shutdown WSL after copy a big file?

Answer:

Very normal but annoying issue. ... The only solutions are:

a) a kernel change by MS to limit the amount WSL2 can cache
b) disable the caching (nocache)
c) put a limit on the WSL Hyper-V VM

Not sure what or how MS will try to solve this issue... WSL1 did not have this issue, because it shared the main system memory, because it was just a layer around the NT kernel."


Check the other comments on the link? Solution c) is already there?


I can't find that process on my system, nor a significant difference in physical RAM usage before and after starting Ubuntu in WSL2. Digging a little, it seems like memory is dynamically allocated and reclaimed when freed. I'm not defending WSL2, just trying to understand how it is supposed to consume 8 GB of my 16 GB of RAM.

https://devblogs.microsoft.com/commandline/memory-reclaim-in...


Look for a 'vmmemory' entry in your process list; it'll likely be in the hundreds of megs, if not a gig or so.


I don't have that process while running WSL2 on my machine. I have vm-agent and vm-agent-daemon, but I'm using a virtual desktop, so this might or might not be related to WSL2. Both of these processes consume less than 10 MB combined. In my other comment [0] I referenced a link from Microsoft that says RAM is dynamically allocated and reclaimed when freed.

0. https://news.ycombinator.com/item?id=25161769


This hasn't been my experience, but sure, even if memory consumption is higher, for most applications where Windows users are going to use WSL, additional memory is incredibly cheap compared to the dev overhead of trying to boot into a Linux environment and switching back and forth.


Memory and CPU limits are configurable for WSL. You can cap your VM to whatever portion of your machine you want.


Correct, but... this is not WSL. This is a VM.

Let's see this from the opposite point of view: WSL1 is to WSL2 what WINE is to a VM running Windows. Two completely different approaches.

The thing WSL1 would have allowed was real integration between Windows and Linux: imagine running a Linux command, piping its output into a Windows command, and piping that into another Linux command. Imagine sending a signal to a Linux process from a Windows process. Or imagine accessing physical hardware from the Linux subsystem. With WSL2 it is impossible to even use a serial port!

WSL1 had performance problems. That thing about the filesystem performance could have been the occasion to optimize NTFS and even to add native support for other filesystems to the NT kernel. Or why not add other Linux functionality to the NT kernel, to make it available in both the Linux and Windows subsystems.

Of course not everything would have been possible to map onto the NT kernel. That is fine. But for most applications WSL1 really was usable.

Maybe the real problem with WSL was only having called it "Windows Subsystem for Linux". They should really have made WSP: Windows Subsystem for POSIX. Drop the binary-compatibility-with-Linux requirement and just make it an environment in which you can compile code that uses the POSIX API and interfaces with Windows. Not having binary compatibility with Linux is not a big deal - think about it, that is what macOS does and nobody seems to care - and users would have created a package manager, just like there is brew on macOS, to easily install software.


imagine running a Linux command, piping its output into a Windows command, and piping that into another Linux command

But that's not what most of us want to do with WSL. We just want to run the team's React app with its Rube Goldberg npm monstrosity that falls over when you try to run it on Windows.

That use case covers 90% of us. No need to ask for anything more.


This is how our industry goes to crap. Repeatedly pruning things that 90% of the current userbase doesn't use over time gives you a thoroughly useless product. I mean, do it 7 times and you've already lost 50% of your audience. If you target an 80% cutoff, you're down to 20% of the users now.

I personally do want it from WSL. Hell, 90% of my use of WSL is like this - I run Emacs on my work machine under WSL, where it feels at home. I use it to drive Windows-side cmake and the Visual Studio compiler, piping output back to Emacs. Because I can, it works, and it's a much nicer working experience than any app could give me.


Not having binary compatibility with Linux is not a big deal

But it is, because there is actually lots of proprietary software available on Linux as binary-only --- much of it very specialised and expensive --- and Microsoft wants to be able to run that too.


Windows Subsystem for POSIX was already tried. It didn't take off.


It was a hack to get around a US regulation about government procurement requiring POSIX support. So I think "tried" is a very strong word; it was never intended to be remotely useful.


Isn't the only reason that it didn't take off because it was so incomplete?


Incomplete and nobody really knew it existed.


> That thing about the filesystem performance could have been the occasion to optimize NTFS and even to add to the NT kernel the support for other filesystem, natively.

I doubt that would be feasible given how important backwards compatibility is for Microsoft's business. WSL1 seemed like a huge effort on its own, so changing one of the most critical OS features would be a little ambitious.


> imagine running a Linux command, piping its output into a Windows command, and piping that into another Linux command

?? we do this all day long on WSL2.


> imagine running a Linux command, piping its output into a Windows command, and piping that into another Linux command

Actually for that special case WSL2 works. I'm currently experimenting with `sshd -i` on WSL2; not sure why it doesn't work on Ubuntu 20.04 but does on OpenSUSE 15.2…


Your point about the dentry cache seems like a non-sequitur. The cache isn't the bottleneck; the bottleneck is that NTFS is a crappy filesystem.

Yes, I've read this comment: https://github.com/microsoft/WSL/issues/873#issuecomment-425...

The argument about Windows not having a single, central dentry cache doesn't hold water. For one thing, there's no reason to think that a two-level cache would significantly decrease performance. More importantly, the fact that on Windows most VFS work occurs in the filesystem driver itself only proves that Windows could have implemented an EXT4 driver with better dentry and other semantics and permitted WSL1 environments to keep most of their files on an EXT4 volume, which is what happens with WSL2, anyhow.

WSL1 could have been better in every way. It would have required them to track a moving target, but they already accomplished semantic parity with seemingly minimal resources. Improving performance was certainly possible, and keeping up would have required significantly less work on its own. At the end of the day, I think the reason WSL1 was canned is obvious: the decision to ship WSL1 was probably accidental. Why accidental? Because anyone with the slightest business acumen would realize that a WSL1 environment with feature and performance parity to open source Linux would mean there'd be little reason to target Windows' native environment at all for new software development. And while Microsoft has done relatively well for itself with Azure and other ventures, its revenue still principally derives from Windows and Office, and the last thing Microsoft needs is to accelerate movement away from those products.

WSL2 means that Microsoft can keep its Linux environment a second-class citizen without being blamed for it, while still providing all the convenience necessary for doing Linux server application development locally. Integration with the native Windows environment will be handicapped by design and excused as an insurmountable consequence of a VM architecture. For example, with WSL1 and some obvious and straight-forward improvements it would have been trivial to run a Linux-built Electron app with identical performance, responsiveness, and behavior (not to mention look & feel, given the web-like UI). With WSL2 Microsoft will likely only ever provide GUI integration over RDP, which will never have the same responsiveness (not because it's theoretically impossible, but because nobody would ever demand it, and in any event it would require tremendously more effort than the comparable WSL1 approach).


> the bottleneck is that NTFS is a crappy filesystem.

Care to explain why you think NTFS is crap?

From my experience it's usually bad assumptions about files under Windows: every application or library tries to stick to the POSIX interface when dealing with files (open, read/write, close), which tends to block for longer periods on Windows than its Linux counterparts, and that results in significant performance loss.

Linux-first software will always outperform the Windows implementation, and Windows-first software will outperform the Linux implementation, unless you provide separate code paths to properly handle the underlying OS architecture and assumptions.

On Windows, closing a file handle is an extremely costly operation due to AV checks and file content indexing [1].

[1] https://www.youtube.com/watch?v=qbKGw8MQ0i8


> Care to explain why do you think NTFS is crap?

I don't know if it's crap but it's much much slower than EXT4.

I remember reading a comment here that Windows in a VM on a Linux host was faster than bare metal.

Probably not true, but I decided to run a test. I have a .NET Core app that inserts data into a SQLite DB (the resulting DB is about 300 GB).

So I benchmarked this app on Linux (it was previously running on Windows) and IIRC it ran about 4 times faster.


An interesting observation I made was that even Hello World is about 100x faster on Linux than on Windows.

In my Linux VM I had to use the time command to even get an idea of how long it took, as it seemed to return immediately. I think 5-15 ms, but it was already a while ago.

On the Windows machine where the Linux VM ran, it took several hundred ms.


Which VM software did you use under Windows?


Probably VirtualBox, could have been Hyper-V.


So what's the 'Windows way' then? Keeping a file open forever and letting it lock out other programs from using that file? I'm starting to get an idea of why Windows gets on my nerves so much with file locks...


Here's the thing: UNIX is the strange dude with advisory file locks; every other multiuser OS does the right thing to avoid file corruption.


> so what's the 'Windows way' then?

Using either memory-mapped files or overlapped IO (IOCP).

It's tricky to use when you want to write content, since you must preallocate the file before you start writing. Appending to a file just doesn't work asynchronously under the NT kernel, since WriteFile blocks even if you use overlapped IO.

Devs just need a different mentality when it comes to Windows programming compared to Linux. Because everything under the NT kernel operates asynchronously, you have to adapt your code to that concept. Meanwhile, under Linux you had no real alternative for nearly 30 years (until io_uring and friends), so if you wanted to be portable with minimal OS-specific code you had to implement things synchronously or write two separate code paths, one per OS.

Guess which one is used in practice.
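
To make the overlapped IO part concrete, here is a minimal sketch of my own (not production code, error handling trimmed): open with FILE_FLAG_OVERLAPPED, start the read, do other work, then collect the result.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_OVERLAPPED asks for asynchronous I/O on this handle. */
        HANDLE h = CreateFileW(L"example.dat", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        char buf[4096];
        OVERLAPPED ov = {0};
        ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);  /* signalled on completion */

        /* The read may complete immediately or come back with ERROR_IO_PENDING. */
        if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            return 1;

        /* ...do other work here, then wait for the result (TRUE = block until done). */
        DWORD got = 0;
        if (GetOverlappedResult(h, &ov, &got, TRUE))
            printf("read %lu bytes\n", (unsigned long)got);

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }

IOCP proper layers a completion port and a worker-thread pool on top of the same idea.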


You can just use FILE_SHARE_WRITE if you don't want to lock out other programs...
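
For anyone not fluent in Win32: the sharing mode is chosen when the handle is opened, so a hedged sketch of mine would be something like

    #include <windows.h>

    /* Open for writing while still letting other processes read and write the file. */
    HANDLE open_shared(void)
    {
        return CreateFileW(L"shared.log", GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    }

i.e. every opener has to opt in to sharing, which is exactly why it bites so often.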


If I know one thing about files it's that any fix beginning with "You can just..." is probably wrong.

https://danluu.com/file-consistency/


I didn't recommend you do this; I just said it's there if you really want to.


This is not due to NTFS. (And NTFS is not crap either.)

Try it on another file system if you don't believe me.


NTFS on Windows was indeed always significantly slower when doing a lot of file operations. It was noticeable even long before WSL existed: NTFS and the whole infrastructure connected to it have, by design, a bigger maintenance cost for every open of a file, for example. That's undeniable. One can say that NTFS could maybe have been implemented faster on some other system (with fewer hooks etc.), but NTFS on Windows is provably slower for the file operations people do every day on Linux; that's a simple fact. Like working with big source repositories containing a lot of files (git, node.js). The people who decided to make WSL2 faster for Linux programs achieved the speedup by going full VM. I would also have preferred that they work on NTFS on Windows long enough for the file operations to become faster for everybody, but it didn't happen.

That's one of the lost opportunities that the original article laments about.

Edit: a reply to dataflow's "try your operations on another file system" answer: I don't have to, I remember: there was a time when people used FAT-formatted media much more than today -- and NTFS was slower than FAT too, under the same Windows, on the same hardware, without installing anything from third parties. It may be that on Windows 10 FAT has become comparably slow, but in earlier times, on slower machines, FAT was visibly faster than NTFS. But NTFS was more robust to failures, and I preferred using NTFS for anything that was not temporary. I admit that the whole Windows infrastructure around what we consider purely "a filesystem" is also a serious part of the problem, by design. And as I wrote, it's a lost opportunity that the bottlenecks weren't identified and the changes made to the benefit of everything on some future Windows.


Again: try your operations on another file system, then tell me it's NTFS that's slow.

It's not NTFS that's slow at opening files. It's the I/O subsystem. You'll see the slowness with other file systems too.


This is the correct answer. The myth that NTFS is slow should go away already.

You can already run Windows on top of Btrfs if you want, but it'll be painfully slow compared to Linux [1].

https://twitter.com/NTDEV_/status/1327358814891470850 https://github.com/maharmstone/quibble


Note: I've explicitly said: "NTFS on Windows" and "NTFS and the whole infrastructure connected to it". Also: "NTFS was slower than FAT" was true for years. I have no experience with BTRFS, but that doesn't prove anything without knowing more details (the overhead introduced to make it work). So... it's both NTFS and the Windows "subsystems."


> So... it's both NTFS and the Windows "subsystems."

OK, probably both on a default Windows installation, due to legacy and backward compatibility with DOS names [1].

https://docs.microsoft.com/en-us/windows-server/administrati...

I remember hitting this issue when creating a few million files inside one folder; it was extremely slow because of 8dot3 name creation. When this legacy feature is enabled, it has to go through each filename to generate a short name, O(n).

After disabling 8dot3 there were no performance issues anymore.

> I have no experience with BTRFS, but that doesn't prove anything without knowing more details (the overhead introduced to make it work)

I was trying to point out the following: ext4, Btrfs, ZFS, or any other UNIX filesystem will be slow under Windows. NTFS or any Windows-first filesystem will be slow on Linux. There are just too many differences in OS architecture between NT and Linux.


> The argument about Windows not having a single, central dentry cache doesn't hold water

This was more about cache coherency, and it's not an argument, it's a trait shared by every other Linux-over-X implementation (UML, coLinux). It is fundamental to solving a problem users want - seamless integration, i.e. no opaque ext4 blob. Why doubt the reasons given by Microsoft, when they match observations not just for WSL but every other system in the same class?


Good points; in addition, I can only imagine the headache of trying to support the latest eBPF, io_uring, WireGuard, and other advanced kernel features in WSL1. I imagine a lot of newer features would return ENOTSUP from the kernel, so WSL1 basically becomes a weird fork of an old Linux kernel. (Weird because it lacks a large community, and old because the ENOTSUPs downgrade what's available.)

In WSL2 everything should work (depending on the CONFIG options and how much Microsoft has changed).


NTFS performance leaves much to be desired on Windows too; it really hurts for workloads dealing with lots of small files, such as programming.


I was wondering if Wine on Ubuntu is faster than Windows native. I found this OSBench near the end of this page (look for "Test:Create Files"):

https://www.phoronix.com/scan.php?page=article&item=wine-ubu...

Ubuntu native is much faster. It's odd that Wine is so slow... I wonder why?


Linux and Unix in general seem to love huge writeback caching, which is great for speed but horrible for consistency and reliability against power failure and such; on the other hand, Windows flushes the caches more often, providing greater reliability but without as much speed.

That's been my experience, in any case; doing lots of small file operations barely causes any disk activity in Linux, but far more in Windows. Moreover, abruptly cutting power in the middle of that would likely result in far more writes lost on Linux than on Windows.


Linux/Unix just trusts that if you want something persisted with certainty you'll do an fsync; if you do, it will absolutely guarantee you're not losing that data. It will also absolutely make sure that your filesystem doesn't get corrupted by a power loss, but if you didn't fsync your write you had no business believing it was persisted.

Doing that IMHO matches real world use much better. If I do a compile and get a power outage I don't care if some object files get lost. If I do an INSERT in a DB I do care a lot but the DB knows that and will fsync before telling me it succeeded. So making sync explicit gives you both great performance and flexibility.
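
To make the contract concrete, a hedged sketch of "durable save" on the Linux side (my own illustration, not from the parent):

    #include <fcntl.h>
    #include <unistd.h>

    int save_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return -1;
        if (write(fd, buf, len) != (ssize_t)len) { close(fd); return -1; }
        if (fsync(fd) != 0) { close(fd); return -1; }  /* flush to stable storage */
        return close(fd);
    }

(Strictly speaking, a brand-new file also wants an fsync on its directory so the name itself survives the power cut.)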


Doing that IMHO matches real world use much better. If I do a compile and get a power outage I don't care if some object files get lost.

The real world use is someone working on a document, using Save periodically, and expecting whatever was last saved to survive the next outage, not some arbitrarily old version.

In other words, Windows implicitly fsync's often.


What are the trillion other problems? Honest question, since I hear this a lot, but in reality I only ever saw people complain about the speed of the filesystem. So that would mean just one (hard) thing to fix.


The network stack integration was less than transparent too. Random places all over the system where e.g. a magical ioctl() just didn't work properly. Who really wants to spend their life reimplementing every last baroque detail of that stuff?



The actual advantage of working on something like WSL1 is that it would ensure that the NT kernel is as capable as the Linux kernel (Linux is of course the best designed and implemented among the non-realtime, not provably correct and not secure kernels).

For instance, the complaints about I/O performance are because the NT kernel has a worse implementation, so they should have improved it for both Win32 and Linux apps instead of giving up.

(your reasoning about the dentry cache makes no sense though since such a cache would be implemented by the NT kernel and certainly not by WSL1, so there's no difference between Linux and Windows programs)


> (Linux is of course the best designed and implemented among the non-realtime, not provably correct and not secure kernels)

That is a coffee-spewing statement. Linux generally has a reputation of approaching new features by coming late to the party, seeing the mistakes everyone else made in the feature, and then implementing it somehow even more poorly than everybody else.

An example of a particularly bad Linux subsystem is ptrace, which interacts poorly with threads, relies on signals (itself a bit of a mess of a feature), and is notorious for all the races you have to do to actually properly set up a debugger. The Windows debugger API is much simpler, and even includes some features I'd love to have in Linux instead (hello, CreateRemoteThread!). I was going to write to the parent comment that implementing ptrace to let you run gdb (or, worse, rr!) in WSL1 would have been something that every NT kernel programmer would have "NOPE"d at because the impedance mismatch is just that high.


I mean, it's entirely possible to implement ptrace if you spend enough time doing it; it will just be complicated and the performance will likely not be that great. (But yes, it's a pretty bad API.)


As I understand it, it's not really possible to "improve" Win32 I/O performance -- both the fundamental I/O APIs and the NTFS on-disk storage format make high performance infeasible.

Not without either abandoning all extant FS drivers, or abandoning NTFS compatibility, anyway.

Edit: Here are the WSL 1.x team members' original comments on this subject. It sounds like a deeply intractable problem.

https://github.com/microsoft/WSL/issues/873#issuecomment-424...

https://github.com/microsoft/WSL/issues/873#issuecomment-425...

The second link explains why a dentry cache just isn't feasible.

The short version is that the NT IO APIs seem to have been very, very ill-considered. It's just not possible to make it fast. Even win32 operations are extremely slow, so win32 applications go out of their way to avoid doing any file I/O. Linux applications were not written with those constraints in mind.


Amusingly enough, I just watched a talk about this topic earlier today:

https://www.youtube.com/watch?v=qbKGw8MQ0i8


> Linux is of course the best designed and implemented among the non-realtime, not provably correct and not secure kernels

Citation needed.


> Linux is of course the best designed and implemented among the non-realtime, not provably correct and not secure kernels

Another interpretation is that Linux has been trying to re-implement Unix for 30 years, when the point of Unix in 1970 was a simple portable operating system that could be coded in like half a year.


I find IO on Windows is fine; the problem with WSL1 was that the standard Windows model doesn't line up well with Linux (fairly fundamental things, like the fact that in Linux you can delete open files, which you can't do in Windows).

You couldn't switch Windows to a more Linux-like model; it would break all existing Windows programs.


You can delete open files on Windows, but you need the right option (FILE_SHARE_DELETE).
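
A rough sketch of what that looks like (my own illustration): everyone holding the file open has to have allowed FILE_SHARE_DELETE, and the file is only really removed once the last handle closes.

    #include <windows.h>

    int main(void)
    {
        /* The sharing mode must include FILE_SHARE_DELETE for the later delete to succeed. */
        HANDLE h = CreateFileW(L"scratch.tmp", GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        /* Succeeds while h is still open; the file actually goes away when h closes. */
        DeleteFileW(L"scratch.tmp");

        CloseHandle(h);
        return 0;
    }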


This method has been talked about since the Spectre/Meltdown days. It has often taken Google much longer to release patches they've been sitting on; e.g., many of the original container patches were like this.


Why didn’t Intel do it then?


Do what exactly? This is essentially a performance optimization for folk bagholding oceans of vulnerable hardware


It should work. Who would trust Intel in this?


Chrome is not open source; it's a derived proprietary superset of the open-source Chromium. They are not the same thing, in very material ways.


Only a few things are different in Chrome. Notably, it has the ability to send crash reports and metrics, plays proprietary audio codecs, adds some Google API keys (e.g. safe browsing, speech, web store), and includes the Widevine component to enable DRM. Nothing is different in the core code of the browser.

https://chromium.googlesource.com/chromium/src/+/master/docs...


> in very material ways

No pun intended?


Kubernetes does a whole lot more than just restart jobs when a node fails; in fact, of all its features this is the one most people probably see in action the least.

Kubernetes centralizes all the traditional technical nonsense related to providing a robust environment to deploy applications to. I want Kubernetes even in a single node scenario because I want Kubernetes-like packaging, deployment and network services for any app I work on, as they are a net simplification of what was previously an ad-hoc world of init scripts, language-specific supervisors, logging, monitoring, etc etc., and rearchitecting an app deployment from scratch simply because its resource requirements increased.

If people want to continue trying to scale it down further, where is the harm in this? There are plenty of legitimate cases where it makes sense. There's no real limit to that work either, it's conceivable with the right implementation improvements (in k8s and the container runtime), it might eventually be possible to reuse the same deployment model even for extremely small devices.


A lot of people seem to miss that one of the primary points of using K8S is that it's a PaaS. Instead of relying upon cloud-provider-based services, one could use K8S constructs and make it easier to port applications and services across hosting providers. The lock-in factor is a point of nervousness not because people consider wanting to move from AWS, but because it can be hard to support multiple providers concurrently for new features if every developer immediately reaches for the provider's message queue, e-mail service, batch processing pipeline, etc. It takes a lot of effort to learn these various services' nuances, and that's not necessarily useful for everyone.

And while K8S maintenance is fairly involved, trying to do deployments and system administration in its absence (read: production grade applications packaged prior to containers) is expensive and extremely error-prone as well.

As much as it's a pain to deal with a mangled K8S setup, it absolutely beats 20+ idiosyncratic applications wired their own particular way with oddball service hacks on a snowflake server. This is a huge business liability that is at least confined to individual containers in a container-based ecosystem of applications.


Again, what is it for? If you have a single server, what is K8S going to give you over, say, a single static binary or a Docker container?

K8S on a single node does nothing for you network-wise: you have no overlay network (because there is nothing to overlay), you have no ingress or egress (there is only one node, so no matter what you 'configure', your ingress and egress will be that same node), and it's unlikely to have enough resources for something like a full application deployment with some big Helm chart.

While I agree that you can (ab)use K8S as a runtime and packaging format, all of those big benefits are removed when you are running it on 1 node, except perhaps the fact that you can talk to the apiserver and define your jobs/tasks/pods the same way. But even then you'd only do that locally, because 'testing' in a dev or staging env that doesn't match prod is going to give you non-representative results.


Ever tried to host multiple apps on a single machine? Oh look, a custom Nginx config only one person understands. Oh look, some hacked up letsencrypt config only one person understands, etc etc.

> K8S on a single node does nothing for you network-wise

- Container IP auto-assignment

- Container security policy

- Container DNS management

- Ingress management ("custom Nginx config")

- "Environment that feels like a large network and doesn't change if moved to a large network"

What part of this is difficult to understand?


> Ever tried to host multiple apps on a single machine?

Yep, works fine. Has been for decades, even before containers existed. Guess what the sites-available and sites-enabled directories for Apache and nginx are for.

> Oh look, a custom Nginx config only one person understands.

Just because you put it in a container doesn't mean it's no longer custom or that everyone suddenly understands it.

> Oh look, some hacked up letsencrypt config only one person understands, etc etc.

Plenty of people put their nasty hacks in containers and pod definitions and still nobody (or just one person) understands it. Packaging changes none of this; a dirty pod, container, or VM image is still dirty.

> K8S on a single node does nothing for you network-wise

> - Container IP auto-assignment

So does Docker, or even an uncontainerized bridge interface.

> - Container security policy

So does Docker, or a plain cgroup

> - Container DNS management

Yep, that it does. But when you only have 1 node, what is the point?

> - Ingress management ("custom Nginx config")

Great, but besides moving complexity from your app to the infra, it doesn't help at all on a single node. It actually gets worse: the node goes down, everything goes down (app, fallback, load balancing, routing, security).

> - "Environment that feels like a large network and doesn't change if moved to a large network"

So unless you are doing some local development that you later on push to dev/prod, we're talking about feelings. Not much objective to say about that except that it exists.

> What part of this is difficult to understand?

All of it. Shoving complexity and responsibility around doesn't reduce it, and having people make bad software isn't less bad because of the runtime it runs on.

Kubernetes in prod is great, and the envs that go with it (like development and staging), sure. But when you run something in prod, and you need availability, scalability and a host of standardised facilities, then a single node or some magic 'it works by default' config is very far removed from real-world production.


We could argue this ad infinitum; K8s of course doesn't remove all proprietary elements from a solution, but it is a huge step up. Speaking as an ex-Googler, it took over 10 years, but I'm so happy the rest of the world finally has a standard like this. The world is a better place for it, even though I at one point had to unlearn all my traditional sysadmin habits and immerse myself in an environment practising it successfully to finally understand.

Your original question was what is the point. These are the points. As for why not Docker: k8s network effects and the strategy of its sponsors mean Docker is on a lifeline, and everyone knows that.


Of course Docker is doomed ;-) We have CRI-O, gVisor etc. showing that it works fine without it as well. Someone implemented an OCI-compatible image runner in bash using standard cgroups; with a bit of luck we'll end up calling containers 'containers' and images 'images' instead of using Docker's brand name.

Also, I'm not saying that k8s is bad, or that using k8s as a practical API definition of the platform to target when packaging and configuring applications is bad; I was aiming at the 'boo hoo k8s is too hard' tagline every "simple" version seems to hold on to.

One could also install standard K8S and remove the taint and run pods locally, same result.


The emphasis here being on the word "should". I wouldn't buy just yet, not until some third party has published a sustained load/heat test. Apple have sucked at this for quite some years now; it's folly to touch a MacBook for anything remotely compute-intensive.


Only when you compare apples to oranges, but this is an apples-to-apples comparison. Same underlying silicon, almost identical configuration, different chassis. It's impossible for the Pro not to outperform the Air when the only substantial difference for performance is TDP, and we have zero reason to believe that a heat pipe + fan would be outperformed by passive cooling, of all things.


Agreed, the Pro's cooling should be better. However, I think it's worth waiting and seeing. There might not be that big of a difference in the end (e.g. both are uncharacteristically well designed and run fine; or conversely both are so badly thermally throttled that no serious intensive tasks can be performed) in which case other factors could end up being more significant.


Apple has always sucked when it came to cooling. The Apple /// and Lisa were plagued by “IC creep” where, due to heat expansion from inadequate cooling, the ICs would wiggle out of their sockets ever so slowly.


The use case is basically benchmarks, and maybe faster cold boots. About the only place the typical heavy computer user is likely to notice a major difference between SATA and NVMe is probably running something like "find /", or some kind of full-disk search scenario.

Even in the latter, full-text indexing, which every platform has had for years now, makes it much less likely that the full directory tree will even get walked, and differences even less likely to be noticed. As a side note, every desktop platform's full-text search seems to suffer software performance problems that are largely independent of the underlying disk.

Even in the full-tree enumeration case, since Spectre/Meltdown mitigations landed, system call overhead is so high now that even with a lightning fast disk, a large chunk of total time taken to walk the directory tree is lost basically twiddling the CPU mode securely. You can definitely still see the difference between SATA and NVMe, but you can also definitely measure the amount of time during the NVMe run that is spent in software -- incrementally faster NVMe will have quickly diminishing returns.

"What about databases!" This was my original interest in SSDs to begin with. It turns out, despite being a data monkey who loves large databases, since 2013 any time I've worked with a giant dataset like this, it is always in the form of large scans (usually from something like a CSV or XML file), where SSDs don't really have a mind-blowing advantage over magnetic (but of course they are still 5-10x faster a seq io, its just that data parsing and processing is typically the bottleneck now).


Small but important nitpick: since at least 2005 there is no "on the NYSE"; the price you're likely seeing is the best price consolidated across all national market system venues, and the primary listing exchange is mostly just an administrative entity these days. Actually, some of the largest premarket trading occurs on NASDAQ and a CBOE-owned exchange called BATS.


The gallery app looks to be authored using it:

- It needed to show a loading spinner on a 250 Mbit connection

- It hung the browser while 'booting' the page for a solid 3-5 seconds

- It downloaded 4.39 MB in 50 requests

- Opening web inspector in Firefox while reloading the page was sufficient to cause the boot process to hang indefinitely

- Page looks pretty, but at this point it barely matters

We could conclude either that the project's attention to important details is low, producing this experience, in which case what else might we discover once committed, or alternatively we could conclude that it is high, in which case, this is the best possible experience for any Uno app.

Instant pass


I read your comment and thought it couldn't be that bad and those metrics were not necessarily representative. Then I tried it in FF and Chrome on a 4770k and a GTX 1060. I simply tried switching on the vertical tabs. I got 0.2-0.5 seconds of latency from click to feedback on the screen (and longer for the page to fully load!).

Sorry to any dev at Uno, but that's terrible.


That's for webassembly. None of that matters for most applications on the other platforms it targets.

And even for webassembly, 4.39 MB is not prohibitive for some use cases. Especially with caching. I bet the average Medium blog loads more than that.

I wouldn't be so quick on dismissing a solution.


Marketed as a single-codebase platform where the most ubiquitous deployment target has a user experience that tells your customers you hate them.

4.39 MB in 50 requests is a showstopper on any mobile network, especially when alternative solutions do not have that problem. It is fair to assume some first-time experiences will involve 7.5 seconds or more of additional latency on 3G networks, double that in poor-signal areas.

It's still an instant pass


Implying the solution requires 50 requests by looking at the gallery is rather uncharitable.

Half of the requests are unrelated js/css/images that could be combined into smaller and fewer requests.

As for the other half of files, most are .clr.

I'm not privy to the details of this solution, but most webassembly demos I've seen combined all files into one single wasm file.


Effectively this is arguing for the project's attention to detail being low; we covered this already.


I'm in complete agreement that 4.39MB in any number of requests is too much, but I've got bad news for you about the average webpage: It's that or worse, and you pay a similar cost every time you navigate to a new page on the same site. At least for a webapp you pay that cost once and then can navigate freely within it.

Devrel people from the Chrome team have been beating the drum on this for ages - large web content excludes huge chunks of the world from being able to use stuff, no matter what toolchain it's built with. All these huge frameworks and ad networks and in-page video ads eat up so much bandwidth...


The average web page can be as nasty as it likes, but if I can deliver apps that _feel_ awesome and optionally can deliver metrics that provide a commercial basis for that awesomeness to whoever is paying me, I don't really care much about the average web page at all. Insert some truism here about using the incompetence of others to justify one's own slovenly behaviour, and some other truism about standing out from the crowd by applying common sense.

"I built an average web application and it attracted an average number of users, it more or less worked if you visited it from mobile" would look fucking awful on anyone's resume.


They published their demo calculator app on the Play Store: https://play.google.com/store/apps/details?id=uno.platform.c...

All the top reviews are talking about how slow it is. A calculator app.


I bet the average medium blog loads more than that.

That others are just as wasteful is immaterial; the fact that it's multiplied by every single user makes it downright hostile to everyone except those who are profiting from bandwidth use.

Every single "cross-platform solution" I've seen is basically a sub-optimal experience on every platform. Personally, I'd rather have fewer native applications than the wave of quantity-over-quality not-quite-right-anywhere that stuff like this tends to encourage.


Hoping someone snarfed a git clone; the revision history is waaay more interesting than the code itself.


It seems like it was pushed as a single commit: https://web.archive.org/web/20201104050026if_/https://github...

