
> it's beyond me why would anyone willingly use an Apple product

Final users don't see this mess.


Hiya! Former mac native developer here, moving to a new company. My new corp gave me the option of a thinkpad running windows or a Mac, and I chose the mac just so I could have a sane terminal experience, UNIX-like tools, etc.

I would vastly prefer to use Linux, but unfortunately that's just not an option for a company-issued machine at this juncture--and in my experience it's easier to spin up a VM on a Mac than a Windows box.

Being a Mac native dev, I'm very acutely aware of the pain other devs go through with Apple and their APIs, but unfortunately Macs remain a better platform to write code on in my personal experience.


My experience is that Windows Subsystem for Linux has been amazing on Windows and just keeps getting better. I've also never noticed any difference in spinning up VMs.

But anyway, I get sticking with familiar tools, but I just disagree that macOS is a better or even "sane" terminal platform. The ancient GNU tools macOS ships (bash 3.2, for one) and the BSD-style "but POSIX!" pedantry drive me up the wall.


In my personal experience, git crashes on about 1 in 3 commands I run on Windows. Haskell takes ~10 times longer to compile on a ~4 year old Windows desktop than on a ~6 year old Mac laptop. And I usually spend hours trying to get simple tools installed, vs. minutes on Macs.


> git crashes about 1 in 3 commands I run on windows.

I've been using git on Windows for over 2 years with no issues at all; it sounds like you have a buggy version?


Definitely a local issue between the chair and the computer.

The whole developer world would be up in arms if Git actually crashed in 1 out of 30 commands on Windows, _whatever the configuration: git bash, PowerShell, or WSL 1/2_. There would be a yearly top Hacker News post titled "one year and still not fixed".

Heck, Git would never have reached its dominant position if it were such a buggy mess.


Are you using windows git, or linux git inside WSL? The latter seems a lot more reliable in my experience.


I have not used a current version of WSL, but it was terrible when I tried it. I could not find files saved in the WSL terminal in Explorer (I understand that is a limitation). There was so much unknown going on in the "integration" that I wished I had just used a VM and taken the perf hit instead of digging around to figure out where Windows was mounting the FS and sorting out permissions.

I have no desire to look at WSL ever again.

I experienced the same thing with F# on Mac a year or so ago: the dotnet CLI tool was effectively broken and the official onboarding docs didn't work.

I tried revisiting it when they announced F# 5 late last year, but same thing: the docs don't work / things are broken on Mac. It turned me off F# development and left me with a bad impression of anything Microsoft releases.


WSL2 has fewer "magic unknowns". WSL1 used the NT Kernel emulating the Linux kernel so there was a lot of (seeming) magic in that interop, because it relied on low level NT details that don't look like "normal" Windows to Windows.

The files, for instance, were stored in NTFS but with Linux metadata in alternate data streams, akin to what macOS used to call resource forks, except alternate data streams are far rarer in Windows and most native Windows apps trample over them. Microsoft didn't advertise where to find those files specifically because they didn't want people using Windows apps on them and breaking the Linux metadata. Instead, Microsoft heavily encouraged using /mnt/{drive letter}/normal/windows/path style paths (like /mnt/c/users/me/Documents) and keeping files you worked on in both environments on the plain old NTFS side, without the alternate-data-stream weirdness (those /mnt drives didn't use the Linux metadata streams).

Eventually, Microsoft added a Plan9-based file server to WSL1 serving on the \\wsl$ system path for browsing those files and some smarts around it. (Launching a Windows EXE from a WSL terminal would convert the Linux path to the \\wsl$ path for instance.)

WSL2, on the other hand, is an extremely lightweight (Hyper-V based) VM, uses a real Linux kernel, and generally uses VM tech. Files are stored in a standard VHD, which can be explored with plenty of VM tools (including Windows File Explorer). They are still accessible in File Explorer through the \\wsl$ service. (Though in that case Windows can mount them using standard VHD mounting. The direction of the Plan9-based file server winds up reversed from WSL1 in that it is used instead by the VM to access host machine files through the VM barrier.)
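To make the file-access directions concrete, here is a rough sketch of how the same files are reachable from each side under WSL2 (the distro name `Ubuntu` is an assumption; substitute your own):

```shell
# Inside WSL2: Windows drives are exposed under /mnt via the 9P bridge
ls /mnt/c/Users/

# Inside WSL2: open Windows File Explorer at the current Linux directory
# (the Linux path is translated to a \\wsl$ path automatically)
explorer.exe .

# From Windows (cmd/PowerShell): browse the Linux filesystem through
# the \\wsl$ share; "Ubuntu" is a placeholder distro name
dir \\wsl$\Ubuntu\home
```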

As for F#, F# itself is an open source project with possibly a lot more of a "community project" mentality than it is an "official" Microsoft release. I don't know if that changes your opinion, but it is one of the projects where Microsoft has best embraced open source. (Including some of the potential downsides of open source, like needing Github Issues filed on broken documentation or it will go unnoticed/unfixed.)


You can literally just run 'explorer.exe .' in a WSL1 shell to get an Explorer window in whatever directory you are currently in. The WSL files are not hidden from Windows, and can be edited from there just fine.

F# (and most of .NET Core) is also a mess on Linux, so no surprises here.


> Could not find files saved in the WSL terminal in Explorer (I understand that is a limitation). There was so much unknown going on in the "integration" that I wished I had just used a VM and taken the perf hit instead of digging around to figure out where Windows was mounting the FS and sorting out permissions.

You can explore the files stored inside the WSL partition by going to \\wsl$ in the file manager.

You can now also mount an external drive formatted as ext4 directly.
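The ext4 mounting mentioned above goes through `wsl --mount` on recent builds. A sketch, with `\\.\PHYSICALDRIVE2` and the partition number as placeholders for your own disk:

```shell
# From an elevated PowerShell: attach a physical disk to the WSL2 VM
wsl --mount \\.\PHYSICALDRIVE2 --partition 1 --type ext4

# The filesystem then shows up inside WSL under /mnt/wsl/
# Detach it when done:
wsl --unmount \\.\PHYSICALDRIVE2
```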


In very recent versions of Windows 10 WSL will even add directly to File Explorer a shortcut in the usual Locations pane (left-hand panel with quick folders/PC/whatnot) to \\wsl$ with a Tux icon. It's amusing seeing Tux every time you open File Explorer, and possibly even more amusing that Microsoft is installing that shortcut themselves.


Compilation is the bane of my existence on Windows. I work on a few large cross-platform projects (using CMake) at work, and the builds on Linux are a night-and-day difference: my 16-core Linux workstation does the build in about 80 seconds, and the same machine booted into Windows takes 15 minutes.


The developer experience on my Mac is IMO vastly superior to that on my Windows machine with WSL, because of the complication of configuring IntelliJ products to use environments in WSL. When I use VSCode, the experience is about the same on both machines.


Fair enough! I have run into an annoying number of issues where the flags for `cp` varied between macOS and other *nix systems, which was painful to debug.
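For anyone curious, a small illustration of the kind of divergence meant here, assuming GNU coreutils on Linux versus the BSD userland macOS ships:

```shell
# Work in a scratch directory with a sample file
cd "$(mktemp -d)"
printf 'hello\n' > file.txt

# GNU cp (Linux) accepts long options; BSD cp (macOS) does not:
#   cp --verbose file.txt copy.txt   # fine on Linux, "illegal option" on macOS
# The portable spelling sticks to the short flags:
cp -v file.txt copy.txt

# Another classic divergence: in-place sed takes a mandatory
# backup-suffix argument on BSD but not on GNU:
#   GNU: sed -i 's/hello/hi/' copy.txt
#   BSD: sed -i '' 's/hello/hi/' copy.txt
```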


You should give Windows Subsystem for Linux a try. It's what I'd choose in your scenario.

https://docs.microsoft.com/en-us/windows/dev-environment/ove...


I've been heavily invested in Linux for years already: an i3+terminal+firefox+emacs guy.

I forced myself to work on Windows 10 Enterprise for a week and came away feeling kind of OK about it. It's a bit slower than Linux, with a few too many moving parts by default, and I definitely prefer env vars and config files over the registry and Control Panel. But I didn't use WSL or WSL2. I just had nushell and Microsoft's Terminal app, with winget and all that. With some keyboard shortcuts and multiple desktops enabled, writing Rust software with emacs, firefox, and a good terminal was not bad at all. I wouldn't mind working more in there, but in the end I find Arch Linux to be the end-game OS for me, so I'm keeping the installation just for when I need to debug some Windows issues.


I actually have been trying this recently! I've been using VS Code via SSH into a WSL2 instance running on my Windows box, and it's been going surprisingly well... but that was after a moderate amount of effort to get WSL2 working to begin with, partially complicated by my past efforts to get WSL1 to do something similar. I'm also not 100% confident NewCorp's IT would be kosher with me spooling that up. I could be wrong, but at the time it seemed easier to go with the lower number of abstractions needed for an acceptable experience via Mac.

Though who knows! Maybe I'll change my mind and get a new machine :)


> that was after a moderate amount of effort to get WSL2 working to begin with, which was partially complicated by my past efforts of getting WSL1 to do similar behavior.

Could you explain more?

I know installing and switching to WSL2 isn't as straightforward on Windows stable. Is that what you are referring to?

If so, on the Insider channel you can run `wsl --install` and it will work.

If it's not running WSL2 by default: `wsl --set-default-version 2`

I think they could make it easy to onboard users by setting better defaults and decreasing friction.
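For reference, the stable-channel dance being alluded to looks roughly like this (a sketch of Microsoft's documented manual steps; details may have changed since):

```shell
# From an elevated PowerShell on Windows 10 stable, before `wsl --install`:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# Reboot, install the WSL2 kernel update package, then:
wsl --set-default-version 2

# Finally install a distro (e.g. Ubuntu) from the Microsoft Store.
```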


I had to fight with enabling/disabling Hyper-V in Windows Features for a while, and at some point I flashed the BIOS on my motherboard and it reset my virtualization-enable switch to "off" (which I guess was the default?).

50/50 PEBKAC and Windows being difficult, IMO, but my total unfamiliarity with troubleshooting windows made the process a bit more annoying than I felt it ought to be.


There is the other issue that Windows is full of spyware. Most of the mac's telemetry is inadvertent and leaks much less data to the OS vendor.


I would assume any IT-issued devices are full of spyware regardless of OS vendor.


I chose the mac just so I could have a sane terminal experience, UNIX-like tools, etc.

Well, that’s on you. You could have had WSL2 which is amazing.


While I think WSL is an option worth investigating, it is not the answer for everyone.

I jumped from XP to Windows 10 with WSL1, having used Linux and macOS in between. It took me a few days to run into a bunch of pain points I was surprised had not changed: I do not like the scatter-shot install/uninstall method, the monthly reboots for updates, or the control panels that feel like an archeology dig through older UIs for settings; windows freeze and I cannot resize/minimize them while the app is busy; and I do not like Explorer or the windowing UI. PowerShell in practice seemed too verbose for interactive use, and the learning curve/adoption was too steep to be productive (everyone else on the small team was writing BAT files). I got tripped up by odd things like offline documentation, and having to wrap things in a BAT file to automate them. I also don't like compiling stuff for Windows. I don't begrudge anyone who likes Windows, but I have a very strong preference for the other OSes. I might have some of those details wrong, but that's what I remember from the transition. I wish I had kept a diary, because I forget a lot of it now.

WSL (admittedly v1) was a bit odd and hefty to install. The account/permissions model and the locations of files were awkward. I found myself ssh-ing into a Linux box to do a lot of things.


Yes and that's not vendor specific for sure.

I occasionally look after a fairly large Windows WPF application which is half integrated with Microsoft Word and there are hundreds of lines of code dedicated to quite horrible workarounds for issues caused by API changes and weird ass behaviour. There are a lot of if statements for different Word versions as well.

For example: when saving a file "safely" (i.e. without weird-ass side effects such as locking or document metadata corruption), if your Word version is 7, 8, 9 or 10 you must use the SaveAs2000 API call. If your Word version is 11 or 12 you must use SaveAs. If your Word version is anything else you need to use SaveAs2. None of this is documented beyond telling you not to call half of them, and most of the reasons for choosing between them were discovered by taking the VSTO libraries to bits.

At the end of the day, the objective is to make sure the end user never sees the hell you went through and takes your efforts entirely for granted. They don't care, and efforts to appeal to them are frowned upon, even if we whine and complain about it in our own circles.


I've also had enough; I now consider the Apple ecosystem a legacy platform, similar to Internet Explorer in the web world. I still port my software, but on a "best effort" basis: nothing guaranteed, basically. Their ecosystem is too far out of touch with proper development practices to be able to guarantee anything, and I warn people that I cannot guarantee much as well.

I'm sure some people will be annoyed just by reading that, but if you've dabbled even a bit in their ecosystem, you'll certainly know why I hold this opinion.


But it seems they do; e.g. rachelbythebay stopped using Wireguard because of the mess.


she stopped using Wireguard (and ranted about it) rather than stopped using Apple's products (which are ultimately responsible for the failures she complained about in Wireguard)


Right. And Rachelbythebay is way more technically inclined than most users; if she wasn't able to correctly apply the blame to Apple, then normal users are definitely not going to be able to do that. Developers need to be more up-front about why these issues exist, we need an education push.

For all the criticism of how Fortnite framed its issues on iOS (and some of that criticism was warranted), coming out of the gate strong with a consistent message that Apple was to blame was likely the only way to get any 'normal' user to even consider that there were multiple issues and viewpoints at play. There's no such thing as subtlety or nuance when you're trying to talk to that demographic about why their phone/desktop doesn't do the thing they want it to do.

In the long term, I don't know. On one hand, these issues do affect final users, but communicating with final Mac users is difficult.

But on the other hand, Wireguard isn't going away, it's a clearly better protocol. So right now, final Mac users assume it's the devs' fault. But are they going to assume that when literally everyone around them has decent VPN clients and their Mac experience is just miserable? Mac users aren't completely isolated from the Linux/Windows world, at some point they're going to realize the pattern if all of the software on their platform is just worse.


> if she wasn't able to correctly apply the blame to Apple, then normal users are definitely not going to be able to do that

This isn’t a moral judgement. I apply the blame to Apple. But I also choose to keep using their product. Their products are less dispensable to me than another VPN protocol.


> But I also choose to keep using their product.

I think the issue is less people who understand the tradeoffs and decide that the Mac platform is still worth using -- it's people who do not understand that there is a tradeoff at all, or who think that the root cause of all of this is just the developers being lazy.

If you're aware of the reason why Wireguard can't do updates while it's running, and you say, "that's fine, I still want to use it on Mac", that's a very different reaction than saying, "the devs don't know what they're doing."

I suspect that average nontechnical users are currently in the latter category rather than the former, but I could be wrong.


People have been choosing usability over security forever - don't be ashamed.


It's funny because the reason anyone cares about this whole episode is that some people felt the need to play a white knight in the developer's mailbox.

The smarter people will quit the Apple platform, and the dumber ones will quit the software whose creators refuse to put up with the Apple bullshit (plus some that try to put up with it but Apple arbitrarily fails their review anyway).


You're mixing up intelligence with morality, or something similar to morality. Just because some interfaces are bad and the company is anti-competitive doesn't mean using it is a dumb choice; you have to weigh up the pros and cons.

Perhaps it's an axiom that the open alternative is better in the long run, but that's too long a run to really care.


It's gradually becoming a ghetto. They do some things well, and at one time it was a much better experience than Windows, but I don't think I would say that today. When I replace my MacBook Air, it will probably be with a Windows or Linux device.


Until they give the app a poor rating, when ultimately the root cause is an arbitrary reviewer's decision making a fix harder to ship than it should be.

It's not the user's fault, but it's the developer taking all the heat while it's business as usual for the reviewer.


That's a fair point, I was picturing developers (who are Apple users) in my head when I wrote this.


I hadn't heard of Buildpacks before, sounds very interesting.

In particular the out of order layer replacement. I'm interested in switching to Buildpack for the images I maintain for my home cluster. Would make upgrading my base image so much simpler compared to rebuilding all the other images! I read a bunch of docs/articles since reading your comment yesterday but couldn't find any mention of this, or better yet an example. Are there some docs I missed? (I didn't look into the spec.)


Nevermind, I realized that rebase is exactly that. I had misunderstood the docs.


Don't hesitate to reach out on Slack if you have more questions: https://slack.buildpacks.io

A few tips on rebase:

(1) If you want to rebase without pulling the images first (so there's no appreciable data transfer in either direction), you currently have to pass `--publish`.

(2) If you need to rebase against your own copy of the runtime base image (e.g., because you relocated the upstream copy to your own registry), you can pass `--run-image <ref>`.
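Putting those two tips together, a hypothetical invocation (all image names are placeholders):

```shell
# Rebase an app image against a newer run image without pulling layers
# locally; with --publish, pack operates directly against the registry.
pack rebase registry.example.com/myapp:latest --publish

# Rebase against a relocated copy of the run image:
pack rebase registry.example.com/myapp:latest \
  --run-image registry.example.com/mirrors/run:base
```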

