If true that's a pretty nifty pivot.
Had they shipped Astoria in Windows 10 to allow Android apps to "just work" it would have destroyed their UWP strategy.
So they might get a few more Windows Phone users but they would have lost control of the new development platform and given it to Google on a silver platter. And they would never have got anywhere near the numbers Android is at.
I would be shocked if Microsoft don't bring UWP to macOS, Linux and Android (I don't see how they can with iOS) in the not-too-distant future.
Kind of like how OS/2 support of Windows applications helped destroy it?
Would be nice to see Apps from VS running in Linux and macOS. There's been some effort that one assumes supports this direction... Also, all things to make Azure nicer to use and better as a target are in their long term interest.
My question mark over iOS is because of how strict Apple are with approved apps. Will Apple like Microsoft bringing "app parity" into the iOS eco-system and more importantly will they allow it?
That being said, it is pretty obvious it is a minuscule platform; apps are often lagging behind their iOS/Android counterparts, and there are some obvious ones missing (like Snapchat and Pokemon Go).
It is kind of sad really; I think it would be healthy with more than two major players, and Windows 10 users will probably feel quite at home in Windows 10 Mobile.
I can't blame Microsoft for it, but the app gap between WP/WinMo 10 and Android is getting untenable for me. I'm thinking about switching back, not because I think Android is better (I don't think it is at all), but simply because I'm starting to feel left out when all my family and friends are using, e.g., Snapchat to keep in touch and I...can't.
It particularly riles me up that there were perfectly functional third-party Windows Phone Snapchat apps, but Snapchat demanded that they be removed and still (evidently) declined to make a WP app.
Not to pick on Snapchat, it's not an uncommon story.
If I could even just download the Snapchat APK and sideload it on my Windows phone, that'd be enough for me. But I can't, since Astoria got killed.
They need to defend their trademark, or it will be genericized. Not that that's a big deal, but it can be a thing.
More importantly, if the windows store is filled with crappy snapchat apps, it hurts their brand. If they release an update that breaks those apps, it hurts their brand. It's a no-win situation. Then there's hacks of third party apps, that are spun as being Snapchat's problem: http://www.cnn.com/2014/01/01/tech/social-media/snapchat-hac...
Snapchat is in a particularly bad position for the last bit, because what they're selling is fundamentally a lie, and having that lie exposed hurt them a lot.
Likewise I'll pick up my iPod touch when I want to listen to a podcast, browse Instagram, or use Find My Friends.
There are hardware differences that make me use one device for its camera or more reliable Bluetooth, but I don't really associate that with the operating system.
Do you use a Windows Phone device because of the operating system or because (for example) it has an excellent camera or does something special with your XBox?
- app organization. One swipe and I'm looking at an alphabetized list of apps I can jump to by letter. This is huge to me, so much easier to find and identify an app versus a 2d grid dominated by icons.
- live tiles. My email tile shows me unread emails, photo tile shows pictures, calendar shows appointments, etc. Small thing but makes life a little simpler and makes my home screen look nice.
- settings are organized and laid out in what seems to me to be a much cleaner and logical way. I think this may have regressed some in win 10.
- maps are offline by default. Saves on data and helps out a lot when going through poor signal areas, which I do a lot.
- Cortana is super awesome; in my experience it works better than Google Now.
- I can deny apps individual permissions.
- OneDrive integration. I didn't use OneDrive prior to buying the phone, but since it was auto-installed on my Windows computers I started using it, and it's very convenient.
A lot of it is just that all the minor usability and UI struggles aren't there. To me it's a lot more polished and easy to use. I may have bad taste. :)
If I take a picture on my phone, it syncs to my laptop and desktop. If I put something on my computer in my OneDrive folder photo folder, it shows up in the photos on my phone.
You share a folder and update a file, and then the other person doesn't get the update. So they're sitting there on the phone with you, saying "no, I don't see the updated file," and you have to have them log in to the web version and basically download the file to their OneDrive. That happened to me several times, and I just gave up on OneDrive. I don't know if they fixed it. Yes, it syncs YOUR folders, but it will not sync shared folders.
Note that Here maps with offline maps is also available on iOS and Android:
- solved by any launcher
- cannot comment
- gmaps handles this automatically
- baked into android since L
Nearly all usability and UI struggles are easily solved with a custom launcher.
Then you have never used Windows Phone. None of the Android launchers provides the equivalent of WP tiles with the same amount of integration. There is no API that launchers could use to pull out the same amount/types of information.
Nonsense. It does some prefetching, but if you are in another country (no data) and you go slightly off-route, there is no maps coverage anymore. Luckily, new versions of Google maps allow you to download offline maps ahead of time as well.
For me the reason to switch if I was still on WP is that the app state is so deplorable that even Microsoft's apps on iOS and Android are miles ahead of WP's counter parts.
What's missing in Android compared to WP in this regard? I have widgets on my Android homescreen that have buttons to activate app functionality (e.g. Audible's 'play' button) and that dynamically update their content. Is there some richer interaction or viewing scheme WP supports?
That said, it's a great launcher and I use it on my Note 4. I still don't fully understand the strategy behind why Microsoft released it though.
Customizability is great, and I think a phone OS should be highly customizable. But I also want it delivered to me in a state where I have to do as little customization as possible. I don't want to fool with it more than I have to.
Can you comment further on google maps doing that automatically? I have not seen it to be the case that I can turn off data, input a destination, and get turn-by-turn directions the whole way.
I am fully aware of Google Drive, but the reason I started using OneDrive is because it is already installed on all my Windows computers; all I had to do was sign in. The same is not true of Google Drive. I take a picture with my WP and by the time I walk over to the computer it's already there, and I didn't have to do anything to set it up.
Also, regarding permissions: no, it was "added" early on, but you couldn't use it without downloading other apps and then rooting your phone. I think it's supposed to be in Marshmallow for-real-this-time, but I haven't seen it yet.
On the other hand, now that Uber and Bank of America have universal apps, it feels more complete.
This may seem ridiculous but that's what these apps are doing to people. :)
I have an Android phone and it doesn't have Snapchat on it. I also have friends with varieties of phones, and Snapchat, and they're very intelligent. It's downvoted because it's a pointless "get off my lawn" comment that says nothing other than "I look down on all Snapchat users".
If Microsoft is building a framework, I'd expect it to provide a great UX for their users.
If a compatibility layer is required, it would be better for MS users to have it in the reverse direction, to let WP apps run on Android, and use that to try to convince devs to build WP apps.
In summary, it never made sense for MS to provide WP users (their own platform!) with a second-rate UX.
I would claim 'nice ux' is something that would not drive market adoption nearly as much as other factors (value, usability in general, etc).
I see where you're coming from when you say that nice UX is not the most important thing, but a poor UX comes in the way of usability. The conventions users are used to no longer work, which comes in the way of using the app for its intended purpose. UI that doesn't fit might also cause users to pause and reorient themselves, again distracting them from their goal.
For example, wall switches in India are on when pressed downward, as opposed to upward in the US. Either works, but if the switches are different from room to room in your house, it's confusing.
There were many other factors that killed OS/2. Just before Warp was released, it was quite hyped in the press. The then unreleased Windows 95 was considered to be a train wreck and OS/2 a true 32-bit operating system. However, once people got their hands on Warp, it turned out that the installation was difficult unless you had hardware that was covered by the relatively small driver base. Moreover IBM didn't really seem to care about supporting OS/2 for end users. So, much of the enthusiasm evaporated even before Windows 95 was released.
This is categorically false. You only need to visit the Windows Phone subreddit to see all of the complaints about how poorly implemented it was, how it affected the performance of the phone, and the app compatibility issues.
Microsoft's attempt to put Android apps on Windows Phone, and their harebrained idea to replace Google services with their own services within the apps, were failures not because they would have cannibalized their native apps, but because they just couldn't pull it off.
Xenix, then SFU (Services For Unix - for interop - on Win NT or 2K), plus they had (part of?) a POSIX subsystem around then or earlier (I used it a bit for C utility dev work on WinNT), etc.
· SCO Unix, although it incorporated some Xenix code, was very different from Xenix; it derived a lot from SVR4.
· SCO basically went out of business, because their value proposition was "Unix, but on a regular PC so you don't have to buy an expensive RISC workstation." This stopped being a useful value proposition around 1997 because they couldn't keep up with Linux. So they sold the SCO Unix product line to Caldera, a Linux distributor, in 2001. I'm not 100% sure but I don't think any employees moved from SCO in Santa Cruz, California, to Caldera, which was in Utah. Maybe someone stuck around to smooth the transition?
· Microsoft's backing of SCO was direct; they "bought a license" in 2003. Maybe they also did some indirect backing that I don't know about.
· Caldera basically failed in the Linux distribution space in 2002, and the investors booted out CEO Ransom Love and replaced him with Darl McBride, formerly of Novell, IKON, a couple of startups, and Franklin Covey (!).
· The copyright infringement lawsuits weren't based on Xenix. Nobody claimed Linux had copied from Xenix. Rather, they were based on Bell Labs Unix, from the 1970s, and AT&T Unix System Labs System V Unix. SCO had supposedly acquired the copyrights to these sometime in the 1990s, and in fact them giving permission is how the Lions book was legally republished in 1996 or 1997, but it turned out that in fact what they had acquired was not the copyright ownership, but a sublicenseable license to the copyright. The actual copyright rested with some company Novell had bought up at some point, so the lawsuit got thrown out of court.
· The SCO Group sued not only Linux vendors, notably IBM, but also Linux users, notably AutoZone and DaimlerChrysler. This is an error of omission, but it's crucially important.
1. Xenix was based on Bell Labs V7 and then AT&T SVR2, SCO Unix was based on SVR4. Xenix and SCO Unix shared code in addition to that derived from their common ancestry. Claiming that they were "very different" is obviously a matter of opinion. In my opinion, they were pretty similar.
2. SCO split into two pieces: one was sold to Caldera, the other became Tarantella. As far as your hypotheses about why SCO failed, I'm not seeing why it is relevant to the discussion.
3. Microsoft indirectly backed SCO Group through BayStar capital. Please verify this for yourself with a quick web search so you will be protected from propagating your misinformation in the future.
4. Not relevant.
5. Both XENIX and SCO Unix were derived from the same original codebase. SCO Unix would not exist if it were not for XENIX.
6. Actually, I think the AutoZone and DaimlerChrysler lawsuits are fairly unimportant within the context of the original discussion. To me, the interesting point is that MS had the foresight in 1979 to see that Unix would become a big thing and as a result created Xenix. Decades later, when Unix (Linux) was indeed a big thing, by proxy, it attempted to use XENIX to fight back at Unix (Linux) proponents.
The history is that Microsoft were producing Xenix - a port from AT&T. Eventually they decided to stop doing that work themselves as DOS, OS/2, Lan Manager and similar were of interest. (Note though that they ran their email infrastructure on Xenix - it was part of their operational business.) Xenix was handed over to a father and son company in Santa Cruz. They called the company Santa Cruz Operation so that in phone calls to Microsoft, the Microsoft folks would think SCO was a branch office not a different company.
Xenix was updated, ported etc, eventually being called SCO OpenServer. That "SCO Unix" did not have SVR4 in it. Heck it could barely do multi-processor and similar. In 1995 Novell (owners of the AT&T Unix at that point) then "handed" SVR4 over to SCO, with the result being called SCO UnixWare. That was the SVR4 derivative. SCO did this to get into the enterprise space, and couldn't do the engineering to bring the Xenix derivative there.
Later in the 90s there was a game of musical chairs as Intel announced the Itanium, and all the Risc chip and Unix vendors formed consortiums. SCO was part of one with IBM named Project Monterey. It was supposed to have some Linux compatibility, but as time progressed IBM cared more about Linux (and AIX) while SCO couldn't keep up with the engineering commitments.
In 2000 all the Unix stuff (OpenServer and UnixWare) went to Caldera. Well, it would have, but the deal was complicated (it was pseudo-licensing, with money and royalties going back and forth), and heck, Caldera only really wanted OpenServer. The SEC went "huh" a while later, and there was also a tech downturn, so a new, simpler deal was done in 2001. All that was left of SCO was Tarantella, hence the company rename.
Caldera renamed themselves to The SCO Group a while later. The grounds for suing IBM were around Project Monterey, although as far as I know neither side did all they should have. And then since IBM had gone in on Linux, SCO Group decided to sue claiming IBM had put Unix code into Linux. Novell got involved because the details of "handing" over mattered.
Were you around when stuff moved over to Caldera? Were there SCO people who moved to Utah, or who became Caldera employees in California?
My condolences on having had such a shitty thing happen to such a beautiful company. I don't ever remember SCO Unix being a particularly great Unix, but it was pretty solid, and the company was awesome; it's a place I would have been proud to work if I'd had a chance.
The Unix side of the business was rapidly shrinking. 1999 was a banner year because everyone had to go out and upgrade their operating systems so they could claim y2k compliance. 2000 was the end of a tech bubble, and also saw a tech downturn. This led to drastically reduced sales. Throw in Linux getting increased adoption (remember that IBM promised to spend $1 billion on it, which gave it a lot of credibility), and SCO's UNIX products no longer had particularly relevant sweet spots in the market.
I don't know of anyone who moved to Utah. The people in California became part of Caldera but I don't know exactly how that was legally structured. Also SCO had folks all over the US and world. At the peak it was ~1,100 employees and $250m annual revenue.
SCO was a good company and many people liked working at the company because they liked many of their colleagues. Employee turnover was quite low because of that. It also had bad points, but what doesn't?
SCO Unix (OpenServer specifically) was great but not from a technological viewpoint. However the vast majority of users were not techies - they were dentists, receptionists, pet cemetery workers etc. OpenServer came by default with a gui that let those regular folk get things done in a friendly way. See my sibling comment about why Caldera wanted OpenServer.
And SCO did have some firsts. It was also very good at snatching defeat from the jaws of victory. It was the first company to offer Internet in a Box. You installed the system, and now were on the Internet (as a server). We were the first to ship a browser (licensed Mosaic). We shipped by far the most copies of Netscape. At one point Pizza Hut started allowing orders over the Internet. It was more of a proof of concept rather than massively used and widespread. But SCO was behind that too.
Why did Caldera only want the more dated OpenServer? Did they plan to provide consulting services to existing OpenServer users?
By the late 90s SCO had 15,000 of these VARs. Caldera wanted to essentially substitute Linux for OpenServer into that setup, and make $1,000 per copy of the OS rather than $25. The VARs realised they could supply Linux themselves which is why Caldera had no traction doing that, and doubled down on SCO OpenServer and the existing installed base. Hence the company rename too - it was all SCO products.
UnixWare was touted as Enterprise ready. It did multi-processor well, had sophisticated filesystems (Veritas), could do clustering (Non-Stop) etc. But at the time most who wanted "big" Unix went to one of the RISC vendors each of whom had their own Unix. If they wanted to go Intel (which wasn't credible until the Pentium Pro) then the competition was Windows NT.
1978, I think. The idea that Unix would take over from CP/M seemed pretty common at the time. The main problems were the high Unix prices and the cost of the hardware required.
However, Microsoft got lucky with DOS on the IBM PC, and the PC market took off, and IBM also licensed Xenix....
Unfortunately, IBM decided that it needed to own the whole stack. This precluded using Unix/Xenix/etc as the PC client, which was a problem because Microsoft was now dependent on IBM. (Ballmer called it "riding the bear", followed by BOGU, for Bend Over, Grease Up.)
IBM finally published its strategy in 1987 as Systems Application Architecture (SAA), which mandated the use of the extended edition of OS/2 (not available from Microsoft) as the PC client. SAA also included IBM's PS/2 micros with MCA expansion buses, intended to break the link with the DOS-based PC industry.
Faced with possible exclusion from the IBM-controlled corporate market (1), the best deal Microsoft could get in 1985 was to co-develop OS/2, so Xenix -- despite having been by far the most popular Unix of its day -- became surplus to requirements.
Another factor was the arrival of AT&T's System V in 1983. This showed AT&T was serious about selling Unix, and maybe Microsoft didn't think it could compete. What it did was contribute small parts of Xenix to SVR4 (1988).
SVR4 was where AT&T & Sun decided to redefine and take over the IT industry, which led to the great Unix Wars and the creation of OSF etc. All of which infighting left the door open for Windows, DR-GEM, DesQview and many others....
(1) SAA flopped and IBM had to go back to making PC compatibles, so it turned out that IBM didn't control the corporate market as much as it thought.
I'd forgotten about the BayStar thing, but its existence seems to be debatable; someone at BayStar said they got a promise from someone at Microsoft to guarantee their SCO investment, but didn't actually fulfill the terms of the guarantee, and Microsoft denies any such guarantee ever existed. So it ends up being a he-said-he-said thing.
In 2010 the Electronic Frontier Foundation awarded the Pioneer award to "Pamela Jones and the Groklaw Website" for "Legal Blogging".
Remembering having to use it (the MKS version) to port some Solaris software, I recall my team mates having an alternate expansion for the acronym that isn't that hard to guess.
I remember Mortice Kern's MKS Toolkit from BYTE and PC Magazine magazine ads, but did not know of the MS connection you mention.
Currently, I have to switch over to my MacBook Pro to do any work that requires Adobe software, and then back to my Ubuntu machine for development. So if this Windows 10 thing works out, I might consider switching over (although last I used Windows I wasn't a fan of juggling three different shells--four if you count git bash: PowerShell, cmd.exe, and Cygwin).
Otherwise, I might do what I should've done in the first place and ultimately get a new Mac that can support 4k properly--the one I have is the generation right before 60Hz 4k support, and the 30Hz my current one is at is surprisingly annoying to work with.
For a desktop/laptop, particularly in corporate/enterprise environments (but also often for solo devs where it's also a personal PC), there are lots of reasons you might want to have Windows outside of the actual dev-specific parts of your work. This is an alternative to second-computer / dual-boot / VM-based solutions to having your Windows and Linuxing it too.
This means either MacOS with the associated hardware lock-in or Windows with all the associated problems in installing/compiling dev/research tools that just work on Linux or Mac.
Macs will run both Windows and (with some EFI hacking) Linux. It's true that you need a Mac to (legally) run macOS, but that's lock-out, not lock-in. Essentially it is a $1000 license to the OS and perpetual updates.
For some cases this is a limitation. Their laptops are quite nice and the iMacs are okay, but if I want, say, a powerful desktop workstation with a bunch of Nvidia GPUs for CUDA, then the Mac Pro doesn't really cut it, and I'll have to run Linux on it; and if I need an extra secondary/tertiary computer/laptop that doesn't really need to be good, then I have to choose between it running a different UI than the main computers or paying a rather hefty premium because of the lock-in.
Unless you live in a very special bubble, yes.
On the other hand, my mother's friend had a mysterious driver problem (screen glitching) with his Nvidia card after upgrading to Windows 10.
Or because they are too new. I had a bad experience after I bought an Intel NUC with a Skylake processor and Iris 530 graphics in January 2016.
I had a few months of struggles, like:
- problems with installation (the installer wouldn't boot without some cryptic kernel parameters)
- lack of a graphics driver
- random crashes (like Google Maps causing the whole system to hang, requiring a hard reset)
- the processor not running at full speed
- the system seeing only one logical core instead of 4 (2 cores x HT)
- the "shutdown" button causing a reboot instead of a power-off
Most of those were fixed only after Ubuntu 16.04 came out at the end of April. Some issues, however, persist.
So my impression is that Linux is a good choice only if your hardware is quite old (say, two years, or at least one processor/graphics-card generation behind).
For people like me, who want the latest and greatest hardware, Linux is not an option.
Linux as a subsystem of Windows gives me a much better story as far as hardware compatibility and software than the other way around.
I have struggled with MSYS2 several times on Windows, and have broken the installation. Once it hits critical mass, some dependency or setting ruins my whole experience.
But then again, I've been on the wrong side of these discussions before: I was running Minix on my Amiga 500, and I was rooting for Minix over Linux back in the day. I also ran MkLinux on my PowerPC Mac, and was working on my own OS as a variant of Minix.
If you are a web dev, and don't do .NET F#/C# then sure, Linux all the way, however, I am curious to see how .NET Core plays out.
I think this is just something people regurgitate and believe without evidence. I've used the IDE and it is terrible – probably one of the worst. Last I checked it couldn't open multiple windows for certain file types. Its interface looks like a web IDE. It hangs while it scans your project to provide IntelliSense, and IntelliSense itself is crazy annoying. Who wants stuff popping up while you type, or stuff getting underlined before you've finished? I actually have a big list of annoyances somewhere around.
I really can't agree more – just use Linux, or some variant of UNIX, or use macOS if you don't want to fuss with buggy/missing drivers and piecing together your own computer from parts like it is some kind of difficult feat or accomplishment worthy of nerd respect.
* Ubuntu and all child apps run under the current user's credentials, but are launched under an LxssManager service-host process (as opposed to explorer.exe)
* ProcMon can interact with (send SIGTERM etc. to) Unix processes; but, as shown in htop, Unix processes are not aware of Windows ones, nor have I found any way to send signals in the other direction
* API compatibility is excellent. Over the last 2 days I have re-compiled a full Python & Elasticsearch & MySQL stack into this. Overhead is significantly lower than any of the virtualization stacks
* I don't use AV tools; however, I suspect they don't check ELF files. The host system is mounted at /mnt/c, /mnt/d, etc. in LXSS, with current-Windows-user creds
* 2-command sandbox reset: lxrun /uninstall & lxrun /install will restore the Linux subsystem to factory default
* The LXSS root dir (/usr, /var, etc.) is in c:\Users\$username$\AppData\Local\lxss\rootfs\; home is mounted at c:\Users\$username$\AppData\Local\lxss\home\$username$\
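The /mnt/<drive> mount convention noted above makes path translation between the two worlds mechanical. A tiny illustrative helper (the function name and behavior are my own sketch, not part of any Microsoft tooling):

```python
def win_to_wsl(path: str) -> str:
    # Hypothetical helper: translate a Windows path like
    # 'C:\\Users\\alice\\x.txt' to its /mnt/<drive> location
    # inside the Linux subsystem, per the convention above.
    drive, rest = path.split(":", 1)
    return "/mnt/" + drive.lower() + rest.replace("\\", "/")

print(win_to_wsl("C:\\Users\\alice\\project\\README.md"))
# /mnt/c/Users/alice/project/README.md
```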
And some caveats:
* File watching (specifically, inotify_add_watch) does not (yet) work
* Manipulating files from the host occasionally makes them _disappear_ (??!) from visibility in the Linux subsystem. Specifically: git pull from bash, then git update from the host, makes git update from bash impossible (index file open failed). Same problem with other kinds of host file manipulation. This might be due to permissions, or something I haven't figured out yet.
Today the status on this issue changed from 'No status' to 'On the backlog' so it will be fixed someday. https://wpdev.uservoice.com/forums/266908-command-prompt-con...
-new_console bash ~
You may want to add the tilde to the path, as it seems to open to /mnt/c/Users/<username>/. Also, I think it still messes with the keys, but not always. They seem to work fine right after starting it.
I haven't tried it out, but this project claims to offer support for doing exactly that:
(from: https://github.com/xilun/cbwin/ )
So, I suspect this uses TCP and networking to do that, which does work: you can listen on a port in LXSS and it will be accessible via 127.0.0.1, and you can connect to 127.0.0.1 from LXSS and it will be routed to, e.g., Windows listeners. The above was specifically about process-based signals and non-network hacks.
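A minimal sketch of that loopback round-trip, with both ends in one Python process for illustration (in practice one end would live in LXSS and the other in a Win32 program):

```python
import socket
import threading

# Stand-in for the LXSS-side listener: bind a loopback port,
# accept one connection, and echo the bytes back.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Stand-in for the Win32 side: connect to 127.0.0.1 and round-trip a message.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)  # b'ping'
```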
Using TCP is actually not that much different than opening a Linux or Windows device or IPC object and using it, except that:
* you must handle serialization and framing,
* you typically get no security for free (I get some back with a GetExtendedTcpTable + OpenProcessToken + AccessCheck hack),
* performance is not excellent, especially with the security check; however, on a modern computer you can still sustain launching Win32 processes at a mean rate of ~40/sec (and a peak rate of hundreds per sec), which is largely enough for any use case I expect. (Actually, if you spawn Win32 processes too fast for too long, Windows tends to glitch graphically even after you stop that activity. It might be my graphics card driver, because I did not have that issue in a VM.)
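The serialization-and-framing cost mentioned above is small but real: TCP is a byte stream, so message boundaries must be reintroduced by hand. A common minimal scheme (my own illustration, not cbwin's actual wire format) is a length-prefixed frame:

```python
import struct

def frame(payload: bytes) -> bytes:
    # Prefix each message with a 4-byte big-endian length so the
    # receiver knows where one message ends and the next begins.
    return struct.pack(">I", len(payload)) + payload

def unframe(buf: bytes):
    # Return (first message, remaining unconsumed bytes).
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n], buf[4 + n:]

# Two messages concatenated on the "wire" still split cleanly.
stream = frame(b"run notepad.exe") + frame(b"exit 0")
msg1, rest = unframe(stream)
msg2, _ = unframe(rest)
print(msg1, msg2)  # b'run notepad.exe' b'exit 0'
```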
On the plus side of using TCP, I could easily test some of the WSL-side code on a real Linux box to track down some bugs (temporarily allowing non-localhost connections in my dev environment).
To be clear, cbwin is not and will never be a complete substitute for proper IPC/interaction between Win32 and WSL, but that's not too much of an issue, because MS will very probably add some in a future release -- I think at least some way to launch Win32 programs from WSL, and working pipes between processes of the two worlds (not 100% sure, but I would be surprised if they don't).
I've been mostly using bash in windows (installed with git tooling) for a couple years now.
When you install Windows 10 there are several pages of checkboxes where you can disable this telemetry. Of course we can never be sure that it is actually disabled.
Now that docker has better integration, I may well start using that more often.
EDIT: See child comment, GitHub preview is fantastic! Slides have a ton of great info.
Our entire FLOSS community is based on forking an existing project, improving it, and sometimes merging again, and the entire history of innovation has been based on this copy-transform-combine cycle, too.
Edit: Nevermind, switch to desktop version and it works.
The problems that you have are quite different ones:
* Demonstrating that the Alex Ionescu of http://www.alex-ionescu.com/ and of https://microsoftpressstore.com/authors/bio.aspx?a=07cda0ad-... is the Alex Ionescu of https://twitter.com/aionescu/status/710477975288827904 and of https://github.com/ionescu007 . It's difficult to show a connection in that direction. The Alex Ionescu of https://alexionescu.net/#contact lets people connect the dots — a different set of dots, mind you.
* Knowing that https://github.com/ is the real GitHub. If this is a problem for you, then you have more serious and urgent problems than viewing a PDF document. (-:
To the point that I would trust a PDF from BlackHat _more_ than I would trust one from any other scientific or professional conference.
Edit: Since everybody took this post seriously, I will, too: We would be closer to some local maxima, but would have no chance to progress beyond them.
Sometimes the intersection of "because it can be done" & "the most that can be done with the resources within reach" can be applied in the most problem-solving of ways.
Haven't tried the Linux subsystem in W10 yet, but I have at least gotten DOS to boot on bare metal on a modern PC from a GPT-partitioned HDD reliably, with less monkey business than ever.