I think this is a failure of internal management on Microsoft's part. There aren't ads in the file manager because Microsoft-as-a-whole decided they should be there; there are ads in the file manager because the OneDrive team wanted more users, didn't care about alienating people from Windows-in-general, and had the power to put ads there.
I bought my wife, an artist, a new computer running Windows 10 Home edition.
Windows Update reboots her computer "outside of working hours". Of course, she leaves the computer running with partially completed work sitting in the application because art isn't something you do in one sitting. She comes back on occasion and finds that Microsoft rebooted the computer, LOSING ALL HER WORK.
This shows that Microsoft does not value its users and their work.
I could, of course, BUY an "upgraded, professional version" but I have a better solution.
Honestly, as a pro digital artist, the two habits I found most useful to teach myself were "regularly zoom out to 100% so I don't get lost in needless detail" and "save every time I feel like taking a break". Windows' behavior here is inexcusable but there are a ton of other things that can happen - art software crashes, power interruptions, drink spills, kids/cats poking, etc. Switching to Linux won't save her from these.
Seriously. I've been doing digital art for money since 2000. I've lost work. I've learnt to consider anything unsaved as potentially something I could lose any minute. The sooner your wife learns this, the happier she will be as a digital artist.
(And set her up with automatic backups while you're teaching her this. "Oops I deleted half my sketch by accident ten saves ago" is wonderful to recover from.)
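A versioned backup doesn't need to be anything elaborate; a script run on a timer that copies the work file to a timestamped name is enough to recover from "deleted half my sketch ten saves ago". A minimal sketch (the file names and retention count here are arbitrary, not any particular tool's behavior):

```python
import shutil
import time
from pathlib import Path

def backup(work_file, backup_dir, keep=20):
    """Copy work_file into backup_dir under a timestamped name,
    pruning the oldest copies beyond `keep` versions."""
    src = Path(work_file)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)          # copy2 preserves timestamps
    # Keep only the newest `keep` versions of this file.
    versions = sorted(dest_dir.glob(f"{src.stem}.*{src.suffix}"))
    for old in versions[:-keep]:
        old.unlink()
    return dest
```

Run it from a cron job or scheduled task every few minutes and the worst case becomes "lose a few minutes", not "lose the evening".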
It is not the state of the PROGRAM, it's the "state of the DESKTOP".
You have your art tool running, a browser with a dozen open windows, a couple PDFs open, a chat window with the art group, and all of the other open applications. POOF. Gone.
It's like someone came to your office and "cleaned off your desk". Some of us rely on where things are as part of our "external memory". Do you remember which 3 PDFs (aka books) you were looking at last night?
The image might not be lost but WHERE you were working on the image, the state of your toolbox, what brush you were using, what color you had picked, etc. are mostly all in your head and reflected on the desktop.
That said, it is truly ARROGANT to reboot someone else's computer without permission.
Next time you go to work, walk around and reboot everyone's computer while they are working. See how that goes.
I've gotten very very spoilt by OSX trying very hard to retain information about what was open when you reboot, I suppose. At any moment I've got like four or five different desktops set up with various numbers of windows related to the task I'm doing on that desktop, and those will all come back if I reboot. I still find the "save early, save often" habit very worthwhile.
Hell, maybe I'm also spoilt by the fact that Illustrator defaults to re-loading a file with the view sized and centered exactly the same way it was when you last saved it.
Even with all these things making life easy for me, I'm still saying that "don't walk away from your art app with anything you'd be sad to lose unsaved" is a habit worth having.
I am not defending Windows rebooting and ignoring unsaved changes in the least. That is inexcusable.
The forced reboots are especially amusing if you consider them in context: the Windows NT series was long touted as being so stable it never required rebooting, while 9x, with its minimal memory protections, would need to be rebooted often if your applications weren't well-behaved. Meanwhile, year-long uptime was a common occurrence with Unices (including Linux).
It seems Microsoft does not value uptime anymore either.
I think they realized that people obsessing over uptime yielded machines that hadn't been patched in half a year, and took the easier route to fix it (forced reboots, instead of a Windows equivalent of kGraft).
While I don't agree with the most recent iteration, this is just a continuation of the slow march they've been on since they added Automatic Updates: first downloading updates before prompting you to install, then increasingly invasive prompts, until now, when you can't opt out of restarting without highly invasive registry changes or an enterprise license.
And in the process overlooking the massive number of gray boxes sitting around on shelves and under desks doing "something" and running some bottom of the barrel Windows install.
Now why those boxes are even online so that they can download updates, never mind reboot at "random" intervals, is the biggest puzzle.
I agree, the OS shouldn't be adding to this problem, but there are plenty of ways for an application to close unexpectedly. OS crash, application crash, power outage, the computer locking up etc. Any application that keeps a user's partially completed work open for long periods of time should be durable to all of these situations. An OS restarting itself should never be more than a hiccup.
If MS' motivation is to protect against intrusion, then the most invasive thing they should do, after whatever internal countdown, is to simply drop the internet connection.
There is a lot of state that gets lost in a forced reboot, not only in terms of things that could easily be auto-saved like files. Not to mention this is something an application has no control over, so you still lose whatever you did since the last auto-save; and I believe this (being suddenly terminated) is something an application should not need to care about either --- requiring that every application care about saving and restoring state is ridiculous. The OS is the one terminating processes, it should be responsible for restoring them.
If a "reboot" simply meant a few seconds where the computer is unusable, and then you get put back in the exact same environment with everything being exactly the same as before, I would probably think it acceptable, but the reality is very different.
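Until operating systems do that, applications can approximate it themselves by persisting workspace state (open files, zoom level, active tool) and reloading it on startup. A minimal sketch of the idea; the state fields are invented for illustration, not any real application's format:

```python
import json
from pathlib import Path

def save_session(state, path):
    """Persist the workspace state so a restart can drop the
    user back exactly where they were."""
    Path(path).write_text(json.dumps(state))

def restore_session(path):
    """Return the last saved state, or a fresh default if none exists."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return {"open_files": [], "zoom": 1.0, "active_tool": "brush"}
```

Call save_session on every significant UI change and restore_session at launch, and an unexpected restart costs seconds instead of your whole working context.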
I agree that forced reboots are unconscionable, but there's nothing the OS can do about power failures, as long as we're using volatile DRAM for main memory. There's also the possibility of crash-causing bugs in the app itself. I think any app used for content creation has to do auto-save.
(I worked on a micro-Emacs-clone, Mince, in 1981, that ran on a 64KB CP/M machine and had very usable auto-save -- on floppies! You can't tell me it's too hard.)
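It really isn't hard. The core of auto-save is a timer, a dirty flag, and an atomic write so a crash mid-save never leaves a half-written file. A sketch, assuming a JSON-serializable document state (the class and field names are illustrative):

```python
import json
import threading
from pathlib import Path

class AutoSaver:
    """Write the document to an autosave file every `interval` seconds,
    but only when there are unsaved changes."""
    def __init__(self, get_state, autosave_path, interval=60):
        self.get_state = get_state      # callable returning the document state
        self.path = Path(autosave_path)
        self.interval = interval
        self.dirty = False
        self._timer = None

    def mark_dirty(self):
        """Call this from every edit operation."""
        self.dirty = True

    def _tick(self):
        if self.dirty:
            tmp = self.path.with_suffix(".tmp")
            tmp.write_text(json.dumps(self.get_state()))
            tmp.replace(self.path)      # atomic rename: never a torn file
            self.dirty = False
        self.start()                    # reschedule the next tick

    def start(self):
        self._timer = threading.Timer(self.interval, self._tick)
        self._timer.daemon = True
        self._timer.start()
```

On startup, the app checks for a leftover autosave file newer than the real save and offers to recover it, which is exactly what that 64KB CP/M editor managed on floppies.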
Any application that doesn't do auto-save to protect the user from crashes and power failures is broken.
That's a cop-out. I'm about to start a number-crunching process that usually takes about a week to complete. Is that supposed to have some magic facility to be interrupted at any point and restart from where it left off, or at least somewhere close? How much of my time should I spend on that, given that I wrote the scripts so I'll be the one implementing any auto-save as well? Should I also reimplement the dependencies that are doing a lot of the heavy lifting, given that calls into some of that functionality can take considerable time to complete?
Periodic state dumps or checkpoints can save a lot of wasted time if there's a power interruption 5 days in.
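The checkpointing itself doesn't have to be elaborate: periodically pickle the loop counter and accumulated state, and on startup resume from the checkpoint if one exists. A sketch under the assumption that the per-step state is picklable (the file name and the trivial "work" are stand-ins):

```python
import pickle
from pathlib import Path

def crunch(total_steps, ckpt_path="crunch.ckpt", every=1000):
    """A long-running loop that resumes from its last checkpoint
    after a crash, power loss, or forced reboot."""
    ckpt = Path(ckpt_path)
    step, acc = 0, 0
    if ckpt.exists():
        step, acc = pickle.loads(ckpt.read_bytes())   # resume mid-run
    while step < total_steps:
        acc += step                     # stand-in for the real work
        step += 1
        if step % every == 0:           # checkpoint rarely, relative to step cost
            ckpt.write_bytes(pickle.dumps((step, acc)))
    return acc
```

Of course, if the expensive work happens inside opaque library calls, you can only checkpoint between calls, which is the commenter's point: the granularity you get may not be worth the engineering.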
True enough, but so can a UPS and automatic sleep function in the event of extended power outage, and we already have those routinely installed anyway.
Is your number-crunching program geared towards a mass market, or a "once in a while" report run by a single company?
Neither of the above, though this specific type of work might be unique to the project in question for all I know.
I also don't know how many other projects with unusual or unique long-running tasks are out there. However, given that there are hundreds of millions of people, if not billions now, with personal computers, I imagine the answer is well into "quite a lot, actually" territory.
On basically any operating system, I would expect to be able to plug a router into my computer and flash it without Microsoft deciding to reboot mid-flash and potentially brick the device.
For those who may be interested: the way to turn this particular trait off in Windows 10 is to mark all of your Internet connections as "metered connections." Windows will no longer download updates unless you specifically ask for them, which you can do when you're not in the middle of a big project.
How long will you go without security updates to achieve that?
The game was over as soon as Microsoft decided not to allow some users to install individual patches any more and to bundle security fixes with other changes as part of the same update mechanism.
Well, I did a little bit of research and it looks like the policy Microsoft had in place was already allowing for emergency updates to be installed over metered connections, and this change in wording may just be a clarification of the existing policy. I'm going to hold off on updating until there is more information out there about what the change in wording actually implies. If things are bad enough I might change to a new OS to work on my projects. Automatic updates are just so distracting and frustrating most of the time. Like most humans, I just want control over my environment.
Unless they've changed something, wired connections cannot be set as metered (even though in practice most of them are either metered or capped); this can only be done for wireless (both WiFi and mobile.)
It's because of this misfeature that I've decided I'm going to build my own Wi-Fi router, which will hard-block any and every IP address belonging to Microsoft.
I've been late to work before because my Surface restarted to install an update, it got stuck rebooting, and my alarms didn't go off.
I think if you want to be on time to work, the logical thing to do is to buy a dedicated alarm clock rather than building a WiFi router to block Microsoft's IP addresses. You've only addressed one of many possible failure modes with that solution. People have been using dedicated alarm clocks for decades, and the failure modes are well-understood. It's mostly power failures, which is why they all have a backup battery to keep the alarm working even if your power goes out.
The problem Microsoft has created is that you now have to choose between trusting their updates and having a chance of your system being broken by them, or not trusting their updates and having a chance of your system being compromised through an unpatched vulnerability. This is a binary question with no good answer, and it's no longer clear that accepting all updates to ensure you get the security patches is the option with the lowest risk.
Is this better or worse than a 0 day being used to take over her system, encrypt the HDD, and hold it ransom?
FWIW, good apps should support session restore. If a hot fix gets pushed, especially for an active exploit, then getting it installed asap to everyone (not just those of us on HN who keep up with security bulletins!) becomes important.
They missed their chance to "spy" on us as much as Google (google search, gmail, maps, google-analytics, android) and Facebook (facebook, whatsapp, instagram) do.
They are trying to catch up (Skype no longer P2P, LinkedIn), and I've noticed that XP & 7 were leaking a little, 8 was leaking moderately, but 10 takes the cake!
They want their piece of the pie, they want to enter, see everything, read everything, snatch everything.
They went from "neutral OS minding their own business" to "your business is our business". Something like installing some XYZ app on an iPhone and seeing that it "talks" to 5 trackers and Facebook (for the betterment of the services).
1. There is no such thing as a FREE meal (OS).
2. If something is given to you for FREE then the product is YOU.
Good luck to all of us. The discussion remains the same: whose data is our data, and what, if anything, we can do to keep it ours.
Edit: the tendency to push "cloud" storage for all Office files (which are 100% scanned, analysed, and given to any government that sneezes) is also deplorable, but hey, "clouds are cool"
Not at all. The rule doesn't state that paying for software guarantees that you're not still the product, just that if you're not paying for it, it's significantly more likely that the difference will be made up for somewhere else.
But in a free marketplace, the difference is usually made up by charging commissions on a voluntary transaction, which both sides enter into because they believe it benefits them. In this case, who is the product exactly? Both sides are equally "the product" and often would welcome their positions / profiles being indexed and matched together with a corresponding offer to facilitate a deal. The rest is details.
The perception to end users is that it is. When was the last time you heard a /normal/ end user even considering that Windows had a price as part of their computer build?
As long as there's no provider whose product you could be, perhaps, but be careful even with Linux: a while back, Canonical added a feature to Ubuntu where the user's use of the dashboard search function was reported back to the company [1]. That got taken out after a mass uproar, but we now know that Canonical wants to be just as creepy as MS.
That was a big one. Luckily free software pretty much guarantees no one is going to get away with trying such a thing (unless just about everyone in the world loses sight of the main goal), it's too easy to switch. During that fiasco, I was on Arch and/or Trisquel so it didn't faze me much; currently on Xubuntu though.
Like much communication, there is something unstated but clearly implied in the expression "There is no such thing as a free meal." This is that it refers to products of commercial enterprises. Charities, for instance, do give meals away for free. Linux is produced by volunteers and non-profit organizations, and so it is free.
By the way, I have noticed that it is rather common for HN commenters to fail to pick up the assumed meanings, and jump on the words alone as being mistaken.
With Linux the 'cost' is largely the effort you need to put in to get things up and running properly. Admittedly this has got a lot better over the years and the need to be a self-reliant woodsman is considerably diminished (I'm an ex-Solaris admin, so I know pain!), but like someone who chooses a classic Pontiac because they know how to maintain every aspect of it, it's a valuable skill: 99% of the time it'll run better and you'll have more control than someone who rents a Google self-driving car by the trip...
I have very mixed emotions about this. There is a bit of nuance here I think.
I have both Windows and Linux desktops, and both annoy me tremendously during updates. On Windows it's a new feature that I have to 'click to disable'; on Linux it's a new feature that completely screws up my settings, and I have to click to go back to the way it was, or reconstruct my settings using the new paradigm.
Both of these annoyances are driven by software change. As the software changes, it changes the environment in which it runs, and symbiotically a bunch of other things change. Why do these things change?
Basically capital. In the free software world it is intellectual capital. Most people who want to contribute to open source want to do so by "creating a new feature" or "improving an existing feature to be better". Few people want to contribute by "fixing compatibility bugs" or "fixing configuration confusion." They have intellectual capital to spend and they want to spend it on change, not on the status quo. In the commercial software world it is worse: they are constantly paying their programming team, every day, every week, 11 paid holidays a year. They can only do that if they have revenue. And they get revenue by selling change, not by polishing the status quo. So they change things, and they get 'creative' about ways to get revenue.
The ugly truth is that the software business could in fact be "done" for a lot of things. You could just declare the kernel "done" and only allow bug fixes or new processor support. You could declare AutoCAD done and only fix bugs or add GPU support. You could declare graphics done and simply build GPUs that do what the graphics libraries need, perhaps a bit faster.
The world changed in a fundamental way when computers got to be 'fast enough' for the imagination of people who wanted to use them. Upgrade cycles started lagging, and now it is not uncommon for someone to have a computer for five years before an upgrade. Microsoft needs to make enough money on the OS they sold you once every 5 years to pay for a crap ton of developers. The math doesn't work, so they are being 'creative' about other ways to make money on those users. Linux/FOSS needs new developers to survive, but those developers mostly want to work on 'features', not bugs. So Linux systems suffer arbitrary feature churn.
I think the author's peeve is a symptom not the problem. The problem is that the money is leaving the software space and without it commercial software is not viable.
I am going to reply in the most rage inducing way, by providing a tiny counter example.
Really, though. I think you make a great point and are truly on to something here.
That said, back to the aforementioned tiny counter example: Windows, for instance, is coming up with things such as Game Mode[1] that are incredibly interesting. This is a feature that doesn't really compromise existing behavior; rather, it amplifies it.
It's proof that some degree of continuous iteration can be healthy, even if it isn't healthy all the time. But saying development should stall outright is always a hard sell; there's always a way to improve anything.
Not rage inducing at all, but it brings up another part of the problem nicely. Do you remember the great Vista SKU explosion? That was when Microsoft tried making 16 or 20 different "versions" of Windows with different prices aimed at different markets. I believe that the goal there was not confusion but to capture revenue from people with specific needs, something which backfired on them from a customer support perspective.
The "enterprise" model of software development often has a base product that you install with all of the features, but individual features are enabled or authorized by specific authorization keys. This was something VMS did, and I'm surprised that Dave Cutler didn't make it part of NT, unless it was protected by patent or something back in the day. But generally Game Mode is a wonderful thing for gamers; it turns your PC into an "Xbox"-like experience and makes it easy to use from across the room. As a gamer, would you pay $25 to enable that? $50? For me a killer new feature is Windows Subsystem for Linux (WSL), which I use all the time now. Would I pay $50 to enable the "developer edition tools"? Probably. But then I want bug fixes for those tools essentially forever. Pay for features, bug fixes for free.
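The "everything ships installed, keys unlock features" model described above can be sketched with a signed token. This is a toy illustration (symmetric HMAC for brevity; a real product would use asymmetric signatures so the verification key in the binary can't mint new licenses, and the customer/feature names here are made up):

```python
import hashlib
import hmac

SECRET = b"vendor-signing-key"   # hypothetical; held by the vendor only

def make_key(customer_id, feature):
    """Vendor side: issue an unlock key tying one feature to one customer."""
    msg = f"{customer_id}:{feature}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:20]

def feature_enabled(customer_id, feature, key):
    """Product side: check the key at startup. The feature code ships
    in the binary either way; the key only gates access to it."""
    return hmac.compare_digest(make_key(customer_id, feature), key)
```

The appeal for vendors is exactly what the comment describes: one build, many price points, and a new SKU is just a new key, not a new installer.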
Even there, however, you end up with odd cases: is something like NFS support "enterprise" software, "developer" software, or both? You know the "enterprise" guys have no problem paying $500/seat to get "enterprise" support, but if they only want NFS client support and pay the $50 "developer mode" upgrade instead, Microsoft might feel like they are leaving money on the table (they aren't, but that is a longer post).
Want more RAM for that mainframe? Have some IBM service rep show up, pop the doors open, move a jumper, and bill your company a massive number of dollars.
People spend their intellectual capital on things that buy them status. If you want devs to spend their time differently, change which activities give status. I don't know how you would do that though.
Intellectual Capital consists of human, structural and network capital. You need to work out the important activities or ingredients and measure where you are and where you want to be, then you can adjust or change your activities to enable you to achieve your new goals.
> [updates] on Linux is a new feature that completely screws up my settings and I have to click to go back to the way it was, or construct my settings using the new paradigm.
I've been using Linux as my primary desktop in the form of Ubuntu since early 2009. What settings are screwed up by updates? I can think of init scripts becoming Upstart and then systemd. Is it that?
I didn't notice anything else in these years, unless we're regarding desktop environments as settings (Gnome 2 to 3 or to Unity). The change of desktop environment has been the most annoying change of the period. Luckily I managed to keep an almost Gnome 2 experience within the Gnome 3 environment using the Gnome fallback mode. However, I wish they had never spent time working on Gnome 3 and had left it as it was. I have the feeling I had to work to undo all their work, which is a pity.
I'm sure there was a 'low disruption path' from 2001 to today but I apparently missed the memo :-). Window system churn Gnome->KDE 3 -> KDE 4-> XFCE, script churn (the whole systemd debacle), USB device churn and device feature churn (mouse tails? mouse scrolling? click to focus, focus under, hover focus?), and display driver churn especially with multiple displays and driver churn. (I've got two in an apparently freakish over / under configuration :-)
I'm guessing, for example, if you use "bleeding edge desktop technology", you'll run into issues with stuff like this constantly because the focus is on flash, not stability. I was really into stuff like that probably 10 years ago, and it was a major headache to keep it up. It's nice that the retail products have dedicated teams to keep stuff tighter, but they needlessly change things way too often. If I'm going to be stuck with other peoples choices anyway, I'd rather choose the team dedicated to changing as little as possible and adding things only when proven to be actually useful.
Lol I always think it's so funny when people are like "I've been using Linux for years, almost never a problem". The last Linux mint update completely crashed the default desktop. I've used many distros including Ubuntu starting back when they would mail you a CD. The problems are endless.
IMO all computers have problems; anyone claiming otherwise is either delusional or a fanboy (one could argue that's two terms for the same problem, btw).
Sadly in recent years Linux has taken on more and more of a Windows like black box experience.
What used to be clearly defined lines between parts have been blurred. And what used to be debuggable by running the same commands manually has turned into heisenbugs mediated over dbus.
Sad part is that while this makes me sound like an old graybeard, I am not much older than the people that are implementing all these changes.
Linux Mint officially does not support upgrades (at all). Their literal directions are to backup your data, re-install cleanly, and then restore your user files (implying it's not recommended to restore settings).
Debian provides a generally well supported and documented upgrade process. However it's the sort of thing that you frequently want an expert on hand to do. Still, many people pay to have their car's oil changed.
I guess it kinda depends what packages and distro you use. I've been running Arch for about 2 years, and the only thing which broke occasionally was nvidia+steam compatibility, so sometimes games wouldn't launch. After switching to AMD, I haven't had any issues on any updates.
Windows has become essentially malware for most of its users. The sad part is that most of them don't actually care or cannot change easily because of existing software.
But I do hope Microsoft improves WSL to the point that I can say: just write for Linux instead, you get Windows support for free [and this is already partly true].
I find it a hard sell to say that any — and every — user should learn every platform of choice they use well enough to be able to modify it.
In fact, I might say it is our duty as developers to make sure the default version of our program is good enough that almost no user should have to change it.
And I believe Windows accomplished that, at some point. Then some asshole decided to add ads to it all, just to squeeze a few more cents out of the users.
>> In fact, I might say it is our duty as developers to make sure the default version of our program is good enough that almost no user should have to change it.
Well said. I couldn't agree more. That's also why I think the "plug-in architectures" which were all the rage a while back are stupid. If a plugin becomes quite popular, that is an indication of a shortcoming of the software that should have been addressed by the developers. Unfortunately, user privacy and security often require defaults that make certain things harder or impossible, so defaults are set to make everything just work. I'm increasingly convinced that forcing people to enable features is better than giving them the option to turn them off when they get too frustrated.
> Windows has become essentially malware for most of it's users
That's a major exaggeration. The home suite's ads are more like tooltips than actual ads and they only appear once on a new install; every paid OS has similar things: Chrome OS, OSX, and Windows alike. Ubuntu has sponsored search. Most people I know like Windows 10; it's largely a small group that refuses to adopt 10, people who are in love with Windows 7, die-hard Apple fans, or Linux diehards, and most of them are scared of telemetry data being collected (which as a privacy "concern" has largely been disproven by Thurrott and others). That being said, I hope Microsoft changes course and gets rid of most of these items. I actually find the one-time reminder to use Edge and the plug for OneDrive vastly less annoying (and more in line with their competition) and intrusive than having to disable "promoted apps." Anyhow, calling it "malware" is utter nonsense.
Maybe you forgot the tracking part? Or the inability of the user to customize the update behavior?
This is malware. And you're paying for it, which is mind-boggling.
Edit: you might think I'm a zealot about this, but I actually started my dev career on Windows 9x back in the day, with Visual Studio no less. When Windows introduced online activation, and Visual Studio (2005?) followed suit, I started to look around.
I'm gonna disagree with the article and say no, the "experts" will not stay. Eventually they might get tired, as I did, with playing whack-a-mole with W10's new forms of adware with every new update and switch to Linux. Just because "there's a way to turn it off" doesn't mean I want to have to dig through these steps each time some new nefarious "functionality" gets added.
In my experience CAD/CAM users are experienced in their speciality, e.g. architecture, mechanical engineering, etc, but often are not very computer literate. It's not due to a lack of intelligence, but they have better things to do than to become IT specialists as well.
People in these fields generally start out in larger firms and thus are only familiar with Windows, as good corporate staffers. Those that go freelance or join startups tend to continue using what they are already familiar with.
The CAD/CAM software vendors are motivated to go after enterprise licenses, so there is little incentive to support anything but the most widely used operating system.
I have the same feeling about the "just use a VPN" argument against net neutrality. That sound you hear is all the normal people in the world being thrown under the bus.
I agree with what you're getting at, but I think you mean Privacy rather than Net Neutrality.
The other commenter mentioned VPN becoming more regular which is good, but also maybe indicates something troubling. If it's regular for people to feel they need to take extra steps to protect their privacy from companies they're already paying then it would seem something's wrong.
> It’s quite possible it might not have been on purpose
The movement of Windows 7 "Professional" features into Windows 10 "Enterprise" editions was purposeful. E.g. monthly payments are required for Enterprise.
It is not easy for individuals (professional users) to buy Win10 Enterprise and LTSB editions. As a result, features/policies can be forced onto individual users. Such policies would likely not be accepted by system and security admins at businesses.
Honest question: are ads for OneDrive on Windows any worse than ads for Chrome on Gmail? As a user, I see them as equally annoying (based purely on the consequential effects and not the implications).
A key difference would be that I don't pay for Gmail, thus I pay through my data and ads. If I want a paid email solution that doesn't harvest my data, or serves me ads such things exist.
Also, having any OS collect your data is an incredible privacy violation, since, Operating systems, by their very nature have incredibly overarching access to essentially everything you do. And then, on top of it all, they serve ads.
Essentially, so far, software was either "sold", ad supported, or data supported, or both ad and data supported. Windows is trying to triple tap.
And again, the OS has access to absolutely everything. Having a platform have access to a slice of your life feels very different from having a platform have access to everything.
I dunno, power users are looked upon with disdain across the board these days. MS, Apple, even in the Linux world, developers will turn their noses up if anyone mentions the term.
They all see "power users" as something rotten, an infestation in their perfectly pruned UX world of grazing users.
Note btw that MS is keeping these ads out of the market where their real money is, enterprise.
To MS, private and SOHO sales are just a pitch point when they go to sell their package deals of volume licensing: one about the reduced cost of training employees.
I believe they took that out ASAP. It's not a great sign that management made the choice in the first place, but backtracking on a bad choice is actually pretty rare for a company these days, and I find the action somewhat promising.