I see two reasons why it has such a bad reputation.
1. It exposed how terrible device manufacturers are at writing drivers. nVidia alone (whose hardware is comparatively niche) accounted for the majority of Vista BSODs. And most printer and scanner companies probably hadn't written a device driver in a decade at that point, so it took another 5 years for them to catch up (and they have just kept piling on bloat ever since).
Yes, there were major architectural changes, but this was the perfect opportunity for them, since it was the first mainstream MS consumer 64-bit OS (I don't count the 64-bit version of XP). Unfortunately the driver situation made things rather different between the 32-bit and 64-bit versions of Windows, which did not help.
2. Pre-fetch/super-fetch, or whatever they called it, was WAY too aggressive. If you had a decent amount of RAM on launch day, or just a regular new computer 6 months after launch, the pre-fetching algorithms were so aggressive that they completely overloaded the hard drive, which performs terribly under that kind of random access load. It meant that the first 10 minutes after boot were spent trying to speed up things you might want to do, at the extreme cost of slowing down the things you actually wanted to do. Yes, the pre-fetching was supposed to run at low priority, but it really exposed how bad spinning hard drives are at multitasking: if doing one task takes 1s, doing two such tasks in parallel can easily take 9 seconds.
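To put toy numbers on that (a back-of-the-envelope Python sketch; the throughput, seek time and chunk size are illustrative, not measurements):

  # Why interleaving two sequential reads murders a spinning disk.
  seq_mb_per_s = 100.0   # sequential throughput
  seek_s = 0.010         # one head seek
  chunk_mb = 0.064       # scheduler alternates streams every 64 KB
  task_mb = 100.0        # each task reads 100 MB (1 s on its own)

  one_alone = task_mb / seq_mb_per_s                    # 1.0 s
  chunks_per_task = task_mb / chunk_mb                  # ~1563 chunks
  both_interleaved = 2 * one_alone + 2 * chunks_per_task * seek_s

  print(f"one task alone:  {one_alone:.1f}s")           # 1.0s
  print(f"two interleaved: {both_interleaved:.1f}s")    # ~33s: seeks dominate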
After enough time this stopped being a problem, as all your freely available RAM had been filled by prefetch or actual programs. If you seldom rebooted, you never had to worry about it. But the regular user wants to use the computer right away after boot, and will only remember the agonizing slowness of trying to start the browser and office applications right after booting.
Compared to Vista, Windows 7 was just a new (much better) taskbar and better-tuned prefetch, with the very important difference that by the time Windows 7 arrived the drivers had matured, and many of them even supported 64-bit systems... But that was all it took for Vista to be seen as a disaster and Windows 7 as an unparalleled success.
- As others have pointed out, UAC was way too active. Just about any application you cared to launch required permission dialogs to be clicked through - irritating to everybody, scary to most users, and quickly ineffective as everybody stopped reading and just reflexively went for the OK button.
- Lots of legacy applications broke for the most trivial of reasons: they were written to store configuration and other data in their installation directory, which defaulted to "C:\Program Files". This worked fine on Windows 9x, which by default allows user-owned processes to do just about anything, but not on NT, where writing to Program Files requires elevation.
So new Vista owners would click through a bunch of obnoxious UAC popups to install their favorite Windows applications, click through more UAC popups to launch them, and then watch them crash or mysteriously lose all their data.
You got extra loser points if you went for the shiny new 64 bit version, in which case your legacy 32 bit application installer was more than likely to try its luck with "Program Files" instead of "Program Files (x86)". Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?
None of this was terribly hard to fix if the application was still supported, but it did require the user to upgrade, often at a cost. Worse, if you were a small indie developer, releasing an upgrade now pretty much required buying an expensive certificate to sign it, lest UAC keep warning your users that they were launching an untrusted file from the scary internet. So lots of small free- and shareware apps which people loved were abandoned, undoing part of the Windows platform's greatest advantage: its large library of existing applications.
Or alternatively, why was breaking them out required in the first place? To this day I frequently end up having to look in two places to find something, because it's never obvious which of the two Program Files it should be in. Pre-64-bit Windows there was only ever the one place. This is a permanent usability regression.
And of course, I wouldn't even need to be digging through there in the first place if the Start menu launcher just worked, but no, they had to junk it up with Cortana, which is so incompetent it can't even find installed applications by name. More details on my Cortana rant here: https://news.ycombinator.com/item?id=15758641
Vista tried to take care of that by transparently redirecting to %LOCALAPPDATA%\VirtualStore, writable with user privileges. The feature is called Virtual File Store, and comes together with an analogous Virtual Registry Store.
See https://msdn.microsoft.com/en-us/library/windows/desktop/bb7... at Virtualization
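A quick sketch of where such a write ends up (Python; the app path is purely hypothetical, and the redirection only kicks in for legacy 32-bit apps without a UAC manifest):

  import os

  # Where file virtualization would land an unelevated write to a
  # protected location (app path is hypothetical).
  protected = r"C:\Program Files\SomeLegacyApp\settings.ini"
  virtual = os.path.join(os.environ["LOCALAPPDATA"], "VirtualStore",
                         os.path.relpath(protected, "C:\\"))
  print(virtual)
  # C:\Users\<you>\AppData\Local\VirtualStore\Program Files\SomeLegacyApp\settings.ini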
"Program files" is localized so it's not even "Program files" in all languages. Installers that looked to that folder were doing it wrong anyway and wouldn't work on non-English machines.
You could, however, change the path of "Program Files" so your point still holds.
The same thing happened with the System32 folder. On 64-bit systems, System32 actually contains the 64-bit(!) versions, and the equally confusingly named SysWOW64 contains the 32-bit versions.
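You can watch the redirection from within a process; a small Python sketch (assuming 64-bit Windows):

  import os, struct

  is_32bit_proc = struct.calcsize("P") == 4  # 4-byte pointers => 32-bit process
  windir = os.environ.get("WINDIR", r"C:\Windows")

  # A 32-bit process that opens %WINDIR%\System32 is silently handed
  # SysWOW64; the real 64-bit System32 is only reachable through the
  # special "Sysnative" alias.
  real_system32 = os.path.join(windir, "Sysnative" if is_32bit_proc else "System32")
  print(real_system32)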
This issue gets discussed a lot, and nothing is being done about it.
Microsoft, and Google with Android (where the same issue can be seen), should step in with a more proactive approach to solving this problem.
I don't have a solution, but they should probably use some business incentives, and possibly something built directly into the OS, like benchmark scores/graphs, ideally compared to a bare-bones configuration of the same device, so everybody can see how much performance you lose to all those preinstalled "features".
A lot of the bloat on the Android phones I see is from Google: apps you can't uninstall without hacking. My Huawei had a few apps from the manufacturer, but they were removable. There are just under 20 Google apps that came preinstalled, many [most?] of which can't be removed.
And even that may be delayed, because the push notifications might be held back to give the radio a chance to sleep and save battery.
I do not need Play Services; what would I even need it for? I'm not even sure what it does, apart from insta-gobbling my 50 MB monthly data plan by downloading updates I don't want and cannot cancel, for applications I do not use.
On my personal phone (Sony), I have only Play Services, Play Store, Gmail, Hangouts, Maps and YouTube enabled. All the others are disabled, including the 'Google' app.
I take issue with this statement. Many apps which "depend" on Play Services work fine on my Google-free Android. Some examples are Tutanota, Duolingo, and some games. Though it's not an easy path, I wouldn't consider my cellphone experience "Kindle-like".
You cannot physically delete them, because they live on the /system partition, which is read-only. That means that even if you deleted them as root, you would not get more space for other apps or your data. However, the read-only /system has more functions that you would lose: it has a known file layout (so your phone can be image-updated, if you ever get an update), it is signed (so you can know your phone has not been tampered with, as it is not going to re-sign itself once modified), and it also serves factory-reset/software-recovery purposes, so once you wipe /data, your phone will be in factory-mint condition (software-wise, of course).
I'm not familiar with the /system partition, but it seems logical that if I can delete them as root, I can also install something else in their place or put some of my data there, which would help me a lot, as my phone does not allow for an additional SD card.
If your vendor prevents disabling the apps, you can still try the route using adb and pm (google for "adb pm disable"; typically something like adb shell pm disable-user --user 0 <package.name>).
The point I was making about /system is that you don't want to mess with it, even if you have root. You can break more than you think, including dm-verity, and then your phone is not going to boot anymore. Also, apps installed on /system get their updates installed into /data, so deleting them is not going to solve your space problem anyway. You would have to repartition your phone, which on ARM platforms opens a new can of worms (partitions are defined in the secondary boot loader, which is signed too; moreover, if you do this wrong, you get a brick, and you are not going to boot again without reflashing the original SPL with an external programmer).
Seemed like. Whenever I quoted one to a customer they always turned their nose up at the price, then paid me for several hours to debloat the thing and fix a driver that shipped faulty, then a year later paid me again to upgrade it! Oh, and to replace the useless battery. The list goes on!
Case in point: Sony. The amount of CRAP that comes with the Xperia is insane. There was an uninstallable "What's New" app that would notify incessantly whenever it wanted to push some new app that Sony probably made money shilling.
And never mind the Google crap.
The day I dumped it and installed LineageOS, my phone became usable again.
I remember an update downloading itself and applying itself at shutdown then restarting to apply itself some more and looping like this indefinitely. Best update ever \o/
Still, Vista was a disaster.
I remember a conversation I had with an MS engineer at that time:
- Vista is like, the foundations for the good things to come. If you want a solid house, you dig solid foundations.
- I am buying a house, not just pillars in the ground.
(It was the same with the W3C specs: "maybe Mozilla and Opera are the ones misreading the box model spec and IE has it right"; me: "MS is on the board...").
A better example would be file line endings, where Microsoft did get it right (\r\n) and all the other OSes screwed up, using just \r or \n.
The biggest problem is that each OS went its own way (Mac started with \r but of course uses \n now). If they all had the same line ending all along, whatever it was, no one would think much about it.
\r\n has the obvious disadvantage of being twice the size, along with making it possible to land in the middle of a line ending instead of before or after one.
Of course one advantage would be if you're controlling physical equipment where carriage return and line feed are independent of each other. I learned to program in 1968 on a Teletype ASR33 where CR and LF were literal commands to return the carriage to column 1 and advance the paper. You had to use both because they did two different things. Or on occasion you might use CR by itself to overprint a line. LF by itself was pretty rare, but would do what you expect if you used it: advance the paper without moving the print carriage.
CR LF was fine if you were typing interactively - in fact you just had to hit the CR key and the remote system would provide the LF. But usually we would punch our programs on paper tape, dial in, run the tape through and get the printout, and hang up right away. At $30/hour in 1968 dollars, this saved a lot of money. And of course you would run your tape through locally to print out and proofread your program before testing it online.
To be able to print a tape locally, you needed both CR and LF, but even that wasn't quite adequate. You really wanted to allow a little extra time for the machinery to settle, so the standard line ending we punched on a tape was CR LF RUBOUT.
RUBOUT was a character that punched out all the holes in a row of the paper tape. It was ignored by convention, so you could erase a typing error when punching a tape by pushing the backspace button on the tape punch and hitting the RUBOUT key.
Because it was ignored, RUBOUT was also useful as a "delay" character in the newline sequence. So I guess I'll never get over the feeling that the One True Line Ending is: \r\n\x7F
(Nah, I'm happy with \n, but it makes a good story.)
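These days most runtimes paper over the difference on read anyway. Python's text mode, for example (file name hypothetical):

  # "Universal newlines": text mode maps \r\n and \r to \n on read, so a
  # program sees one convention regardless of where the file came from.
  with open("dos_file.txt", "w", newline="") as f:  # newline="" writes verbatim
      f.write("one\r\ntwo\r\n")

  with open("dos_file.txt") as f:                   # default text mode
      assert f.read() == "one\ntwo\n"               # translated on read

  with open("dos_file.txt", "rb") as f:             # binary mode: raw bytes
      assert f.read() == b"one\r\ntwo\r\n"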
For example, you could define that your file consists of fixed-length records (like the old punch cards); in that case each line doesn't have a line separator at all: the \n or \r is not stored on disk, but when you read a line using the C routines, one will be added.
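Roughly this idea, sketched in Python (the record length and blank padding are illustrative; real RMS supports several record formats):

  # A fixed-length-record file in the spirit of VMS RMS or 80-column card
  # images: no separator bytes exist on disk, so the runtime synthesizes
  # the \n when handing the program a "line".
  RECLEN = 80  # illustrative record length

  def read_lines(path):
      with open(path, "rb") as f:
          while True:
              rec = f.read(RECLEN)
              if not rec:
                  break
              # strip trailing blank padding, then add the synthetic newline
              yield rec.rstrip(b" ").decode("ascii") + "\n"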
I am not sure what C or RMS has to do with this
https://en.wikipedia.org/wiki/Internet_Explorer_box_model_bu...
https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing
https://css-tricks.com/international-box-sizing-awareness-da...
Well, IE implemented their version of the box model in '97 (coincidentally, NN4 did the same); the box-sizing property was first proposed in 1999, appeared in a draft that same year, and was implemented in Gecko in 1999 and in IE5/Mac in 2000.
That's two years from IE/NN shipping the non-standard box model (the standard one was defined before IE4 and NN4 shipped) to having a property to toggle between them. To me, that isn't "many years".
Really, what makes it seem like many years is the fact that IE didn't implement box-sizing until IE8, which shipped in 2009.
https://lists.w3.org/Archives/Member/w3c-css-wg/1999JanMar/0... (sorry, W3C member-only space, but I think at this point nobody cares if I mention that publicly)
IE was stupid to not implement the spec, the spec was stupid for not doing it the way IE did.
Other than that, I don't see it being any more right. But it is a convention that is far older than Windows or MS-DOS. I first saw it myself on CP/M, but it was there on VAX/VMS, and I expect the Teletypes had it from the 1960s.
* H. McGregor Ross (1964). "The I.S.O. character code". The Computer Journal 7(3), pp. 197–202. DOI 10.1093/comjnl/7.3.197.
* Jerome H. Saltzer and J. F. Ossanna (1970). "Remote terminal character stream processing in Multics". Proceedings of the AFIPS Conference 36, pp. 621–627. DOI 10.1145/1476936.1477030.
But mostly I was just reacting to the silly idea that CRLF would be right and a lone LF wrong: Microsoft didn't come up with that idea. It was already there in the 1960s, and it was perhaps useful when you didn't want to write line drivers to convert strings on output. But that doesn't really make it any more "right" than other conventions.
You left out a key qualifier: CR/LF is the semantically correct way to represent a newline on a physical device that has a physical carriage that has to move back to x=0 and advance one line in the y direction in order to start a new line. What devices connected to any computer today have that property? Answer: none.
In fact, even on computers that have such devices connected, the semantic meaning of "newline" in any file the user actually edits is most likely not to actually cause a CR/LF on the device. Word processing programs for decades now have separated the in-memory and on-disk file data from the data that actually gets sent to a printer. So the file you are editing might not even have any hard "newlines" in it at all, except at paragraph breaks--and that's assuming the file format uses "newline" to mark paragraph breaks, instead of something else.
Neither world is clean, pure, and free of weirdness that's only properly understood when looking decades in the past.
Dot matrix printers are surprisingly common as they are still cheaper to run than the alternatives. That's at least one.
It's worse than that. His position was: MS's understanding and implementation of the box model is the correct interpretation of the W3C specs (yeah, I know).
IMHO, what SHOULD happen is that if you have a device with special timing requirements (like the old-fashioned printers with no memory), then the driver is responsible for handling the timing. Adding weird bits to everyone's files is a bad idea.
And yes, I know the difference between carriage-return and new-line. And I know that in the old C specs, "\n" didn't have a guaranteed mapping to either.
Anyway, how is a consumer product that provides a bad experience to the user (regardless of the reasons) "amazing"?
Vista sped up file copying operations but fixed that bug, leading to a faulty perception of slowness. Worse, the progress bar behavior encouraged that perception. A progress bar that speeds up at the end is perceived as faster than one that is perfectly even, which in turn is perceived as faster than one that slows down at the end. And Vista's progress bar usually slowed down at the end, because it didn't properly account for those disk sync et al. operations ahead of time. The result was an experience that felt worse despite what was happening under the hood.
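A toy sketch of that estimation failure (Python, all numbers made up):

  # A copy dialog that derives progress from bytes copied, but forgets a
  # final sync/flush phase, races to 99% and then stalls there.
  copy_s, sync_s = 8.0, 4.0         # illustrative: 8 s copying, 4 s syncing
  for t in range(1, 13):            # wall-clock seconds
      shown = min(t / copy_s, 0.99)    # bar driven by bytes copied only
      actual = t / (copy_s + sync_s)   # true fraction of the total work
      print(f"t={t:2d}s  bar={shown:4.0%}  actual={actual:4.0%}")
  # The last third of the operation is spent visibly "stuck" at 99%,
  # which feels slower even though the total time is unchanged.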
What a great new perspective.
The only time I can think of it ever coming in handy was when the user had an unprivileged account and needed an admin to type in the password - hopefully the admin would ask why and dig into it before dismissing the UAC prompt.
That's pretty much the best-case scenario. A lot of chipmakers (Conexant, JMicron, Intel in many cases) don't allow you to download drivers directly from them, so you are stuck with whatever the OEM provides. In some cases I've found that newer laptop models from the same OEM use the same audio/media controllers under the hood, and I can use the newer driver from the updated model.
My next computer will be something from Microsoft's Surface line. They seem to be the only manufacturer who can make proper devices (everything working and power bricks which last more than 6 months - thanks Apple).
I wouldn't ascribe anything that generous to a Surface device as long as they still use the Marvell wireless cards, which have a storied history of causing connectivity problems. I largely enjoyed my Surface Pro 3 but frequently had the same issues. The Surface Book seems to have issues with sensing connectivity between the keyboard and the screen as well. Anecdotally, at work we even had a developer HoloLens go paperweight because the wifi stopped signalling. MS told us to junk it, unrepairable.
Face it: with sample sizes this big, you're not going to find anything that always worked for everybody. Anecdotally, having worked at a Dell-only place and typing this on a 7-year-old XPS, I never encountered severe issues. Only standard hardware problems caused by wear (HD/memory/keyboard keys failing after 5+ years of usage).
For me, part of the solution was to go into the device manager and edit the properties for my mouse and my network controller. On the "power management" tab I disabled the "allow this device to wake up the computer" option. I only use the keyboard to wake the PC.
Additionally, when I left the machine sleeping overnight, there was some scheduled task that would occasionally wake the machine. There is a way to disable that, but I forget the specifics.
I really wouldn't be bothered by occasional wakeups if it would go back to sleep afterwards...
The ad campaign featured people spotting deer at dawn from their home office, and the message was: this will completely change your life and bring out your inner sense of wonder, as if you were born again, a new person in a brave new world.
In reality, it was an OS upgrade that didn't work too well and that was more-or-less forced on you if you bought a new device, while at the same time XP continued to work just fine on all your older PCs.
People were disappointed, upset and angry. All other things being equal, a little more humility and a lower profile would have helped.
In their defense, the Windows Driver Model (https://en.wikipedia.org/wiki/Windows_Driver_Model) may make it possible to wring the last bit of performance out of a system, but it doesn't make it easy to write a driver. Its documentation was also of the "once you know what this page is trying to tell you, you will be able to understand it" variety.
It also didn’t help that new hardware frequently introduced new sleep state levels at the time.
This is true of all Windows versions, but was particularly true of Vista, and of course the reality was:
* Very few people carry out a clean install when they get a new computer. This is as true today as it was ten years ago
* Hardware manufacturers loaded the PC's up with terribly written adware before shipping (this situation has improved slightly)
The requirements for Windows 10 aren't that much higher than Vista's. So the average person would get their new Vista PC running a Core 2 Duo/Pentium D and 1-4GB of DDR2 RAM, loaded up with crapware, wouldn't do a clean install, and it would run horribly.
By the time Windows 7 came out, the PC manufacturers were writing slightly more efficient crapware, hardware was generally a bit more powerful, and MS had fixed a tonne of bugs in the OS itself.
MS updates have made it worse because they stopped caring about this segment. Same story with phones on Windows 10 mobile.
Linux and Windows together dominate this market so thoroughly that everything else (UNIXes, BSDs, macOS) is practically a rounding error.
Continuous integration and deployment are all done with Microsoft agents orchestrated by VSTS (Microsoft's hosted version of TFS). Yes, we use git.
Easy to maintain and no performance issues.
I can approve and deploy a release from my iPad (the website is painful on my phone) by logging into Microsoft's Visual Studio Team Services website.
I won't even start to gush about how easy setting up a build and release pipeline is in VSTS compared to the other tools I've used.
Also, with Unix servers, various bastion hosts and similar "security measures" are a minor inconvenience, and are usually even supported by automation tools, while on Windows this usually ends up being a major PITA.
1) You can install an SSH server on Windows boxes just fine, then use PuTTY to SSH directly into PowerShell. PowerShell is not a classical shell, but rather a REPL for a procedural, imperative and object-oriented DSL for system configuration and administration based on .NET, with much saner syntax than my beloved zsh. In short, it works quite well.
2) With PowerShell remoting capabilities - I'm a bit fuzzy on the details here, it was a long time ago - you don't even need the SSH server; you can issue remote commands from your local PS instance. It required a bit of configuration up front, IIRC, but then you could replace your local session with a remote one with a single command (Enter-PSSession, if memory serves).
So, in my experience - and note that it was probably nearly a decade ago! - Unix-style remote management was absolutely possible and not that much less convenient. And PowerShell is really a solid tool, with easy access to all of .NET and all of the system; the only annoyance I remember was certificate/signature management, dunno if it got any better.
But part of the problem is I have a real prejudice toward local agent based solutions with a central server coordinating everything.
There was a time when our net ops team did something and I couldn't Remote Desktop into a server to do something urgent, and of course I couldn't just SSH into it, so I had to write a quick PowerShell script and deploy it via VSTS to make the change. It was ugly.
Besides, I already have sane deployment groups and tags defined by server environment and function. I might as well leverage them.
All those glitches, though, came from MS's far too aggressive and unrealistic plans for Vista. A couple of years before it launched (2003, I think) I was heavily involved in the Mozilla world and had a high-ish profile in the community (I ran MozillaNews.org and was a long-time triager). Robert Scoble tried to hire me to be a bridge between MS and Mozilla, a tech evangelist for features of Longhorn (as Vista was known then) that could help Mozilla, or really features that Mozilla could be a showcase for and serve as a tech ad for MS. I set aside my suspicions and gave it a try, learning about the technical side of Vista. I learned a lot, and wound up not taking the gig. I told him I didn't think these things had any real benefit for a cross-platform application like Mozilla, and that I had real doubts they'd have any real impact on the market even if they were delivered, which itself I strongly doubted.
The three tentpoles MS wanted Mozilla to use were:
1. I told Scoble that I saw no benefit in Avalon yet, as in 2003/4 Mozilla wasn't really about to dedicate lots of time and attention to coding for some new graphics API that wouldn't be launched for years. He said it would be out much sooner. I said I had my doubts, given its rather early stage of development.
2. "Imagine users knowing their online banking and purchases are 100% secure thanks to the hardware and their OS!" I said I thought the idea was rubbish, a nonstarter, and I hoped it failed.
3. WinFS. My arguments were simple, "apps like this don't care about the FS. Plus, it'll never launch. I have zero faith this feature will be out before 2010. Filesystems are hard, and MS has a long history of cutting features to get products out the door. This is a prime target to be cut."
He argued it was solid and amazing, etc., as a good tech evangelist should, but in the end I said no to the whole deal. I couldn't in good conscience push tech that I didn't believe in and didn't even think would ever be released. They were hell-bent on shoving all this and more into Longhorn, rather than doing a smaller release in 2004 and finishing the other features later. And thus we got Vista. Lots of great tech, rushed out the door, and poorly configured.
One of the things Fathi (OP author) writes is
> [. . .] ecosystem partners hated [Vista] because they felt they didn’t have enough time to update and certify their drivers and applications as Vista was rushed out the door to compete with a resurgent Apple.
This goes some way to mitigating the characterization of device manufacturers as terrible at writing drivers. When considered in the context of "a resurgent Apple", it also provides a counterpoint in the specific example of nVidia as a niche hardware manufacturer.
2006 was the year Vista was released; at that time, Apple was shipping the quad-core Xeon Mac Pro with macOS Leopard (later Snow Leopard), which came with an NVIDIA GeForce 7300 GT video card.
I used this particular computer all the way through Mac OS X 10.7 (Lion), and if memory serves, I had a handful of kernel panics over the course of 6 years. From all I could tell, nVidia's video card drivers on Mac OS never interfered with daily operation (the machine was up 24/7, as it was also an authoritative DNS server for my personal domains).
So, device manufacturers may be bad at writing drivers, but those drivers also depend on stable and reliable APIs in the target OS, and such details have to be communicated between the two teams. Device drivers are an interface between host operating systems and embedded hardware systems. As such, the reliability of any device driver depends on the sharing of information between the OS and driver teams just as much as, if not more than, on the competence of the driver engineers.
EDIT: remove extraneous words, a couple proper nouns where appropriate, punctuation.
And the problems in UI. And...
In short, it was really a version to avoid, especially as an upgrade. On a new computer it could have been somewhat acceptable. But amazing... no.
I'm on mobile now, but from memory you right click the start button and go into taskbar settings.
Personally, I have both set to "only when full". I miss that feature on Cinnamon.
I never liked Vista; I can't say exactly why, but I think the UI/UX annoyed me. Every time I boot Win7 I feel at home (almost like NT5 or 95).
I'm inclined to believe this was on purpose, and not just in a "we just want to make the OS prettier for users" way. I think there used to be a theory that Microsoft did this to help PC manufacturers and chip makers sell more hardware, too.
Microsoft's mistake was that the resource requirements were too high compared to XP, so like 90% of the PCs running XP were useless when running Vista.
To this day, Win7 is arguably the best OS (supported till 2020), followed by the aging XP.
1. I dare you to try it on a computer with a 5400 RPM hard drive. A fresh installation will spend the majority of its time sitting at 100% disk usage, as telemetry, superfetch, Defender and more all eat up the entire ~2MB/s of hard drive bandwidth for more than 30 minutes after boot. And then, halfway through your day, it'll decide to start recompiling .NET or some other package with zero user notification, and your computer will grind to a halt. But hey, you're resourceful, right? Just disable those services! Nope, too bad. Every version of Windows (including the Creators Updates) has made it harder and harder for a user to disable features that break. Services get moved to TRUSTEDINSTALLER, an account that you can't override. Those services get restarted without asking you, some in as little as a couple of hours. And Windows Defender will restart itself, and the before-last Creators Update KNEW people were disabling it, so they moved the link in Metro/Settings somewhere more obscure.
2. I just spent Friday trying (and failing) to "fix" Windows for a client, on Microsoft's behalf. He bought a "Windows 10" laptop with a 30 GB SSD. Too bad: Windows alone took 97% of the entire drive. I removed literally every application (including the 1 GB Avast) except for Chrome and Windows 10 itself. Every time I freed up space, removed the hibernation file, ran any and all of the disk cleanup stuff... Windows would fill it right back up with patches.
It also had a 2.6 GB "C:\recovery" folder. I checked online and they said "Feel free to delete it, it's from an old OS." I tried deleting it: no permissions, even as an admin. I went in, changed the owner from the glorious TRUSTEDINSTALLER, and made myself the owner of all the files. I deleted some of them, but one file refused to go. The file? 2.6 GB. It said the file was open in "Windows Provisioning". I checked Windows 10 backups, restore points, file history, all that jazz. Zero.
I check online. Maybe I'm insane. What are the Windows 10 requirements for hard drive space? Oh yeah, 15 GB. So there are "lies, damned lies, and Windows hardware requirements."
Meanwhile, Windows 10 keeps spamming that "You need to free up space to continue downloading windows updates!!!"
Really? REALLY? Thanks for the update.
I download Process Explorer. They say to use the handle search to find what has the file open. I do it. ZERO RESULTS.
I download a tool that lets you queue a file for deletion on reboot, before any program can acquire the file lock. It queues it up. It runs. It fails. And there's still NOTHING in services.msc with "Provisioning" in the name.
Okay, change gears. ALL they want is to freakin' install Office 365 on their craptop. They've got a 32 GB SD card.
By now, after clearing up at least 4 GB, C: is now down to 100MB free.
I download the 5 MB Office auto-installer. It fails with a pop-UNDER error that you don't notice at first, beneath the loading screen. Okay, instead of giving you a description, it gives you an obscure error code. Clicking it at least brings you to the KB article for "out of disk space." Lovely.
I load up the Microsoft website, I find an alternative downloads link. I find the offline installer.
But back to the task at hand! I load up the same link on this slugger of a Windows laptop and I go "Fine, I'll download it to the SD."
I download it for ~40 minutes. Why? Because it's 4.6 FREAKING GIG for the offline Office suite. What basically boils down to an e-mail client and a word processor is larger than an entire Linux distro with apps. (<- Yeah yeah, there are more apps, but I'm pissed at this point so I'm taking comedic liberty here.)
So I wait, and it finally downloads to the SD card, and at 99%, it stops and goes "download failed." There must be some Chrome bug with temporary space or something.
Well! I'm not defeated yet--this is my job and I'm paid for results. I've got a USB flash drive and my Linux laptop (read: running an OS that actually works and can be configured and fixed by the end user).
I go to the same website as before with my Linux netbook. But wait, the page... it's... different?
Everything is the same except that wonderful offline installer link? Removed from the page. That's right: go there with Windows and then with Linux, to the same Microsoft download links, and they will intentionally hide the offline/ISO links and give you only the auto-installer link, to ensure you're going to run it on a Windows system. So customer friendly! (They do the same thing with Windows 10 ISOs, try it out.)
At that point the laptop's owner had to drive 3+ hours back to his office, so he had to take the laptop with him.
I spent at least half a work day trying to (fight Microsoft to) free some space... on a machine that 100% meets the Windows system requirements.
Thanks, Microsoft. I wonder why I do all my game and app dev on a Linux box these days. It's almost like I enjoy feeling like I own the machine I paid for. Could you imagine having to go through all of this anti-consumer, anti-solution hassle when doing hardware upgrades? What if you couldn't open the case on your machine without getting a "poweruser" license key from HP first? After all, they're just trying to protect you, and they know how to run their hardware better than you do. The more you look at that analogy, the more insane it becomes how much we let Microsoft get away with bricking our own machines. The answer to a working machine should never be "throw it out and buy a new one" when simply changing a config setting (if you were allowed to modify those registry values - sorry!) would suffice.
"I see you're trying to turn your SSD onto ACPI mode. Have you purchased an Enterprise SSD license yet?"
For reference, the absolute minimum requirements are 16 GB for 32-bit and 20 GB for 64-bit. So in theory your client's laptop should work, but it'd probably be a poor experience. (Likely also a bad experience with modern Linux on 30 GB.) Given that your client's Windows 10 laptop has an "old OS" on it, I think there's some info missing in this story. A fresh laptop shouldn't have an old OS install on it. (Or maybe this is OEM recovery gunk?)
I just checked my laptop and the Windows folder is 18.7 GB. Did your client's laptop have a Windows.old folder taking up a bunch of space? Large updates to Windows will create these. You can whack it if you need the space. (It should also get deleted automatically after 10 days.)
Disclosure: Microsoft employee
Literally just typed "raspbian minimum card size" in Google and Google dug up this as the top result:
"/Pi Hardware /SD Cards.
The minimum size SD card you can use for Rasbian is 2GB, but it is recommended to get a 4GB SD card or above. Card Speed. A Class 4 card, which is the minimum recommended has an average read/write speed of 4 MB/sec."
The default packages include things like webkit and libre office, so it looks to be a fully functional Linux install on a popular piece of hardware.
Now, 4GB still seems dangerously small. But if all a client wanted was office plus web, I bet someone like OP could make a workable system within that size limit without Raspbian filling the emptied space with updates.
Raspbian runs on 128 MB RAM or whatever.
Ultra-cheap computers with eMMC flash drives with pathetic read/write speeds. And pathetic other parts. Such as this charmer from Walmart: https://www.walmart.com/ip/Teqnio-ELL1103T-11-6-Laptop-Touch...
It also boggles my mind how, still to this day, it's so hard to get a lower cost desktop or laptop that ships with an SSD, despite the fact that SSDs offer up such a performance improvement that many people consider them mandatory. The average consumer will have a much better experience with a computer that ships with a 128 GB SSD than a 1 TB HDD, yet every manufacturer is offering plenty of the latter (at 5400 rpm no less) and none of the former at sane price points. The two components even have similar costs now. In this era of streaming everything, the average person really isn't using much hard drive space. I know that my non-technical family members certainly aren't.
I just got my mom a $450 refurbished 2012 Dell workstation for common desktop use (mostly email and word processing). She loves it. It's night-and-day faster than the machine it replaced. And the single biggest performance improvement in it comes from, you guessed it, the SSD. A $450 five-year-old used workstation is trouncing any modern desktop in the sub-$1,000 range in practical performance. I would've gotten her a new one, but couldn't find anything in the price range that has an SSD, and the kinds of computers that do ship with SSDs also tend to have unnecessarily upgraded (and costly) processors and graphics cards, which are only useful for gaming.
(Oh, and the used workstation has a Core i7 in it too, so it's not exactly a slouch along any dimension except for 3D graphics performance.)
Don't buy 5-year-old hardware second-hand; it's a poor investment, and I speak from experience. Hardware has a limited lifespan, and then it just dies. The hard drive, the motherboard or the screen fails without notice and you're screwed.
As for hardware endurance, I don't think you're giving quality hardware enough credit. I've owned a lot of computing hardware in my lifetime, and the only failures I've ever experienced have been fans going bad (which is easy to fix) and spinning hard drives crapping out. Oh, and I dropped a laptop really badly one time and broke it that way, but that's not really the hardware's fault. Solid state components last quite a long time.
Entry-level cars in Europe are in the range of $5k to $10k; I'm not sure which end of that range they're closer to. They are certified for regulations and safety.
I've certainly had some hardware, and I've seen everything die sooner or later. My ordering would be: rotating hard drive, then gaming GPU, then display, then motherboard.
Never seen any computer reach 10 years without any replacement. You're significantly past half life when buying 5 years old.
I've seen plenty of computers last >10 years. So, we'll see how this one goes. Even if one component does need replacing at some point, it'll likely still have been the best choice. Nothing else offers that kind of performance at a remotely comparable price point unless you're willing to build a PC from scratch.
Yes, Europeans generally speaking have smaller cars than Americans. All cars have manual transmission.
Not 10 years with all original components.
I just did an install of modern Linux (the latest CentOS 7, with Gnome deskop), so I can check. The root partition is using at the moment 4.2G, plus a 2.0G swap partition and a 1.0G boot partition. So if this were a 30G disk, I'd have more than 20G left, even after installing a few applications.
EDIT: For a more realistic number, I just checked my Arch-Linux-based home server, which has a fairly small installation (including some multimedia and X11 stuff for PulseAudio, mpd and youtube-dl, though). Needs just over 2 GiB for the entire system and applications:
  df / -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdc2       110G  2,2G  103G   3% /
So consumer-oriented distros don't seem to be nearly so much lighter than Windows. With the caveat that Ubuntu probably comes preloaded with more apps, like LibreOffice.
I just installed Ubuntu 16.04 a month ago (new laptop), and with /home/ on its own partition, after installing everything I use regularly, root is only using 9.3G
I have 17.10 in a VM and it takes 8.7 GB on disk, with a few extra apps compared to the default install.
Somebody who doesn't work in the IT industry - the target demographic of Windows.
I'm also surprised it's possible - but if it cuts the price, then why not? I assume 10GB more than the minimum is enough to earn the Windows badge.
But I don't think the crux of the issue was lack of disk space.
Maybe the manufacturers could get 128GB disks for as little cost as the memory card.
As you're an MS employee, you'll know about WIMBoot and whatever the newer, less stupid version of WIMBoot is.
Also, I'd never heard of WIMBoot. Being a Microsoft employee doesn't make me a Windows expert. I don't work on Windows and don't own any systems that WIMBoot targets.
Looking at WIMBoot, it doesn't seem relevant for the case discussed here, either, since this client clearly didn't have the small space usage WIMBoot enables.
There are very many machines sold now, today, with Windows 10 (and previously Windows 8) that have only 32 GB drive space.
With WIMBoot, any updates would eat drive space, and that space was not recoverable; this would sometimes prevent updating to Windows 10. MS says as much here: https://blogs.windows.com/windowsexperience/2015/03/16/how-w...
> The reason Windows 8.1 devices using WIMBOOT are not yet able to upgrade to Windows 10 is because many of the WIMBOOT devices have very limited system storage. That presents a challenge when we need to have the Windows 8.1 OS, the downloaded install image, and the Windows 10 OS available during the upgrade process. We do this because we need to be able to restore the machine back to Windows 8.1 if anything unexpected happens during the upgrade, such as power loss. In sum, WIMBOOT devices present a capacity challenge to the upgrade process and we are evaluating a couple of options for a safe and reliable upgrade path for those devices.
There's a complex workaround of "delete everything, and use two USB sticks" which isn't great for the target user.
The new file compression stuff is much much better than WIMBoot.
And I purchased one last year, because it was outrageously inexpensive and had an i5-7500U, a nice 1080p touchscreen, a miserable spinning hard drive, and a miserable amount of socketed RAM.
I consider replaceable hard drives and socketed RAM to be a feature, one I promptly made use of, and I now have a machine which is quite competitive with machines costing >$1k more than what the machine+HD+RAM cost me.