
Vista was amazing.

I see two reasons for why it has such a bad reputation.

1. It exposed how terrible device manufacturers are at writing drivers. Nvidia alone (which only makes niche hardware) accounted for the majority of Vista BSODs. And no printer or scanner company had written a device driver in probably a decade, so it took another 5 years for them to catch up (and they have just continued creating bloat ever since).

Yes, there were major architectural changes, but this was the perfect opportunity for them, since it was the first mainstream MS consumer 64-bit OS (I don't count the 64-bit version of XP). Unfortunately, the driver situation made things rather different between the 32-bit and 64-bit versions of Windows, which did not help.

2. Pre-fetch/super-fetch or whatever they called it was WAY too aggressive. If you had a decent amount of RAM on launch day, or just a regular new computer 6 months after launch, the pre-fetching algorithms were so aggressive that they completely overloaded the hard drives, which perform terribly under that kind of random-access load. It meant that the first 10 minutes after boot were spent trying to speed up things you might want to do, at the extreme cost of slowing down things you actually wanted to do. Yes, the prefetchers were supposed to run at low priority, but it really exposed how bad spinning hard drives are at multitasking: if doing one task takes 1s, doing two such tasks in parallel can take 9 seconds, etc.

After enough time this wasn't a problem, as all your freely available RAM had been used up by prefetch or actual programs. If you seldom rebooted, you never had to worry about it. But regular users want to use the computer right away after boot, and will only remember the agonizing slowness of trying to start the browser and office applications right after booting.

Compared to Vista, Windows 7 was just a new (much better) taskbar and better-tuned prefetch, with the very important difference that by the time Windows 7 arrived the drivers had matured and many of them even supported 64-bit systems... But that was all it took for Vista to be seen as a disaster and Windows 7 an unparalleled success.




I built a new PC in 2007, bought Vista Ultimate OEM, installed it and added all patches up to that point. Two things became immediately obvious:

- As others have pointed out, UAC was way too active. Just about any application you cared to launch required permission dialogs to be clicked through - irritating to everybody, scary to most users, and quickly ineffective as everybody stopped reading and just reflexively went for the OK button.

- Lots of legacy applications broke for the most trivial of reasons: they were written to store configuration and other data in their installation directory, which defaulted to "C:\Program Files". This worked fine on Windows 9x, which by default allowed user-owned processes to do just about anything, but not on NT, where writing to Program Files requires elevation.
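A minimal sketch of the fix, in Python, with a hypothetical app name - the point is just that per-user configuration belongs under the user profile (%APPDATA% or equivalent), not next to the executable:

```python
import os

def config_dir(app_name="FooApp"):
    # Broken legacy pattern: writing next to the EXE, i.e. somewhere
    # under C:\Program Files - which requires elevation on NT.
    # Correct pattern: a per-user, always-writable location.
    base = os.environ.get("APPDATA") or os.path.expanduser("~/.config")
    return os.path.join(base, app_name)
```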

So new Vista owners would click through a bunch of obnoxious UAC popups to install their favorite Windows applications, click through more UAC popups to launch them, and then watch them crash or mysteriously lose all their data.

You got extra loser points if you went for the shiny new 64 bit version, in which case your legacy 32 bit application installer was more than likely to try its luck with "Program Files" instead of "Program Files (x86)". Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?

None of this was terribly hard to fix if the application was still supported, but it did require the user to upgrade, often at a cost. Worse, if you were a small indie developer, releasing an upgrade now pretty much required buying an expensive certificate to sign it, lest UAC keep warning your users that they were launching an untrusted file from the scary internet. So lots of small free- and shareware apps which people loved were abandoned, undoing part of the Windows platform's greatest advantage: its large library of existing applications.


> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?

Or alternatively, why was breaking them out required in the first place? To this day I frequently end up having to look in two places to find something, because it's never obvious which of the two Program Files it should be in. Pre-64-bit Windows there was only ever the one place. This is a permanent usability regression.

And of course, I wouldn't even need to be digging through there in the first place if the Start menu launcher just worked, but no, they had to junk it up with Cortana, which is so incompetent it can't even find installed applications by name. More details on my Cortana rant here: https://news.ycombinator.com/item?id=15758641


I absolutely agree with this. There should have just been a 'Program Files' folder and if a conflicting program was already installed, the architecture could be appended to the name of the newer install's directory (C:\Program Files\Foo, C:\Program Files\Foo (x64)).


I tried to find an authoritative answer from Raymond Chen, and the closest I can find is https://blogs.msdn.microsoft.com/oldnewthing/20081222-00/?p=... and linked http://brandonlive.com/2008/12/22/why-does-windows-put-64-bi...


>applications broke for … they were written to store configuration and other data in their installation directory, which defaulted to "C:\Program Files"

Vista tried to take care of that by transparently redirecting to %LOCALAPPDATA%\VirtualStore, writable with user privileges. The feature is called Virtual File Store, and comes together with an analogous Virtual Registry Store.
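The redirection is essentially a re-rooting of the protected path under the user's profile; a rough Python sketch (the paths and helper are illustrative, not the actual OS logic):

```python
import ntpath

def virtualstore_path(path, local_appdata=r"C:\Users\alice\AppData\Local"):
    # A write to a protected location such as C:\Program Files is
    # transparently redirected to a per-user shadow copy: drop the
    # drive letter, re-root under <LOCALAPPDATA>\VirtualStore.
    drive, rest = ntpath.splitdrive(path)
    return ntpath.join(local_appdata, "VirtualStore", rest.lstrip("\\"))

print(virtualstore_path(r"C:\Program Files\Foo\settings.ini"))
# → C:\Users\alice\AppData\Local\VirtualStore\Program Files\Foo\settings.ini
```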

See https://msdn.microsoft.com/en-us/library/windows/desktop/bb7... at Virtualization


> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?

"Program files" is localized so it's not even "Program files" in all languages. Installers that looked to that folder were doing it wrong anyway and wouldn't work on non-English machines.


"Program Files" isn't localized, at least not on the French version of Windows. Only user content is ("My Documents", "Desktop", ...), and even then, some of them are just links to non-localized directories.

You could, however, change the path of "Program Files" so your point still holds.


You are correct now but Program Files was fully localized in XP and below. In Vista and beyond, it uses junction points to the non-localized names (something I just learned).


> Would it really have been so terrible to leave the old directory alone and call the new one "Program Files (x64)"?

The same thing happened with the System32 folder. On 64-bit systems, System32 actually contains the 64-bit(!) versions, and the equally confusingly named SysWOW64 contains the 32-bit versions.
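A sketch of the effective mapping (a deliberate simplification with a hypothetical helper - the real mechanism is WOW64 file-system redirection, and details like the Sysnative alias are ignored here):

```python
def system_dir(process_is_64bit, windows_is_64bit=True):
    # On 64-bit Windows, both bitnesses ask for System32, but WOW64
    # silently redirects 32-bit processes to SysWOW64 - which, despite
    # its name, is where the 32-bit DLLs live.
    if not windows_is_64bit or process_is_64bit:
        return r"C:\Windows\System32"   # 64-bit DLLs (on 64-bit Windows!)
    return r"C:\Windows\SysWOW64"       # 32-bit DLLs
```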

https://stackoverflow.com/questions/949959/why-do-64-bit-dll...


I'm a small fish, but all of the customers who purchased a Windows Vista computer from me found Vista a joy to use. Their drivers worked, their programs worked and the machines were fast. I think a much stronger reason for the bad rep would be that so many big-name labels sold Vista machines that were woefully under-powered and then loaded them with bloatware. I recall visiting a few customers who had new Vista computers from big brands and paid me to upgrade them within the first week of owning them!


You are 100% correct. I remember friends buying both desktops and laptops at the time that essentially contained XP-targeted processor/memory configurations, but instead now were shipped with 32-bit Vista. It was a perfect recipe for bad performance out of the gate, and that's before you even get into the pre-loaded McAfee, etc.


Exactly.

This issue is being discussed a lot and nothing is being done to fight it.

Microsoft - and Google with Android, where the same issue can be seen - should step in with a more proactive approach to solving this problem.

I don't have a solution, but they should probably use some business incentives, and possibly something directly from the OS like built-in benchmark scores/graphs, ideally compared to a bare-bones configuration of the device, so everybody can see how much performance you lose to all those preinstalled "features".


>Google with Android

A lot of the bloat on Android phones I see is from Google: apps you can't uninstall without hacking. My Huawei had a few apps from the manufacturer, but they were removable - there are just under 20 Google apps that came preinstalled, many [most?] of which can't be removed.


I use all of the heavyweight Google apps that my phone shipped with, though, so I don't consider them "bloat". Maps, Gmail, Chrome, Play store, Drive, Hangouts, Photos, Wallet, and YouTube all get plenty of use. I will grant you it's weird that I can't uninstall most of them though.


I use none of the Google apps, yet they occupy disk space, run on startup, and can't be disabled or uninstalled, all while taking up screen space and cluttering my list of installed apps with stuff I don't need, don't want and don't use, but cannot remove.


In what sense are the Google apps bloated? When not running, many of them don't do anything.


A lot of Google apps do run on startup. They periodically back up your files, contacts and profiles to the cloud and wait for messages/emails.


They are not running; only Play Services is, and you need that anyway. It is event-based: the app is only launched and notified when a push notification comes in.

And even that may be delayed, because push notifications might be held back to give the radio a chance to sleep and save battery.


They are running on my Android phones (4.2, 4.4 and 6.0). Android tries to scare me off stopping them and will not allow me to disable most of them.

I do not need Play Services - what would I need it for? I'm not even sure what it does, apart from insta-gobbling my 50MB monthly data plan by downloading updates I don't want and cannot cancel, for applications I do not use.


Play Services is a framework used by other apps; it handles Google accounts and push notifications (and that is only the tip of the iceberg). Without it, your device would become Kindle-like, and all the apps that do not run on a Kindle (or other Google-less Android builds) would not run on yours either.

On my personal phone (Sony), I have only Play Services, Play Store, Gmail, Hangouts, Maps and YouTube enabled. All the others are disabled, including the 'Google' app.


> Without it, your device would become Kindle-like, and all the apps that do not run on Kindle (or other, Google-less Android versions) would not run on yours too.

I take issue with this statement. Many apps which "depend" on play services work fine on my Google-free android. Some examples are tutanota, duo lingo, and some games. Though it's not an easy path, I wouldn't consider my cellphone experience "kindle-like".


When not running they take up space on the device and also request updates.


If you do not want to use them, disable them. If you disable them, no updates will be done (and existing ones will be deleted).

You cannot physically delete them, because they are on the /system partition, which is read-only. That means that even if you deleted them as root, you would not get more space for other apps or your data. However, the read-only /system has more functions that you would lose: it has a known file layout (so you can image-update your phone, if you ever get an update), it is signed (so you can know your phone has not been tampered with, as it is not going to re-sign itself once modified), and it serves factory-reset/software-recovery purposes, so once you wipe /data, your phone will be in factory-mint condition (software-wise, of course).


Most of the time the disable button is greyed out and cannot be clicked.

I'm not familiar with the /system partition, but it seems logical that if I can delete them as root, I can also install something else in their place or put some of my data there, which would help me a lot, as my phone does not allow an additional SD card.


Whether they can be disabled depends on the phone vendor. On the phones I currently have available (Google, Sony, Samsung), all the Google applications can be disabled. Samsung usually prevents disabling their own applications, but still allows disabling the Google ones.

If your vendor prevents disabling the apps, you can still try the route using adb and pm (google for adb pm disable).

The point I was making about /system is that you don't want to mess with it, even if you have root. You can break more than you think, including dm-verity, and then you are not going to boot any more. Also, apps installed in /system get their updates installed into /data, so it is not going to solve your space problems anyway. You would have to repartition your phone, which on ARM platforms opens a new can of worms (partitions are defined in the secondary boot loader, which is signed too; moreover, if you get this wrong you have a brick, and you are not going to boot without reflashing the original SPL with an external programmer).


Is there a site that explains the Android storage layout?


Oh Yeah, forced updates are bad for everyone. Nothing good ever comes of that.


Bug fixes and security updates?


I think that was sarcasm


They occupy limited storage space, and I have yet to find one that is not running on boot.


Microsoft did actually introduce a certification programme, as well as start to sell their own systems (other than Surface). I can't recall the brand they used but it seemed like a great idea.

Seemed like. Whenever I quoted one to a customer they always turned their noses up at the price, then paid me for several hours to debloat the thing and fix a driver that shipped faulty, then a year later paid me again to upgrade it! Oh, and to replace the useless battery. The list goes on!


I think you're looking for "Signature Edition" - e.g. devices sold by Microsoft with a clean(-ish?) Windows install.


> Google with Android

Case in point: Sony. The amount of CRAP that comes with the Xperia is insane. There was a non-removable "What's New" app that would notify incessantly whenever it wanted to push some new app that Sony probably made money shilling.

And never mind the Google crap.

The day I dumped it and installed LineageOS, my phone became usable again.


Samsung has the same garbage. Stupid apps on their own pseudo-appstore that update all the time.


The bad rep for Vista comes from it getting in the way of using the computer, among other things such as being buggy.

I remember an update downloading itself and applying itself at shutdown then restarting to apply itself some more and looping like this indefinitely. Best update ever \o/


> Compared to Vista, Windows 7 was just a new (much better) taskbar, better tuned prefetch and with the very important difference that by the time Windows 7 arrived the drivers had matured and many of them even supported 64 bit systems... But that was all that was needed for Vista to be seen as a disaster and Windows 7 an unparalleled success.

Still, Vista was a disaster.

I remember a conversation I had with a MS engineer at that time:

- Vista is like, the foundations for the good things to come. If you want a solid house, you dig solid foundations.

- I am buying a house, not just pillars in the ground.

(It was the same with the W3C specs. Him: "maybe Mozilla and Opera are the ones misreading the box model spec and IE has it right"; me: "MS is on the board...".)


I'm not quite sure how to interpret your last comment. You prefer the IE box model? Or you're saying they did it right from the get go? Because quirks mode was abandoned and everyone uses the Moz/W3C model now.

A better example would be file line endings, where Microsoft did get it right (\r\n) and all the other OSes screwed up by using just \r or \n.


Interesting that you find \r\n to be "right" and the others "screwed up". I'm curious about the reasons for that.

The biggest problem is that each OS went its own way (Mac started with \r but of course uses \n now). If they all had the same line ending all along, whatever it was, no one would think much about it.

\r\n has the obvious disadvantage of being twice the size, along with making it possible to land in the middle of a line ending instead of before or after one.
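That second point is easy to demonstrate: any code that scans fixed-size chunks for the two-byte sequence can miss a CRLF that straddles a chunk boundary - a failure mode a one-byte ending doesn't have. A small sketch:

```python
def count_crlf_chunked(data, chunk_size):
    # Naive scanner: counts b"\r\n" within each fixed-size chunk.
    # A CRLF split across a chunk boundary is silently missed;
    # a single-byte b"\n" ending never has this problem.
    return sum(data[i:i + chunk_size].count(b"\r\n")
               for i in range(0, len(data), chunk_size))

data = b"one\r\ntwo\r\n"
print(count_crlf_chunked(data, 1024))  # whole buffer at once: finds 2
print(count_crlf_chunked(data, 4))     # a boundary splits one CRLF: finds 1
```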

Of course one advantage would be if you're controlling physical equipment where carriage return and line feed are independent of each other. I learned to program in 1968 on a Teletype ASR33 where CR and LF were literal commands to return the carriage to column 1 and advance the paper. You had to use both because they did two different things. Or on occasion you might use CR by itself to overprint a line. LF by itself was pretty rare, but would do what you expect if you used it: advance the paper without moving the print carriage.

CR LF was fine if you were typing interactively - in fact you just had to hit the CR key and the remote system would provide the LF. But usually we would punch our programs on paper tape, dial in, run the tape through and get the printout, and hang up right away. At $30/hour in 1968 dollars, this saved a lot of money. And of course you would run your tape through locally to print out and proofread your program before testing it online.

To be able to print a tape locally, you needed both CR and LF, but even that wasn't quite adequate. You really wanted to allow a little extra time for the machinery to settle, so the standard line ending we punched on a tape was CR LF RUBOUT.

RUBOUT was a character that punched out all the holes in a row of the paper tape. It was ignored by convention, so you could erase a typing error when punching a tape by pushing the backspace button on the tape punch and hitting the RUBOUT key.

Because it was ignored, RUBOUT was also useful as a "delay" character in the newline sequence. So I guess I'll never get over the feeling that the One True Line Ending is: \r\n\x7F

(Nah, I'm happy with \n, but it makes a good story.)


NUL was also a common delay character, and one can find some of the delay mechanisms, enshrined in the POSIX standard as part of the General Terminal Interface, in Linux even today. (They are largely gone from OpenBSD and FreeBSD.)

* http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...


Could have been something that came from DEC, as the original NT was influenced by VAX systems (which often had LA120s as consoles), I seem to recall.


Certainly not from DEC. DEC had a powerful "RMS" (Record Management System) layer between a program and the disk. That layer would take files in various formats and convert them as needed.

For example, you could define that your file was fixed-length records (like the old punch cards); in that case each line doesn't have a line separator at all; the \n or \r is not stored on disk. But when you read a line using the C routines, one will be added.


We are talking about the usage of CR/LF for terminal devices, the LA120 and so on, not some internal binary file standard.

I am not sure what C or RMS has to do with this.


Because we're talking about how end-of-line is marked in a file, not in a terminal. DOS specifies the end-of-line in a file as CR/LF.


Not likely, as NT got this behaviour from DOS, and DOS probably got it from CP/M.


And CP/M was influenced by RT-11, another DEC operating system, bringing us full circle.


The W3C box model became standard and IE was criticized for its quirky noncompliant behavior [1], but then many years later 'box-sizing: border-box' was introduced and widely praised and adopted by frameworks [2][3]; it's funny how things change.

[1] https://en.wikipedia.org/wiki/Internet_Explorer_box_model_bu... [2] https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing [3] https://css-tricks.com/international-box-sizing-awareness-da...


> then many years later 'box-sizing: border-box' was introduced and widely praised and adopted by frameworks

Well, IE implemented their version of the box model in '97 (coincidentally, NN4 did the same); the box-sizing property was first proposed in 1999[1], first appeared in a draft the same year[2], and was implemented in Gecko in 1999[3] and in IE5/Mac in 2000[4].

That's two years from IE/NN shipping the non-standard box model (the standard one was defined before IE4 and NN4 shipped) to having a property to toggle between them. To me, that isn't "many years".

Really what makes it seem like many years is the fact that IE didn't implement box-sizing until IE8 which shipped in 2009.

[1]: https://lists.w3.org/Archives/Member/w3c-css-wg/1999JanMar/0... (sorry, W3C MO space, but I think at this point nobody cares if I mention that publicly)

[2]: https://www.w3.org/TR/1999/WD-css3-userint-19990916#box-sizi...

[3]: https://github.com/mozilla/gecko/commit/3ee32d59158a036bd667...

[4]: http://tantek.com/notes/csssupport.html


IE's box model always felt much more natural. The problem was that the exact same CSS would result in widely different layouts in most browsers.

IE was stupid to not implement the spec, the spec was stupid for not doing it the way IE did.


Now I'm curious. How is \r\n more right than \n or \r? In which situations? Advantages? Disadvantages?


If you send it in raw format to a Teletype, it'll print correctly...

Other than that, I don't see it being any more right. But it is a convention that is far older than Windows or MS-DOS. I saw it myself first on CP/M but it was there on VAX/VMS and I expect the Teletypes had it from 1960's.


It's not that simple. In the 1960s, operating systems such as Multics existed, which even then had the idea of device independence. So the end-of-line sequence, as far as Multics applications were concerned, was a single LF, whatever the terminal type. The operating system was in charge of converting that to whatever the terminal actually needed. Multics was following the standards of the time, moreover: the ASCII standard of the time explicitly allowed an LF to denote both a Line Feed and a Carriage Return (and indeed any padding delay characters too) in a single character, if the system decided to employ single-character encodings.

* H. McGregor Ross (1964-01-01). "The I.S.O. character code". DOI 10.1093/comjnl/7.3.197. The Computer Journal. Volume 7, Issue 3. pp. 197–202.

* Jerome H. Saltzer and J. F. Ossanna (1970). "Remote terminal character stream processing in Multics." DOI 10.1145/1476936.1477030. Proceedings of the AFIPS Conference 36. pp. 621-627.

* https://unix.stackexchange.com/questions/411811/


I tried to say that it was not so simple. Yes, obviously Multics had the LF line-end convention. Some systems saved files internally in a record format, so the line end was whatever the software decided to print out there.

But mostly I was just reacting to the silly idea that CRLF would be right and lone LF not: Microsoft didn't come up with that idea. It was there already in 1960's and perhaps useful when you didn't want to do any line drivers for converting strings at output. But that's not really any more "right" than other conventions.


It has downsides if you're actually implementing software, but \r\n is the semantically correct way to represent a newline, because \r is Carriage Return (x=0) and \n is Line Feed (y++). In many scenarios \r is a useful primitive to have by itself, but if you want to support unix-style line endings you can't implement it, and you can't implement \n as a line feed - it has to be a newline. So in practice some expressiveness from the original character set was thrown out to save one byte per newline.


> \r\n is the semantically correct way to represent a newline, because \r is Carriage Return (x=0) and \n is Line Feed (y++)

You left out a key qualifier: CR/LF is the semantically correct way to represent a newline on a physical device that has a physical carriage that has to move back to x=0 and advance one line in the y direction in order to start a new line. What devices connected to any computer today have that property? Answer: none.

In fact, even on computers that have such devices connected, the semantic meaning of "newline" in any file the user actually edits is most likely not to actually cause a CR/LF on the device. Word processing programs for decades now have separated the in-memory and on-disk file data from the data that actually gets sent to a printer. So the file you are editing might not even have any hard "newlines" in it at all, except at paragraph breaks--and that's assuming the file format uses "newline" to mark paragraph breaks, instead of something else.


I find it funny that CRLF as a vestige of devices long gone is ridiculed, yet the same people don't bat an eye at emulating an in-band sort-of API for controlling cursor movement and display characteristics for similar devices of the past. Heck, *roff and man continue to format text by using overstriking to create underlined and bold text and rely on the terminal emulator to understand that this happened to create a particular effect on physical printers and emulate the result.

Neither world is clean, pure, and free of weirdness that's only properly understood when looking decades in the past.


Actually, the GNU tools (in particular grotty) advanced forward to 1976, and are capable of ECMA-48 control sequences that render actual italics and boldface like the source markup describes. It is just that the people who make operating systems have gone out of their way to disable this.

* http://jdebp.eu./Softwares/nosh/italics-in-manuals.html


Indeed, I'm looking at my copy of "The C Programming Language", second edition (ANSI C). Page 241: "A text stream is a sequence of lines; each line has zero or more characters and is terminated by a \n. [\n was defined earlier as ASCII 10 and designated either as NL or LF.] An environment may need to convert a text stream to or from some other representation (such as mapping '\n' to a carriage return and linefeed)."


> Answer: none.

Dot matrix printers are surprisingly common as they are still cheaper to run than the alternatives. That's at least one.


> I'm not quite sure how to interpret your last comment. You prefer the IE box model? Or you're saying they did it right from the get go? Because quirks mode was abandoned and everyone uses the Moz/W3C model now.

It's worse than that. His position was: MS's understanding and implementation of the box model is the correct interpretation of the W3C specs (yeah, I know).


In the old days, we knew to either open a file in text mode (where whatever the OS had would be converted to a single \n) or in binary mode (where it wouldn't, and you had to deal with the conversion yourself).

IMHO, what SHOULD happen is that if you have a device with special timing requirements (like the old-fashioned printers with no memory), then the driver is responsible for handling the timing. Adding weird bits to everyone's files is a bad idea.

And yes, I know the difference between carriage-return and new-line. And I know that in the old C specs, "\n" didn't have a guaranteed mapping to either.
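Python's universal-newlines behaviour illustrates that text/binary split nicely - translation on by default, off on request:

```python
import io

raw = b"dos\r\nold mac\runix\n"

# Text mode, newline=None (the default): \r\n, \r and \n are all
# translated to a single "\n" on read.
translated = io.TextIOWrapper(io.BytesIO(raw), encoding="ascii",
                              newline=None).read()
print(repr(translated))   # 'dos\nold mac\nunix\n'

# newline="" disables translation: you see exactly what is in the stream,
# as you would when handling the conversion yourself in binary mode.
verbatim = io.TextIOWrapper(io.BytesIO(raw), encoding="ascii",
                            newline="").read()
print(repr(verbatim))     # 'dos\r\nold mac\runix\n'
```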


Apart from the general slowness and crashes caused by the issues you mention, I remember UAC was way too intrusive (sometimes you would see 4 or 5 alerts in a row that would monopolize your screen), copying files was slow as a snail and tended to outright crash for large sets of files, and suspend never really worked on my laptop.

Anyway, how is a consumer product that provides a bad experience to the user (regardless of the reasons) "amazing"?


Copying files is an interesting case study. Vista was actually faster, but perceived to be slower. XP wouldn't actually tell you when copying was 100% complete: if you turned off your power supply the second XP's copy dialog went away, you could lose data.

Vista sped up file copying operations and also fixed that bug, leading to a faulty perception of slowness. Worse, the progress bar behavior encouraged that perception. A progress bar that speeds up at the end is perceived to be faster than one that is perfectly even, which in turn is perceived as faster than one that slows down at the end. And Vista's progress bar usually slowed down at the end, because it didn't properly account for those disk-sync et al. operations ahead of time. The result was a worse-feeling experience despite what was happening under the hood.


Did this combine with the prefetch situation? It always seemed to me that the files were doing some sort of inspection of state as the dialog box progressed. Even things like Ultracopier seemed to run into this problem.


The "improved" aka "less annoying" UAC that we have by default now is as good as no UAC [1].

[1]: https://blogs.msdn.microsoft.com/oldnewthing/20160816-00/?p=...


>As Larry Osterman noted, UAC is not a security feature. It's a convenience feature that acts as a forcing function to get software developers to get their act together.

What a great new perspective.


But noisy UAC ends up being ignored completely by users - there's a very fine line to walk in order to ensure that it protects truly sensitive actions and is recognised by users as such.


I don't know what it looks like nowadays, but I remember Apple forums used to have questions about how to run as root to avoid having to deal with such dialogs.


All UAC ever did was further reinforce the already deeply ingrained MS convention of 'just click OK' without reading the dialog. It never told you why the program needed elevated access, only that it needed it to do what you wanted. And of course, the average user just wants their computer to do what they asked. I would see some users run a program and automatically move their mouse to the place the UAC prompt would appear, seconds before the prompt ever came up. That's a whole new level of programming people.

The only time I can think it ever came in handy was if the user had an unprivileged account and would need an admin to type in the password - hopefully the admin would ask the why question and dig into it before dismissing the UAC prompt.


Loving how the tone of the article is more about "you dumb users" and not "our architecture is so shit we couldn't implement a feature so we just lied to our users instead".


To be honest, suspend still doesn't work properly, even in the latest version of Windows 10.


Usually the system manufacturer's fault. A recent example from where I work: Lenovo's recommended Intel wireless driver (the one on the Lenovo site) was over six months old and had known issues causing machines to be unable to wake from suspend. Installing the drivers directly from Intel resolved the problem.

That's pretty much the best-case scenario. A lot of chipmakers (Conexant, JMicron, Intel in many cases) don't allow you to download drivers directly from them, so you are stuck with whatever the OEM provides. In some cases I've found that newer laptop models by the same OEM use the same audio/media controllers under the hood, and I can use the newer driver from the updated model.


Lenovo and Dell. Never again.

My next computer will be something from Microsoft's Surface line. They seem to be the only manufacturer who can make proper devices (everything working and power bricks which last more than 6 months - thanks Apple).


>everything working

I wouldn't ascribe anything that generous to a Surface device as long as they still use the Marvell wireless cards, which have a storied history of causing connectivity problems[0]. I largely enjoyed my Surface Pro 3 but frequently had the same issues. The Surface Book seems to have issues with sensing connectivity between the keyboard and screen as well. Anecdotally, at work we even had a developer HoloLens go paperweight because the wifi stopped signalling. MS told us to junk it; unrepairable.

[0]https://www.google.com/search?q=marvell+wifi+surface+problem...


I've had a few Dells over the years and have never had a problem with them. The trick at first was to buy from their business line - no bloatware and better support. Now I just buy from the Microsoft Store and they come with no non-MS bloatware. My 2-in-1 Dell is a pretty good computer. I just wish it had a 3:2 display instead of 16:9.


>everything working and power bricks which last more than 6 months - thanks Apple

Face it, with sample sizes being so big, you're not going to find anything which always worked for everybody. Anecdotally, having worked for a Dell-only place and typing this on a 7-year-old XPS, I never encountered severe issues. Only standard hardware problems caused by wear (HD/memory/keyboard keys failing after 5+ years of usage).


Lol, yep. My Win 10 desktop will suspend just fine, but then it wakes itself up for no apparent reason and just stays running indefinitely after that. I can't figure out what the cause is, but I've taken to just shutting it down in between uses.


I'm a frequent sleeper and I depend on the feature. Next time your machine wakes up by itself, go to a DOS prompt and type "powercfg -lastwake". That will tell you how it happened.

For me, part of the solution was to go into the device manager and edit the properties for my mouse and my network controller. On the "power management" tab I disabled the "allow this device to wake up the computer" option. I only use the keyboard to wake the PC.

Additionally, when I left the machine sleeping overnight, there was some scheduled task that would occasionally wake the machine. There is a way to disable that, but I forget the specifics.
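All three steps above (finding the wake source, disarming devices, and hunting down wake timers) can be done with the built-in powercfg tool; a quick reference, run from an elevated command prompt (the device name below is just an example - use the exact name from the devicequery output):

```batch
:: What woke the machine last?
powercfg /lastwake

:: Which devices are currently allowed to wake it?
powercfg /devicequery wake_armed

:: Disarm one (the name must match the devicequery output exactly)
powercfg /devicedisablewake "HID-compliant mouse"

:: Any wake timers set by scheduled tasks or apps?
powercfg /waketimers
```

Scheduled-task wakeups can also be disabled globally under Power Options > Advanced settings > Sleep > "Allow wake timers".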


Cool, I'll try that sometime. I'm traveling for the next couple of weeks, so it'll be too late to report back here, but thanks in advance.

I really wouldn't be bothered by occasional wakeups if it would go back to sleep afterwards...


My computer does it, and typing in powercfg -lastwake just says "unknown source". I've disabled every wake event, update service, wake-on-lan, disabled the ability of my mouse and keyboard to wake my computer up, and it still does it.


VirtualBox for example uses something called "wake timers" that wake up the machine right after it goes to sleep.


This. I was playing Starcraft before bed, and just shut my laptop to suspend it... it was suspended until about 4 hours later, when, in the middle of the night my wife and I were awoken by sounds of zerglings dying. Very unpleasant, and quite surprising.


I've been fighting with this literally since Windows 10 came out. My desktop PC wakes up every night, around midnight/1am, and will not go back to sleep. I've disabled every single wake event in Windows, and there is no wake-on-LAN (as a matter of fact it does it with the LAN cable unplugged and ethernet disabled). Looking in the power events just says the computer woke up due to "unknown source". It does not do it with Windows 7.


I had the same issue. I think your keyboard/mouse are allowed to wake the machine from sleep, and some glitch gives your desktop the impression that a key was hit. If you disable this in the device manager, the issue should be resolved.


The problem with Vista was mismanagement of expectations.

The ad campaign featured people spotting deer at dawn from their home office, and the message was, this will completely change your life and bring about your inner sense of wonder, as if you were born again, a new person in a brave new world.

In reality, it was an OS upgrade that didn't work too well and that was more-or-less forced on you if you bought a new device, while at the same time XP continued to work just fine on all your older PCs.

People were disappointed, upset and angry. All other things being equal, a little more humility and a lower profile would have helped.


It was Marketing's fault!


Most things are.


Hyper-concise history of Western Civ, centuries XVIII-XXI ...


”It exposed how terrible device manufacturers are at writing drivers.”

In their defense, the Windows Driver Model (https://en.wikipedia.org/wiki/Windows_Driver_Model) may make it possible to wring the last bit of performance out of a system, but it doesn't make it easy to write a driver. Its documentation was also of the “once you know what this page tries to tell you, you will be able to understand it” variety.

It also didn’t help that new hardware frequently introduced new sleep state levels at the time.


Another reason Windows Vista did so badly was that it really needed a clean install to make it work properly.

This is true of all Windows versions, but was particularly true of Vista, and of course the reality was:

* Very few people carry out a clean install when they get a new computer. This is as true today as it was ten years ago

* Hardware manufacturers loaded the PCs up with terribly written adware before shipping (this situation has improved slightly)

The requirements for Windows 10 aren't that much more than Vista's. So the average person would get their new Vista PC running on a Core 2 Duo/Pentium D and 1-4GB of DDR2 RAM, loaded up with crapware and without a clean install, and it would run horribly.

By the time Windows 7 came out, the PC manufacturers were writing slightly more efficient crapware, hardware was generally a bit more powerful, and they had fixed a tonne of bugs in the OS itself.


I can attest to the relatively low requirements of Windows 10. I'm running it on a circa-2008 Core 2 Duo with 4GB of RAM. It's my Plex server.


Tablet with Z8300 low end Intel CPU and 2 GB RAM. Originally ran Windows 10 fine.

MS updates have made it worse because they stopped caring about this segment. Same story with phones on Windows 10 mobile.


Windows... on a server? Interesting.


Windows is the second most widely used server operating system in the world, second only to Linux. It's pervasive in companies outside of the tech industry, to give one example of typical usage.

Linux and Windows together dominate this market so thoroughly that everything else (UNIXes, BSDs, macOS) is practically a rounding error.


I am forced to host many of our services on Windows machines at work. I can attest there is nothing 'interesting' about it :)


Not what I was originally referring to, but yes at work our entire system is 30+ Windows Servers running C# programs and services using Consul, Nomad, Mongo, Sql Server, and Memcached.

Continuous integration and deployment is all done with Microsoft agents orchestrated by VSTS (Microsoft's hosted version of TFS). Yes, we use git.

Easy to maintain and no performance issues.


I wish more people in the bay area understood that Microsoft has a really awesome development and server ecosystem.


I came into a midsized company as the dev lead, with no real development shop, free rein, a decent budget, and management support to build the department the way I saw fit. I had never used VSTS and had heard nothing but bad things about TFS. They already had it and I decided to play around with it. I was amazed how easy it was to create a CI/CD environment that followed generally accepted best devops practices.


If you want to serve files to Windows machines and you don't mind the license and painful remote administration, it's a reasonable choice. Guaranteed SMB compatibility.


How is remote administration painful? All 30+ servers I run have VSTS agents. I can do most administration by running Powershell scripts and choosing the deployment groups based on the purpose of the server. I have Consul agents for health checks, Consul watches for alerting and/or automatic recovery, Nomad for running executables across the app servers, HashiUi for monitoring and controlling the Nomad pool.

I can approve and deploy a release from my iPad (the website is painful on my phone) by logging into Microsoft's Visual Studio Team Services website.

I won't even start to gush about how easy setting up a build and release pipeline is in VSTS compared to the other tools I've used.


It is painful in the mindset of the typical unix guy who is used to being able to "just ssh somewhere and vi /etc/something", which is not the way you want to manage large deployments but works even for large-ish ones. On Windows there is no real middle ground between "use the GUI for everything" and "automate everything".

Also, with unix servers, various bastion hosts and similar "security measures" are a minor inconvenience and usually even supported by automation tools, while on Windows this usually ends up being a major PITA.


It's been years since I last worked with Windows, but:

1) You can install an SSH server on Windows boxes just fine, then use PuTTY to SSH directly into PowerShell. PowerShell is not a classical shell, but rather a REPL for a procedural, imperative and object-oriented DSL for system configuration and administration based on .NET, with much saner syntax than my beloved zsh. In short, it works quite well.

2) With PowerShell's remoting capabilities - I'm a bit fuzzy on the details here, it was a long time ago - you don't even need the SSH server; you can issue remote commands from your local PS instance. It required a bit of configuration up front, IIRC, but then you could replace your local session with a remote one with a single command.

So, in my experience - and note that it was probably nearly a decade ago! - Unix-style remote management was absolutely possible and not that much less convenient. And PowerShell is really a solid tool, with easy access to all of .NET and all of the system; the only annoyance I remember was certificate/signature management, dunno if it got any better.
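For the curious, the remoting workflow described in 2) looks roughly like this (a sketch from memory; it assumes WinRM/PSRemoting is permitted on your network, and the server name is made up):

```powershell
# One-time setup on each target machine (elevated prompt):
Enable-PSRemoting -Force

# Run a command remotely from your local session:
Invoke-Command -ComputerName SRV01 -ScriptBlock { Get-Service Spooler }

# Or swap your local session for a remote one, ssh-style:
Enter-PSSession -ComputerName SRV01
# ...and Exit-PSSession drops you back to the local prompt.
```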


The problem isn't PowerShell; remote administration is tightly tied into the Windows security framework, which can really be a pain with tooling.

But part of the problem is I have a real prejudice toward local agent based solutions with a central server coordinating everything.


The biggest pain for Windows remote automation is security around accessing servers remotely, I'll grant you that. I gave up. That's why I have VSTS agents on every box. I can easily write a script and tell each agent to pull the script down, do X locally, and insert the results if needed into a Mongo collection. But for the few times I do need to treat my servers like "pets" instead of "cattle", I do everything from the GUI.

There was a time when our net ops team did something and I couldn't Remote Desktop into a server to do something urgent, and of course I couldn't just SSH into it, so I had to write a quick Powershell script and deploy it via VSTS to make the change. It was ugly.


On the other hand, the PITA-ness probably isn't caused so much by the OS itself as by its historical security track record and the mentality of ops and security teams that it created.


Geez mate, just get PSRemoting set up (not hard) and use Invoke-Command and/or Enter-PSSession...


We have three different AD domains - one on prem (well, at a colo center) and two separate AWS environments. Getting them to be friendly with each other isn't possible. Since the local VSTS agents poll and only need outbound connections, I don't have to deal with firewall issues or domain issues. Also, I can run a script in parallel across as many agents as I want to. You can have as many concurrent agents running on a VSTS account as you have MSDN licenses.

Besides, I already have sane deployment groups and tags defined by server environment and function. I might as well leverage them.


It's not what GP was referring to, but... https://en.wikipedia.org/wiki/Windows_Server


You're completely right. UAC was too aggressive (they admitted that was their mistake) and a few other bits needed tuning but it was actually quite an improvement over XP. 7 was what it should have been.

All those glitches, though, came from MS's far too aggressive and unrealistic plans for Vista. A couple of years before it launched, 2003 I think, I was heavy in the Mozilla world and had a high-ish profile in the community (I ran MozillaNews.org and was a long-time triager). Robert Scoble tried to hire me to be a bridge between MS and Mozilla, a tech evangelist for features of Longhorn (as Vista was known then) that could help Mozilla, or really features that Mozilla could be a showcase for and be a tech ad for MS. I set aside my suspicions and gave it a try, learning about the technical side of Vista. I learned a lot, and wound up not taking the gig. I told him I didn't think these things had any real benefit to a cross-platform application like Mozilla, and that I had real doubts they'd have any real impact on the market even if they were delivered, which I also had strong doubts about.

The three tentpoles MS wanted Mozilla to use were: 1. Avalon/WPF 2. Palladium 3. WinFS

1. I told Scoble that I saw no benefit in Avalon yet, as in 2003/4 Mozilla wasn't really about to dedicate lots of time and attention to coding for some new graphics API that wouldn't be launched for years. He said it would be out much sooner. I said I had my doubts given its rather early stage of development.

2. "Imagine users knowing their online banking and purchases are 100% secure thanks to the hardware and their OS!" I said I thought the idea was rubbish, a nonstarter, and I hoped it failed.

3. WinFS. My arguments were simple, "apps like this don't care about the FS. Plus, it'll never launch. I have zero faith this feature will be out before 2010. Filesystems are hard, and MS has a long history of cutting features to get products out the door. This is a prime target to be cut."

He argued it was solid and amazing, etc, as a good tech evangelist should, but in the end I said no to the whole deal. I couldn't in good conscience try to push tech that I didn't believe in and didn't even think would ever release. They were hell-bent on shoving all this and more into Longhorn, rather than doing a smaller release in 2004 and finishing the other features later. And thus, we got Vista. Lots of great tech, rushed out the door, and poorly configured.


> I see two reasons for why it has such a bad reputation.

> 1. It exposed how terrible device manufacturers are at writing drivers. nVidia alone (which only has niche hardware) themselves stood for the majority of Vista BSODs.

One of the things Fathi (OP author) writes is

> [. . .] ecosystem partners hated [Vista] because they felt they didn’t have enough time to update and certify their drivers and applications as Vista was rushed out the door to compete with a resurgent Apple.

This goes some way toward mitigating the characterization of device manufacturers as terrible at writing drivers. When considered in the context of "a resurgent Apple", it also provides a counterpoint to the specific example of nVidia as a niche hardware manufacturer.

2006 was the year Vista was released; at that time, Apple was shipping the quad-core Xeon Mac Pro with macOS Leopard (later Snow Leopard), which shipped with an NVIDIA GeForce 7300 GT video card. [0]

I used this particular computer all the way through Mac OS 10.7 (Lion), and if memory serves, I had a handful of kernel panics over the course of 6 years. From all I could tell, nVidia's video card drivers on Mac OS never interfered with daily operation (the machine was up 24/7, as it was also an authoritative DNS server for my personal domains).

So, device manufacturers may be bad at writing drivers, but those drivers also depend on stable and reliable APIs in the target OS, and such details have to be communicated between the two teams. Device drivers are an interface between host operating systems and embedded hardware. As such, the reliability of any device driver will depend on the sharing of information between the OS and device driver teams just as much as, if not more than, on the competence of the device driver engineers.

[0] https://everymac.com/systems/apple/mac_pro/specs/mac-pro-qua...

EDIT: remove extraneous words, a couple proper nouns where appropriate, punctuation.


I went with Vista in 2008 (having 4 GiB RAM) and never had a reason to complain; in my case, everything from drivers to games just worked. I liked the Aero and UAC. After a few years I permanently switched to Linux, but kept using Vista for compiling some projects for Windows under MSYS2 (until MSYS2 stopped supporting it).


Microsoft has always (as far as I can see) alternated between focus on core tech and focus on user experience. Vista was a heavy tech push and so the user experience and polish suffered a bit. 7 was entirely focused on user experience and so it was awesome to use, but it wouldn't have been possible if not for Vista.


I also remember it was too chatty (very much like Windows 10 now). You wouldn't spend a minute before the OS would ask you to authorise an outgoing connection or something else.


You forget the terrible issue with the inefficient window management code, compared to the cleaned-up Windows 7.

And the problems in UI. And...

In short, it was really a version to avoid, especially as an upgrade. On a new computer it could have been somewhat acceptable. But amazing... no.


I prefer Vista's taskbar - where running apps are grouped together.


You can do this on Windows 7 and Windows 10 too. Actually, I think it's the default, because it's one of the first things I disable whenever I get a new machine (I'm not a fan)!

I'm on mobile now, but from memory you right click the start button and go into taskbar settings.


Sorry, I meant a taskbar similar to e.g. WinXP's - where running apps are at one side, and icons (quick launch) at the other side of the taskbar. It is possible to show quick launch on Win7 as well, but icon/button sizes are different and somewhat ugly. As for "grouping" (into one button) - yes, that's the first thing I turn off as well.


As I recall, the only way to group taskbar icons in 7 requires you to also display window titles in the taskbar, which is endlessly annoying; the way to work around this is to enable grouped icons/window titles and then edit your registry to impose a max pixel width on taskbar buttons so that the titles are unseen.
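If memory serves, the registry tweak in question is the MinWidth value under WindowMetrics; the exact number is an assumption here, since the right width depends on DPI and icon size (log off and back on for it to take effect):

```reg
Windows Registry Editor Version 5.00

; Cap taskbar button width so the window titles are effectively hidden
[HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics]
"MinWidth"="54"
```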


It's been a while, but I'm quite sure that when I ran Windows 7, I had grouped, icon-only taskbar buttons.


It's grouped by default on 10, and you can have the title on or off.

Personally, I have both set to "only when full". I miss that feature on Cinnamon.


win7 kernel had a few tweaks too IIRC

I never liked Vista; I can't say exactly why, but I think the UI/UX annoyed me. Every time I boot Win7 I feel at home (almost like NT5 or 95).


You forgot to mention how Vista skyrocketed CPU/GPU/RAM requirements, too, compared to XP (which itself skyrocketed requirements compared to Win98).

I'm inclined to believe this was on purpose, and not just in a "we just want to make the OS prettier for users" way. I think there used to be a theory that Microsoft did this to help PC manufacturers and chip makers sell more hardware, too.

Microsoft's mistake was that the resource requirements were too high compared to XP, so like 90% of the PCs running XP were useless when running Vista.


Win 7 is basically Vista Service Pack 1. Several minor things, like the slow-as-hell copy routine of Vista, got reverted back to almost XP-level speed in Win7. Unfortunately the Advanced Search dialog of Vista got removed in Win7. Most Vista problems were third-party device drivers (blue screens), it being the first (still new) mainstream 64-bit OS with the related 32-bit issues and lack of 16-bit support, and the vastly increased memory usage because of a wrong vision (that consuming all memory while idle is okay).

To this day, Win7 is arguably the best OS (supported until 2020), followed by the aging XP.


I'm very happy with 8.1 (support until 2023)+Classic Shell+Classic Explorer (no breadcrumbs)+MacType (sane font rasterizer).


Windows 10 is still terrible in that regard. Like, laughably bad.

1. I dare you to try it on a computer with a 5400 RPM hard drive. A fresh installation will spend the majority of its time sitting around at 100% disk usage as telemetry, Superfetch, Defender and more all eat up the entire ~2MB/s of hard drive bandwidth for more than 30 minutes after boot. And then halfway through your day, it'll decide to start recompiling .NET assemblies, or some other package, with zero user notification, and your computer will grind to a halt. But hey, you're resourceful, right? Just disable those services! Nope, too bad. Every version of Windows (including the Creators Updates) has made it harder and harder for a user to disable features that break. Services get moved to TRUSTEDINSTALLER, an account that you can't override. Those services get restarted without asking you, some after as little as a couple of hours. And Windows Defender will restart itself; the before-last Creators Update KNEW people were disabling it, so they moved the link in Metro/settings to be more obscure.

2. I just spent Friday for a client trying to "fix" Windows for Microsoft, and failed. He bought a "Windows 10" laptop with a 30 GB SSD. Too bad: Windows alone took 97% of the entire drive. I removed literally every application (including the 1 GB Avast) except for Chrome and Windows 10 itself. Every time I freed up space - removed the hibernation file, ran all the disk cleanup stuff - Windows would then fill it with patches.

It also had a 2.6 GB "C:\recovery" folder. I checked online and they said "Feel free to delete it, it's from an old OS." I tried deleting it: no permissions, even as an admin. I went in, changed the owner from the glorious TRUSTEDINSTALLER and made myself the owner of all the files. I deleted some files, but one file refused to delete. The file? 2.6 GB. It said the file was open in "Windows Provisioning". I checked Windows 10 backups, restore points, file history, all that jazz. Zero.
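For anyone fighting the same battle, the ownership change described above is usually done with takeown and icacls from an elevated command prompt (shown here against the folder from this story; this rewrites the ACLs, so treat it as a sketch, not a recommendation):

```batch
:: Take ownership of the folder tree (/r = recurse, /d y = answer yes to prompts)
takeown /f C:\recovery /r /d y

:: Then grant the local Administrators group full control, recursively
icacls C:\recovery /grant Administrators:F /t
```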

I check online. Maybe I'm insane. What are the Windows 10 requirements for hard drive space? Oh yeah, 15 GB. So there are, "Lies, damned lies, and Windows hardware requirements."

Meanwhile, Windows 10 keeps spamming that "You need to free up space to continue downloading windows updates!!!"

Really? REALLY? Thanks for the update.

I download Process Explorer. They say to use the handle search to find the open handle for the file. I do it. ZERO RESULTS.

I download a tool that lets you delete a file on reboot, before any program can acquire the file lock. It queues it up. It runs. It fails. Still NOTHING in services.msc with "Provisioning" in the name.

Okay, change gears. ALL they want is to freakin' install Office 365 on their craptop. They've got a 32 GB SD card.

By now, after clearing up at least 4 GB, C: is now down to 100MB free.

I download the 5 MB Office auto-installer. It fails with a pop-UNDER error that you don't notice at first, hidden beneath the loading screen. And instead of giving you a description, it gives you an obscure error code. Clicking it at least gives you the KB article for "out of disk space." Lovely.

I load up the Microsoft website, I find an alternative downloads link. I find the offline installer.

But back to task at hand! I load up the same link on this slugger of a Windows laptop and I go "Fine, I'll download it to the SD."

But wait, sorry! Thanks to the ultra-progressive, consumer-friendly Microsoft, they're too forward-thinking to let you have a download link. No, you get a Javascript button. It goes right into the full drive and fails. Okay, control click? Nope, Javascript. Okay, load up chrome settings and change where the default save location is and point it to the SD card.

I download it for ~40 minutes. Why? Because it's 4.6 FREAKING GIG for the offline Office suite. What basically boils down to an e-mail client and a word processor is larger than an entire Linux distro with apps. (<- Yeah yeah, there's more apps, but I'm pissed at this point so I'm taking comedic liberty here.)

So I wait, and it finally downloads to the SD card, and at 99%, it stops and goes "download failed." There must be some Chrome bug with temporary space or something.

Well! I'm not defeated yet--this is my job and I'm paid for results. I've got a USB flash drive and my Linux laptop (read: running an OS that actually works and can be configured and fixed by the end user).

I go to the same website as before with my Linux netbook. But wait, the page... it's... different?

Everything is the same except that wonderful offline installer link? They removed it from the page. That's right. Go there with Windows, and then Linux, to the same Microsoft download links, and they will intentionally hide the ISO links and only give you the auto-installer link to ensure you're only going to run it on a Windows system. So customer friendly! (They do the same thing with Windows 10 ISOs, try it out.)

At that point, the client's laptop owner had to drive back 3+ hours to his office location so he had to take his laptop back.

I spent at least half a work day.. trying to (fight Microsoft) to free some space... on a machine that 100% meets Windows system requirements.

Thanks Microsoft. I wonder why I do all my game and app dev on a Linux box these days. It's almost like I like feeling like I own the machine I paid for. Could you imagine having to go through all of these anti-consumer, anti-solution hurdles when doing hardware upgrades? What if you couldn't release the case on your machine without getting a "power user" license key from HP first? After all, they're just trying to protect you, and they know how to run their hardware better than you do. The more you look at that analogy, the more insane it becomes how much we let Microsoft get away with bricking our own machines. The answer to a working machine should never be "throw it out and buy a new one" when simply changing a config setting (if you were allowed to modify those registry values - sorry!) would suffice.

"I see you're trying to switch your SSD to AHCI mode. Have you purchased an Enterprise SSD license yet?"


Who buys a laptop with only 30 GB of storage? I didn’t even know that was possible these days. You should honestly tell your client to return it and buy something else.

For reference, the absolute minimum requirements are 16 GB for 32-bit and 20 GB for 64-bit [1]. So in theory your client's laptop should work, but it'd probably be a poor experience. (Likely also a bad experience with modern Linux on 30 GB.) Given that your client's Windows 10 laptop has an "old OS" on it, I think there's some info missing in this story. A fresh laptop shouldn't have an old OS install on it. (Or maybe this is OEM recovery gunk?)

I just checked my laptop and the Windows folder is 18.7 GB. Did your client's laptop have a Windows.old folder taking a bunch of space? Large updates to Windows will create these. You can whack this if you need. [2] (It should also get deleted automatically after 10 days.)
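Besides Windows.old, the WinSxS component store is another place where updates accumulate, and DISM has documented switches for reclaiming it (run from an elevated prompt; note that /ResetBase frees the most space but makes installed updates permanent, so they can no longer be uninstalled):

```batch
:: See how much space the WinSxS component store could give back
Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore

:: Reclaim it (optionally append /ResetBase, with the caveat above)
Dism.exe /Online /Cleanup-Image /StartComponentCleanup
```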

Disclosure: Microsoft employee

[1] https://www.microsoft.com/en-us/windows/windows-10-specifica...

[2] https://support.microsoft.com/en-us/help/4028075/windows-del...


> (Likely also a bad experience with modern Linux on 30 GB.)

Literally just typed "raspbian minimum card size" in Google and Google dug up this as the top result:

"/Pi Hardware /SD Cards. The minimum size SD card you can use for Rasbian is 2GB, but it is recommended to get a 4GB SD card or above. Card Speed. A Class 4 card, which is the minimum recommended has an average read/write speed of 4 MB/sec."

The default packages include things like webkit and libre office, so it looks to be a fully functional Linux install on a popular piece of hardware.

Now, 4GB still seems dangerously small. But if all a client wanted was office plus web, I bet someone like OP could make a workable system within that size limit without Raspbian filling the emptied space with updates.

[1] http://www.raspberry-projects.com/pi/pi-hardware/sd-cards


I have a 30 GB OS partition on my ubuntu box. That works nicely. Obviously you won’t be doing big data analyses, but everything runs fine, and with lots of apps installed.

Raspbian runs on 128 MB RAM or whatever.


> Who buys a laptop with only 30 GB of storage? I didn’t even know that was possible these days.

Ultra-cheap computers with eMMC flash storage with pathetic read/write speeds. And pathetic everything else. Such as this charmer from Walmart. https://www.walmart.com/ip/Teqnio-ELL1103T-11-6-Laptop-Touch...


I don't understand the thought process behind cheaping out as much as possible on a terrible PC, then paying for many hours of work from a tech to try to get a pathetic machine to be usable. The correct course of action is to return the faulty machine and buy a better one, rather than throwing away the money on a tech who can really only do so much with such inferior hardware.

It also boggles my mind how, still to this day, it's so hard to get a lower cost desktop or laptop that ships with an SSD, despite the fact that SSDs offer up such a performance improvement that many people consider them mandatory. The average consumer will have a much better experience with a computer that ships with a 128 GB SSD than a 1 TB HDD, yet every manufacturer is offering plenty of the latter (at 5400 rpm no less) and none of the former at sane price points. The two components even have similar costs now. In this era of streaming everything, the average person really isn't using much hard drive space. I know that my non-technical family members certainly aren't.

I just got my mom a $450 refurbished 2012 Dell workstation for common desktop use (mostly email and word processing). She loves it. It's night-and-day faster than the machine it replaced. And the single biggest performance improvement in it comes from, you guessed it, the SSD. A $450 five-year-old used workstation is trouncing any modern desktop in the sub-$1,000 range in practical performance. I would've gotten her a new one, but couldn't find anything in the price range that has an SSD, and the kinds of computers that do ship with SSDs also tend to have unnecessarily upgraded (and costly) processors and graphics cards, which are only useful for gaming.

(Oh, and the used workstation has a Core i7 in it too, so it's not exactly a slouch along any dimension except for 3D graphics performance.)


I don't think that people understand what they are buying. There is an expectation that Walmart wouldn't sell something that cannot work at all, but they do.

Don't buy 5-year-old hardware second hand; it's a poor investment, and I speak from experience. Hardware has a limited lifespan, then it just dies. The hard drive, the motherboard or the screen fails without notice and you're screwed.


You're right that people don't understand what they're buying. A $200 new, modern Windows laptop is a market segment that cannot exist - it's like a $5K new automobile in the US. Except there are standards in the automotive market in the US, so no one is allowed to sell the kind of trash that a $5K car would be. You can buy such a thing in, e.g., India, but it's exactly as bad as you'd think, with terrible emissions and crash performance.

As for hardware endurance, I don't think you're giving quality hardware enough credit. I've owned a lot of computing hardware in my lifetime, and the only failures I've ever experienced have been fans going bad (which is easy to fix) and spinning hard drives crapping out. Oh, and I dropped a laptop really badly one time and broke it that way, but that's not really the hardware's fault. Solid state components last quite a long time.


It's funny you say that, because I was already writing that there are cars in India selling for much less than $5K before I finished reading your first sentence.

Entry-level cars in Europe are in the range of $5K to $10K - not sure which end they're closer to. They are certified for regulations and safety.

I've certainly owned my share of hardware, and I've seen everything die sooner or later. In my experience the order of failure is: rotating hard drive, then gaming GPU, then display, then motherboard.

I've never seen a computer reach 10 years without any replacements. You're significantly past the half-life when you buy at 5 years old.


The cheapest car in Europe appears to be the Dacia Sandero, which works out to around USD 8,500. There are several problems with the base trim level that would render it unacceptable in the US market: no A/C, no radio, no automatic transmission, and a truly anemic engine that takes 13 seconds to go from 0-100 km/h. That engine might be acceptable in a city car in Europe, but most US drivers are going farther (and faster). But hey, at least it's certified for collisions and emissions; you can't say the same for the Indian cars we're referring to.

I've seen plenty of computers last >10 years. So, we'll see how this one goes. Even if one component does need replacing at some point, it'll likely still have been the best choice. Nothing else offers that kind of performance at a remotely comparable price point unless you're willing to build a PC from scratch.


The low-end Dacias are a demonstration of making affordable cars by abandoning options like power windows, A/C, and radios. They are very successful. I think you can pay a bit more to get all the options, which is still a good deal for a brand-new car.

Yes, Europeans generally have smaller cars than Americans. All cars have manual transmissions.

Not 10 years with all original components.


Ugh. That looks pretty miserable. Also, why is that running 64-bit with only 2 GB of RAM?


> (Likely also a bad experience with modern Linux on 30gb.)

I just did an install of modern Linux (the latest CentOS 7, with the GNOME desktop), so I can check. The root partition is using 4.2G at the moment, plus a 2.0G swap partition and a 1.0G boot partition. So if this were a 30G disk, I'd have more than 20G left, even after installing a few applications.
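If anyone wants to reproduce that check on their own box, a minimal sketch (the directory choices here are my assumption, not from the comment; numbers will differ by distro and partition layout):

```shell
# Used/free space on the root filesystem, human-readable
df -h /

# Where most of a base install's bytes actually live; some of these
# paths may not exist on every distro, hence discarding errors
du -sh /usr /var 2>/dev/null
```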


That’s a fair bit smaller than I’d expect honestly. I’m surprised it’s not using more than that with the basic apps installed.


There's plenty of serious Linux distros that still ship on a single CD. Arch Linux, for example, comes on a 522 MB ISO. That gets you a basic functional desktop environment, and anything else you might need can be installed from the Net.


There is no desktop on the 522 MB Arch Linux ISO, or if there is, I've never seen it. It boots into a root shell on tty1, and is only supposed to be used for installation. I would be very surprised to even find an X server in there.

EDIT: For a more realistic number, I just checked my Arch-Linux-based home server, which has a fairly small installation (including some multimedia and X11 stuff for PulseAudio, mpd and youtube-dl, though). Needs just over 2 GiB for the entire system and applications:

  df / -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdc2       110G  2,2G  103G   3% /
Of course, you should have some breathing room, but usually not more than 8 GiB on a server, and maybe 16 GiB on a desktop.


Interestingly, Ubuntu requires 25 GB.

https://help.ubuntu.com/community/Installation/SystemRequire...

So consumer-oriented distros don't seem to be all that much lighter than Windows. With the caveat that Ubuntu probably comes preloaded with more apps, like LibreOffice.


That's probably more generous than necessary.

I just installed Ubuntu 16.04 a month ago (new laptop), and with /home on its own partition, after installing everything I use regularly, root is only using 9.3G.


That's probably just a CYA requirement.

I have 17.10 in a VM and it takes 8.7 GB on disk, with a few extra apps compared to the default install.


A decent amount of that space ends up being on-disk swap for the RAM, for what it's worth. And note that, on that same page, Xubuntu and Lubuntu are offered up as alternatives for less performant computers. They only require 5 GB of space. Windows doesn't have a light version like that.


That still leaves plenty of space for additional apps and data. Also, you can uninstall e.g. LibreOffice post-install to slim it down.


> Who buys a laptop with only 30 GB of storage?

Somebody who doesn't work in the IT industry - the target demographic of Windows.

I'm also surprised it's possible - but if it cuts the price, then why not? I assume 10GB more than the minimum is enough to earn the Windows badge.

But I don't think the crux of the issue was lack of disk space.


For reference, a 32 GB SD card is $10 and a 1 TB disk is $40.


When the entire laptop is $300, the difference between those two is 10% of the consumer-facing cost. And somebody buying a $300 computer clearly doesn't think (or doesn't know) they need 1 TB of storage.


I gave the example of a 1 TB disk because that's what I could find.

Maybe manufacturers could get 128 GB disks for as little cost as the memory card.


I fought this battle with a 64GB SSD, and gave up and cloned the drive onto a hybrid SSD/7200RPM drive (using EaseUS). I can't imagine what you went through to get it to fit on 30GB!


16GB

https://www.newegg.com/Product/Product.aspx?Item=9SIA1K665X7...

https://www.newegg.com/Product/Product.aspx?Item=9SIA6R46267...

As you're an MS employee, you'll know about WIMBoot and whatever the newer, less stupid version of WIMBoot is.


Those are ChromeBooks, not Windows laptops.

Also, I'd never heard of WIMBoot. Being a Microsoft employee doesn't make me a Windows expert. I don't work on Windows and don't own any systems that WIMBoot targets. Looking at WIMBoot, it doesn't seem relevant for the case discussed here, either, since this client clearly didn't have the small space usage WIMBoot enables.


https://www.newegg.com/Product/Product.aspx?Item=9SIAAG56KU1...

https://www.newegg.com/Product/Product.aspx?Item=9SIAF136G48...

https://www.newegg.com/Product/Product.aspx?Item=9SIAC4Z6CZ4...

https://www.newegg.com/Product/Product.aspx?Item=9SIA24G6DV4...

https://www.newegg.com/Product/Product.aspx?Item=9SIA6R46JS9...

etc etc.

There are very many machines sold now, today, with Windows 10 (and previously Windows 8) that have only 32 GB drive space.

With WIMBoot, any updates would eat drive space that was not recoverable, and this would sometimes prevent upgrading to Windows 10. MS says as much here: https://blogs.windows.com/windowsexperience/2015/03/16/how-w...

> The reason Windows 8.1 devices using WIMBOOT are not yet able to upgrade to Windows 10 is because many of the WIMBOOT devices have very limited system storage. That presents a challenge when we need to have the Windows 8.1 OS, the downloaded install image, and the Windows 10 OS available during the upgrade process. We do this because we need to be able to restore the machine back to Windows 8.1 if anything unexpected happens during the upgrade, such as power loss. In sum, WIMBOOT devices present a capacity challenge to the upgrade process and we are evaluating a couple of options for a safe and reliable upgrade path for those devices.

There's a complex workaround of "delete everything, and use two USB sticks" which isn't great for the target user.

The new file compression stuff is much much better than WIMBoot.


I have a laptop with a 250GB SSD and an i7 CPU. The experience is eerily similar. Not in the details (Office installs easily), but the feel is the same.


What psychopath sells PCs with a 5400 rpm hard drive as the main drive?


Dell, and a lot of other vendors.

And I purchased one last year because it was outrageously inexpensive and had an i5-7500U, a nice 1080p touchscreen, a miserable spinning hard drive, and a miserable amount of socketed RAM.

I consider replaceable hard drives and socketed RAM to be a feature, one I promptly made use of, and now I have a machine that's quite competitive with machines costing >$1k more than what the machine + HD + RAM cost me.



