> As a side issue, both are crap on the desktop so I'm sitting here on Windows 8.1...
Which just goes to show how we all are different. Windows 8.1 was what pushed me to move my main laptop (also used by wife, etc) to Ubuntu.
It has worked great, and just yesterday I discovered Linux automatically handles (SANE) scanners network-transparently via saned. I had no idea. Connect the scanner to the server and launch scanning applications (including scripts) on the laptop. It just works, with zero configuration. Try that on Windows!
I'm literally finding Linux on the desktop the greatest thing ever these days.
Linux on the desktop isn't terrible until you hit an edge case to be honest. Typically for me, it's printers and power management.
I have a wireless scanner and printer combo (HP 2450). To set this up on Windows, I turned it on, pressed the WiFi button and the WPS button on my router, then File -> Print and/or opened up Fax and Scan, and that's it. It just works: no setup for scanning or printing. On Linux, 30 minutes arguing with hplip and the output looks like ass whatever switch you flip, and SANE doesn't even see it.
Then there's PM. On my 9-cell Lenovo X201, I get 8.5 hours on Windows 8.1. I managed to nab at most 5 hours out of every Linux distro I tried (Ubuntu, Debian, CentOS), even with powertop tuning. The cruel irony is that CentOS gets better battery life in a VM on Windows than it does on the bare metal.
YMMV as they say but I really can't be arsed with anything that gets in the way of doing stuff these days. Tuning a Linux distro was fun about 10 years ago for me. Not any more.
Funny thing, I've had the opposite experience with Windows 8.1 vs. Ubuntu.
Printing to my 6-year-old Ricoh color laser proved a difficult task, tangling with driver hell. It seemed unnecessary given the printer is equipped with PCL and PostScript emulations, but Windows didn't care about those standards. The only way it would work was with the crappy Ricoh driver, which forced me to sacrifice some basic functions.
OTOH under Linux, gutenprint drivers worked even without altering the default settings. Maybe it was easier because I was more familiar with the CUPS setup; nonetheless, the difference was noticeable.
I'd concede that newer printers might be easier to configure for Windows vs. Linux or other OS, but it's troublesome that perfectly good equipment becomes "obsolete" when a few years old. In that respect Windows can be a disadvantage.
Consumer devices are just targeted for Windows. That is the explanation. I had similar nightmares getting a mainstream brand consumer printer/scanner (Epson I think, but not sure) working wirelessly with Mac OS X. Never succeeded. It's a result of decisions by the manufacturer, not any lack of capability in Linux.
In MANY other ways I find Windows to be a constant, not just edge-case, constraint on my productivity. So I don't use it. To each his own of course.
I run Gentoo on my Dell Inspiron 5520 and it took only a minimal amount of configuring (holy moly!). The most difficult part was audio and touchpad drivers, which took my google-fu to another level. Otherwise, wow is Gentoo fast if you configure it correctly!
Last year I bought an Asus N550JV with Windows 8.1, and after a few months I started having problems with WiFi, the keyboard, Bluetooth, and a general slowdown. Everything I tried only yielded minor results... So at one point I decided to try Linux for the first time. I installed Ubuntu 14 and voila! Half my problems were gone, and over the next few months I was able to fix most of the other problems as well. On the other hand, I still haven't been able to do anything meaningful with Windows.
The way it is now, hijacking DNS (between the domain's NS servers and any CA) would allow an adversary to request and obtain an illegitimate certificate by way of domain validation. If the registrar is the only valid issuer of certificates, this loophole is closed, since there is nothing for the CA to verify: the registrar already knows who the customer is and can offer certificate signing without an insecure DNS-based domain validation.
For end-users with a hijacked DNS, the registrar-issued SSL certificate would still protect the transmission, because the DNS-spoofing adversary would not be able to present a valid certificate signed by the registrar. In fact, in today's CA environment, if the DNS hijacker is playing ball with a rogue CA, they could also spoof the SSL. With my suggestion, they couldn't, unless the rogue CA is actually the registrar, because the browser would see that the SSL certificate was not signed by the appropriate registrar.
You could create a DNS-style CA setup like this:
* Browsers would ship with ONE root CA, the public key of the "." root zone operator
* Each TLD ("com", "org" etc) has a CA signed by the root zone operator, valid only for signing TLD registrar CAs. The root zone CA could publish a signed list of TLD certificates daily/weekly/monthly (they shouldn't change too often and the number of entries would be relatively low). Anyone could compare notes and see if these change. There aren't any hidden intermediary CAs. Browsers could sync this list daily/weekly/monthly, the deltas should be minimal. ISPs could even provide mirror services, because the whole list is signed by the root zone operator anyways.
* Each registrar under a TLD has a CA certificate signed by the TLD operator, valid for only signing secondary domains under the given TLD. (i.e. customer domains). Just like the RootZone->TLD signed list of CAs, the TLD CA operator could publish a signed list of registrar certificates daily/weekly/monthly (again, they shouldn't change too often and the number of entries would be relatively low). Again, anyone can compare notes and see if things change. No hidden intermediary registrars. Browsers could yet again sync this daily/weekly/monthly and ISPs can mirror the list since it's signed.
* Registrants of customer domains ("example.com", "example.org" etc) can request a domain-specific CA at their registrar, and only at their registrar. This domain-specific CA is valid only for signing leaf domains, i.e. "www.example.com", "mail.example.com". With the domain-specific CA in hand, the customer can create their own leaf domain certificates "at home". The registrar should offer non-EV domain-CA certificates for free (there is no work to be done to validate domain ownership, as the customer already has an account there), or charge a fee for EV certificates.
* There needs to be a way to determine who is the valid registrar for a given domain. This could be for example a TLS-based machine-readable WHOIS service address operated by each TLD. The TLS-based WHOIS service would negotiate TLS with the same TLD-CA-certificate as specified in the root list. Thus, a browser can connect to the TLD's WHOIS service and validate that a given domain is under a given registrar, and that the leaf SSL certificate is signed by the domain CA certificate, which is signed by the registrar CA certificate, which is in the signed-by-the-TLD list of registrars, which is in the signed-by-the-root-zone list of TLDs. This stuff could also be cached at the ISP level because everything is chain signed.
Maybe this is how DNSSEC works (I haven't looked into the details), but I think this would be a pretty neat way to limit the damage done by rogue CAs.
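The chain of trust described above can be sketched in a few lines. This is a toy model, not real cryptography: the `signed_by` fields stand in for actual signature verification, and all the names (`valid_chain`, the sample CAs, the list contents) are hypothetical illustrations of the scheme, not any existing API.

```python
# Toy sketch of the registrar-based chain validation proposed above.
# "signed_by" stands in for a real signature check; all names are made up.

# Trust material a browser would ship with and periodically sync:
TLD_LIST = {"com": "com-tld-ca"}                 # signed by the root zone CA
REGISTRAR_LISTS = {"com-tld-ca": {"example-registrar-ca"}}  # signed by each TLD CA

def valid_chain(leaf, domain_ca, registrar_ca, tld):
    """Walk the chain: leaf <- domain CA <- registrar CA <- TLD list <- root list."""
    tld_ca = TLD_LIST.get(tld)
    if tld_ca is None:
        return False                              # TLD not in the root-signed list
    if registrar_ca["name"] not in REGISTRAR_LISTS[tld_ca]:
        return False                              # registrar not in the TLD-signed list
    if domain_ca["signed_by"] != registrar_ca["name"]:
        return False                              # domain CA not issued by that registrar
    return leaf["signed_by"] == domain_ca["name"] # leaf issued by the domain CA

# A correctly chained leaf certificate for www.example.com:
leaf = {"name": "www.example.com", "signed_by": "example.com-ca"}
domain_ca = {"name": "example.com-ca", "signed_by": "example-registrar-ca"}
registrar_ca = {"name": "example-registrar-ca"}
print(valid_chain(leaf, domain_ca, registrar_ca, "com"))  # True
```

The key property: a rogue CA's signature fails at the `REGISTRAR_LISTS` step unless that CA actually appears in the TLD's signed registrar list for the domain in question.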
Netflix may be the biggest thing since the wheel. I don't care. I'm still not going to support the company that initially poisoned the HTML standard with DRM. I find it counterintuitive to reward such behaviour with money. And I advise anyone else in here who believes in an open web to do the same.
Once again goes to show that Apple is mostly interested in the security of its iStore, platform lock down and DRM.
I'm not exactly shocked.
Just for kicks... Does anyone remember the I'm a PC ads, where macs were magically "secure", couldn't get viruses or hacked or anything? Turns out, with marketshare they can! Just like Windows. Strange thing eh?
> Does anyone remember the I'm a PC ads, where macs were magically "secure", couldn't get viruses or hacked or anything?
Well, in 2006, when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation. As viruses that patched MBR sectors or system DLLs were extremely common, Mac OS X was in fact "magically" (inherently) more secure, since that vector of attack on a Mac would require a password prompt to elevate the program's privileges.
From then to this day, Mac viruses have been effectively non-existent in the wild. There have been some trojans and worms, but they can't rightfully be classified as viruses (no infection of other files).
> Well in 2006 when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation
That's probably false. Windows introduced this feature in Windows 2000 (from 1999) and you could define a "normal" user and a "power" user. Only when you needed to install something would you run as the power-user.
This all worked inside the same desktop session.
That maybe only 1% of users (the "paranoid" ones) used it doesn't mean it wasn't there.
In fairness, for most of the 2000s, in my experience at least -- which included de facto admin duties for a decent-size office full of Macs -- dealing with malware and viruses really just wasn't much of a problem to worry about.
By contrast, it seemed like owning a Wintel machine pretty much guaranteed you'd have issues unless you were utterly ruthless and/or didn't have any layman users browsing the internet to worry about.
Has that in fact changed since? I am no longer as familiar with the Windows side of things as I used to be, but I do know from experience that there's a very solid reason why this stereotype took root in the first place.
I just wonder what the vast userbase of uneducated people (seniors, teen bloggers, ironically, educational institutions, etc) who moved over to Macs because they bought the lie will feel when they too later discover that the promises were a lie.
Because unlike Microsoft, Apple doesn't have a battle hardened OS where security has been worked on systematically, for over a decade.
And I could have told you the same story years ago. I don't need blatantly obvious bugs like this one to back that claim.
There was no lie. It was true then, and is still clearly and obviously true now, that Mac users have a small fraction of the malware issues that Windows users have. The difference between iOS and Android is even more stark.
You're also hilariously wrong about Microsoft having a supposedly "battle-hardened" OS where security has been worked on systematically. OS X is based on BSD Unix, where security has been worked on since the 1970s, before Microsoft even existed. OS X itself is now 15 years old.
I administer hundreds of Macs and PCs. I can objectively state that the PCs have about 10-50x as many issues with malware as the Macs, and those issues are more severe and affect users and admins more. Everyone who manages both Macs and PCs in the enterprise is well aware of this.
> For me it was strange how can Apple market their system as virus-free. Now that's ridiculous.
Not really. I've been using Macs for as long as I can remember (I'm 30), and in that entire time I've only ever actually seen 2 pieces of malware myself (I've heard of others but never actually encountered them). One of them was the rather benign Merry Xmas HyperCard trojan from way back, which doesn't actually harm your computer; all it does is search for other HyperCard stacks on your computer to infect, and if you open an infected stack on December 25th it will play a sound and wish you a Merry Xmas. The other one was one of those adware apps (I forget its precise name), and I didn't actually even see that one; I talked on the phone with someone who had it and walked them through the instructions at https://support.apple.com/en-us/HT203987 for removing it.
And just to note, the latter one isn't even a virus, because it's not self-replicating (the former one technically is, because it infects other stacks on the same computer, but it was pretty darn benign and did not rely on an OS security flaw to operate).
So yeah, there exists malware for the Mac, and there's more of it now than there ever has been in the past, but it's like a completely different universe from Windows malware. You pretty much have to go out of your way to hit this on the Mac.
As an aside, the first widely spread Mac malware I ever heard of was spread via a pirated copy of iWork '09 being distributed on BitTorrent. Someone had altered the DMG to include the virus before uploading it. It was kind of funny hearing about people being infected, because you knew the only way they could have done that was by trying to pirate iWork '09 (this was the only distribution vector). And even that apparently doesn't count as a "major" security threat, because the Wikipedia page for Mac Defender, which is dated to May 2011, describes Mac Defender as "the first major malware threat to the Macintosh platform", even though it wasn't even a virus, just a trojan (and FWIW it didn't even require antivirus software to remove; Apple rolled out an automatic fix themselves, although it did take them a few weeks to do so).
"PC has caught a virus and is clearly under the weather. He warns Mac to stay away from him, citing 114,000 known viruses that infect PCs. But Mac isn't worried, as viruses don't affect him."
"Trying to hide from spyware, PC is seen wearing a trench coat, a fedora, dark glasses, and a false mustache. He offers Mac a disguise, but Mac declines, saying he doesn't have to worry about such things with OS X."
"PC appears wearing a biohazard suit to protect himself from viruses and malware. He eventually takes mask off to hear Mac better, then shrieks and puts it back on."
"She has lots of demands, but her insistence that the computer have no viruses, crashes or headaches sends all the PCs fleeing"
The bangs work any place in the string. They don't have to be a prefix.
In Firefox you have a dedicated search bar (ctrl-k) which remembers your search across locations. So you search for "heisenbug oracle ipv6" press enter and are not immediately happy about the precision of your search.
So you press ctrl-k again and append "!so". Bang, your previous search is now applied to stack overflow and you have your answer straight at spot 1.
It's a very good flow. It will never work in browsers that insist on removing the search bar, though (like Chrome, Safari, IE).
In a deeply misguided act of Chromeism, Firefox was considering going in that direction too, but hopefully the outrage in the user base caused them to never venture down that line of thought again.
It works the same way in the Safari bar. If you use it to search, the search bar maintains your search string, not the URL. So you can press ⌘L on the search page, and your URL bar will be focused with the plain text of what you just searched, and you can append !g as you would expect.
And copying the search text in the URL bar actually copies the link too, which is nice.
But the URL changes from a search string to the URL of the results. A dedicated search bar retains what you last searched for. For example mine has "league of legends !w" at the moment. If I wanted to search on Startpage for that I just hit ctrl-k and change !w to !sp.
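The bang flow described above boils down to a simple rewrite step: find a bang token anywhere in the query, strip it, and route the remaining words to the matching engine. Here is a toy Python approximation; the `BANGS` table and `rewrite` function are illustrative stand-ins, not the real service's implementation (which supports thousands of bangs).

```python
from urllib.parse import quote_plus

# Illustrative subset of bang -> search-URL templates (made-up mapping;
# real bang services support thousands of these).
BANGS = {
    "!w":  "https://en.wikipedia.org/w/index.php?search={}",
    "!so": "https://stackoverflow.com/search?q={}",
    "!g":  "https://www.google.com/search?q={}",
}

def rewrite(query):
    """Find a bang anywhere in the query, strip it, and build the target URL."""
    words = query.split()
    for word in words:
        if word in BANGS:
            rest = " ".join(w for w in words if w != word)
            return BANGS[word].format(quote_plus(rest))
    return None  # no bang: fall through to the default search engine

print(rewrite("heisenbug oracle ipv6 !so"))
# https://stackoverflow.com/search?q=heisenbug+oracle+ipv6
```

Note the bang is position-independent, matching the point above that bangs work anywhere in the string, not just as a prefix.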
> I know this idea has long missed the boat, but why wasn't IPv4 address space extended by adding an IPv4 Option header that could carry extra address bits?
Because that wouldn't be compatible with existing IPv4 deployments, and it would cause reliability havoc whenever a node not configured to deal with it mangled packets somewhere between your source and destination endpoints.
I don't get the resistance against IPv6. It works. It's a fresh take. Yes it requires some new stuff to be deployed and configured here and there, maybe even requires you to learn something new, but if you thought extending IPv4 would have been any other way you are deluding yourself.
If you're going to do a significant change to something as big as the internet (and introducing a new address scheme is that, no matter how you implement it), you might as well step back and think it all through instead of applying yet another hack.
So tell me. Why are you opposed to IPv6? Why do you want to hang on to this old IPv4-thing which is already at the bursting point, at the edge of what it can take?
I'm not opposed to IPv6. Indeed, I'm one of the few percent of people who have a dual-stack home network and ISP. However, I don't pretend it was easy to set up: I'm no novice, yet several aspects still confounded me and the large Fedora community that I asked:
You can still track the prefix though. In a standard /64 deployment only the suffix will change so there is still some information leaking if you have a static prefix (obviously, as in static IPv4). However, my ISP defaults to dynamic /64 prefix unless you opt in for static prefix. I believe most ISPs should probably do the same.
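The prefix-leak point above is easy to demonstrate with Python's standard `ipaddress` module: privacy extensions randomize the 64-bit interface suffix, but every address from the same subscriber still collapses to the same /64 network unless the ISP rotates the prefix. The addresses below are documentation examples (2001:db8::/32), not real ones.

```python
import ipaddress

def prefix64(addr):
    """Return the /64 network an IPv6 address belongs to (host bits masked off)."""
    return ipaddress.ip_network(addr + "/64", strict=False)

# Two addresses with different (e.g. privacy-randomized) suffixes
# but the same subscriber prefix:
a = prefix64("2001:db8:1234:5678:aaaa:bbbb:cccc:dddd")
b = prefix64("2001:db8:1234:5678:1111:2222:3333:4444")

print(a)       # 2001:db8:1234:5678::/64
print(a == b)  # True: the prefix alone identifies the subscriber
```

So suffix randomization alone doesn't prevent tracking; only a dynamically rotated prefix does.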
If I need to run a server, I anonymously lease hosted VPS or whatever, and SSH via Tor. My concern is keeping my personal Internet connectivity anonymous. I use nested VPN chains, generally using pfSense VMs as VPN clients, and Tor.
I get that the Tor Project is working on IPv6. But I also want IPv6 NAT, in pfSense or whatever. That's to keep my local device IPv6 addresses (hosts and VMs) private, even from Tor entry guards. I guess that it's time to learn how to ensure that.
> On the other hand, this prevented a project like the linux kernel from moving to gplv3 since the distributed corpus of contributors could not all agree to switch.
That's not entirely true.
If I remember correctly Linus himself does not like the GPLv3 and opposes its use for the Linux kernel.
Basically his argument is that the changes and new restrictions imposed by GPLv3 in practice makes it a new kind of license, but that the naming seems designed to "lure" people who think it's a regular license upgrade into adopting a license they would otherwise not consider using.
I'm not sure if this is in writing anywhere, but I recall him talking about this on the same Debconf where he admitted that he had never managed to install Debian Linux.
So moving the Linux kernel to GPLv3 may not be entirely trivial, but that's not the reason it hasn't been done. Linus simply didn't want to.
Getting it running on actual iron took more effort than I expected, but lately distros like Ubuntu have spoiled the Linux crowd.
The hardest part about going all-in is if you find some of the software you rely on isn't packaged. There's just no way to "cheat" by ./configure && make-ing your way around it: you have to learn the Nix language and package the software yourself. (Which can arguably be said to be a viral feature for getting more software packaged.)
To me this was too much work to fit in an otherwise busy weekend and I just had to give up. Had I had more time to do things properly, I would probably have stuck with it.
The concepts it introduces are quite nice and well executed.