> As a side issue, both are crap on the desktop so I'm sitting here on Windows 8.1...

Which just goes to show how different we all are. Windows 8.1 was what pushed me to move my main laptop (also used by my wife, etc.) to Ubuntu.

It has worked great, and just yesterday I discovered that Linux automatically handles (SANE) scanners network-transparently via saned. I had no idea. Connect the scanner to the server and start scanning applications (including scripts) on the laptop. It just works. With zero configuration. Try that on Windows!
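In case anyone wants to script it: the SANE "net" backend picks up servers listed in /etc/sane.d/net.conf (newer setups can apparently also discover saned via Avahi, which may be why it was zero configuration for me). A rough sketch with the python-sane bindings - device names will vary per setup:

    import sane  # python-sane bindings; SANE's "net" backend does the remote part

    sane.init()
    # With the server reachable, remote scanners show up alongside local
    # ones, with a "net:<server>:" prefix on the device name.
    devices = sane.get_devices()
    print(devices)

    dev = sane.open(devices[0][0])  # open the first scanner found
    image = dev.scan()              # returns a PIL Image
    image.save("page.png")
    dev.close()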

I'm literally finding Linux on the desktop the greatest thing ever these days.


To be honest, Linux on the desktop isn't terrible until you hit an edge case. Typically for me, it's printers and power management.

I have a wireless scanner and printer combo (HP 2540). To set it up on Windows, I turned it on, pressed the WiFi button on the printer and the WPS button on my router, then hit File -> Print and/or opened up Fax and Scan. That's it. Just works. No setup for scanning or printing. On Linux: 30 minutes of arguing with hplip, the output looks like ass whatever switch you flip, and SANE doesn't even see it.

Then there's power management. On my 9-cell Lenovo X201 I get 8.5 hours on Windows 8.1. I managed to nab at most 5 hours out of every Linux distro I tried (Ubuntu, Debian, CentOS), even with powertop tuning. The cruel irony is that CentOS gets better battery life in a VM on Windows than it does on the bare metal.

YMMV, as they say, but I really can't be arsed with anything that gets in the way of doing stuff these days. Tuning a Linux distro was fun about 10 years ago. Not anymore.


Funny thing, I've had the opposite experience with Windows 8.1 vs. Ubuntu.

Printing to my 6-year-old Ricoh color laser proved a difficult task, tangling me up in driver hell. It seemed unnecessary given that the printer is equipped with PCL and PostScript emulation, but Windows didn't care about those standards. The only way it would work was with the crappy Ricoh driver, forcing the sacrifice of some basic functions.

OTOH, under Linux, the Gutenprint drivers worked without even altering the default settings. Maybe it was easier because I was more familiar with the CUPS setup; nonetheless, the difference was noticeable.

I'd concede that newer printers might be easier to configure on Windows than on Linux or other OSes, but it's troublesome that perfectly good equipment becomes "obsolete" when a few years old. In that respect Windows can be a disadvantage.


You are right. Windows is better with newer hardware; Linux is better with older hardware.

I've had nothing but trouble when trying to install Ubuntu on brand-new laptops, but the exact same laptops run Ubuntu perfectly well a couple of years later.


Consumer devices are simply targeted at Windows. That is the explanation. I had similar nightmares getting a mainstream-brand consumer printer/scanner (Epson, I think, but I'm not sure) working wirelessly with Mac OS X. I never succeeded. It's a result of decisions by the manufacturer, not any lack of capability in Linux.

In MANY other ways I find Windows to be a constant, not just edge-case, constraint on my productivity. So I don't use it. To each his own of course.


Disagree. The HP 2540 works fine with the wife's MBP on 10.10. It just appears, and you print to it. Same with Windows. Same with iOS/AirPrint as well - it just works.

They're $70 in the US / £30 here for an all-in-one scanner, printer, and copier combo, so this is rock-bottom cheap-ass hardware, and it works flawlessly.


> power management

Are you running TLP? It's easily the best power management tool out there and it's a big reason I'm not switching to a BSD.


I've generally only run ThinkPads, but I find OpenBSD's ACPI to be measurably better than everything else on power management.

I run Gentoo on my Dell Inspiron 5520 and it took only a minimal amount of configuring (holy moly!). The most difficult part was the audio and touchpad drivers, which took my google-fu to another level. Otherwise, wow, is Gentoo fast if you configure it correctly!

Last year I bought an Asus N550JV with Windows 8.1, and after a few months I started having problems with WiFi, the keyboard, and Bluetooth, plus a general slowdown. Everything I tried only yielded minor results... So at one point I decided to try Linux for the first time. I installed Ubuntu 14 and voila! Half my problems were gone, and over the next few months I was able to fix most of the other problems as well. On the other hand, I still haven't been able to do anything meaningful with Windows.

I don't see how so many people have so many problems with Windows. I use a clean install on just about anything, and it pretty much always just works, and keeps doing so for months.

Stay away from anything Broadcom, AMD, or Radeon, pick a decent SSD (Samsung 840 Pro here), and it's bomb-proof.

The only playing around you have to do is on hardware that is way newer than the Windows version, where the network interfaces aren't supported.


So what you're saying is... Windows works fine if you shop for hardware from known-good vendors and nothing too bleeding-edge? That sure sounds like what people said about Linux a decade ago. ;)

Actually, it works on anything out of the box, but you might have to source some drivers for network cards so you can get to Windows Update for everything else.

The best bit is on my older X201: it installs all the official Lenovo drivers as part of Windows Update. You install Windows, wait about 15 minutes, reboot when it tells you to, and bam, sorted.

That doesn't sound like Linux a decade ago ;)


It does, actually. Package managers on Linux have been updating all of your drivers along with your programs for more than a decade now...

> If the registry cannot be trusted, SSL kinda goes out the window anyway

You seem to have gotten things backwards. If the DNS is hijacked, SSL is the only thing protecting a visitor from being lured into the hijacked site.

Tying these two things together makes SSL effectively worthless.


If DNS is hijacked, the hijacker could also get valid SSL certificates (using MX records to get confirmation emails, etc).

If it is an EV certificate, that would certainly not be sufficient.

Hmm, I think it is you who have things backwards.

The way it is now, hijacking DNS (between the domain's NS servers and any CA) would allow an adversary to request and obtain an illegitimate certificate by way of domain validation. If the registrar is the only valid issuer of certificates, this loophole is closed, since there is nothing for the CA to verify - the registrar already knows who the customer is and can offer certificate signing without an insecure DNS-based domain validation.

For end-users with a hijacked DNS, the registrar-issued SSL certificate would still protect the transmission, because the DNS-spoofing adversary would not be able to present a valid certificate signed by the registrar. In fact, in today's CA environment, if the DNS hijacker is playing ball with a rogue CA, they could also spoof the SSL. With my suggestion, they couldn't, unless the rogue CA is actually the registrar, because the browser would see that the SSL certificate was not signed by the appropriate registrar.

You could create a DNS-style CA setup like this (a toy sketch of the validation follows the list):

* Browsers would ship with ONE root CA, the public key of the "." root zone operator

* Each TLD ("com", "org" etc) has a CA signed by the root zone operator, valid only for signing TLD registrar CAs. The root zone CA could publish a signed list of TLD certificates daily/weekly/monthly (they shouldn't change too often and the number of entries would be relatively low). Anyone could compare notes and see if these change. There aren't any hidden intermediary CAs. Browsers could sync this list daily/weekly/monthly, the deltas should be minimal. ISPs could even provide mirror services, because the whole list is signed by the root zone operator anyways.

* Each registrar under a TLD has a CA certificate signed by the TLD operator, valid for only signing secondary domains under the given TLD. (i.e. customer domains). Just like the RootZone->TLD signed list of CAs, the TLD CA operator could publish a signed list of registrar certificates daily/weekly/monthly (again, they shouldn't change too often and the number of entries would be relatively low). Again, anyone can compare notes and see if things change. No hidden intermediary registrars. Browsers could yet again sync this daily/weekly/monthly and ISPs can mirror the list since it's signed.

* Registrants of customer domains ("example.com", "example.org" etc) can request a domain-specific CA at their registrar, and only at their registrar. This domain-specific CA is valid only for signing leaf domains, i.e. "www.example.com", "mail.example.com". With the domain-specific CA in hand, the customer can create their own leaf-domain certificates "at home". The registrar should offer non-EV domain-CA certificates for free (there is no work to be done to validate domain ownership, as the customer already has an account there), or charge a fee for EV certificates.

* There needs to be a way to determine the valid registrar for a given domain. This could be, for example, a TLS-based, machine-readable WHOIS service operated by each TLD. The WHOIS service would negotiate TLS with the same TLD CA certificate as specified in the root list. Thus, a browser can connect to the TLD's WHOIS service and validate that a given domain is under a given registrar, and that the leaf SSL certificate is signed by the domain CA certificate, which is signed by the registrar CA certificate, which is in the signed-by-the-TLD list of registrars, which is in the signed-by-the-root-zone list of TLDs. This stuff could also be cached at the ISP level because everything is chain-signed.
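To make the chain concrete, here's a toy sketch of the checks a browser would run. The data structures and the signature check are illustrative stand-ins, not any real PKI API:

    # Toy model of the proposed trust chain. A real implementation would
    # verify actual cryptographic signatures; here a "certificate" is a dict.
    def is_signed_by(cert, signer):
        # Stand-in for a real signature check (RSA/ECDSA verify, etc.).
        return cert["signed_by"] == signer["name"]

    root = {"name": "."}                                     # the ONE root CA
    tld = {"name": "com", "signed_by": "."}                  # from the signed TLD list
    registrar = {"name": "registrar-x", "signed_by": "com"}  # from the TLD's signed list
    domain_ca = {"name": "example.com", "signed_by": "registrar-x"}  # issued by registrar
    leaf = {"name": "www.example.com", "signed_by": "example.com"}   # created "at home"

    def validate(leaf, domain_ca, registrar, tld, root):
        # A browser would additionally ask the TLD's WHOIS service to confirm
        # that "registrar-x" really is the registrar for "example.com".
        return (is_signed_by(tld, root)
                and is_signed_by(registrar, tld)
                and is_signed_by(domain_ca, registrar)
                and is_signed_by(leaf, domain_ca))

    print(validate(leaf, domain_ca, registrar, tld, root))  # True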

Maybe this is how DNSSEC works (I haven't looked into the details), but I think this would be a pretty neat way to limit the damage done by rogue CAs.


> Maybe this is how DNSSEC works

Not exactly, but it's very similar. DNSSEC has no concept of registrars, and each zone is entirely signed by the same entity.

You'd trust that single entity to inform the registrar anyway, so decentralizing it gains you nothing.


Netflix may be the biggest thing since the wheel. I don't care. I'm still not going to support the company that initially poisoned the HTML standard with DRM. I find it counterintuitive to reward such behaviour with money. And I advise anyone else in here who believes in an open web to do the same.

Once again it goes to show that Apple is mostly interested in the security of its iStore, platform lock-down, and DRM.

I'm not exactly shocked.

Just for kicks... Does anyone remember the "Get a Mac" ads, where Macs were magically "secure" and couldn't get viruses or be hacked or anything? Turns out, with market share, they can! Just like Windows. Strange thing, eh?


> Does anyone remember the "Get a Mac" ads, where Macs were magically "secure" and couldn't get viruses or be hacked or anything?

Well in 2006 when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation. As viruses that patched MBR sectors or system DLLs were extremely common, Mac OS X was in fact "magically" (inherently) more secure, since that vector of attack on a Mac would require a password prompt to elevate the program's privileges.

From then to this day, Mac viruses have been effectively non-existent in the wild. There have been some trojans and worms, but they can't rightfully be classified as viruses (no infection of other files).


I agree. I guess the guys down here in the greyed-out area are more after some kind of justice. It's more the arrogance of Apple that people want to throw back at Apple.

> Well in 2006 when those ads were first being shown, XP was still the newest version of Windows and it had no privilege separation

That's probably false. Microsoft introduced this feature in Windows 2000 (from 1999), where you could define a "normal" user and a "power" user. Only when you needed to install something would you run as the power user.

This all worked inside the same desktop session.

That maybe only 1% of users (the "paranoid" ones) used it doesn't mean it wasn't there.


In fairness, for most of the 2000s in my experience at least -- which included de facto admin duties for a decent-size office full of Macs -- dealing with malware and viruses really just wasn't much of a problem to worry about.

By contrast, it seemed like owning a Wintel machine pretty much guaranteed you'd have issues unless you were utterly ruthless and/or didn't have any layman users browsing the internet to worry about.

Has that in fact changed since? I am no longer as familiar with the Windows side of things as I used to be, but I do know from experience that there's a very solid reason why this stereotype took root in the first place.


> Just like Windows. Strange thing, eh?

Not strange if you grasp the fact that malware is just a program that has elevated access.

For me it was strange how Apple could market their system as virus-free. Now that's ridiculous.


Yes, to any techie the lie is obvious.

I just wonder what the vast user base of uneducated people (seniors, teen bloggers, and, ironically, educational institutions) who moved over to Macs because they bought the lie will feel when they too later discover that the promises were a lie.

Because unlike Microsoft, Apple doesn't have a battle-hardened OS where security has been worked on systematically for over a decade.

And I could have told you the same story years ago. I don't need blatantly obvious bugs like this one to back that claim.


There was no lie. It was true then, and is still clearly and obviously true now, that Mac users have a small fraction of the malware issues that Windows users have. The difference between iOS and Android is even more stark.

You're also hilariously wrong about Microsoft having a supposedly "battle-hardened" OS where security has been worked on systematically. OS X is based on BSD Unix, where security has been worked on since the 1970s, before Microsoft even existed. OS X itself is now 15 years old.

I administer hundreds of Macs and PCs. I can objectively state that the PCs have about 10-50x as many issues with malware as the Macs, and those issues are more severe and affect users and admins more. Everyone who manages both Macs and PCs in the enterprise is well aware of this.


> who moved over to Macs because they bought the lie will feel when they too later discover that the promises were a lie.

What? I thought they bought Macs so we wouldn't need to give free tech support ;)


> For me it was strange how Apple could market their system as virus-free. Now that's ridiculous.

Not really. I've been using Macs for as long as I can remember (I'm 30), and in that entire time I've only ever actually seen 2 pieces of malware myself (I've heard of others but never actually encountered them). One was the rather benign Merry Xmas HyperCard trojan from way back, which doesn't actually harm your computer; all it does is search for other HyperCard stacks on your computer to infect, and if you open an infected stack on December 25th it plays a sound and wishes you a Merry Xmas. The other was one of those adware apps, whose precise name I forget, and I didn't actually even see that one: I talked on the phone with someone who had it and walked them through the instructions at https://support.apple.com/en-us/HT203987 for removing it.

And just to note, the latter one isn't even a virus, because it's not self-replicating (the former one technically is, because it infects other stacks on the same computer, but it was pretty darn benign and did not rely on an OS security flaw to operate).

So yeah, there exists malware for the Mac, and there's more of it now than there ever has been in the past, but it's like a completely different universe from Windows malware. You pretty much have to go out of your way to hit this on the Mac.

As an aside, the first widely-spread piece of Mac malware I ever heard of was spread via a pirated copy of iWork '09 being distributed on BitTorrent. Someone had altered the DMG to include the virus before uploading it. It was kind of funny hearing about people being infected, because you knew the only way they could have gotten it was by trying to pirate iWork '09 (this was the only distribution vector). And even that apparently doesn't count as a "major" security threat, because the Wikipedia page[1] for Mac Defender, which is dated to May 2011, describes Mac Defender as "the first major malware threat to the Macintosh platform", even though it wasn't even a virus, just a trojan (and FWIW it didn't even require antivirus software to remove; Apple rolled out an automatic fix themselves, although it did take them a few weeks to do so).

[1] https://en.wikipedia.org/wiki/Mac_Defender


I have never seen Apple market their system as "virus free". Can you point me to that one?

It was a ubiquitous part of Apple's marketing for many years. Their two main (almost only) arguments for overpaying for their computers were that they were "easier" and didn't get viruses.

For example, read through Adweek's summary of Apple's "Get a Mac" campaign, which Adweek calls the best ad campaign of 2000-2010:

http://www.adweek.com/adfreak/apples-get-mac-complete-campai...

"PC has caught a virus and is clearly under the weather. He warns Mac to stay away from him, citing 114,000 known viruses that infect PCs. But Mac isn't worried, as viruses don't affect him."

"Trying to hide from spyware, PC is seen wearing a trench coat, a fedora, dark glasses, and a false mustache. He offers Mac a disguise, but Mac declines, saying he doesn't have to worry about such things with OS X."

"PC appears wearing a biohazard suit to protect himself from viruses and malware. He eventually takes mask off to hear Mac better, then shrieks and puts it back on."

"She has lots of demands, but her insistence that the computer have no viruses, crashes or headaches sends all the PCs fleeing"


Easy: https://m.youtube.com/watch?v=GQb_Q8WRL_g

Which hasn't aired since 2010, and has been removed everywhere from Apple's website and YouTube channels.

> Not strange if you grasp the fact that malware is just a program that has elevated access

I would define malware more simply as a program that does something the user doesn't want.


The bangs work anywhere in the string. They don't have to be a prefix.

In Firefox you have a dedicated search bar (ctrl-k) which remembers your search across locations. So you search for "heisenbug oracle ipv6", press enter, and are not immediately happy with the precision of your results.

So you press ctrl-k again and append "!so". Bang: your previous search is now applied to Stack Overflow, and you have your answer right at spot 1.

It's a very good flow. It will never work in retarded browsers that insist on removing the search bar, though (like Chrome, Safari, IE).

In a deeply misguided act of Chromeism, Firefox was considering going in that direction too, but the outrage in the user base will hopefully keep them from ever venturing down that line of thought again.


It works the same way in the Safari bar. If you use it to search, the bar maintains your search string, not the URL. So you can hit ⌘L on the results page, and the bar will be focused with the plain text of what you just searched, and you can append !g as you would expect.

And copying the search text in the URL bar actually copies the link too, which is nice.


> It will never work in retarded browsers that insist on removing the search bar, though (like Chrome, Safari, IE).

Uh huh. Because they'll lose their ability to tell if something is a search or a URL because of bang parsing, right?


But the URL bar changes from a search string to the URL of the results. A dedicated search bar retains what you last searched for. For example, mine has "league of legends !w" at the moment. If I wanted to search Startpage for that, I just hit ctrl-k and change !w to !sp.

I suppose it saves typing, but I don't really mind doing site:news.ycombinator.com on Google.

So where I've called Google Chrome "spyware" in the past, I can now add Chromium to that list.

Google's not even trying to not be evil these days.


If you skim it, it might look like a null is getting passed along when it actually isn't.

Maybe op misread the code?


> I know this idea has long missed the boat, but why wasn't IPv4 address space extended by adding an IPv4 Option header that could carry extra address bits?

Because that wouldn't be compatible with existing IPv4 deployments, and it would wreak reliability havoc whenever a node not configured to deal with it mangled packets somewhere between your source and destination endpoints.
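To make that concrete, here's a rough sketch (in Python; the option type 0x9A is made up, which is exactly the problem) of what an "extra address bits" IPv4 option would look like on the wire:

    import struct

    # Hypothetical "extended address" IPv4 option: type 0x9A (made up),
    # length 6 (type + len + 4 extra address bytes), padded to 32 bits.
    opt = struct.pack("!BB4s", 0x9A, 6, bytes([10, 0, 0, 1])) + b"\x00\x00"

    ihl = 5 + len(opt) // 4      # header grows from 5 to 7 32-bit words
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | ihl, 0,         # version + IHL, TOS
        20 + len(opt),             # total length (no payload in this sketch)
        0, 0,                      # identification, flags/fragment offset
        64, 6, 0,                  # TTL, protocol (TCP), checksum (left 0)
        bytes([192, 0, 2, 1]),     # source
        bytes([198, 51, 100, 1]),  # destination
    ) + opt

    # Every router and middlebox on the path sees IHL > 5 and is free to
    # inspect the options; one that doesn't know 0x9A may strip it, zero
    # it, or drop the packet outright. That's the reliability havoc.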

I don't get the resistance to IPv6. It works. It's a fresh take. Yes, it requires some new stuff to be deployed and configured here and there, and it may even require you to learn something new, but if you thought extending IPv4 would have been any different, you are deluding yourself.

If you're going to do a significant change to something as big as the internet (and introducing a new address scheme is that, no matter how you implement it), you might as well step back and think it all through instead of applying yet another hack.

So tell me. Why are you opposed to IPv6? Why do you want to hang on to this old IPv4 thing, which is already at the bursting point, at the edge of what it can take?


I'm not opposed to IPv6. Indeed, I'm one of the few percent of people who have a dual-stack home network and ISP. However, I don't pretend it was easy to set up - I'm no novice, yet several aspects still confounded me and the large Fedora community that I asked:

http://www.spinics.net/linux/fedora/fedora-users/msg459370.h... (tldr: It's 2015, yet the default firewall in a major Linux distro can't handle IPv6)


> So tell me. Why are you opposed to IPv6?

I'm very concerned about implications for anonymity. If every device has a unique IPv6 address, then just one leak can compromise all other anonymized connections.


No version of IP provides anonymity reliably. If you want anonymity, use something designed for that purpose, like Tor, use it from a coffee shop, and pay cash.


> I'm very concerned about implications for anonymity. If every device has a unique IPv6 address, then just one leak can compromise all other anonymized connections.

IPv4 did not offer any anonymity either. The closest you get to that is an external entity being unable to differentiate the traffic you generate from the traffic your family generates.


Check out RFC 4941, "Privacy Extensions". It adds randomized, short-lived temporary addresses that are used for outgoing connections instead of the stable, MAC-derived one. It's probably already implemented on the computer you're using right now.


You can still track the prefix, though. In a standard /64 deployment only the suffix will change, so there is still some information leaking if you have a static prefix (obviously, just as with a static IPv4 address). However, my ISP defaults to a dynamic /64 prefix unless you opt in to a static one. I believe most ISPs should probably do the same.
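To make the prefix/suffix split concrete, a quick sketch with Python's ipaddress module (the addresses are documentation examples):

    import ipaddress

    # A typical residential assignment: the ISP routes one /64 to you.
    net = ipaddress.IPv6Network("2001:db8:abcd:12::/64")
    addr = ipaddress.IPv6Address("2001:db8:abcd:12:1c2f:9eff:fe35:71")

    suffix = int(addr) & ((1 << 64) - 1)    # host part; RFC 4941 rotates this
    prefix = ipaddress.IPv6Address(int(addr) >> 64 << 64)

    print(addr in net)  # True - the prefix alone still identifies the line
    print(prefix)       # 2001:db8:abcd:12:: - static unless the ISP rotates it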


If you have a dynamic prefix then you can't (easily) run servers; you'd need some sort of dynamic DNS as well. And if you have dynamic DNS and dynamic reverse-resolution…there goes anonymity anyway.

IP isn't anonymous; it wasn't designed to be, and probably (given the problems it's trying to solve) shouldn't be. Anonymity should be an overlay.


If you run a server you pretty much need to expose your IP one way or another anyway, even on IPv4. Well, I suppose you can use something like dynamic DNS, but still...

I totally agree, though: anonymity is not part of IP. I was not implying that, just that there are still ways for home users to shuffle their IPs somewhat within the IPv6 framework.


If I need to run a server, I anonymously lease a hosted VPS or whatever, and SSH in via Tor. My concern is keeping my personal Internet connectivity anonymous. I use nested VPN chains, generally using pfSense VMs as VPN clients, and Tor.

I get that the Tor Project is working on IPv6. But I also want IPv6 NAT, in pfSense or whatever, to keep my local device IPv6 addresses (hosts and VMs) private, even from Tor entry guards. I guess it's time to learn how to ensure that.


> On the other hand, this prevented a project like the linux kernel from moving to gplv3 since the distributed corpus of contributors could not all agree to switch.

That's not entirely true.

If I remember correctly, Linus himself does not like the GPLv3 and opposes its use for the Linux kernel.

Basically, his argument is that the changes and new restrictions imposed by GPLv3 in practice make it a new kind of license, but that the naming seems designed to "lure" people who think it's a regular license upgrade into adopting a license they would otherwise not consider using.

I'm not sure if this is in writing anywhere, but I recall him talking about this at the same DebConf where he admitted that he had never managed to install Debian Linux.

So moving the Linux kernel to GPLv3 may not be entirely trivial, but that's not the reason it hasn't been done. Linus simply didn't want to.


Getting it running on actual iron took more effort than I expected, but lately distros like Ubuntu have spoiled the Linux crowd.

The hardest part about going all-in is if you find that some of the software you rely on isn't packaged. There's just no way to "cheat" and ./configure && make your way around it: you have to learn the Nix language and package the software yourself. (Which can arguably be called a viral feature for getting more software packaged.)

To me this was too much work to fit into an otherwise busy weekend, and I just had to give up. Had I had more time to do things properly, I would probably have stuck with it.

The concepts it introduces are quite nice and well executed.
