I don't know anything specific about zapcc but based on the numbers here:

http://www.zapcc.com/case-studies/compiling-boost/

even their first compilation (the one you still need to perform when using ccache) is twice as fast as GCC.

In any case, I think the primary audience for this is people using clang, not GCC.


You are correct, zapcc should be compared only to clang from the same svn revision. Comparing clang to gcc is another matter altogether.

Even in first compilation, zapcc will cache between the first and second files compiled. The only case where there is no caching is when zapcc starts or restarts, for example when compilation flags are changed.


I understand that Microsoft might have felt they were backed into a corner on this issue but...

This is a horrifically bad thing to do.

When a user can change an option as simply as ticking a checkbox, and that option doesn't just cause some actions to return a clear error but instead dramatically changes runtime behavior without any change in API behavior, that's bad, bad news.

Deploying and supporting software for hundreds of thousands of customers is really annoying, and this type of obscure, broad-reaching option is the biggest reason why. Every week I receive a bug report from a customer that turns out to be caused by a minor setting I've never heard of preventing installation, blocking some system resource, or changing file system or network behavior, etc.

If there's a clear error code that reports what option is blocked and why, then it's annoying but manageable. When there's a massive change in behavior but nothing captured in any log, crash report, etc... then AAAARRRGG!!

-----


Most of these settings can be controlled by group policy or registry settings. This really just illustrates how serious they are about not breaking compatibility with the widest variety of software.

If you are having issues with users being able to change things like this on modern Windows systems, you need to look into removing their permissions on the host. Since XP, most settings of the kind described in this old article require admin permissions to change. Maybe that's not an option for you...I get that.

You are right, though, about supporting lots of users...there is always going to be local config that makes that hard when they can change so much of the system. The upcoming universal platform makes that less likely, but it's definitely a hard problem.

-----


OK, how do I detect that the bug is caused by someone changing one of a thousand apparently unrelated settings that doesn't report its interference? I don't want to turn off all customizations that users may need.

-----


Most drawing in OS X is done by the CPU. The GPU is used for compositing (layering the views drawn by the CPU over each other) and additional special cases (CoreImage effects, OpenGL, etc).

I have no information on what is going weird in 10.10.4 for OWaz but it's far more likely to be a Thunderbolt driver issue than anything else (i.e. simply related to pushing data over the cable).

-----


There are more than a couple of situations where Swift forces you to repeat yourself. No default implementations for protocols and missing language features for dealing with enums would probably be more prominent examples than this narrow problem with the availability of default arguments.

I'm not sure any one of these "work in progress" related issues are a reason to "hate Swift" (or at least, where Swift is going). It's more a reason to hate the current status of Swift as a project – since it will probably be a year or two before these sorts of non-critical issues bubble to the top of the Swift development team's priority list.

-----


Apple rumors are not a typical example. There's a whole industry of idiot "analysts" who do nothing but release click-bait speculation about Apple.

-----


That puts "com" first as you read it, despite the fact that it conveys no useful information.

Top-level domains have rarely been used as the ontological category they were intended to be. They are little more than flavor-text that is annoyingly required for uniqueness purposes. I think they rightfully belong in the position of least significance.

-----


> That puts "com" first as you read it, despite the fact that it conveys no useful information. They are little more than flavor-text that is annoyingly required for uniqueness purposes.

Except that's not true at all.

The reason they became "flavor-text" was because they appeared to be tacked on to the end for no reason other than uniqueness. Previously existing organizational schemes worked for decades with proper categorization: Usenet is a wonderful example of just how powerful it is.

Had URLs been defined correctly, "com" would have immediately told the user "a commercial entity", "org" would have immediately meant "an organization", "net" probably wouldn't exist, and these newer TLDs like "google", "audio", "apps" would have made a hell of a lot more sense.

-----


> Usenet is a wonderful example of just how powerful it is.

Usenet suffered from a similar problem to domain names: everything started creeping into the "alt" top-level because it was the popular top-level free-for-all.

And the usefulness of the classification for humans was debatable since every topic could be found in multiple locations and some, like rec.arts.tv and alt.tv, rapidly ended up dwarfing entire top level categories like humanities.

-----


But nowadays TLDs have little relation to the content category of the domain; look at the .com distortion, for example. It's a valid claim that TLDs are not really useful information. If you look at the current domain structure, it's more like a file extension analogy.

-----


The whole point is that this is only the case because of the TLD's position at the end of the root address, which has caused it to be perceived as a tacked-on additive for uniqueness' sake. If it had been placed at the beginning all along this likely would not be the case nowadays. TLDs very well could have been useful information; that's the point @awalton was making.

-----


I agree to an extent; it would be interesting to see how the distribution of domains across TLDs would look if it had been reversed from the beginning, but I'm skeptical about a big shift. Country codes would still dominate local content and my feeling is .com / .net would be more balanced.

-----


Sure, but "www" much less significant.

-----


In an ideal world, 'www' would be superfluous. Your browser would know what server to contact for the WWW service by requesting SRV records for the bare domain.

-----


Based on that list, 18.61% of PIN codes can be guessed before the 3-attempt lockout, no fancy power tricks required.
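For what it's worth, that figure is just the sum of the top three frequencies in such a ranking. A rough sketch, assuming the commonly cited numbers from the DataGenetics PIN analysis (substitute whatever list the parent actually links to):

  // Hypothetical figures: approximate frequencies of "1234", "1111" and "0000",
  // as reported in the DataGenetics PIN analysis. Three guesses cover their sum.
  let guessableInThreeTries = 0.10713 + 0.06016 + 0.01881   // ~0.1861, i.e. 18.61%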

-----


You're mistaken if you think that anything in C++11/14 makes it memory-safe like Rust guarantees.

C++11/14 lets you assign a std::unique_ptr to another variable then re-use the original variable, causing a null dereference. You can also have a reference to the same std::unique_ptr from two different threads causing a race-condition on the underlying pointer.

There is no bounds checking on std::vector by default. You can bounds check a std::vector (using "at" or various configuration/compiler settings which change the semantics of the standard library) but by default, it is not checked.

Fundamentally, the issue is that C++ lets you have null pointers and references (yes, references can be null). It lets you use variables before initializing them. It lets you run out of bounds. It lets you access the same variable from multiple threads without any safety. Yes, idiomatically, none of these should be an issue, but it's not possible to be a perfect programmer.

Rust forces idiomatic memory management and eliminates all these issues. If you make a memory safety mistake, the compiler will force you to fix it.

-----


Your points are clearly valid. But wouldn't some of them be corrected by a strict mode?

- For instance, the reuse of the original variable could be detected by the compiler.
- The bounds checking for std::vector can effectively be enabled. One could imagine a std::strict_vector that does so.

What I am wondering is: would applying the same idiomatic memory management to C++ require some huge tweaks to the language and some bad tricks or new keywords, or can it be done without changing its design, just by enforcing some rules?

I, for instance, have no clue how to deal with your "one unique_ptr, two threads" problem. Could it be done in an elegant way in C++?

-----


Linear types (as in Rust) can prevent (at compile time) some of the more trivial use-after-free issues, e.g. for unique_ptr, but I think the main reasons you won't see undefined behavior eliminated from C++ wholesale are that it a) often requires extensive support at runtime (see ASAN, UBSAN, etc.), b) presents a huge barrier to optimization in certain cases, and c) (thus) is going to be waaay too slow for production use. (I.e. if C++ were to go in this direction, someone would basically either "fork" C++ or a new (similar) language would supplant it.)

Unfortunately, no "sufficiently smart compiler" currently exists that can optimize high-level code well enough to beat what a good micro-optimizing C++ compiler (which can assume that no undefined behavior occurs at runtime) can achieve.

-----


Undefined behavior does permit optimizations, yes, but I think you're overselling its benefit. Rust doesn't have undefined behavior and there are many instances where its strict semantics mean that it is far more optimizable and runtime-efficient than C++ (though some of those optimizations are yet to be implemented).

-----


Perhaps, these days you're right -- assuming you want to only support mainstream architectures. These days you can mostly rely on all mainstream architectures to do something sensible with e.g. signed integer overflow[1] or excessive shifting, but that wasn't necessarily the case when most of C++ was standardized. As an example of a similar nature -- as I'm sure you know -- Rust has chosen to not abort/panic on signed overflow although almost all instances of such are most probably plain logic errors and could lead to security problems[2]. As far as I could understand, this was for performance reasons. Granted, this is not quite as disastrous for general program correctness as UB, but it can lead to security bugs.

Point being: Underspecification can give you a lot of leeway in how you do something -- and that can be hugely valuable in practice.

Just as an aside: Personally I tend to prefer safety over performance, but I was persuaded that UB is valuable by comments that Chandler Carruth of Clang (and now ISO C++ committee) fame made about UB actually being essential to optimization in C++. Sorry, can't remember where, exactly, those comments were made.

[1] Everybody's probably using two's-complement (for good reasons).

[2] Not nearly as easily as plain buffer overflows, but there have been a fair few of these that have been exploitable.

-----


Even mainstream architectures don't handle excessive shifts consistently, e.g. for shifting an n-bit integer, I believe some mask the shift by 2^n - 1, some by 2^(n+1) - 1, and some don't mask at all (i.e. 1 << 100000 will be zero). Of course, being UB (rather than just an "unspecified result" or some such) probably isn't the best way to handle the inconsistency.

In any case, I believe Rust retains many of the optimisations made possible via UB in C++ by enforcing things at compile time. In fact, Rust actually has quite a lot of UB... that is, there are many restrictions/requirements Rust 'assumes' are true. For example, the reference types & and &mut have significantly more restrictions around aliasing and mutability than pointers or references in C++. The key difference between Rust and C++ is that it is only possible to trigger the UB with `unsafe`, as the compiler usually enforces rules that guarantee it can't occur. People saying "Rust has no UB" usually implicitly mean "Rust cannot trigger any UB outside `unsafe`".

-----


Rust will now actually panic on integer overflow when not in release mode, as opposed to how things used to be, where it would just silently accept the overflow with wrapping semantics. Here is the discussion from when this change was announced: http://internals.rust-lang.org/t/a-tale-of-twos-complement/1...

-----


Oh, that's good news!

-----


A "safe mode" in C++ is completely impossible in practice, because C++ does not have a module system but an automated text copy-and-paste mechanism (the preprocessor). Hence your "strict" mode would refuse to compile any unsafe constructs in your standard C++ library headers, boost headers, Qt headers, libc headers, or any other headers of popular and mature libraries that made you choose the C++ language in the first place. If you can't re-use any code anyway, why not pick a sane language?

-----


An interesting article, but frustrating to read as a non-linguist/non-grammarian.

While it's about the incorrect usage of the word "lain", the article only actually uses "lain" correctly once in a sentence ("Those skeletons had lain under that supermarket for centuries"). Additionally, the article uses but doesn't really define the terms "transitive" and "past participle", ignorance of which is the key source of confusion for regular speakers.

-----


I was going to respond with an explanation of the obscure terms, but I'm sure without looking that Wikipedia can do a better job than I.

Instead, I would like to exhort you and everyone else to take an interest in linguistics[1]. I really wish we could replace our 12 years of mathematical education, which I think is overall quite useless[2], with 12 years of linguistic education. If we spent 12 years learning how all of the world's languages work, I also think we would go a long way to reduce xenophobia and racism. [3]

Now, don't get me wrong. I don't want people to learn how to speak correctly. I want people to expand their innate curiosity for just how goddam smart humans are at languages. Like the article said, the unusual thing isn't that we can't keep lie/lay/lied/lain/laid straight, but that we can keep almost everything else straight. Language is a unique human phenomenon. We have opposable thumbs, but so do monkeys, and elephants are quite handy (haha!) with their trunks. Even a raven can use tools, but no other animal exhibits the breadth of ability of language that we do. Birdsongs are complicated and maybe even culturally transmitted, but they can't be used to dictate laws or record writing or persuade others as I am now trying to persuade you.[4]

Linguistics is a science: it proceeds by gathering empirical data of how humans speak, then formulates hypotheses that will predict how they will speak, and confirms or denies these hypotheses. What more interesting object of study than ourselves! Language is something we all do on a daily basis, spontaneously, naturally. Ever wondered why we do it the way we do it?

Sadly, the beauty of this science is clouded behind the way it is taught today in schools, along with a jargon that further distances some of us from the actual object of study. To the prescriptivist grammarians, I say their objective is as futile as trying to educate ants on architecture: ants will build anthills as they see fit. But for the jargon, I am sad to say that some of it is inevitable, because we need the lens of analysis and classification in order to see the true attraction of our linguistic abilities.

Transitive or intransitive verbs don't occur only in English: virtually every human language has something like them. Participles are less universal, but they are also not a uniquely English phenomenon. But why does this happen? Why does every known language have verbs but only some have participles? Why do humans craft languages as they do? Therein lies the science! (or lays?)

So, try to learn some linguistic jargon. Underneath it lies a very interesting set of concepts that describe an ability that uniquely characterises our species. :-)

----

[1] The language log is a good place to start finding interesting things:

http://languagelog.ldc.upenn.edu/nll/

[2] Particularly when calculus is the ultimate goal of elementary mathematical education. When was the last time you or most adults around you had an urgent need for calculus? Can you even state, say, Rolle's theorem without looking it up?

[3] For example, did you know that ebonics has way more verb tenses that express very nuanced moments in time, nuances which standard English lacks?

https://en.wikipedia.org/wiki/African_American_Vernacular_En...

[4] Here is a pretty interesting article that theorises that human language may have evolved from some characteristics found in birdsong!

http://phys.org/news/2013-02-human-language-evolved-birdsong...

-----


> I really wish we could replace our 12 years of mathematical education, which I think is overall quite useless[2], with 12 years of linguistic education

I wonder why it is that people can never just say "you know, X is quite important and maybe deserves more of our attention" and instead have to go all bombastic and pretend "X is the most important thing in the world and our modern society couldn't exist without it".

-----


I do enjoy bombasticness.

-----


Bombasticity?

It's probably actually just 'bombast', isn't it?

-----


Bombasticitinessation.

-----


The Language Log is great. Geoffrey K. Pullum, who wrote this article in the Chronicle, is a contributor there:

http://languagelog.ldc.upenn.edu/nll/?author=3

He writes a lot of good stuff. I particularly enjoy his crusade, there and elsewhere, against Strunk and White:

http://languagelog.ldc.upenn.edu/nll/?p=1485

http://languagelog.ldc.upenn.edu/nll/?p=15509

http://chronicle.com/article/50-Years-of-Stupid-Grammar/2549...

http://roomfordebate.blogs.nytimes.com/2009/04/24/happy-birt...

I see he also weighed in on Giraffedata's quest to rid Wikipedia of "comprised of" (see HN passim):

http://languagelog.ldc.upenn.edu/nll/?p=17636

-----


Just improve education in general. People might study, e.g., math for years at school, but most are not learning several years' worth of material.

-----


Interesting, sure, but so is math if you're not just performing arithmetic or memorizing equations without knowing what they mean, which many mistake for it. Why would linguistics be any more "useful" to a student than the 12 years of math it would replace? Instead of replacing math, replace the kind of nonsense in English courses here with linguistics: https://news.ycombinator.com/item?id=8812388

-----


Why do we have to replace either subject? Both math and literature are generally poorly taught. That does not mean that understanding of both is not requisite for being an educated person.

-----


I meant only replacing part of the English curriculum, not all of it.

-----


Oddly enough we covered these terms in High School, but it was in Japanese class.

-----


To summarize, the author has two problems:

1. Bit shift is not part of the IntegerType protocol, when it should be (although the author could avoid the issue by accumulating the bytes in a UIntMax instead of the generic type).

2. Construction from (and conversion to) a UIntMax bit pattern is not part of the IntegerType protocol, when it should be (done correctly, this addresses the author's sign and construction complaints)

The author incorrectly claims/implies that these are problems with generics or protocols or the diversity of integer types in Swift. They're really a problem of omissions in the standard library protocols that are forcing some very cumbersome workarounds. The necessary functionality exists, it just isn't part of the relevant protocols. Submit these as bugs to Apple.

Edit:

As a follow-up, here's a version that gets around the standard library limitations using an unsafe pointer...

  func integerWithBytes<T: IntegerType>(bytes:[UInt8]) -> T? {
    if (bytes.count < sizeof(T)) {
        return nil
    }
    var i:UIntMax = 0
    for (var j = 0; j < sizeof(T); j++) {
        i = i | (UIntMax(bytes[j]) << UIntMax(j * 8))
    }
    return withUnsafePointer(&i) { ip -> T in
        return UnsafePointer<T>(ip).memory
    }
  }
Of course, at that point, why not simply reinterpret the array buffer directly...

    func integerWithBytes<T: IntegerType>(bytes:[UInt8]) -> T? {
        if (bytes.count < sizeof(T)) {
            return nil
        }
        return bytes.withUnsafeBufferPointer() { bp -> T in
            return UnsafePointer<T>(bp.baseAddress).memory
        }
    }

-----


The more protocols that are added, the more concepts there are to scare people away from what they think of as relatively simple primitives. 26 is already a scary number of concepts to tie to simple whole numbers.

To be clear, the complexity is inherent in using numbers programmatically. The only real way around it would be to reduce flexibility around overloading operators, forcing people to implement bundles of related operators that are all associated with a concept (protocol). This would decrease the utility of overloading for implementing some algebras.
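To make the idea of an operator bundle concrete, here is a minimal sketch (Summable is a made-up name; the standard library's Equatable declares its == requirement in essentially this style):

  // A hypothetical protocol bundling two related operators into one concept.
  protocol Summable {
      func +(lhs: Self, rhs: Self) -> Self
      func -(lhs: Self, rhs: Self) -> Self
  }

  // The built-in numeric types already have matching global operators,
  // so retroactive conformance is just an empty extension.
  extension Int: Summable {}
  extension Double: Summable {}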

-----


I definitely agree that, by far, the best way to do this is to just use type punning with pointers. But if you really want to rebuild this yourself, you can still do so without the unsafe pointer, by handling signed and unsigned separately.

Unsigned is easy:

  func integerWithBytes<T: UnsignedIntegerType>(bytes: [UInt8]) -> T? {
      if bytes.count < sizeof(T.self) {
          return nil
      }
      var acc: UIntMax = 0
      for var i = 0; i < sizeof(T.self); i++ {
          acc |= bytes[i].toUIntMax() << UIntMax(i * 8)
      }
      // UnsignedIntegerType defines init(_: UIntMax)
      return T(acc)
  }
Signed is trickier because of the sign bit. You have to sign-extend manually:

  func integerWithBytes<T: SignedIntegerType>(bytes: [UInt8]) -> T? {
      if bytes.count < sizeof(T.self) {
          return nil
      }
      var acc: UIntMax = 0
      for var i = 0; i < sizeof(T.self); i++ {
          acc |= bytes[i].toUIntMax() << UIntMax(i * 8)
      }
      if sizeof(T.self) < sizeof(UIntMax.self) {
          // sign-extend the accumulator first
          if bytes[sizeof(T.self)-1] & 0x80 != 0 {
              for var i = sizeof(T.self); i < sizeof(UIntMax.self); i++ {
                  acc |= 0xFF << UIntMax(i * 8)
              }
          }
      }
      // We're assuming that IntMax is the same size as UIntMax and therefore
      // uses init(bitPattern:) to convert, but that should be safe.
      let signed = IntMax(bitPattern: acc)
      // SignedIntegerType defines init(_: IntMax)
      return T(signed)
  }

-----


> not part of the IntegerType protocol, when it should be

> Submit these as bugs to Apple.

Language designers keep making this mistake. .Net has this problem. Supposedly a bug was submitted for it and it's not possible without breaking some backwards compatibility. Luckily Swift is beta (right?).

Either way, here's how I solve this "safely" in .Net:

1. Never do bit operations against signed integers. The behavior varies wildly across languages; it's best to just avoid this altogether.

2. A UInt64 is bit-identical to the Int64 that it was cast from.

I'm guessing at the Swift syntax, but those concepts translate to:

    func integerWithBytes(bytes: [UInt8]) -> UInt64? {
        if (bytes.count < sizeof(UInt64)) {
            return nil
        }
        var i: UInt64 = 0
        for (var j = 0; j < sizeof(UInt64); j++) {
            // Widen each byte to UInt64 before shifting so the shift can't overflow UInt8.
            i = i | (UInt64(bytes[j]) << UInt64(j * 8))
        }
        return i
    }
That's right: you simply don't need a generic method.
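And when a signed result is wanted, point 2 above covers it: the caller just reinterprets the returned bits. A minimal sketch, assuming the integerWithBytes function sketched above and the standard Int64(bitPattern:) initializer:

  let bytes: [UInt8] = [0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF]
  if let raw = integerWithBytes(bytes) {
      // Same bits, signed interpretation: 0xFFFFFFFFFFFFFFFF becomes -1.
      let signed = Int64(bitPattern: raw)
  }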

-----


This is a great pragmatic example.

Swift is a fantastic language with many features that let you write safe and strict code. But do not forget, Swift sits on top of a foundation of C and Objective-C. In my opinion a great Swift programmer knows when to take advantage of that.

And, as the last example above shows, if written correctly it is totally possible to write elegant and safe code. In my opinion there is nothing wrong with an approach that uses something like withUnsafeBufferPointer().

-----


> Of course, at that point, why not simply reinterpret the array buffer directly...

... because then the behavior of the code depends on the endianness of the CPU, and thus is a potential future portability gotcha... ? :)
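One way to keep the reinterpretation approach while avoiding that gotcha is to declare a byte order for the input (say, little-endian) and normalize to host order after the load. A rough sketch, assuming the init(littleEndian:) initializer on the fixed-size integer types:

  func uint64WithLittleEndianBytes(bytes: [UInt8]) -> UInt64? {
      if bytes.count < sizeof(UInt64) {
          return nil
      }
      let raw = bytes.withUnsafeBufferPointer() { bp -> UInt64 in
          return UnsafePointer<UInt64>(bp.baseAddress).memory
      }
      // Treat the loaded bits as a little-endian representation and convert to
      // host byte order; on a little-endian CPU this is a no-op.
      return UInt64(littleEndian: raw)
  }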

-----
