Hacker News

> The fact is, these "idioms and best practices" will not be followed perfectly on projects of any reasonable size if they are not enforced by code. Why would you not enforce them with code?

A compiler should not be an obstacle to the programmer. We have tried enforcing things before, many times. Those things got abandoned.




Sigh. We build machines to automate the repetitive, to eliminate the daily drudgery and to repeat steps with perfection (that we would repeat with perfection ourselves if only we were as focused as a machine). So why do we keep finding ourselves arguing that a lazy compiler, which offloads the work of a machine onto a dev team, is an acceptable compromise?


Meta-comment: I believe the difference in opinion here (which seems to recur, over and over, and has for decades) is because the job title of "software engineer" actually encompasses many different job duties. For some engineers, their job is to "make it work"; they do not care about the thousand cases where their code is buggy, they care about the one case where it solves a customer's problem that couldn't previously be solved. For other engineers, their job is to "make it work right"; they do not care about getting the software to work in the first place (which, at their organization, was probably solved years ago by someone who's now cashed out and sitting on a beach), they care about fixing all the cases where it doesn't work right, where the accumulated complexity of customer demands has led to bugs. The first is in charge of taking the software from zero to one; the second is in charge of taking the software from one to infinity.

For the former group, error checking just gets in their way. Their job is not to make the software perfect, it's only to make it satisfy one person's need, to replace something that previously wasn't computerized with something that was. Oftentimes, it's not even clear what that "something" is - it's pointless to write something that perfectly conforms to the spec if the spec is wrong. So they like languages like Python, Lisp, Ruby, Smalltalk, things that are highly dynamic and let you explore a design space quickly without getting in your way. These languages give you tools to do things; they don't give you tools to prevent you from doing things.

The second group works in larger teams, with larger requirements and greater complexity, and a significant part of their job description is dealing with bugs. If a significant part of the job description is dealing with bugs, it makes sense to use machines to automate checking for them. And so they like languages like Rust, C++, Haskell, Ocaml, occasionally Go or Java.

The two groups do very little work in common (indeed, most of the time they can't stand to work in the opposing culture), but they come together on programming message boards, which don't distinguish between the two roles, and hence we get endless debates.


My point was: tools that prevent you from doing things should not do so without explicit permission. Thinking is hard, and any interruption by a tool or a compiler imposes unnecessary cognitive load and makes thinking even harder, which may lead to a logical mistake. It is much better to deal with the compiler after all the thinking is done, not during.


I'm pretty sure you've never used a language with a good type system then.

You describe a system where you have to keep everything a program is doing that's relevant in your head at once, and when you're forced out of that state, it's catastrophic. You seem to be assuming that's the only way to get productive work done while programming. I happen to know it's not.

If a language has a sufficiently good type system, it's possible to use the compiler as a mental force multiplier. You no longer need to track everything in your head. You just keep track of minimal local concerns and write a first pass. The compiler tells you how that fails to work with surrounding subsystems, and you examine each point of interaction and make it work. There is no time when you need the entire system in your head. The compiler keeps track of the system as a whole, ensuring that each individual part fits together correctly. The end result is confidence in your changes without having to understand everything at once.

So why cram everything into your brain at once? Human brains are notoriously fallible. The more work you outsource to the compiler, the less work your brain has to do and the more effectively that work gets done.
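A small Rust sketch of this "force multiplier" effect (the `PaymentState` type and `describe` function are hypothetical, just for illustration): because `match` must be exhaustive, adding a new variant to the enum later makes the compiler point at every `match` that needs updating. You don't have to hold the whole system in your head; the compiler tracks the interactions for you.

```rust
// Hypothetical domain type. If a new variant (say, Refunded) is
// added later, every exhaustive `match` on PaymentState becomes a
// compile error until that case is handled -- the compiler lists
// each site that must change.
enum PaymentState {
    Pending,
    Settled { amount_cents: u64 },
    Failed,
}

fn describe(state: &PaymentState) -> String {
    // Exhaustive match: the compiler rejects this function if any
    // variant is left unhandled (there is no catch-all arm).
    match state {
        PaymentState::Pending => "pending".to_string(),
        PaymentState::Settled { amount_cents } => {
            format!("settled: {} cents", amount_cents)
        }
        PaymentState::Failed => "failed".to_string(),
    }
}

fn main() {
    assert_eq!(describe(&PaymentState::Pending), "pending");
    assert_eq!(
        describe(&PaymentState::Settled { amount_cents: 250 }),
        "settled: 250 cents"
    );
    assert_eq!(describe(&PaymentState::Failed), "failed");
    println!("ok");
}
```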


Yes, but "tools that prevent you from doing things you would prefer not to have done in the first place (while still granting you permission to override them when desired)" would be a fairer assessment of what a strict compiler is.

We all agree that a null dereference is a bad thing at runtime. I see no advantage for me as a programmer to be allowed to introduce null dereferences into my code as a side effect of "getting things to work" if then when the code runs it doesn't work right. This increases my cognitive load as a programmer, it does not decrease it.
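To make that concrete, here's a minimal Rust sketch (the `first_word` helper is made up for the example): ordinary references can't be null, so a value that might be absent is an `Option<T>`, and the compiler refuses to let you use it without handling the absent case. The "null dereference" simply can't be written by accident.

```rust
// Absence is modeled in the type: Option<&str>, not a nullable
// pointer. The caller cannot dereference the result directly; the
// compiler forces the None case to be acknowledged first.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    match first_word("hello world") {
        Some(w) => println!("first word: {}", w),
        None => println!("no words"),
    }
    assert_eq!(first_word("hello world"), Some("hello"));
    assert_eq!(first_word("   "), None);
}
```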

I would argue that you don't think about the compiler any more when using a language like Haskell than you do when using Python. But you do get more assurances about your program after GHC produces a binary than after Python has finished creating a .pyc -- and that is a win for the programmer.


Agreed. But every production language that I'm aware of has an out that allows you to escape its type system, with the exception of languages whose type systems are intended to uphold strong security properties and verification languages that feature decidable logics. I can't remember for sure, but I think even Coq--which is eminently not a language designed for everyday programming--may diverge if you explicitly opt out (though I could be wrong about that).

The questions, to my mind, are

1. How easy is it to opt out?

2. How often do you have to opt out?

3. How easy is it to write well-typed expressions?

4. What guarantees does a program being well-typed provide?

For example, you almost never have to opt out of the type system in a dynamic language, but the static type system is very basic. In a language like Rust, you opt out semi-frequently (unsafe isn't common but it's certainly used more often than, say, JNI), and it can be hard to type some valid programs, but opting out is simple and the type system provides very strong guarantees. In a language like C, you never have to opt out of the type system, and the annotation burden for a well-typed program is minimal, but the type system is unsound--being well-typed guarantees essentially nothing in C.
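A sketch of what Rust's opt-out looks like in practice: creating a raw pointer is ordinary safe code, but dereferencing it is one of the few operations the compiler can't verify, so it requires an explicit `unsafe` block. The escape hatch is simple to use, but it is always visible in the source.

```rust
fn main() {
    let x: i32 = 42;
    let p: *const i32 = &x; // making a raw pointer is safe

    // Dereferencing a raw pointer is not checkable by the compiler,
    // so it demands explicit opt-in via an `unsafe` block.
    let y = unsafe { *p };

    assert_eq!(y, 42);
    println!("ok");
}
```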

All languages fall somewhere along this spectrum, including Go. It's just a question of what tradeoffs you're willing to make.


I think it's worth clarifying that Rust's `unsafe` doesn't opt out of the core type system per se; it allows the use of a few additional operations that the compiler doesn't/can't check. I think this distinction is important because, as you say, `unsafe` isn't very uncommon, so it is nice that one still benefits from the main guarantees of Rust by default inside `unsafe` blocks. :)
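A quick sketch of that distinction: inside an `unsafe` block, type checking and borrow checking still apply in full; the block only permits a handful of extra operations such as dereferencing a raw pointer.

```rust
fn main() {
    let n: u64 = 7;
    let p: *const u64 = &n;

    let m = unsafe {
        // The dereference needs `unsafe`, but the expression is
        // still fully type-checked: *p is a u64, and something like
        // `*p + "x"` would be rejected even inside this block.
        *p * 2
    };

    assert_eq!(m, 14);
    println!("ok");
}
```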


Sophisticated type systems have not been abandoned by any stretch of the imagination.

We tried static code generators before, as well as linters and static code analysis on untyped code, and those have pretty well been proven to be ineffective. All of which are supposedly "new innovations" in Go. So if you want to defend Go that's not an approach you can really take.



