Rust's Type System Is Turing-Complete: Type-Level Programming in Rust (sdleffler.github.io)
280 points by sleffy on March 11, 2017 | hide | past | favorite | 117 comments



Hey I'm the person who raised the question (and I work on Rust, mainly on the trait system discussed in this post).

I agree with the author that while this is interesting, for users it doesn't matter much. In particular, the folks (with respect) pontificating about whether this is a good or bad thing are quite exaggerating its consequences. In practice, you can neither conveniently use this property for metaprogramming, nor will you ever see typechecking fail to terminate (not only is there a recursion limit, it is extremely unlikely you will write the kinds of impls that hit that limit outside of extreme type-level programming like that documented here).
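For reference, the recursion limit mentioned above is a per-crate knob; it can be raised with a crate-level attribute:

    #![recursion_limit = "256"]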

The most interesting thing is that the two examples we have now of Turing completeness both use only features that are available in Haskell without making typechecking undecidable. The reason our system is Turing complete is that we allow certain impls that Haskell wouldn't, which are in practice very useful.

For example, we have an impl of "ToString for T where T: Display" (in Haskellese "instance Display a => ToString a"); that is, anything that can be pretty printed also has a to_string method. This would be disallowed without the -XUndecidableInstances flag in Haskell. However, if your type can't be pretty printed, you can separately add a ToString impl for it (even with undecidable instances, Haskell wouldn't consider this coherent because Haskell coherence ignores constraints).
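For the curious, here is a simplified sketch of that blanket impl (the real one in std is also implemented for unsized types via `?Sized` and goes through the fmt machinery):

    use std::fmt;

    // Anything that is Display automatically gets to_string().
    // (Only the standard library itself can write this impl,
    // because of the coherence rules.)
    impl<T: fmt::Display> ToString for T {
        fn to_string(&self) -> String {
            format!("{}", self)
        }
    }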

If you're interested, the exact sorts of impls we allow that Haskell wouldn't are covered by what's called the "Paterson condition" in Haskell docs.


I have managed to hit the recursion limit in Rust before: https://is.gd/7IH5gE

What happens is that in Rust, closures have a unique type[1]. So the recursive call in cps_factorial instantiates a new cps_factorial, which contains a new unique closure type, which instantiates a new cps_factorial...

[1] This is done so that closure calls can be resolved statically, which reduces run-time overhead and helps optimisation.
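A minimal sketch of the kind of code that triggers this (reconstructed from the description above; the linked snippet may differ):

    fn cps_factorial<F: FnOnce(u64) -> u64>(n: u64, k: F) -> u64 {
        if n == 0 {
            k(1)
        } else {
            // The closure below has a brand-new type at every level, so
            // this call instantiates cps_factorial with a fresh type
            // parameter, which contains yet another fresh closure type,
            // and so on without end.
            cps_factorial(n - 1, move |acc| k(acc * n))
        }
    }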


I'm quite out of my depth here, and while I can sort of understand the recursive construction of new types, it seems as if the type checker should be able to almost trivially 'shortcut' the recursion unless the return type is unconstrained in some sense such that it can be a recursive type with some type arguments.

Most likely there is something obvious I'm missing, as I don't know a lot about Rust, but unless the compiler isn't doing what is essentially a type-level equivalent of tail call elimination (which I'd assumed it almost certainly does), finding the type of that function should be trivial unless the return type can become recursive with unbounded depth.


> unless the compiler isn't doing what is essentially a type-level equivalent of tail call elimination

It doesn't, why would it? That's a pretty niche thing to have.

> finding the type of that function should be trivial unless the return type can become recursive with unbounded depth.

The return type is recursive with unbounded depth. The typechecker doesn't know the value of `n`, so to it this returns an infinitely recursive type.

Closures are statically dispatched. If you want something like this to work, you should box them as a trait object so that they get dynamically dispatched.


This example code must be monomorphized at compile time, meaning that each closure will have a different representation. "Tail call optimization at the type level" is not possible with this compilation model.

(The problem is solved by using a trait object to dynamically dispatch the inner call. https://play.rust-lang.org/?gist=0d3c0a5f1291eb34ba61bd39449...)
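Roughly the shape of that fix, as a sketch in current syntax (the linked playground may differ in details):

    fn cps_factorial(n: u64, k: Box<dyn FnOnce(u64) -> u64>) -> u64 {
        if n == 0 {
            k(1)
        } else {
            // Boxing erases the closure's concrete type, so every
            // recursive call takes the same trait-object type and
            // monomorphization terminates.
            cps_factorial(n - 1, Box::new(move |acc| k(acc * n)))
        }
    }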


From memory, I solved it using more CPS and currying. Well, manual currying, with functions like curry1, curry2, etc...

Although I'm pretty sure you can implement the Fn* traits to get currying in Rust (the Fn traits are unstable), which is neat.

    // Something like this (nightly only, since implementing the
    // Fn* traits requires unstable features):

    // Calling a two-argument function with one argument yields a Curry:
    impl<F, A, B, R> FnOnce<(A,)> for F
        where F: FnOnce(A, B) -> R,
    { type Output = Curry<F, A, B>; /* ... */ }

    // and calling the Curry with the second argument yields the result:
    impl<F, A, B, R> FnOnce<(B,)> for Curry<F, A, B>
        where F: FnOnce(A, B) -> R,
    { type Output = R; /* ... */ }
I think I got it working... but you would need some more work for more arguments. (And it worked differently on different versions of rust-nightly :/)



In GHC, you can get the "this is the default when it satisfies those constraints" behavior with DefaultSignatures.


New rule. Any sufficiently advanced type system is an ad-hoc re-implementation of Prolog. Once you have unification and backtracking you basically have Prolog and can implement whatever Turing machine you want with the type system.


In the near future, Rust's trait system literally will be a logic language similar to Prolog that runs at compile time. We walk the AST and generate statements to prove. This is expected to be easier to reason about, easier to extend, and to have shorter compile times - for essentially the reason you cite. It's even given us some (as yet unexplored) ideas for new extensions.
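To make the analogy concrete, here is an illustrative impl (my example, not from the post) read as a Horn clause:

    // This impl corresponds to the Prolog-style rule
    //     pretty(vec(T)) :- pretty(T).
    // i.e. "Vec<T> is Pretty, provided T is Pretty". Trait resolution
    // is the compiler searching for a proof of goals like these.
    trait Pretty {}
    impl<T: Pretty> Pretty for Vec<T> {}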


Serious question... should I spend time in 2017 on Prolog? It seems like another case of "everything old is new again"...


Yes, you surely should, even if it is just wasting a few hours over a few weekends.

Getting to grasp logic programming would improve your skills, even if you never use Prolog again.

Use SWI-Prolog, one of the best free implementations out there.

http://www.swi-prolog.org/


Yes, for the concepts. Just like it helps to know relational databases, even if you always use an ORM in practice.

It is debatable if Datalog is enough or if cuts et al are worthwhile.


> even if you always use an ORM in practice.

I seldom use an ORM; I'd rather leave the work to the database engine.

My knowledge of PL/SQL, Transact-SQL and PL/pgSQL serves me way better than trying to make an ORM perform better.

Something like Dapper.NET or jOOQ are good enough for the remaining part of the work.


Not sure if ORMs are a good analogy here. ORMs are a leaky and misleading abstraction on top of an actual RDBMS.


The GP seems to be thinking in the opposite direction. His analogy is that knowing an ORM but not SQL is like not knowing Prolog, and that adding knowledge of SQL to your ORM knowledge is like learning Prolog.

So the argument is that it is a similar eye-opening transition from ignorance to awareness.


Well, SQL is a pretty complicated implementation of Datalog. I think it's easier to learn Prolog from first principles, since the pure subset is pretty simple.


Probably related? I still have to go over this:

"Writing custom type systems for Python in Prolog"

https://news.ycombinator.com/item?id=12108041


Try to implement the unification algorithm yourself.

Example: https://unify.ly5.ch/ (caveat: it's an old implementation)
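If you want to try, here is a minimal first-order unification sketch in Rust (illustrative only, and unrelated to the linked implementation; like traditional Prolog, it skips the occurs check):

    use std::collections::HashMap;

    #[derive(Clone, Debug)]
    enum Term {
        Var(String),
        // A functor applied to arguments; a constant is App(name, vec![]).
        App(String, Vec<Term>),
    }

    type Subst = HashMap<String, Term>;

    // Follow variable bindings until hitting an unbound var or a functor.
    fn resolve(t: &Term, s: &Subst) -> Term {
        match t {
            Term::Var(v) => match s.get(v) {
                Some(bound) => resolve(bound, s),
                None => t.clone(),
            },
            _ => t.clone(),
        }
    }

    fn unify(a: &Term, b: &Term, s: &mut Subst) -> bool {
        let (a, b) = (resolve(a, s), resolve(b, s));
        match (&a, &b) {
            (Term::Var(x), Term::Var(y)) if x == y => true,
            (Term::Var(x), _) => { s.insert(x.clone(), b.clone()); true }
            (_, Term::Var(y)) => { s.insert(y.clone(), a.clone()); true }
            (Term::App(f, xs), Term::App(g, ys)) => {
                f == g && xs.len() == ys.len()
                    && xs.iter().zip(ys.iter()).all(|(x, y)| unify(x, y, s))
            }
        }
    }

    fn main() {
        // unify(f(X), f(a)) should bind X to the constant a.
        let fx = Term::App("f".into(), vec![Term::Var("X".into())]);
        let fa = Term::App("f".into(), vec![Term::App("a".into(), vec![])]);
        let mut s = Subst::new();
        assert!(unify(&fx, &fa, &mut s));
        println!("{:?}", s); // {"X": App("a", [])}
    }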



Next step: unit tests for the type system


Here's a great article about making the Prolog re-implementation in Rust's type system less ad-hoc and more awesome: http://smallcultfollowing.com/babysteps/blog/2017/01/26/lowe...

The best days are when Niko posts about his work on Rust. Always interesting and they often exploit the power behind ideas like "It's basically Prolog" instead of apologizing for it. It's exciting and the kind of thinking that makes me remember why I fell in love with programming in the first place.


If only. Prolog is actually reasonably pleasant to program in.

Accidentally Turing-complete type systems are usually not all that pleasant in practice. Examples: C++ templates, and now, it seems, Rust.

EDIT: Just because people may not know this: A relevant quote by Alan Perlis: "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy." This applies to type-level programming as it does any type of programming.

EDIT#2: Also, look into Shen for a principled way to do this type of thing.


Hence the ad-hoc part. I don't think people go into writing a type checker and type inferencer thinking they're gonna re-implement Prolog.


OTOH, if you're a Prolog programmer and you write a type checker, you rather quickly realize what's going on ;-)


I noticed that once, and I even implemented a library which type-checks Python with custom "type systems" written in Prolog. It was shocking how flexible that is, a bit like declarative grammars/parsing.

https://github.com/alehander42/hatlog


Alternatively, you could just use Shen, which actually uses Prolog to implement its type system.


Ya but for some reason Shen uses weird sequent calculus syntax. I'd much rather write Prolog rules than sequents.


Is the license for Shen still odd? I know Tarver doesn't believe in free software.


There is this page suggesting it's mostly BSD, but the License link is a 404: http://shenlanguage.org/ethics.html


Would Prolog make it easier to implement the type system of a language? I.e.: I have a base language (F#, for example) and want to implement a new one, so I write the type system in F#.

If I instead add Prolog to the mix (or a Prolog interpreter?), would this help a lot, or is it not worth the effort?


I guess one way would be to use one of the miniKanren variants.

http://minikanren.org/

There is also an F# implementation: https://github.com/palladin/logic


That's a result of the Curry-Howard-Lambek correspondence. So I'm afraid it's not a new rule. :-)


I read that with Bill Maher's voice in my head.


This is cool but generally a bad idea. It introduces weird edge-case scenarios where you have to 'artificially' halt the compiler, as type-resolution (in these kinds of systems) is undecidable. Rust halts type-resolution by limiting recursion, for example. This is sensible, as @tatterdemalion argues, but it feels like a necessary fix created by an unnecessary problem.

This code, for example, will never stop compiling in gcc 4.4.1[0]:

  template<int I>
  struct Infinite
  {
    enum { value = (I & 0x1)? Infinite<I+1>::value : Infinite<I-1>::value };
  };

  int main ()
  {
    int i = Infinite<1>::value;
  }

[0] http://stackoverflow.com/questions/6079603/infinite-compilat...
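For comparison, a rough Rust analogue (an illustrative sketch, not from the linked question), which rustc cuts off at the recursion limit with an "overflow evaluating the requirement" error instead of spinning forever:

    struct W<T>(T);
    trait Bad {}
    // To prove `T: Bad`, the compiler must first prove `W<T>: Bad`,
    // which requires `W<W<T>>: Bad`, and so on without end.
    impl<T> Bad for T where W<T>: Bad {}

    fn check<T: Bad>() {}

    fn main() {
        check::<u8>(); // error[E0275]: overflow evaluating the requirement
    }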


So don't do that.

As programmers, we can indeed write infinite loops. Why is it "a bad idea" if the loop happens at compile time rather than runtime? Why should we reduce the expressiveness of our language to avoid this possibility?


Because it's not necessary. If a Turing-complete programming language can't be expressive enough without a Turing-complete metalanguage, then it's probably not designed very well.

What next? Will we start seeing meta-metalanguages because, hey, it makes compilation less error-prone (but adds something we'll call pre-compilation)?


> If a Turing-complete programming language can't be expressive enough without a Turing-complete metalanguage, then it's probably not designed very well.

What is your proposed alternative to the Paterson conditions (described in the top comment)?


I think C/C++ #ifdefs could qualify as a meta-meta-language. I've certainly found that there are situations where static analysis can't comprehend C++ without pre-compilation...


Yes, I think you're right, actually. I doubt the C/C++ pre-processor is Turing-complete, though.


"yes-ish" it is turing complete http://stackoverflow.com/questions/3136686/is-the-c99-prepro... (read the second answer, not the accepted answer)


Wow, TIL. Good find!


> Why is it "a bad idea" if the loop happens at compile time rather than runtime?

Because you need at least one reliable tool to tell you when there's a problem, and if that tool isn't a compiler, what are you left with? When do you realize that a long compilation is actually in an infinite loop and not simply a long compilation? How do you debug the cause of an infinite compilation?

These questions may have answers, but current compilers aren't designed to address them.


We (perhaps stupidly) make the assumption that compilers halt all the time, such as in shared CI environments.


Shared CI environments should always impose some sort of resource limit.

Even without turing-complete type systems, you can still make compilation take effectively forever. It's been a while since I tested, but by my recollection, creating a fixed-length array of sufficiently large size in Rust would make compilation hang indefinitely (well, for much longer than I was ever willing to let it run). Of course this particular behavior may have been fixed (or maybe not; I'm not in a position to test right now), but I'm sure there's other ways to produce artificially long compilation times as well.


Rule of Least Power


The Rule of Least Power leads people to lock themselves into low-power languages, only to find that they need more power, which leads to extending the language incrementally to support incrementally more power, which inevitably leads to a mess. Most heavily-used file formats (even "config" formats) eventually evolve into turing-complete languages, but inevitably they are bad languages, because they were not designed with this end-state in mind.


> So don't do that

Sure boss, I'll fix it. Would have fixed it already, if only nontermination wasn't the most unhelpful kind of compiler error message, ever...


I'd agree, but consider this counter-point: Turing completeness is so easy to achieve that it's often achieved accidentally. (C++ templates, now Rust, Scala type-level computation via implicits/singletons. I think Haskell's type system is TC via the "correct" extensions?)

Actually, I'd rather want a system that was really designed around this -- well, OK, maybe not TC[1], but perhaps Pacman Completeness.

[1] Purely to avoid decidability problems.


I wonder how many HN users see that headline and think "cool" vs. how many see it and think "uh-oh".


I feel like I've been digging a metaphorical hole for the entirety of the time that I've learned about programming. Often times, I'll look up at the surface where I started and feel pretty impressed with myself at the depth and the breadth of my hole. Then someone will pop his/her head up from the bottom of my hole to show me a nugget he/she has unearthed from miles below me. And while it's undoubtedly cool and interesting, it makes me realize just how much farther I can keep digging. It's both inspiring and ego-crushing.


Ditch the ego, focus on the inspiration; you'll be happier.


As a language designer, I find Turing-complete type systems scary to reason about semantically. In the best case, they require magic numbers to restrict recursion, and you have to ensure those numbers are applied everywhere necessary. A source of early Scala bugs, for example, was finding yet another case where recursion limits needed to be added. And this wasn't code that was designed to stress the type system!

In the worst case, they just require way too much thought on behalf of users reading and writing the code. Amping up expressiveness often leads to type errors that are not very easy to understand, or that occur under very strange conditions.


Graydon was originally quite adamant that Rust's type system not be Turing-complete, but he eventually acknowledged how dreadfully hard it is to prevent Turing-completeness from accidentally leaking into a system and gave up that fight.


> [...] how dreadfully hard it is to prevent Turing-completeness from accidentally leaking into a system and gave up that fight.

Yeah, that's my impression too. TC doesn't actually require all that much. (Of course, making it pleasant is a whole 'nother matter.)


I am in the "cool" camp, because I am all for the advance of programming concepts in mainstream languages.

Just because a language allows for doing cool stuff, newspaper-quiz style, doesn't mean we should write our daily code like that.

It is up to us to make the better judgement, according to the team's skills and the project's complexity, about what should be used.


It's cool... but uh-oh, people are going to write obscure libraries using these features (eyeing C++ templates).


> but uh-oh, people are going to write obscure libraries using these features (eyeing C++ templates)

People do that in C++ because C++ lacks some features they want, and goes a long time between updates.

Rust has many built-in metaprogramming features already, and regularly adds more. And if you have something you want to do and can't currently, you can file an RFC.

That makes me much less worried about people (ab)using type-level programming to implement weirdness.


I'm in the latter camp.


Can you explain why? A Turing complete type system allows sophisticated constraints to be embedded and programs to be rejected at compile time that would otherwise throw runtime exceptions.

It seems best that errors are caught at compile time.


In general, metaprogramming capabilities (macros, templates, some forms of late binding) create brittle, high-friction abstractions when applied injudiciously. They're like writing a customized compiler, except the resulting error messages are all awful or nonexistent and the code can't be debugged as easily.

Thus, even in languages where metaprogramming is a prominent feature (e.g. most Lisps), the culture tends to evolve towards writing in a plain style and sparingly using these abilities, because they have so much footgun power.

Now, it's possible that Rust's method of extension doesn't have the same capability to cause harm as something like macros, and is more along the lines of generic types, which have a narrower scope and are more in line with everyday needs. But everyone who's seen it happen is justifiably wary about a proclamation of "this time is different".


In a statically-typed language, the starting point is "nothing typechecks". Then each additional language feature (e.g. generics, records, ADTs, lifetimes) must be implemented as an additional capability of the type checker. This "nothing is allowed unless explicitly permitted" is a necessary condition for soundness.

The most important goal for any programming language is that it is useful, i.e. it can express common patterns without hacks or kludges. Languages that are not useful tend to be forgotten.

However, chasing the dual goals of usefulness and soundness inevitably leads to type-system bloat. If "nothing is allowed unless explicitly permitted" and "everything must be somehow allowed", then the list of things that are explicitly permitted will end up being very, very long.


Turing complete (or more generally, expressive) type systems are not only about metaprogramming!


Turing complete type systems are not metaprogramming.

You talk about writing a custom compiler / type checker -- this is exactly what unit tests are. Are you against those as well? What's wrong with integrating certain sorts of test into the source itself?


If the type system is Turing-complete, how are you going to test the soundness of your types? Who is going to write unit tests for the types?


Having dipped into C++ at a point in my history, this makes me think of C++ templates, which are also Turing complete. Many people skilled with templates could go overboard, to the point where a significant part of the program's implementation lived in a very separate "template metaprogram space" that was difficult to manage and debug. You would have to read some primary code, then start doing template transforms in your head to figure out what was actually happening. The compile- and run-time errors that could pop out were also terrible: long, giant dumps of template syntax trees.

You can get absolutely phenomenal performance, but any major template capability has a high risk of becoming a DSL with terrible debuggability and slow comprehensibility.

Judicious use could be very nice though...


That's a problem with C++ in particular, though. Languages like Idris and even Haskell make working with the type system a joy.


Haskell's type system isn't Turing-complete or even particularly sophisticated. There are extensions to its type system implemented in GHC that in theory give you unbounded computation, but a) those aren't actually Haskell and b) nobody does any real work with them and c) it gives up trying to typecheck if it takes too long. I don't know much about Idris but a few seconds of searching tells me its type theory is not Turing-complete either.[1]

[1] http://cs.stackexchange.com/questions/19577/what-can-idris-n...


UndecidableInstances is used in more libraries than you would think. In general, it doesn't impact safety, although certain programs may fail to compile because the compiler gives up after a while. However, this won't result in the acceptance of ill-typed programs. Ultimately, the compiler is just acting like any other language runtime -- at some point, your Python interpreter will give up (run-time type error), your browser will give up (that annoying stop-script pop-up), and your operating system will give up (out-of-memory exception).

Also, your link indicates idris's type system is Turing complete.


>Idris won't reduce partial functions at the type level, in order to keep type checking decidable.

Totality precludes Turing-completeness. Specifically, functions that don't terminate are by definition partial, and if you can't express functions that don't terminate then it isn't Turing-complete.


If you're talking about things like UndecidableInstances or FlexibleContexts, um, they aren't exactly uncommon!

In any case, saying that Haskell 98 is the One True Haskell(TM) isn't accurate. Almost all code (beyond the LYAH level) written today uses a few Haskell 2010 extensions, mostly for ergonomics (things like OverloadedStrings are a good example of this), not to mention all the extensions that let mtl infer how "high" you want to go up the monad transformer stack, so you can write "lift whatever" instead of "lift . lift . lift ...".


The "performance" you get by using templates is offset by binary size and/or compilation time. You can't get something for nothing.


Agreed. And then, for example, when you go to further reexamine performance and maybe try to fit parts of the binaries into cache, you're possibly trying to optimize at two different levels at the same time. This isn't to say that Rust will have the same problem; I hope it avoids it entirely. But if the question is why there is a vague feeling of dread when Rust declares a Turing-complete type system - the ghost of C++ template metaprogramming is the reason for me.


As tatterdemalion's top comment in this thread states, the deliberate restrictions imposed by the Rust compiler prevent type-level programming from being used for the arcane tasks that template metaprogramming achieves in C++. To say nothing of the fact that Rust has far easier and better ways of achieving metaprogramming than abusing the type system...


Compilation time is a consequence of the lack of modules.

In C++11 you can pre-instantiate commonly used templates (extern template), and hopefully C++20 will finally get modules.

Until then, those that are able to use VS C++ 2015 and 2017 can make use of incremental linking and experimental modules to ease the pain a bit.


Non-Turing-complete type systems can also allow extremely strong constraints to be embedded, while remaining decidable. Remember, while Rust makes a lot of noise about being safe, it isn't Ada/SPARK, or ATS, or Idris/Agda.


Decidability is not really important for type systems. As long as the type system is sound, you're good. If your compilation is taking too long, I would classify that as a bug.

Thanks for the obvious point that Rust isn't Agda. However, surely you can agree that Agda's type system is able to capture a strictly greater set of constraints than a typed language without a Turing-complete type system.


Agda's type system is not Turing complete; you can only run type-level functions that are proven to be terminating [1] (using well-founded induction or co-induction).

Same for Coq, and probably Idris.

[1]: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceMa...


You are right about Agda. Idris, however, is not total by default.


What do Idris and Agda have to do with safety? They have a GC runtime; Rust leverages the type system to get ergonomic, safe and fast code without a runtime.


Idris does not have one exclusively.

Allocations can be removed just like in Rust thanks to uniqueness types:

http://docs.idris-lang.org/en/latest/reference/uniqueness-ty...

Also funny that you mention ergonomics, given that improving the current state is one of the main goals of 2017's roadmap.


Program correctness (safety) is way more than just the memory access guarantees Rust gives you.


Until the code becomes impossible to comprehend as a result.


I mean, sure. I for one advocate using non-Turing complete languages for things that don't need it. However, if you're using a Turing complete language, what's wrong if the type system is Turing complete as well?


> I mean, sure. I for one advocate using non-Turing complete languages for things that don't need it. However, if you're using a Turing complete language, what's wrong if the type system is Turing complete as well?

Let's unpack that. You advocate using non-Turing complete languages for things that don't need it. Great. Something like DTrace does not need a Turing complete language and D isn't Turing complete. Awesome.

But then you say that if you're using a Turing complete language, what's wrong if the type system is Turing complete as well?

Well, does a Turing complete language, C or Rust, need or in any way benefit from a Turing complete type system? Does a low level systems language, C or Rust, need or benefit from a Turing complete language?

Other than an implementation pun, I don't see any point in this needless complexity. The point of a type system is safety. I don't see how a Turing complete type system gets you to heaven.


> The point of a type system is safety.

Unless I'm missing something, having Turing completeness in a type system doesn't mean it's less safe; it just means the type checker might not terminate. In which case, of course, the code will not compile and thus cannot do anything unsafe (of course the type checker itself might, say, allocate infinite memory and crash your machine... But that's a different story)


It doesn't mean it's more safe. It doesn't mean it's safe. It just means it's complex, needlessly complex.


I wasn't claiming that it makes it more safe. The implication of the post I was responding to, or at least, how I read it, was that a Turing complete type system might compromise safety.


Yes, a Turing complete language should have a Turing complete type system. Systems-level work has the most to gain from this. Application code can crash with relatively little consequence; system code usually needs to be stable, so enforcing as much as possible at compile time is nice.

Do you believe in unit tests? Ultimately, that is using the target language to type-check itself. If you had a well-integrated Turing complete type system (C++ does not), then you could start enforcing some of these checks at the compilation stage.
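As one illustration of "enforcing at the compilation stage" (my sketch, not from the thread): the typestate pattern turns a would-be runtime assertion into a type error.

    use std::marker::PhantomData;

    struct Disconnected;
    struct Connected;

    struct Socket<State> { _state: PhantomData<State> }

    impl Socket<Disconnected> {
        fn new() -> Self { Socket { _state: PhantomData } }
        // connect() consumes the disconnected socket; there is no
        // other way to obtain a Socket<Connected>.
        fn connect(self) -> Socket<Connected> { Socket { _state: PhantomData } }
    }

    impl Socket<Connected> {
        fn send(&self, _bytes: &[u8]) { /* ... */ }
    }

    fn main() {
        // Socket::new().send(b"hi");          // does not compile
        Socket::new().connect().send(b"hi");   // fine
    }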


Have you ported LLDB so that I can debug my Turing complete type system unit tests?

Methinks you are defending pointless complexity. How did we ever write unit tests before this awesomeness was sprung upon us?


Who is going to write unit tests for the Turing complete types?


> Rust has a feature called traits. Traits are a way of doing compile-time static dispatch, and can also be used to do runtime dispatch (although in practice that’s rarely useful.)

Hmm, I dunno about that. If you need to keep a collection of callbacks, that would most certainly be a collection of Fn/FnMut trait objects, that is this:

    struct Thing {
        callbacks: Vec<Box<FnMut(i32) -> i32>>
    }
and not this:

    struct Thing<F> where F: FnMut(i32) -> i32 {
        callbacks: Vec<F>
    }
The same applies for any use of traits where the exact impl to be used is not known until runtime, like choosing a behavior from some set of behaviors in a configuration file.


How does that contradict what the post says? A trait object (the boxed fn in your example) is used for dynamic dispatch.

OP says "can also be used". It sure looks like the Fn trait is being used for dynamic dispatch in that example.


He's objecting to the "in practice rarely useful" bit.


Good.

Metaprogramming is programming -- it's scripting the compiler, to produce code that is faster (at runtime) or catches more errors at compile time. But for some reason in C++, metaprogramming is written in a completely different language from everything else -- a functional language, but not a very good one. As a result, code that relies heavily on metaprogramming ends up inscrutable to most programmers.

Instead, we should embrace metaprogramming and allow it to be performed in a language that is similar to other code, so that we can understand it. Give the compiler an actual API and let me write code that calls that API at compile time.

(I don't know enough about Rust to know how well they've accomplished this.)


Generics in Rust are supposed to be used as generics, not really for metaprogramming like this (you can, but it won't be pleasant).

But Rust has a hygienic higher-level macro system that helps.

On top of that, Rust is getting procedural macros. Right now this is limited to `#[derive()]` macros, but we want it to cover all cases. This means that you can write a macro that runs arbitrary rust code at compile time.

(While it currently is limited to `#[derive()]` macros you can abuse the API to make it work for inline bang-macros as well)
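A tiny, illustrative example of that declarative macro system (hygiene means the `v` inside the macro cannot collide with a caller's `v`):

    macro_rules! square {
        ($x:expr) => {{
            let v = $x; // hygienic: this `v` is invisible to the call site
            v * v
        }};
    }

    fn main() {
        let v = 7;
        println!("{}", square!(v)); // prints 49; the two `v`s don't clash
    }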


On the one hand, this article is extremely interesting to me. On the other hand, I don't feel like I see a lot of articles from the trenches talking about how to do something in production with Rust, which gives the impression (possibly false) that not many people are using Rust in production.


Are you just looking on HN? There's lots of really cool uses of Rust kicking around. I guess it depends what you mean by production. I mean obviously there's Rust itself, Servo, Cargo, /r/rust usually has some interesting posts.

There's also projects like ripgrep, alacritty, redoxOS, just off the top of my head. Dropbox and npm also come to mind.

You can also check out https://www.rust-lang.org/en-US/friends.html for a big list of companies that use Rust.


Don't know enough Rust to follow the article in depth, but isn't using macros a bit of a cheat? Is the type system Turing complete even without macros?


Macros expand to the code you could have written yourself. It just makes it a bit shorter and easier to write.
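For example (illustrative; the real expansion in std differs in detail), `vec![1, 2, 3]` behaves like code you could have written by hand:

    fn main() {
        // Roughly what vec![1, 2, 3] amounts to:
        let v = {
            let mut tmp = Vec::new();
            tmp.push(1);
            tmp.push(2);
            tmp.push(3);
            tmp
        };
        assert_eq!(v, vec![1, 2, 3]);
    }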


According to http://progopedia.com/dialect/smallfuck/ re. Smallfuck: "The memory tape is limited, so the language is not Turing-complete."

Edit: ah, further down in the article: "Smallfuck is a minimalist programming language which is known to be Turing-complete when memory restrictions are lifted."


> (N.B. The word “fuck” appears multiple times in this post. I recommend that the reader temporarily not consider “fuck” as profanity, as it isn’t used that way here.)

How sad is it that a serious article for grownups has to put a disclaimer about a fucking word so that some flakes don't get offended? It's truly a sad state of affairs.


Why would you be offended by that disclaimer?

If you don't care about the disclaimer, it isn't for you, and you can ignore it freely.

You can complain that expiration dates on food should be removed, but using that to complain that people complain too easily is hypocritical.


How sad is it that grownups jump to conclusions and construct strawmen in order to complain about imaginary complaining? When you write about Brainfuck on your personal blog it's entirely reasonable to put a disclaimer up top for the benefit of laymen (including hiring managers) who are googling your name.


>How sad is it that grownups jump to conclusions and construct strawmen in order to complain about imaginary complaining?

It would only be imaginary/strawmen if the author (and us all) hadn't already seen the same complaining in hundreds of articles... As it is, it's merely pre-emptive...


Can you please link an example of a complaining comment in any of those hundreds of articles?


I've seen tons of those over the decade or so on HN. But just off of the top of my he^H^H a quick google search:

"I'm all for cursing in private conversation with friends and such when it's appropriate - but why do people continue to think it's acceptable in public examples of work? Will this go on someone's resume like this? What will an employer think?"

https://news.ycombinator.com/item?id=12460640

"Reading through the comments I am glad I am not the only one who finds over the top profanity upsetting. Yes , the author is free to make his/her point any way they want to. and No, not everyone who thinks that this profanity is undue is a prude. and I dont think it is fair to make a culture or age characterization based on a user's response. Extreme or sometimes, any profanity changes the tone of the article. that alone is a good reason to avoid over the top proclamations. IMO the author comes across as loud and noisy , and not strong and forceful. Just like a stand up comedian, who says 'fuck' for every joke."

"potty mouth."

(and numerous other comments focusing on the use of "fuck" in the post)

https://news.ycombinator.com/item?id=5694173

"Was it really necessary to drop an F bomb to ask this question?"

https://news.ycombinator.com/item?id=11122711


Doesn't that prove my point? If a hiring manager is offended so much by a word used in a programming language's name that they would not consider you for a job - that is sad! And is it really a company you would want to work at? At what point is money not the only incentive, and you say "fuck this company culture bullshit, I want to feel free to use any word I want, and I would still be judged on the merits of my work and achievements, not the words I use"?


Indeed, we could commiserate all day regarding the boneheaded sexual repression of prevailing American culture, but how is that topic at all relevant or productive in the context of this blog post, and (more importantly) why in god's name is it the highest-rated comment in this thread?


>and (more importantly) why in god's name is it the highest-rated comment in this thread?

Because it hit a nerve?


I'm just wondering if there are senses in which "fuck" isn't profane. I mean, it could be used to mean actual copulation, but AFAIK it's still considered slang/vulgar/profane even in that (original?) usage (even if the act may not be), so...?!?

Anyway, as to the meta-point -- personally, I think it's fine to warn people of upcoming profanity, but it seems to fall kind of flat here. I mean, using "fuck" in a warning about using the word "fuck" seems to sort of defeat the point of a warning. (Unless of course the writer is himself making a meta-point that I'm not seeing.)


It would be a sad state of affairs if he did in fact have to do that. He didn't though. Completely unnecessary.


No sadder than a world where people consider a bland nod to the existence of arbitrary aesthetics worth complaining about on HN.

People are different. A big part of the job of "not being an asshole" is recognizing this fact.


Fuck that.


Who says he has to? I wouldn't.


I wonder if this post demonstrates a sort of nihilism.


A practical definition of Turing complete: any language that can exercise the full capabilities of the computer.

Given that any two such languages can do so, they are, in fact, interchangeable.





