I finally understand why I'm not allowed to use Lisp (groups.google.com)
134 points by gnosis on Feb 20, 2011 | 114 comments

I learned something a long time ago. It's far, far easier to dismiss others' code as "complex", "unmaintainable", or "clever" than it is to try and understand the code well enough to figure out whether that code was written that way for a valid reason. It's true whether you're new and approaching a codebase for the first time or whether you're reviewing someone else's code.

And no, I'm not saying that it's ok for code to be overly complex or unmaintainable. I'm saying that your first impression of code is probably wrong and you need to understand it before you dismiss it.

Here's one rule-of-thumb that just occurred to me. If there is "prior art" for code, the onus is on the person reading the code to show that it is "complex", "unmaintainable", or "clever." If there is no prior art for code, the onus is on the person writing the code to show that it is "simple," "maintainable," and "straightforward."

Taking parser combinators as an example, if you write some code with parser combinators, simply document your choice as usual and include a hyperlink to http://en.wikipedia.org/wiki/Parser_combinator.

If I read your code and think it is overëngineered, it's up to me to show a better way; I can't simply wave it off as being too hard. That doesn't mean it gets a free pass: it could be that parser combinators for a custom DSL are too complex, and perhaps the same problem could have been solved with JSON or something, but the default should be to accept it, since it's a technique that is explained "in the literature."
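For a concrete sense of what that "prior art" looks like, here is a minimal parser-combinator sketch in Ruby. The names `token`, `seq`, and `alt` are my own toy choices, not any library's API; the convention that a parser is a lambda from a string to `[value, rest]` (or `nil` on failure) is one common formulation.

```ruby
# A parser is a lambda: String -> [parsed_value, remaining_input], or nil.

# token: matches a literal prefix of the input
token = ->(t) {
  ->(input) { input.start_with?(t) ? [t, input[t.length..-1]] : nil }
}

# seq: runs p then q, pairing their results
seq = ->(p, q) {
  ->(input) {
    first = p.call(input)
    if first
      second = q.call(first[1])
      [[first[0], second[0]], second[1]] if second
    end
  }
}

# alt: tries p, falls back to q
alt = ->(p, q) { ->(input) { p.call(input) || q.call(input) } }

# A two-character binary-digit parser, built from the pieces:
digit = alt.call(token.call("0"), token.call("1"))
pair  = seq.call(digit, digit)
```

`pair.call("10rest")` parses the first two digits and returns the rest of the input unchanged, while `pair.call("xy")` fails with `nil` — the combinators compose failure handling for you.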

On the other hand, if I roll my own complex code and I can't point to a reference for the technique, then it's up to me to make the case that it's simpler than the alternative. For an example, Benjamin Stein and I wrote #andand for Ruby. It's related to the maybe or option monads, but it doesn't take the standard form so I can't really document it with a hyperlink. The onus is on me to show that in Ruby, #andand is simpler than testing for null, using the null object pattern, or organizing code such that there are no unexpected nulls.
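To make the comparison concrete, here is a sketch of the idea behind #andand — not the real gem's implementation, just the shape of the technique: a non-nil receiver passes messages through, while nil returns a proxy that swallows any message and yields nil.

```ruby
# Sketch only: `x.andand.foo` calls foo when x is non-nil, else yields nil.
class Object
  def andand
    self # non-nil receivers pass messages through unchanged
  end
end

class NilClass
  # A proxy that absorbs any message and returns nil
  class MessageSwallower < BasicObject
    def method_missing(*_args)
      nil
    end

    def respond_to_missing?(*_args)
      true
    end
  end

  def andand
    MessageSwallower.new
  end
end
```

So `"carl".andand.upcase` gives `"CARL"` while `nil.andand.upcase` gives `nil` instead of raising NoMethodError — which is exactly the case to be argued against explicit nil tests or the null object pattern.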

I think you might be overgeneralizing just a little. I'm mostly talking about how you approach an existing codebase. When I'm new to a project, I have to fight the urge to call it overly complex because I know that it's easier to dismiss code than to understand it. The people who created the project put work into it and deserve effort to understand it. This is something I learned quite quickly as a programmer (I mean, the project maintainers will usually be more than happy to tell you that).

A more recent realization is that this goes both ways. If someone submits code to my project, I owe them the same thing: an honest attempt to understand their code even though it's easier to label it overly complex and go watch House.

It sounds like you're thinking more in terms of how you market a new project, which is a different problem that I'm far from being qualified to solve.

It's always twice as hard to read code as it is to write code. For that reason alone, clever code is usually unmaintainable.

> It's always twice as hard to read code as it is to write code.

> For that reason alone, clever code is usually unmaintainable.

How does this follow logically from the first statement? And more importantly, what alternative do you suggest? Are you positing that for a given problem, there is always some alternative solution that is "less clever" and "more maintainable?"

If you've never seen folding and unfolding before, code that makes heavy use of "map," "each," "select," and so on looks clever. But in reality, it is simpler and less bug-prone than fooling around with for loops and off-by-one errors. To my eye, for loops are clever and unmaintainable.
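Here is the same small task written both ways in Ruby, to illustrate the point:

```ruby
nums = [3, 1, 4, 1, 5]

# Index-loop version: the counter and the bounds are where
# off-by-one bugs live.
total = 0
i = 0
while i < nums.length
  total += nums[i]
  i += 1
end

# Fold version: no counter at all.
folded = nums.reduce(0) { |acc, n| acc + n }

# map/select read the same way: say what, not how.
doubled_odds = nums.select(&:odd?).map { |n| n * 2 }
```

Both `total` and `folded` come out to 14, but the fold has no index arithmetic to get wrong.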

Moving up the scale, what about recursive combinators?


They look very clever. But then again, rolling your own recursion from scratch every time you need it makes the person reading your code do a lot of work to figure out what you're doing. Is it "clever" to separate the concerns of how to implement recursive algorithm from the code that does something recursively? Or is it clever to roll your own code every time?
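To show what "separating the concerns of recursion" means in practice, here is one possible shape for a linear-recursion combinator in Ruby. The naming (`done`, `base`, `step`, `combine`) is my own, not the actual recursive_combinators API: the recursion skeleton is written once, and callers supply only the problem-specific pieces.

```ruby
# The recursion skeleton, written once.
def linrec(value, done:, base:, step:, combine:)
  if done.call(value)
    base.call(value)
  else
    head, rest = step.call(value)
    combine.call(head,
                 linrec(rest, done: done, base: base,
                              step: step, combine: combine))
  end
end

# Factorial expressed as four small pieces plus the combinator,
# instead of bespoke hand-rolled recursion.
def factorial(n)
  linrec(n,
         done:    ->(k) { k.zero? },
         base:    ->(_) { 1 },
         step:    ->(k) { [k, k - 1] },
         combine: ->(k, acc) { k * acc })
end
```

Whether `linrec` is "clever" or for loops are: the combinator makes the recursive structure explicit and reusable, at the price of requiring the reader to know the pattern once.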

What about parser combinators? Clever or not? How about rolling your own interpreter for a DSL? That's an official pattern from the GoF book. Can I do that and write maintainable code?

Anyhow, I'm saying the same thing over and over. What is "clever?" Making use of a language feature or algorithm or well-known practice that someone else hasn't seen before and is too lazy to learn? And what is the alternative? Greenspunning the same functionality in an under-specified, bug-prone one-off way?

I'd say clever is defined by your audience. If you're writing something that others on your team would consider clever, then you're responsible for making sure they understand it. Even if that means losing whatever time you saved by writing something clever (the advantage is still creating a smarter team though).

Responsible developers write a lot less clever code because it means they feel obliged to comment the crap out of it and find the time to go through it with other team members.

Unfortunately there's a lot more clever code that only one person really gets. If you're too clever for your team, then quit slumming it and find a smarter team. You're going to be toxic to a team that isn't as clever as you are unless you assume the role of mentor and all of the responsibility that comes with it... and the required patience.

Your response assumes that clever code exists to save time writing programs, and that for any problem there are a wide range of equally good solutions possible from "takes a long time to write but is easily readable and maintainable" to "quick to write but impossible to read and maintain by those who don't have doctorates in functional programming idioms."

I don't dispute the existence of code that is unnecessarily clever, but I suggest that sometimes what appears to be "clever" for a given audience is in actuality the local optimum. Attempting to optimize it for readability without further study will worsen the code in some way, such as conflating concerns that have been carefully separated or introducing dependencies.

> I suggest that sometimes what appears to be "clever" for a given audience is in actuality the local optimum. Attempting to optimize it for readability without further study will worsen the code in some way, such as conflating concerns that have been carefully separated or introducing dependencies.

If "clever" is often necessary to avoid things like conflating concerns or introducing dependencies, I think this points us in the direction we need to take in improving programming languages. (Off the top of my head, this applies to both Ruby and Smalltalk.) Though, I've also observed that bolt-ons to programming languages to address these issues are often a bit too "clever" themselves. (Example: The original implementation of Objects in Perl.)

> If "clever" is often necessary to avoid things like conflating concerns or introducing dependencies, I think this points us in the direction we need to take in improving programming languages.

I agree!!!! As a rough rule of thumb, "horizontal" or "general purpose" libraries are a potential source of inspiration for language improvements.

In my own case, I think #andand is a terrible kludge. But if I were designing a new programming language, one question I would ask myself is, "How do I make this go away?"

I might add support for monads, or maybe I would decide that null is a bad idea. I don't know, but I would certainly give it some thought.
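As a toy illustration of what "support for monads" could buy here, consider a minimal Maybe type in Ruby (illustrative only, not a real library): chaining stops at the first nil, so callers never write explicit nil checks — and #andand's job disappears.

```ruby
# Toy Maybe: wrap a value; and_then applies the block only when
# the wrapped value is non-nil, otherwise it short-circuits.
Maybe = Struct.new(:value) do
  def and_then
    value.nil? ? self : Maybe.new(yield(value))
  end
end

# Chains through when every step yields a value:
name = Maybe.new("alice").and_then { |s| s.upcase }.and_then { |s| s[0] }

# Short-circuits safely on nil, no NoMethodError:
gone = Maybe.new(nil).and_then { |s| s.upcase }
```

With language-level sugar for this pattern, the "terrible kludge" would not need to exist at all.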

I can agree with that. I'd add that if something seems clever you might want to make sure others on your team are aware of it too. At least that way you won't get blamed for being too clever, and more people will understand what's going on.

    // Here be gryphons and scabrous wyrms

> For that reason alone, clever code is usually unmaintainable.

An entire application of "clever" for clever's sake is unmaintainable. This tricky sort of clever is sometimes exactly what's needed, but should be used sparingly, where it is most beneficial, and it should be well commented.

On the other hand, clever directed at the goal of readability and maintainability can be wonderfully maintainable. Clever for readability's sake should be throughout the whole application. Accomplishing this is an iterative task, ideally throughout the whole life of the application. This is one of the most important purposes of refactoring.

While it's much easier to brag about the 1st kind of clever and show it off to your coworkers, you'll benefit much more from the 2nd kind of clever.

A tip from my experience: Managerial emphasis on the 1st "clever" is sometimes accompanied by a culture of coder showmanship. Managerial emphasis on the 2nd kind of clever tends to be more focused on sustainable results.

I think really good high-level documentation would be a good place to start here. If someone looks at your code and immediately says "wtf", it seems like it would be good form to point them to a great piece of documentation. Without that, it seems unreasonable to ask your colleagues to dive into something well outside what they're used to seeing in that code base.

> I work in C# (worst language EVER!)

You lose all credibility right there. C# is far from the worst language ever, and a pretty decent one among mainstream object oriented languages.

As soon as I read that I looked for the date of the post. May, 2007. This is before C# 3.0 was released and chances are he was even using 1.0 based on impressions from his boss.

C# isn't half bad now, but back then, I agree it was fairly crummy.

IIRC generics came in 2.0 which narrows it a little.


This was very obviously a rant where the author was in a bad mood. I think some hyperbole can be forgiven.

Familiarity breeds contempt. Often it's the language that coders are currently using that seems like the worst, because they're dealing with that language's problems on a daily basis.


I would choose Python over C# for my own projects any day. However, I can't comprehend the senseless hatred that people have for C#. It is a nicer language than Java (imo) and it has very good performance. Furthermore, due to Mono, C# is no longer a closed platform.

It never was a closed platform. C# was always an ECMA standard. MS even created Rotor, a shared-source implementation of it.

Re Mono:

Microsoft is not suing Google for patents on operating systems but instead pressures the handset manufacturers using Android.

Is there a guarantee that a customer can use Mono for commercial applications, without a shake down from Microsoft?

IP in Mono falls into several categories.

1. There is stuff covered by the ECMA/ISO standards. This is covered by the Microsoft Community Promise, which is legally binding. This stuff is safe from Microsoft.

2. There is a lot of .NET stuff that Microsoft has released under the Apache 2 license, which includes a patent pledge. If any of this stuff is implemented in Mono, it is safe.

3. There is stuff in .NET that is not covered by the standards, and has not been released by Microsoft under a license that includes a patent pledge. Mono has independent implementations of the interfaces of some of these things. There is no Microsoft patent pledge covering these parts, and Microsoft may have patents on their own implementations. These patents would cover specific methods used in the implementation, not the interface.

The risk from #3 is small. Most important, it is about the same as the risk if you use that functionality in a non-Mono system. For example, I believe that there is an XML parsing library included in category #3. If Microsoft has a patent covering some particular technique of implementing an XML parser, there is no reason to believe that the people who implemented Mono's XML handling are any more likely to have infringed it than the people who implemented Java's XML handling, or Ruby's XML handling, or Python's XML handling, and so on.

Also, most of the things in #3 are things specifically related to Windows programming, so you'll only find them in .NET programs that were written for Windows. If you are writing for Linux, you don't need those things. Even if you want your programs to run on both Windows and Linux, you can do that without using the category #3 stuff. For instance, instead of using WinForms, use GTK#.

If Microsoft did decide to sue someone over their patents that cover the category #3 items that aren't Windows-specific, it is doubtful they would go after Mono. Microsoft has done enough to support and encourage the development of Mono that they would have a big estoppel problem to surmount if they wanted to sue over Mono. It would make a lot more sense to sue over Java or Python, where they would not face the estoppel issues. Remember, as I noted earlier, the category #3 patents are as likely to be infringed by those as Mono.

>1. There is stuff covered by the ECMA/ISO standards. This is covered by the Microsoft Community Promise, which is legally binding. This stuff is safe from Microsoft.

And that is the only thing that is remotely safe. Everything else is murky. But unfortunately, even that can become unsafe if Microsoft sells the patents to CPTN (as they did with the Novell patents) and lets them sue even over the ECMA parts.

Moreover, the ECMA parts are unusable on their own. Every single Mono app also depends on non-ECMA parts, and Mono is always shipped with non-ECMA parts: http://www.the-source.com/2010/12/on-mono-packaging/

As for estoppel, they didn't promise anything for the non-ECMA parts. It doesn't work the way you think it does. In fact, they even promised to sue:

"If someone implemented a product that conforms to the specification, we believe we have a patent or one pending that’s essential to implementing the specification."


"The .NET framework contains the latest developer platform for the future, and it must be licensed like Windows. Subsets have gone about as far as they should go in the standards bodies, but we need a compact subset for phones and TVs. It was noted that we have to be careful because once the horses are out, they are out forever. At the right royalty, we can have discussions around technology beyond this."

My understanding is that Microsoft hired the IronPython developer full-time for a period of years.

I think that it's only natural that MS would target Mono if they felt it was a threat. Right now it is in their interest to support Mono because it introduces more people to C# and MS feels that this is good for the long-term health of their platforms. They are also able to iterate quickly enough that Mono (and Moonlight, the Mono-based Silverlight implementation) is always playing "catch-up" and the cool kids are all going to want to use the official implementations with the Cool New Stuff before it makes it into Mono two years later.

Mono developers can easily become Windows developers, and the likelihood of that happening is not that low if a given hacker falls on hard times and needs a job. This hacker will then buy Windows and Visual Studio (or get his company to buy them, either way works), probably multiple times. Hence, supporting Mono makes sense for now.

If Mono ever becomes a threat to Windows' dominance instead of a help, MS is going to go after it hard and try to kill it, just as they go after anything else perceived to threaten Windows or Office. Mono will be targeted because it makes things that used to be Windows-specific accessible on other platforms, destroying Microsoft's lock-in, leveling the playing field and making Microsoft an undesirable option. This is not something MS likes, and they will do everything possible, including patent suits if possible, to get Mono out of the way if they have to. They have already postured for it with their indemnification deals for Novell customers.

The same applies to WINE, although Microsoft for the moment just never mentions this because they're never asked about it, and they don't want people to find out about it and discover that they don't need Windows to run Windows applications. There is no benefit to promoting WINE like there is with promoting C# and other MS-centric languages.

Linux is a big threat, given how they have destroyed a lot of Windows Server's market. And I'm pretty sure Microsoft has a few patents on operating systems, that they could sting Linux for.

However, there are too many developers who would riot (i.e. advise their boss not to go MS) if MS started playing too dirty.

Windows Server and Linux kind of "grew up" together in that Linux didn't really steal away a dominant Windows market with servers, but they both established their market shares in this space over the same time frame.

Microsoft has tried to underhandedly support efforts to kill Linux (like buying tons of crap from SCO after its suit was filed), but they haven't gone "all out" yet. I don't think that means that they won't ever do that. Again, the deal with Novell is posturing; by offering customers the guarantee that they won't get sued for using Linux by MS, they are essentially saying they may sue non-customers as they feel appropriate.

Microsoft's cash cow has always been in the desktop -- desktop OS and office productivity suites. This is the market they'll fight to the death to protect. Bringing legal action against Linux now would only serve as a distraction and a big PR help to things like Ubuntu, exactly what MS doesn't want. They seem to understand the Streisand Effect. There's no point attacking an enemy that is struggling for traction.

Hugunin made his farewell to Microsoft.


The #1 items covered by the Microsoft Community Promise could become problems if Microsoft transfers the related patents to someone other than Microsoft, who would presumably not be bound to uphold an agreement to which they were not a party. I would love to be wrong about this, though.

* Mono is backed by Novell and not Google

* From the Mono Licensing FAQ [1]: "Microsoft has announced that the ECMA standards for C# and the CLI are covered under the Community Promise patent license. Basically a grant is given to anyone who want to implement those components for free and for any purpose."

* The Novell-Microsoft agreements provide further protection for companies that choose to use Mono. [1]

* Many companies are using Mono for commercial purposes. There is a list at http://www.mono-project.com/Companies_Using_Mono

* Notably, Linden Studios uses Mono for Second Life servers. (See http://wiki.secondlife.com/wiki/Mono)

[1] http://www.mono-project.com/Licensing#Patents

There's been some criticism of Microsoft's "Community Promise", such as this:




and of Mono in general:


In the professional Linux community, almost everyone views Mono-haters as nutters/trolls. If you have real concerns about Mono, I suggest that you contact Miguel de Icaza directly.

Email: miguel@gnome.org

Twitter: @migueldeicaza (http://twitter.com/#!/migueldeicaza)

Blog: http://tirania.org/blog/index.html

Well, I am a professional, and I see Miguel de Icaza as a Microsoft zealot and nutter/troll. A person who is "psyched" (see http://www.networkworld.com/community/blog/open-source-guru-... ) about Nokia going WP7 can't give an unbiased opinion about anything connected to Microsoft, especially anything that he is actively pushing... and he pushes .NET more than Mono these days.

> techrights.org

That's where I stopped reading, but I'll say this: Nowhere in the FSF's critique of the MCP is it mentioned whether or not the promise itself is legally binding. From a bunch of people who are supposed to be legal experts. What does that tell you?

Whether this "Promise" is actually worth much depends on much more than just the question of its legal legitimacy, as you'd have learned if you actually bothered to read the articles.

But, as far as that particular issue is concerned, as the techrights article points out,

"It may become legally binding ... if used as a challenge in court. But of course it does actually need to be tested in court /first/."

Yes well, I could have used the same argument against the GPL back in the day. And really, I don't need to read through an article by someone who declared the Mono project "dead" a few months ago because he couldn't find the SVN repository. Or spams Reddit endlessly. Not to mention the years of insults and ideological attacks on various FOSS people.

In any case, you might want to look up the legal term estoppel.

Mono pushers keep repeating the estoppel meme, but nobody has ever explained how estoppel would apply to Mono. Microsoft never promised anything more than the ECMA spec. They made it clear they are not giving anything more. Yet we know that even basic bits of Mono overstep ECMA: http://www.the-source.com/2010/12/more-mono-misinformation-m...

So please stop repeating the estoppel meme; it is nonsense. Also, attacking all the people who show the world the truth about Mono shows you have no arguments. Reddit was spammed by Mono pushers who were impersonating other people.

I did a search on techrights.org of your username and it seems you're one of the regulars there. I guess paranoia pays off sometimes. The "gnosis" account that posted on this same thread is a, um, friend of yours, I take it?

In any case, sorry, but I'm not going to argue this issue with the likes of you. You may have a point somewhere, perhaps, but anyone associated with that blog has about as much credibility vis-a-vis Mono as Rush Limbaugh does when he talks about... well, anything. I bid you adieu sir. Good luck.

So? What if I visit techrights regularly? If you had asked me, I would have told you that. I go there because it is a good site with well-researched information, and if you were less pro-Mono biased you would recognize that.

I don't know who "gnosis" is; this is the first time I've seen that nick.

Anyway, if you need to attack people and spout this kind of nonsense, it means you have no arguments whatsoever, so you just do what rabid Mono pushers always do: character assassination and libel of every critic, with heavy use of all kinds of fallacies.

http://en.wikipedia.org/wiki/Association_fallacy http://en.wikipedia.org/wiki/Ad_hominem

C ya

Now I'm a "rabid Mono pusher", brilliant. Thanks for reminding me why most everyone hates people like you.

That "argument" makes no legal sense.

You lost all credibility by citing techrights (AKA boycott novell).

"Argumentum ad Hominem: the fallacy of attacking the character or circumstances of an individual who is advancing a statement or an argument instead of trying to disprove the truth of the statement or the soundness of the argument."

You cited a site that routinely makes claims either without citing sources, or with cites to sources that contradict those claims. To a large part, it appears to simply make up stuff, and does not issue corrections when its errors are pointed out.

Citing Boycott Novell about software (or about anything, actually) is about equivalent of citing your Astrologer in a science argument.

You keep making ad hominems against techrights rather than addressing the substance of their argument.

Why don't you actually address what they wrote in their article?

I would be glad to address the substance of their argument if they ever manage to produce an argument with substance.

Your character wasn't attacked, your credibility was. It's perfectly legitimate to question your credibility if the questioner believes your source to be biased.

An ad Hominem attack would be calling you stupid for linking the biased source. That didn't happen.

The ad hominem was made against techrights, not against me.

I didn't even make an argument. I just linked to a few articles that criticized Microsoft's "Promise".

Did tsz address the substance of techrights' argument? No. He just smeared them by implying that merely citing an article by them would make me lose credibility.

That is a perfect example of an ad hominem.

Attacking me was more of a case of shooting the messenger.

> smeared them

If you do a Google site search on that blog you'll see that the term 'smear' is used endlessly and quite carelessly by the author. It's interesting to see that you use it in the same way - and like the author I don't think you understand very well what the term means. I sure hope "gnosis" isn't one of the various aliases the guy allegedly uses across the internet, including on Slashdot where he became a bit of a legend for maintaining dozens of them.

I don't think I have to point out that this is yet another ad hominem.

But, be that as it may, I'll just say that yesterday was the first day I've even heard of techrights, when I found their site through a google search.

And, honestly, I don't even care much about Mono. I'm just very suspicious of anything originating from Microsoft, and the techrights and FSF articles support these (well founded) suspicions.

Stop. Stop. Stop.

Instead of dancing in circles, please look at the contact information I posted above. Miguel de Icaza is a good guy. I am certain he will answer any questions you might have.

> http://www.fsf.org/news/dont-depend-on-mono

...The danger is that Microsoft is probably planning to force all free C# implementations underground some day using software patents. ... This is a serious danger, and only fools would ignore it...

It seems as if they are passing off wild speculation as inevitable fact. What is to prevent MS from using its portfolio of patents on any given project? Is a free C# implementation infringing on more patents? The supporting links just looked like general anti-patent pages.

Yeah, I initially wrote a comment insulting him for not being able to figure out how to fold a list in C#, but the fact is, that wasn't in the standard library until late 2007, so I can sympathize.

I think I can explain the author's point of view. C# 2.0 was still the cutting edge for most of 2007. Most shops were still using 1.1.

The author's qualm, coming from a Lisp POV, is most likely that if you are a consumer of C# 1.1 you can't just build C# 2.0 or 3.0 or 4.0 features. Using Lisp, he had access to everything that would eventually become C# features and then some, because if you want a fancy new language feature in Lisp you just add it. You don't have to touch the compiler (you write functions, macros, or a micro-compiler at the limit).

So there's some hyperbole there, but it's true that C# 2.0 or 1.1 was really not anything special and felt a lot like "Microsoft's Java" at the time.

He loses all credibility because he doesn't like your religio... I mean, language? Strange metric of credibility. I disagree with you, therefore you have no credibility.

You misrepresented my point. C# is far from my favorite language. He loses credibility however by making a statement which would appear wrong to any person with some knowledge of the programming language panorama. It's a red flag to either ignorance or intellectual dishonesty. Note that he didn't say, "C# is a language I don't like" or even "I find C# to be a horrible language", which would have been perfectly reasonable stances.

I remember running into this early on at Quickoffice. I was tasked with essentially re-writing the entire application stack as the existing code was in pretty rough shape.

C++ was mandated by the platform, but I spent a lot of time getting the STL and Boost up and running. The entire thing relied heavily on templates, including some of the more advanced meta-programming techniques. In each case the 'cleverness' was warranted as it greatly reduced the complexity and redundancy of the code.

My boss at the time raised concerns over how maintainable this all was. How were junior programmers going to be able to work on it? Being all of 22, I naively responded, "well, let's just hire people who can handle it."

Interestingly, that's exactly what we did. We were much more thorough in our hiring process precisely because we needed folks who could rise to the level of the code we had written.

It seems to me that people (and companies) tend to rise to the level of the expectations that you set. In this case, making the decision to use Lisp means that you're consciously making the decision to hire the caliber of talent that can use Lisp. That may be a good thing (it was in the case above for sure).

I suppose it really comes down to making sure that the complexity of the application warrants the use of the more advanced abstraction. A simple web-site for someone selling tractors might warrant a different ("easier") tool than something inherently more complex.

I find nothing interesting in the original post to be honest. Just somebody that has a superficial impression about C# is complaining. Even the first 25 comments that I read are of no particular interest.

Meta: I wildly guess the HN post gets all the upvotes because initially people think it will be about a Google employee not allowed to use Lisp at Google.

For this reason, I wrote a chrome extension that shows more of the subdomain on hacker news: https://chrome.google.com/extensions/detail/amenlkcfjlmchdpo...

Yes, I also expected an official Google stance on using Lisp.

I am new to HN, but I think you should try to read posts before you vote them up. At least that is what I do, after clicking on the comments page. Often, HN comments are even more valuable than the article itself.

You seem to have read an article about someone complaining about C#.

The article I read was about someone complaining about not being allowed to use all the features in the language selected for the project.

Meta: That would be this post:


Which I think has been posted on HN before; Ron Garret's stuff has popped up a few times here, but it's still a great story.

A better question is who's going to maintain all those for loops that are buggy because human error means even a basic thing like a loop will get screwed up. Managers are always talking about code reuse, but as soon as you pass a function, they're like "Whoa, that's crazy, how can we maintain that?"

Only managers would think that

  int accum;
  for(int i = 0; i < arr.Length; i++){
    accum += arr[i];
  }
is more readable and maintainable than...

  arr.Sum(x => x)
Oh look, and even though I've written a million for loops, depending on the language there is a bug because I didn't initialize accum.

This was written in May 2007. C# 3.0 was released in August. We are talking about C# 2.0 here. No lambda syntax, no LINQ, no type inference, no nothing. Anonymous methods were clearly intended for handling events, and even that was discouraged in favor of On_SomethingHappened(...) methods in the MSDN docs.

His code must have looked like this:

    public delegate TResult Func<T,TResult>(T input);
    public delegate TResult Func<T,U,TResult>(T inputA, U inputB);

    public static TAcc Fold<TAcc,T>(IEnumerable<T> collection, TAcc state, Func<TAcc,T,TAcc> f)
    {
        TAcc acc = state;
        foreach(T item in collection)
            acc = f(acc, item);
        return acc;
    }

    public int FunctionalMasturbation()
    {
        int totalWithdrawn = Fold<int,int>(_withdrawals, 0,
            delegate(int acc, int value)
            {
                return value + acc;
            });
        return totalWithdrawn;
    }
This is an abomination. Right, for developers with a background in FP it's pretty clear what's going on, even if it looks like puke. But in 2007 you could hold a conference for all the .NET devs with knowledge of functional concepts in a telephone booth.

Are you explaining the difference between code you have to write and debug and code that someone else has already written and debugged?

What I'm saying is that passing functions (or functional programming) allows you to reuse a lot more code than typical imperative programs with a lot less effort. I personally find the functional style to be much more readable and don't understand why everyone has such a problem with the maintainability of code written in a functional style even if it's a mostly imperative language.

I'm not sure where managers come into the discussion, but the second example is good, though it's even better like this:

  arr.Sum()

In what language are arrays of numbers their own class?

That would be legal in C# 3.0 (.NET 3.5) or later.

Though technically it's not that "array of numbers" is a class; it's that an array of T implements the interface IEnumerable<T>, and Sum() is an extension method defined for IEnumerable<int>.

It's even in the standard libs, you don't have to build it. Documentation is here: http://msdn.microsoft.com/en-us/library/system.linq.enumerab...

It looks like there's two directions for a language's type system to go:

1) strict typing, but with lots of constructs (e.g. inheritance, interfaces, generics, extension methods) to work around that strictness so that you can do what you want, if you understand the rules and syntax to get it to compile.

2) loose, "duck typing", so you just do what you want, but without the compiler checking that it's possible, and with the possibility that it fails at runtime if the right method isn't found.

C# is in the first direction.

In dynamic languages, where arrays can be heterogeneous, the array can still have a sum method; it just assumes decent coercion rules and an overloaded + (or it barfs...).
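Python, for instance (just one illustration; other dynamic languages behave similarly), will happily sum a mixed numeric array via coercion and overloaded +, and barfs at runtime otherwise:

```python
# Mixed numeric types: + coerces int -> float as needed.
nums = [1, 2.5, 3]
print(sum(nums))  # 6.5

# A non-numeric element is only caught when it's actually reached, at runtime.
try:
    sum([1, 2, "three"])
except TypeError as err:
    print("barfed:", err)
```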

  > They follow where We the Blessed Gurus lead them. But
  > this time it is to the slaughterhouse, because the
  > world needs only fifty Lisp programmers to write All
  > the Code.
Oh comp.lang.lisp. Why did I even bother to read the replies?

I was going to tell him off for missing Enumerable.Aggregate, but then I noticed that he posted 6 months before that was included in the standard libraries, so I'm going to have to let him off.

Somebody uses the following line in the thread: "not because they are good programmers, but rather because they are good communicators".

Hell yeah. Overly-abstract code can be like using variable names x1 to x267 because it is more efficient to type. Time I spend grokking your code is time our website spends offline.

This reminds me of PG's essay on Java, in which he wrote "Java's designers were consciously designing a product for people not as smart as them.".

Ben says that smart people should also work with these languages because non-smart people might have to work with the code. I think that is actually what happens a lot in the industry. Smart people who know Lisp, Ruby, Python, etc. still end up coding in Java because that's the language everybody else knows.

What I don't like about this is that it's basically saying that people can't become good at programming. It's accepting that the majority of professional developers can't learn to use languages like Lisp correctly, or to understand a Lisp program that uses powerful abstractions.

Maybe the ones who really can't are not in the right business?

I take the point about "Java's designers were consciously designing a product for people not as smart as them."

But consider the reverse: the C++ spec always read as if the language designers were showing off, competing to add features that demonstrated how clever they were, without as much regard for the readability and maintainability of the resulting code. I don't think that strategy is optimal either.

Also, if you think only simple code can be done in C#, have a look at the Rx framework. C# as a language is suffering from a bit of bloat too, but not as bad as C++.

I agree with PG and I also think that Java and C# maintainers are wrong to keep adding features to these languages.

I've done a lot of work with Java (and some with C#) and most developers have a hard time even with original Java's scope rules.

So, if these languages want to be the COBOL of the future, they should keep it simple and dumb. Closures and Generics aren't for most people.

Or they could add some compiler directive for switching on advanced features.

I think the problem with Lisp is that it's very powerful and very dynamic. As long as you're working alone, that's not much of an issue, but with more developers you need to add more safety nets, and you don't have the aid of a powerful type system.

Most people love dynamic typing because they hate the static typing of languages like C++ or Java. But powerful languages also need a powerful type system, so that the developer can fully express his intentions with the aid of a compiler validating them. Mainstream languages should start looking at the type systems of languages like Haskell.

The more fundamental reason, I believe, is that it is often hard to just hack something together in Lisp. This is mostly due to library issues and a lack of users creating good documentation and guides.

For example, a few weeks ago I wanted to start writing a program that analyzed some spreadsheet files, spat back out some relevant information (including graphs), and then served this information up on some specified port. I initially wanted to write this program in Lisp, so I began to look at available libraries. For the GUI component, my first pick was Qt, for which Lisp actually has bindings. I spent about a week trying to get all the dependencies for it installed, unsuccessfully. The documentation essentially says, "You need all these installed," and then gives you a bunch of links to their respective websites. All of the other details were missing.

Specifically, I could not install the smoke bindings for Qt. I searched for any guides or documentation and always found a git/svn repository. Once I had the files, I had no idea what to do with them; where they needed to go or how to compile them, etc. Raw .cpp and .h files that have dependencies do nothing in isolation. Furthermore, there was no support forum or any other place to ask for help.

Finally, I just gave up. It was too much work to just hack together what was supposed to be a fun weekend project. So I moved on to Python, which just worked. I'm sure I'm not the only one who has had such frustration.

Did you try using Quicklisp?

Also, I would recommend that when you have problems like these you come ask about them on #lisp and #quicklisp on freenode.

There are many very helpful and knowledgeable people there. Often, they're the authors and maintainers of the very tools you're trying to use.

I didn't even know it existed. Thanks for the tip; I'll check it out.

OOOH I was so wanting to tell the OP

What is so hard about this???

  // assume some sort of IEnumerable derivative
  List<foo> bar = new List<foo> { obj1, obj2, obj3, obj4, ... };

  var accumulator = bar.TakeWhile(x => x.property == someValue);

but then I saw the .NET 2.0 timeframe.

But even with .NET 2 you had generics which makes things pretty easy. But then again, he/she is ranting so I should just ignore him/her.

It doesn't matter what language you are using: if you can't write something legible to solve the problem, think about it more.

If you had read the article, you would have noticed that using generics the way they were intended to solve this problem was exactly what his manager called him out for.

I suspect they were using .NET 1.x at the time, which would explain a lot.

I initially thought that as well, but the post did mention generics.

Asked me to sign in to Google Groups. Closed tab.

I do not understand why giving feedback about the usability of the link deserves downvotes. I didn't even understand what had happened when I was suddenly at a google log-in, shrugged, and closed the tab.

Funny thing is, if it were a Facebook login wall, tens of upvotes would have ensued.

I don't comment with the expectation of upvotes. In fact, I commented just to warn others that they'd be staring at a Google wall.

Anyone else remember when you didn't have to log into google groups to look at a newsgroup thread?

This is a case where the author put his needs over the needs of the organization.

Kudos to his manager for actually looking at the code and recognizing the problem. Maintainable code is extremely important. Just keep your code simple. Adding another layer of abstraction instead of writing a simple and readable loop (if the language doesn't have accumulators) is not a good solution.

I've seen this too many times. Smart developers write complex code just because they can (and often it does make it shorter), but then mid- and junior-level developers struggle with it. So the company has to spend more money on smarter developers.

Here's a relevant post by Linus:


Whether your "simple loop" is actually simple depends on the complexity of the underlying problem. At a certain point, it takes less time to understand the abstraction + the code using that abstraction than to understand the fully-expanded code without the abstraction.

If you're just adding up the elements of an array, no big deal use a simple loop. If you're iterating through three lists of potentially unequal length processing some triplets and skipping others and returning some data structure based on that iteration, a raw loop is going to be anything but simple and readable while using an abstraction like CL's LOOP is going to result in something very manageable.
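As a hedged Python sketch of that second case (the lists and the skip rule here are invented for illustration): pull the iteration pattern out into a generator and the business logic collapses to one line, instead of being tangled up with termination and padding concerns in a raw loop.

```python
from itertools import zip_longest

# Hypothetical iteration abstraction: yield triplets from three lists of
# potentially unequal length, padding the shorter ones with None.
def triplets(xs, ys, zs):
    yield from zip_longest(xs, ys, zs)

xs, ys, zs = [1, 2, 3, 4], [10, 20], [100, 200, 300]

# Policy reads in one line: skip incomplete triplets, process the rest.
result = [x + y + z for x, y, z in triplets(xs, ys, zs)
          if None not in (x, y, z)]
print(result)  # [111, 222]
```

A hand-rolled version has to track three indices, three lengths, and the skip condition all at once; the abstraction localizes that complexity in one place.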

To give an empirical example off the top of my head, look at any compiler code written in C. Iterating over flow graphs is a big PITA because they're nested heterogeneous structures (a procedure has basic blocks, which have instructions, which have uses and defs). You iterate over them a lot, and many of those iterations end up being complex triply-nested affairs. Nearly all compilers written in C use some sort of abstraction to deal with this complexity. E.g., GCC has a bunch of macros of the form FOR_EACH_* that use the preprocessor.

> Smart developers write complex code, just because they can (and often it does make it shorter), but then mid- and junior-level developer struggle with it.

You know, I can't help but be reminded of Harrison Bergeron when I read this. I'm waiting for the day when someone comes up with a handicap for programmers so that we can all write code with the same ability as the least intelligent and least experienced members of our teams.

Did it ever occur to you that there might be a method to smarter developers' madness? I mean, maybe they write code the way they do because, you know, they're smart and know something you don't?

"I'm waiting for the day when someone comes up with a handicap for programmers so that we can all write code with the same ability as the least intelligent and least experienced members of our teams."

Java? ;-)

Touché. I must admit that I've never thought of Java as something from Kurt Vonnegut, but the analogy is apt.

> Kudos to his manager for actually looking at the code and recognizing the problem.

In the majority of cases, however, you deal with managers who have lost their grasp of anything but simple language constructs, because all they've done since becoming managers is management and no code development at all. At some point they start rejecting everything they don't understand. They justify their position with arguments about code clarity and maintainability. But they are misguided. To recognize readable and maintainable code, you actually need to practice programming continually.

I've dealt with such managers. They would tell me that a dozen simple lines of code were unreadable because they saw a keyword they didn't understand. But they would accept a hundred lines of code that did the same thing, in which they could recognize all the keywords. They wouldn't care about what the code actually does, nor would they care about what it takes to parse a hundred lines of spaghetti code.

It always fascinated me that some managers think they understand developing code better than developers they manage. I've had the best experience with managers that trust their developers to make decisions and the worst experience with managers that try to micromanage.

> Kudos to his manager for actually looking at the code and recognizing the problem.


> My manager looked at the code and asked "Who's going to maintain this? How will they understand it?" That's not the first time I've encountered these questions. I heard it when I used function pointers in C. I heard it when I used templates in C++.

So essentially the manager bans the use of basic language features because they have hired incompetent programmers who shouldn't be working in these languages without understanding them. How exactly did the manager "recognize the problem"? The problem isn't that people use languages the way they are supposed to be used; it's that the company has made bad hires, which is not the competent programmer's fault.

I agree. I suspect any manager who looks at code and says it is not maintainable would have a very different side of the story, which might sound something like: "We already had an accumulator that did exactly what was needed. It just used a named delegate. I told him to either use that, or refactor the existing code to use his new one and get rid of the old one. Maintaining both is wasteful. He seemed to get in a tizzy about it."

Sure that's one possibility. Some other things the manager might say:

* Whose idea was it to hire developers that are smarter than I am?

* Of course I told him not to use function pointers. You have to put programmers in their place from time to time. You know, show them who the boss really is.

* Function pointers? Are you serious? He could have put someone's eye out.

link redirects me to a login wall

The same can be said for any statically typed language. Any language that binds types as late as possible is infinitely more enjoyable to work with than one that fixes types at compile time. Short of developing missile guidance systems I don't think static types are warranted for anything.

For me type inference changes the picture, since it reduces the duplication you see throughout Java-style statically typed code, but doesn't give up all the benefits.

The main thing that makes programming in statically typed languages painful isn't the additional type declarations (which, as you rightly point out, need not even exist in languages that do competent type inference), but the constant need to wrestle with the type system to get your code to compile at all.

This is made doubly painful by the incredibly obtuse error messages spat out by some compilers, like:

  This expression has type   ((int * int) * string * string * channel) list
  but is here used with type ((int * int) * string * string * channel) list
or even more baroque and confusing monstrosities.

Programming in OCaml made me feel like I'd need to take years of type theory classes in order to feel really comfortable in the language.

In comparison, programming in Lisp is a joy, and very easy.

But, in defense of modern statically typed languages like OCaml and Haskell, I'll have to admit that once you've finished wrestling with the type system and actually gotten your program to compile, you've probably eliminated whole classes of bugs that might still exist in a similar dynamically typed program. Not to mention that it will save you from writing a ton of unit tests.

That's a very optimistic error message!

  error: conversion from std::_Rb_tree_const_iterator<std::pair<
  const std::basic_string<char, std::char_traits<char>, std::allocator<char> >,
  std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >
  to non-scalar type std::_Rb_tree_iterator<std::pair<
  const std::basic_string<char, std::char_traits<char>, std::allocator<char> >,
  std::basic_string<char, std::char_traits<char>, std::allocator<char> > > >

Interestingly enough, the sort of error messages you mention occur in OCaml partly because of the inference itself. Because inference is essentially a unification over implicit constraints, once it finds an anomaly, the engine can't always predict correctly which of the conflicting constraints is the actual error from the user's point of view.

Anyway, just an aside.

The only benefit I'm aware of is slightly faster code, and even then the dynamic version is almost always more readable and easier to maintain and refactor. The ideal would be a dynamic language with optional static typing, but I have yet to see a language like that.

"Slightly faster" is a bit of an understatement; statically typed languages are usually an order of magnitude faster for things that are computationally expensive. Granted, most things aren't, so for a web page or whatnot it probably doesn't matter; and certainly the expressive ability you gain from dynamic languages might be worth the trade-off. But the trade-off is undeniably there.

(Try comparing various languages here: http://shootout.alioth.debian.org/u32/benchmark.php?test=all... if you don't believe me)

Some dynamically typed languages, like Lisp, can approach the speed of static languages, but they generally do that by introducing voluntary static typing hints, or by having a JIT compiler introduce speculative code paths that guess the incoming types after some analysis.

> The ideal would be a dynamic language with optional static typing but I have yet to see a language like that.

That would be Common Lisp.


There was such a version of Smalltalk called Strongtalk, but it never got a community behind it.

Actually - "but then the Java phenomenon happened and we eventually had to switch to Java before ever releasing it".


I'm waiting for perl 6.

You should look into Typed Racket.

