For example, in Haskell (as you know, but readers may not), `1:2:3:4:5:[]` can be written as `[1,2,3,4,5]`, which is much prettier than a chain of cons cells (even with the infix (:) for cons).
I just looked at TFA and yes, it's truly awful. It doesn't have to be that way. See e.g. https://github.com/m2ym/optima (a pattern-matching library for Common Lisp). I haven't tried it, but I've tried its predecessor.
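For a taste, a match with optima looks roughly like this (a sketch based on its documented `optima:match` form; I haven't run it, so treat the details as approximate):

```lisp
;; Load optima via Quicklisp, then match a list against patterns.
(ql:quickload :optima)

(optima:match '(1 2 3)
  ((list 1 y z) (+ y z))   ; binds y = 2 and z = 3, returns 5
  (_ 'no-match))           ; wildcard fallback
```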
It's not like people have been building more programs in Lisp than in languages without sexpressions.
"The project of defining M-expressions precisely and compiling them or at least translating them into S-expressions was neither finalized nor explicitly abandoned. It just receded into the indefinite future, and a new generation of programmers appeared who preferred internal notation to any FORTRAN-like or ALGOL-like notation that could be devised."
- John McCarthy, History of Lisp
If you're going to market something as an X then it may not be a good call to remove something that many of the people who use X seem to regard as a desirable attribute of it.
So, you could use the "sugared" syntax, and drop back to the isomorphic (is that the word?) sexpression syntax when you want to do s-expressy stuff like macros etc.
Of course you'd have to keep the mapping between the sugared and sexpression versions in your head, but it's not that difficult, and you'd only need to do that when you actually wanted to use sexpressions, not all the time when you don't really need them.
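The "readable" project (linked further down this thread) takes exactly this approach with its sweet-expressions; roughly, the mapping looks like this (my sketch of their notation, not checked against their actual reader):

```lisp
;; Sugared (sweet-expression style):
;;   defun fib (n)
;;     if {n < 2}
;;        n
;;        {fib{n - 1} + fib{n - 2}}
;;
;; The isomorphic s-expression it maps to:
(defun fib (n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))
```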
1) The presence of macros. I can extend the language with new syntax however I please. If you just write a parser, the minute I do so, it's going to be very difficult for that parser to determine the order of operations. It has to have some general solution to the problem of precedence.
2) It... I suspect... has to be homoiconic. That is, the underlying data structure that the code corresponds to has to be immediately visible to the human writing it. When you write programs that manipulate programs, the lack of this ability cripples you. (A small macro sketch below makes this concrete.)
If new programmers learn ... what I suppose you might call a cleaner syntax ... and then have to switch into sexpressions to see what's going on, I don't much rate their chances of learning macros - they'd essentially be learning a new language by the time they got to the point where they'd want to do that.
3) It has to be mappable - and preferably in a backward-compatible way - to, if not all, then the vast majority of existing Lisp code (otherwise people who are already comfortable with Lisp code won't have much incentive to use it).
I'm not going to say that's not possible. What I will say is that a fair number of people, who I don't think are idiots, have tried to update Lisp syntax in the past - among other attempts, RLisp, Dylan, and IACL2. I'm not aware of any project that's managed to maintain Lisp's ability to clearly manipulate programs as data and to swap back and forth between interpreted and non-interpreted code.
But it's possible they just couldn't see the problem as clearly as we can now, because they didn't yet know the approaches they tried were going to fail. If you can think of a syntax that solves these things, I'd read up on past attempts to see if someone's tried it before, and then give it a go. Writing your own reader can be a bit of a pain, but it's probably more doable in Lisp than in any other language I can think of. :)
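To make point 2 concrete: because Lisp code is itself a list, a macro is just an ordinary function from lists to lists. A minimal sketch in plain Common Lisp (swap! is an illustrative name; the built-in rotatef already does this for real):

```lisp
;; A macro receives its arguments as unevaluated s-expressions
;; (plain lists) and returns a new list, which becomes the code.
(defmacro swap! (a b)
  (let ((tmp (gensym)))          ; fresh symbol to avoid variable capture
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; (swap! x y) macroexpands to something like:
;; (LET ((#:G42 X)) (SETF X Y) (SETF Y #:G42))
```

Any sugared surface syntax has to preserve this "code is a visible data structure" property, or macros like this stop being writable.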
Edit: You might find this worth a read - a site that goes into considerably more depth on this: http://sourceforge.net/p/readable/wiki/Problem/
We decided composable functions (lenses) were better.
> We aim intuitive representation of algorithms and formalization of human recognitions. We believe this is the shortest way to the artificial intelligence.
I find it interesting that improved programming languages, especially Lisps, are suggested as the shortest way to AI. This approach has already produced well-known failures, both in Japan (the Fifth Generation Computer project) and overseas (Lisp machines).
A few years ago I happened to take an AI class in Japan that was taught entirely in Common Lisp. One of the projects was an environment where boxes have properties and relationships with other boxes; for example, you could query "what is above the blue box?" and get back "green box". The professor teaching the course published his AI papers only in Japanese journals, never in English. Most were about games like Shogi (Japanese chess).
It feels strange, as though part of the Japanese research community is still trying to apply the '70s approach to AI.
I would like to understand how Egison, when used as a query language, compares to past efforts such as Datalog or F-Logic, which attempted to bring formal semantics and unification to queries. Those were even more expressive than some of the Egison examples.
On the other hand, the language is quite new, and I already have a lot of code in Python for that project...
I got the idea for Egison when I was working with logic expressions. The pattern matching in existing languages is not strong enough to do what I'd like to do; it can't express pattern matching against sets intuitively. Therefore, I created Egison.
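For example, without set patterns, here is what enumerating all the unordered pairs of a collection looks like (a plain Common Lisp sketch, not Egison code):

```lisp
;; "Match every unordered pair {x, y} drawn from a collection"
;; becomes explicit nested iteration:
(defun all-pairs (xs)
  (loop for (x . rest) on xs
        append (loop for y in rest
                     collect (list x y))))

(all-pairs '(1 2 3)) ; => ((1 2) (1 3) (2 3))
```

Egison's set matchers express this kind of thing as a single declarative pattern instead.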
Elixir (http://elixir-lang.org/) has lovely built-in pattern matching syntax. Ditto for Scala.
Lisp has a lot of pattern-matching libraries and extension languages which are much more compact.