This is my hang up with Lisps over and over again. I read through SICP and did almost all of the exercises. I've played with Clojure and done some small projects in it. I've recently started dabbling in ClojureScript. And I just can't seem to get to the point where Lisp becomes readable and quickly parsable by my brain. That snippet of Python you wrote is immensely more readable than the Lisp in the article. The equivalent snippet in Ruby, JavaScript, C#, Haskell, whatever, would also be immensely more readable than Lisps.
Does it just take some serious perseverance? Or am I just Lisp-dumb?
When we read code, we use our language abilities. Our ability to read syntax uses some sort of built-in grammar recognition, which has a maximum capacity it can handle and doesn't abstract well. Thus in "normal" programming languages a certain amount of flow of control is pushed into syntax and grammar, and the offloading of that work makes reading easier.
Lisp goes the opposite way. There is pretty much no syntax, thus all flow of control is automatically pushed to your higher reasoning area, which suddenly does a lot more work and you notice it. Thus it feels like more work. The tradeoff is that everything is more flexible and easier to build abstractions on. But after you have programmed enough Lisp, the "flow of control" key words get wired properly into your grammar processing, and you stop noticing the effort again. But it takes longer for your brain to start processing things this way.
Something similar happens in normal language. The same grammar areas of the brain pay attention to pauses, speech emphasis, and so on. Which in written form we notice as punctuation. But it is also wired to pick up and process small connective words like and, not, or, etc.
Note, I have no evidence for my theory other than "it makes sense to me". However it is based on my limited knowledge of how brains process speech, which we have learned about because of the fact that when strokes take out specific areas of speech, we wind up with specific speech impediments.
It's not so much that Lisp has no grammar, but that it's a VSO language (or it has the tail-first/head-final parameter set, if you prefer), where infix programming languages more closely resemble the SVO natural languages most of us in Europe or North America are used to. I'm sure that something more "normal" like Python, C, Pascal or BASIC would be as hard for uninitiated speakers of verb-first languages to wrap their heads around.
That's a fair point, but it's a different point than the one btilly is making. (I'm not trying to be rude, I just think btilly's theory is interesting enough to merit some defense.)
Consider the formal grammar that might be used to implement a programming language. When we use the word 'grammar' here, it's obvious that Lisp has less of it than, say, Python. It's roughly equivalent to saying that Lisp has less syntax. So, his argument is independent of any VSO/SVO distinction.
Ben is arguing that Python's additional syntax, by formalizing common structures, allows us to (in some sense) externalize the inherent complexity of a problem (i.e. out of our brain). The downside is that it introduces some rigidity into the language. Lisp makes a different tradeoff: we are forced to handle all of the complexity ourselves, but in return the language is flexible enough that we can build exactly the right abstractions for our problem. There's a kind of conservation law.
Yes, and Clojure has a fair amount of syntax too. However, that is still nothing compared to the syntax of other languages, like Java. The amount of syntax in a language like Java is just maddening: even keywords (like final) have totally different meanings depending on their context.
So while I think it's somewhat wrong to always paint Lisp dialects as "having almost no syntax", I also think it's totally correct to say: "Lisp dialects have almost no syntax compared to mainstream languages".
P.S.: don't get me started on JavaScript, where syntax gets inserted for you automagically, leading to bugs that can be very hard to track (see Crockford on what a mistake automatic semicolon insertion in JavaScript was).
I meant to say "no syntax" there. I've now changed it.
I wouldn't say that the VSO vs SVO distinction is Lisp vs infix languages. Because in an infix language I can and do write things of the form do_this(some_thing, various, parameters).
However it does come up with OO programming where we have some_thing.do_this(various, parameters). Versus the CLOS (do_this some_thing various parameters). People seem to prefer the standard OO syntax even though the CLOS approach is more general (since do_this will dispatch to the correct method based on its parameters, and can pay attention to more than one parameter to do so if needed).
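To make that generality concrete, here is a minimal sketch, in Python for comparison's sake, of CLOS-style dispatch on the types of all arguments rather than just the receiver. The names (defmethod, call, Asteroid, Ship) are made up for illustration, not any real library's API:

```python
# A toy generic function that dispatches on the runtime types of *all*
# arguments, CLOS-style, instead of only on the receiver object.
_methods = {}

def defmethod(name, *types):
    """Register an implementation of `name` for the given argument types."""
    def register(fn):
        _methods[(name,) + types] = fn
        return fn
    return register

def call(name, *args):
    """Look up the implementation keyed on every argument's type."""
    fn = _methods[(name,) + tuple(type(a) for a in args)]
    return fn(*args)

class Asteroid: pass
class Ship: pass

@defmethod("collide", Asteroid, Ship)
def _(a, s):
    return "ship destroyed"

@defmethod("collide", Asteroid, Asteroid)
def _(a, b):
    return "asteroids merge"

print(call("collide", Asteroid(), Ship()))  # ship destroyed
```

The point of the CLOS style is visible here: which code runs depends on both arguments, something single-receiver `some_thing.do_this(...)` dispatch cannot express directly.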
Even though I started out with Algol-like languages, I never liked all the different rules for braces, parens, commas, square brackets, assignment, variables, etc., so Lisp was a natural fit for me.
Modern English is not exclusively an SVO language. You probably aren't aware of how often you use VSO sentences. For example, it is very common for interrogative sentences (questions) to have an inverted order. For example:
Where is the (object)?
When was your last birthday?
Whither wander you? — Shakespeare
All of those sentences use inverted word order. One of the reasons that English is so expressive is that it uses both inflections and word order to determine the meaning of a sentence.
One of the great things about Lisp is that it allows you to define your own infix operators.
Thanks for catching my mistake. It is especially embarrassing because the term "inverted order" refers precisely to sentences that place the verb before the subject.
On the other hand, I think that prefix notation more closely resembles the way natural languages are spoken in the aforementioned parts of the world. For example, (+ 1 2) read aloud becomes "add 1 and 2", which sounds like a verbal instruction. Reading infix notation aloud in many ways strikes me as less natural; perhaps it's something that is learned from an early age and thus appears natural.
Through the lens of this example, Lisp does not offer any advantage. In fact, I'd argue, that Python, Ruby, JavaScript, C#, and even Haskell are simply explorations of this example, and others like it, with intentional ignorance of some of the other advantages Lisp offers.
The one thing that all those languages drop from Lisp is homoiconicity [1]. There are entire other categories of "Eureka moments" that you can experience with Lisp through the lens of homoiconicity. This is at the heart of the utility of macros, but macros can exist without homoiconicity, and there are other benefits besides easier macros.
What makes Clojure, in particular, interesting is that it's homoiconic in terms of more than just sequences: there are also maps, sets, etc. Common Lisp & Scheme encode maps, sets, etc. in terms of lists.
If you're having a hard time seeing past the syntax (or the lack thereof), I suggest exploring Mathematica. Download the demo [2]. The homoiconic parts of Mathematica are hidden behind more traditional syntax. Use the FullForm[...] function to see the Lispiness leak through. I suggest exploring the Rules & Patterns documentation [3] to get a feel for a non-macro approach to homoiconic transformations. Rules & Patterns make up a rewrite system, which is strictly more complex than a macro system, but often easier to grasp during initial exposure.
>Common Lisp & Scheme encode maps, sets, etc in terms of lists.
What do you mean by this? You do know that I can write some reader macros to get the same syntactic sugar for dictionaries and vectors as in Clojure, right?
I was thrilled the moment I realized that all three parts of this banal form can be arbitrarily complex forms themselves. For example, you can substitute "+" in place with a long piece of code that contacts a web service and in the end, after 100 lines, returns "+". That 100-line function is made of forms made of forms made of forms, and each can be replaced by whatever code you might need. It's the beauty of the conciseness of the ternary operator in other languages (a = b ? 1 : 2) taken to a possibly infinite extreme.
This can sometimes be achieved in other languages with temporary variables, or one-shot functions if the language doesn't have lambdas (or lacks multi-line lambdas, like Python), and in the end you litter your soul with "eval".
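For contrast, a small Python sketch of the same idea: the thing in call position can itself be the result of arbitrary computation, no eval required. Here pick_op is a hypothetical stand-in for the 100-line web-service lookup described above:

```python
import operator

# pick_op stands in for the hypothetical 100-line function that
# eventually returns an operator; here it just does a lookup.
def pick_op(name):
    return {"add": operator.add, "mul": operator.mul}[name]

# The expression in call position is evaluated first, then applied,
# just like an arbitrarily complex form in the first slot of a sexp.
result = pick_op("add")(1, 2)
print(result)  # 3
```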
This also leads to the other wow moment, when using Lisp makes other languages' syntax appear so inelegant and cumbersome. At its core, everything in Lisp is just like this:
(FUNCTION argument1 argument2 …)
When it clicks, it really hits you with the beauty of its perfect mathematical purity, and you wonder how you can feel so comfortable with the average C-like language, where for has a syntax different from while, switch/case is a beast of its own, sometimes you use curly brackets, sometimes square, sometimes round or even angle brackets, you mustn't forget commas and semicolons, you use colons and dots and question marks and myriads of keywords each with its own syntax, some things are functions and others are operators, and a single equals means something different from a double equals, and so on.
I like the fact that you can substitute a more complex expression anywhere, even in the first position (the function to be called), but everything looks so similar that it's hard to see what's going on. I read more code than I write (I like to read about programming languages) and this:
((lambda (x y) (+ x y)) 3 4)
looks much harder to read than this:
(function(x, y) return x+y end)(3, 4).
In the Lua version, you can easily skim the code: here is a function (rather than a Greek letter), the parameters are inside parenthesis and separated by commas, the body comes next until the "end" keyword, it produces (returns) a value and then you pass the parameters (inside parenthesis and separated by commas again) to the function that was just created.
In the Scheme version, how am I supposed to know where this lambda ends, other than by counting parentheses and having a lot of practice recognizing that (....... thing thing) is a function call where the function is more than a single name?
Similarly this:
(define (fn x)
  (lambda (a) (+ a x)))
IMHO looks much harder to read than this:
function fn(x)
  return function(a)
    return a+x
  end
end
The second one is much clearer: "return function", you are returning a closure to the caller. The Scheme one makes me think: a function, another function, parentheses, an expression, then it ends abruptly and I don't know what the hell the code is supposed to do. Again, I CAN read that code in Scheme, but only after learning the core concepts in Lua.
But I'd argue `case` in Lisp does have different syntax from `if`. They are built out of the same primitives, but you have to know how to compose those primitives. It's much like saying all functions are the same, but in reality you need to know the arity, order of parameters, etc before calling a function.
To me, Lisp often felt like what English would be if it only had commas for punctuation, and we formulated all possible scenarios out of words and commas. Sure it's simple, easy for a computer to parse, etc., but it offloads a lot of work onto us as we read.
I'm also reminded of a typography class I took in art school. My professor was adamant about typography not being "grey". He really wanted us to use weight, contrast, and spacing to create and enforce a hierarchy within our typography which would guide and aid the reader. I'd argue Lisp is completely "grey", while other languages are not.
Don't get me wrong, this thread has been a great read and I've gained new respect for Lisp from it. I'm inspired to give it another go.
It seems some people are able to read it, some people aren't. I made it a personal goal a few years back to learn Common Lisp, and while the first month after I started learning was hard, I've never really looked back since then.
If I was to go back before that point, I'm sure I would find most languages more readable than lisp, but now that I know it fairly well, the syntax/parens don't bother me at all. I can parse it really easily with a passing glance. Maybe it's just a matter of experience.
In the end, a language is a language...if you don't like one, you can use another. I happen to love lisp, but it's a personal choice, and I can understand why people wouldn't like the syntax.
No, I was learning Vim at the exact same time, so ended up using Slimv (vim equivalent of Slime). I think in my entire life, I've spent about 10 minutes in emacs. Not because I don't like it, but more so because I never took the time to sit down and learn.
Nope, you're not. Readability is crucial for those of us not wired the same way as those who seem to be more comfortable in Lisp. Can you get a lot done in Lisp? Sure, if you can scan through and make sense of it. I've tried for a long time, almost to the point of it encroaching on my work time, but I haven't been able to get comfortable. The innumerable parentheses, for example, make perfect sense to those with the Lisp brain, but since I'm reading it in the context of English, where too many of them are frowned upon, it's nearly impossible for me to follow quickly.
Whenever I bring this up among my colleagues, I get shot down by the proficient folks. The argument then turns into how I'm deficient somehow or haven't tried hard enough -- the word "stupid" came up a few times -- and that just completely turned me off the language.
Just an observation about the parentheses... Although it can be hard to make sense of the large number of parentheses, using good indentation (emacs does it for you) helps a lot.
Of course Python makes it easier to figure out what code belongs where, it was made with this goal in mind after all.
I believe it is a tradeoff: Lisp's syntax gives you flexibility and an easy (in relative terms) and powerful macro system at the expense of code readability (compared with some languages).
This is suggested very, very often. It's been implemented a few times but has never gained much currency. As best I can tell, it's one of those ideas that sounds good but rapidly grows either complex or kinda useless as you throw more cases at it, and you just end up with lots of weird syntax thrown around instead of lots of parentheses. It works best in a Lisp that was designed around such a syntax, like Dylan, rather than as something you bolt onto a paren-dependent Lisp.
That's a quick one-liner I wrote for part of a webcrawler. In Lisp you don't need to know anything much about parsing that syntax except that after an open parenthesis, you're going to have a function/macro followed by parameters.
In the first example I've hardly improved anything, just removed a single pair of parens.
In the second example, I've split each nested function call onto a new line and indented. This leaves "raw" parameters on the same line as the function call, whilst moving function parameters to the next line. (Note that Lisps usually don't make any distinction between the two.)
In the third example, I've gone for some unholy mix where I've left the first nested function call on the same line as the first "raw" params, but moved the next params to the next line and repeated with any following functions. This is a mess...
Personally I think the 2nd example is actually quite readable. But here's the catch: this was one line from a 60-line function. If every line were multiplied by 5, would a 300-line function still be as readable? And that's before you even get into any considerations of how to implement macros and other potentially complicated constructs.
EDIT: Admittedly, not every line is going to grow by a factor of 5, but it'll still be enough to turn a one page function into a multi-page function.
It's never really caught on, though, I would guess due to inertia. Most Lisp programmers just get used to the parens, and maybe those who can't end up not using Lisp.
Change your IDE's color scheme so that parens are rendered in a low-contrast font color. The parens will be less-obvious.
Emacs' automatic indentation to The Proper Place is tremendously useful. I find Lisp's indentation to be just as easy as Python's, for example -- likely because I had used a Lisp for six years before coming to Python.
Once you've used it for a while, you don't think of it as much different from using {} for control blocks, () for method calls, and [] for array indexing in other languages. For me, the parens "go away", in that I follow the program control flow more by indentation than by counting parens.
It would really open up the Lisp world, especially to those of us comfortable with Python.
It seems like such an obvious upgrade to me, since:
A: In no case will it break code that works now.
B: It would always be possible to translate to and from it unambiguously, so if a team wanted to keep all of their code one way or the other, each individual developer could still see things with their own preference - imagine a tool like `gofmt`.
Then it misses the point, don't you think? There is not much gain between your version and the original one. On the other hand, we are used to parentheses in formulas, and people don't complain too much about those.
I think the main annoyance is not the syntax per se but the fact that calls are way more nested than in an imperative language counterpart.
I have no doubt as to the power of Lisp, I've seen the end results with my own eyes. Haven't tried reading it on Emacs yet, but I guess it couldn't hurt to try again. It's just extremely difficult to turn off my semicolon thought process and turn on parentheses.
For me, reading Lisp is a tactile experience as well as a visual one.
Lisp code is a tree, and emacs has a lot of neat commands to let you move around the tree (up one level, down one level, leaf forward, leaf backward), and I find myself doing this whenever I have a piece of Lisp code open to "feel out" its structure.
It's a little different, and maybe less like "reading" than understanding a piece of Python code, but I like it.
Thank you. I should make it clear that I'm still on OK terms with those people. It wasn't so much a direct "you're stupid" (or else I wouldn't speak to them again) but more of a "that's just stupid that you can't get it". Which I guess is a roundabout way of saying almost the same thing, but I think they held back.
I can try to do something with it again if I have time, but I feel it's going to be a long and laborious process before I start to get comfortable. I used to unconsciously put semicolons ;)
Interestingly, I now unconsciously do this switching back to C:
(printf "%d" i)
Before I forget and switch back. I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time. Practicing at night while the majority of your time is still spent with some other language is not the same thing. This is also true for learning human language and is why immersion programs are so successful.
Further, I absolutely do not buy the claims that people have that they can understand haskell easily but not lisp. Haskell might look more familiar at a casual glance, but its syntax takes a substantial amount of time to really understand.
> I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time. Practicing at night while the majority of your time is still spent with some other language is not the same thing. This is also true for learning human language and is why immersion programs are so successful.
I think this is spot on. I have a minor quibble: "can't adapt" ought to actually read "find it difficult to adapt", for the very reasons specified -- it's not a matter of can or cannot, but a reflection of how much time one is actually able to invest in learning, like a human language. Random, sporadic investment into learning French vocabulary won't help much in improving one's ability to actually speak French. Much will be forgotten that way.
Immersion is definitely the best model for picking up any new [programming] language. Each language I use with measurable proficiency--Python, Objective-C, C#, JavaScript, French, English, etc.--is in that list because I actually took on tasks in which that was the language I had to use. Each language I've messed around with on the side for fun/experimentation/curiosity--Mandarin Chinese, Lisp, Haskell, Java, etc.--is barely worth mentioning.
> I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time.
That is very interesting. It indeed took me many months of night-time programming in Clojure and elisp to get to a point where I could feel comfortable with the syntax. I had a series of small "enlightenments" during that journey: the last one was not long ago, when I realized how I could "carry" state through a higher-order function.
But to stay focused I kept reading and re-reading great writing about Lisp by Paul Graham and kept viewing and re-viewing amazing videos by Rich Hickey.
It's as with everything: keep your eyes on the goal, not on the obstacles.
fwiw, it is immensely easier to build a parser for (= z (+ y x)) than for z=x+y
you might ask, why should that matter ?
Ad outside the eyeglass store:
why do we have ears ?
so we can wear spectacles!
The point being, we first evolved as creatures with ears and later on devised spectacles so we could wear them over the ears. But we've now taken the ears so much for granted we think of spectacles as the innovation! Parsing should have been the whole point, but we now prize readability. We then add fluffy monikers like "serious perseverance" and "Lisp dumbness" to further tilt the balance in our favor.
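To see how little machinery that claim involves, here is a sketch of a complete s-expression reader in Python; parsing z=x+y instead would already require a tokenizer that splits on operators, plus precedence and associativity rules:

```python
# A complete s-expression reader: no precedence table, no
# associativity rules, just parentheses and atoms.
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return tok

print(read(tokenize("(= z (+ y x))")))
# ['=', 'z', ['+', 'y', 'x']]
```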
It isn't about writing Lisp parsers so much as about writing a parser for your particular language that solves your particular problem. Instead of writing a python program that plays minesweeper, you write a minesweeper language where minefield, detonate, mine, digit, grid are keywords in some sense, where minesweeping can be explained in terms of your minesweeper language itself, not in terms of python & variables & functions & all that.
Read Section 1.2 - "Change the language to suit the problem. In Lisp, you don’t just write your program down toward the language, you also build the language up toward your program. Language and program evolve together. Like the border between two warring states, the boundary between language and program is drawn and redrawn, until eventually it comes to rest along the mountains and rivers, the natural frontiers of your problem. In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient.
The greatest danger of Lisp is that it may spoil you. Once you’ve used Lisp for a while, you may become so sensitive to the fit between language and application that you won’t be able to go back to another language without always feeling that it doesn’t give you quite the flexibility you need.".
You might not want to write parser from scratch, but you might want a new syntactic abstraction, and that usually requires a language change. For example, Python's "with" statement. If Clojure did not already provide the analogous `with-open` macro, you could create it yourself.
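To illustrate the contrast: in Python the with statement is fixed grammar, but the protocol behind it is open, so you can define new context managers even though you cannot define new statements; in Clojure, with-open is just a macro you could have written yourself. A sketch, where logged and events are hypothetical names:

```python
from contextlib import contextmanager

# What Clojure's with-open macro expands into (try/finally around a
# body), Python bakes into the `with` statement plus this protocol.
@contextmanager
def logged(name, log):
    log.append(f"open {name}")
    try:
        yield name
    finally:
        log.append(f"close {name}")

events = []
with logged("db", events) as handle:
    events.append(f"use {handle}")

print(events)  # ['open db', 'use db', 'close db']
```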
Parsing is actually a big issue for people who have to read and write programs; that's why most languages need complex rules for operator precedence and programmers have to memorize these rules or risk introducing subtle bugs.
In Lisp, you never, ever have to worry about operator precedence because the order of operations is explicit in the syntax; you just start from the most deeply nested parentheses and work outward. I'd say that's a worthy reward for learning to read code in prefix notation.
I have a similar experience, but it applies to Haskell, too. I understand functional programming just fine, but I can't read the source code at all. I can slowly grind my way through a snippet and work out what it means, but I can't read it. I wonder whether this experience has anything to do with one's comfort reading mathematical notation - I basically can't, and skip over all the formulas when I'm reading a paper. It's at least 100x easier to just infer the principle from a working example.
Interestingly, I like Haskell for exactly the opposite reason: not only can I read it, but I don't have to. Haskell is the best language I know for getting the gist of some code at a glance.
This is the same advantage as mathematical notation has over paragraphs of text: I can get the general idea from a formula or diagram without reading it in detail. In a sense, I can infer the "shape" of the notation, which is what lets me avoid actually reading everything.
Getting to this point with both mathematical notation and Haskell took a lot of practice, but it's well worth it: both notations have exceptional information density and allow me to go through more information faster.
Sometimes, for some completely foreign ideas, I do have to read the mathematical notation/Haskell code in much closer detail. And this does take much longer than you'd expect: a single page can take something like half an hour or more. But this is not much of a surprise: if the notation was expanded to prose, it would take up several pages, and not be an easy read by prose standards either.
It's remarkable how differently adapted our learning mechanisms are. If you gave me the choice between one page of math symbols and ten pages of English prose, I'd take the prose and call it a bargain.
I get a distinctly Perl-esque vibe whenever I try to read Haskell. So many pieces of type-system plumbing with non-alphanumeric names and uncertain precedence.
What is the deal with haskell? It seems it has some sort of weird property where it causes seemingly rational people to invent crazy nonsense whenever they talk about it. I can't even imagine what type system "plumbing" would be, or how having a type system would make code harder to read. Would you mind providing an example so I know what you mean?
'<*>', '>>=', '=<<', '>>>' off the top of my head.
The first has something to do with Functors, the second two something to do with Monads, and the last something to do with Arrows, and I have no idea what their precedence is. The first three all more or less solve the problem of "use a function I'd normally use on values of type `a` on values of type `m a`, where `m` is a Monad or a Functor.
The reasons this stuff makes code hard to read (for me) are:
1. They're all infix and I don't know the precedence.
2. It's not restricted to standard-library code. Oftentimes, learning a new Haskell library involves figuring out which part of the infix line-noise namespace it's staked out for itself.
Applicative and Monad and their operators are really something I would consider required Haskell knowledge. It's not surprising that you can't read Haskell code if you don't know what those are, because they are everywhere by virtue of being such useful abstractions.
Granted, Haskell people seem to have a sometimes unhealthy attraction towards having operators for everything. The Lens package in particular is rather awful in this regard IMO. (Fortunately there are non-operator equivalents to everything)
>'<*>', '>>=', '=<<', '>>>' off the top of my head.
None of those have anything to do with the type system though. That's why I called your statement nonsense. They are just ordinary operators like + and -. The habit of claiming everything you dislike about haskell is somehow related to the type system is quite common and rather bizarre.
>The reasons this stuff makes code hard to read (for me) are
The reason is that you haven't learned haskell. If you had never learned arithmetic then 5 + 3 * 7 would make no sense either. That doesn't mean math is hard to understand, it just means you need to take the time to learn it.
>They're all infix and I don't know the precedence.
Use :info in ghci, or look it up on hoogle. Just like you would with a function you aren't familiar with.
:info (<*>)
infixl 4 <*>
Left associative, precedence 4. Addition is 6, multiplication is 7.
>It's not restricted to standard-library code.
Lots of languages let you write new operators. If a library creates tons of operators that reflects on the library, not the language.
How long have you been using haskell? I had the same sort of thing when I started with it. I can't really remember when exactly that went away, but I gradually just sort of gained haskell literacy over time just from practice.
Most Lisps aren't so great with arithmetic. After all, arithmetic notation has been refined through hundreds of years to be readable and concise, and I've never seen anybody use prefix notation for operators voluntarily. But infix notation can be added easily: this is a quick'n'dirty solution in Clojure with left associativity, no precedence, and many gotchas. Incanter has a much better implementation called $= with fewer gotchas and many more features:
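The same idea can be sketched in Python: a left-associative, no-precedence infix evaluator (the name infix is made up for illustration), which shows exactly the kind of gotcha such a quick solution carries:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def infix(*parts):
    """Evaluate alternating values and operator symbols strictly
    left to right: left associativity, no precedence."""
    acc = parts[0]
    rest = list(parts[1:])
    while rest:
        op, val = rest.pop(0), rest.pop(0)
        acc = OPS[op](acc, val)
    return acc

print(infix(1, "+", 2, "*", 3))  # 9, not 7: "*" gets no precedence
```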
Another common problem is the piping of results from one function to the next:
(f
  (g
    (h a)))
where you usually want to start reading the code in the lower rightmost corner and go upwards towards the left. Very messy in many Lisps.
Clojure has solved that problem with the threading macros, yielding postfix notation when you need it the most:
(-> a h g f)
Piping subresults between functions really doesn't get much better in any language.
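The threading idea translates directly to other languages; a minimal Python sketch (thread_first is a hypothetical name) of what (-> a h g f) does:

```python
# Clojure's (-> a h g f) rewrites to (f (g (h a))); in Python terms
# it is just left-to-right function application.
def thread_first(value, *fns):
    for fn in fns:
        value = fn(value)
    return value

# f(g(h(8))) without reading inside-out:
result = thread_first(8, lambda x: x + 1, lambda x: x * 2, str)
print(result)  # "18"
```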
And that's of course some of the appeal of Lisp: The syntax may start controversial, but you can choose most of it yourself and make it depend on the problem, your tastes and of course, your readers.
Sloppy me. I was of course referring to arithmetic syntax or notation. And you make my point exactly: there are many ways to express arithmetically heavy operations in a natural, readable way.
Yes, many languages support something equivalent, classically object-oriented languages not least:
a.h().g().f()
I feel that prefix notation is one of the greatest impediments to the uptake of Lisps. And I think it's important to underscore that you can (idiomatically) choose prefix, postfix, or infix depending on the character of your problem.
If the code is (unnecessarily) hard to read, then express it differently. The language encourages it.
But when you have objects and no piping operator, library designers will often choose to implement pipe flow as method chaining. See e.g. jQuery, Scala's collections or many ORMs, like SQLAlchemy.
I read more about Lisp than I program in it, and I feel just like you about the language.
Everything looks the same. Some time ago I read that Common Lisp had multiple return values. I was expecting something like Lua, where a function like this:
If you are lucky you have a few :keywords or &keywords so your eyes have something to hang onto. Other than that, everything looks so... positional, and a closing parenthesis could mean anything from the end of a simple expression like (+ 1 1) to the end of the declarative part and the start of the executable part of a complex structure.
You don't even need to go that far to find these problems: just look at:
> Does it just take some serious perseverance? Or am I just Lisp-dumb?
I've had the same difficulty with Lisp, but I can read Forth and Postscript (another postfix language) easily. I am really not sure what that says about how brains get wired other than I encountered both postfix languages early in my career.
You crushed SICP and still aren't extremely comfortable with LISPs? That's worrisome. I'm in the process of working through SICP and hoping that there will be some kind of fundamental change in the way I think about code.
I can tell you that after going through only a few chapters of SICP, it radically changed the way I wrote code. I took the MITx 6.00x programming course afterward and was incredibly uncomfortable using destructive methods (to give an example), even though I used to write and think about code that way.
I am with you on this. I wouldn't want to use this type of syntax. It obviously serves a different purpose than Java or Ruby, though; it might have to do with a different mindset.
I doubt that there is any syntactical advantage though. In the end it is just a matter of opinion.
It takes some time. You have not experienced Lispy syntactical nirvana yet. I was in your place when I started out. I kept on going. Now, even English seems artificially hard compared with s-expressions :D
It took me perseverance. Not that I'm some Lisp god or anything, but I can sure follow 90% of it from reading it. It's like learning Japanese when all you know is English.
Maybe we don't realize how long it takes to acquire a new skill.
If someone had told me it would take a year to just become comfortable with emacs, I would never have picked it up. I already have sublime installed, and textmate!
But I picked it up because I respected people I knew who were wielding it with great skill, and I wanted to try org-mode for TODO lists.