The biggest conceptual shift is to thinking functionally rather than imperatively, so it's going to be similar in both languages. The difference is that Haskell is more thorough, but the fundamental ideas are very similar.
Haskell, of course, has many of its own advantages which are mostly detailed elsewhere. I'm merely going to give you my highest-level take-away from trying Clojure: Haskell has much better facilities for abstraction. Haskell allows you to use types and data representation specific to your domain while still allowing you to take advantage of some very generic libraries. And the custom types are not actually complex, unlike what Clojure leads you to believe with its preference for using the basic built-in types: you construct your types from basically three fundamental operations: combining multiple values into one (like a tuple or struct), giving a choice of values (like a variant or disjoint/tagged union) or creating functions. That's all there is to them!
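To make that concrete, here's a minimal sketch of the three fundamental operations at work (the shapes domain and all names are mine, for illustration only):

```haskell
-- Products combine values, sums offer a choice, and functions tie
-- them together. A tiny, hypothetical "shapes" domain:

data Point = Point Double Double        -- product: two Doubles in one value

data Shape = Circle Point Double        -- sum: a shape is a circle...
           | Rect Point Point           -- ...or a rectangle (tagged union)

area :: Shape -> Double                 -- function: built from the above
area (Circle _ r) = pi * r * r
area (Rect (Point x1 y1) (Point x2 y2)) = abs (x2 - x1) * abs (y2 - y1)
```

Every custom type in this sense is just some nesting of these three constructions.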
Basically, don't be afraid of Haskell, whatever the common wisdom is. As usual with these sorts of things, it's much more common than wise.
I've been using Clojure for about 5 years now. I've actively worked for the past two years on core.logic which has quite a few fancy abstractions (custom types, protocols, macros) that all interact quite nicely together. Now, I'm not a Haskell expert, but I do feel that some of the things I've built could be challenging in the Haskell context. I might very well be wrong about that.
That said, I'm actively learning Haskell as there are some aspects of Haskell I find quite attractive and in particular a few papers that are relevant to my Clojure work that require an understanding of Haskell. My sense is that even should I eventually become proficient at Haskell, my appreciation for Clojure's approach to software engineering would be little diminished and likely vice versa.
Of course, this is nothing at all like macros, but in practice we can achieve many of the same goals while maintaining type safety. So, win win win (the third win is for monads).
EDIT: As dons points out, I was imprecise in my wording. I don't mean to say that TH leads to Haskell programs which are not type safe. The compiler will, of course, type check generated code. In general, given that dons has been programming Haskell for 14 years compared to myself who has been doing it for 1 year, prefer what he has to say on this subject.
How does TH sacrifice type safety? The generated code is type checked.
For actually customizing syntax via macros, quasi quotation is the popular approach, combined with TH. People can, e.g., embed JS or ObjC syntax fragments directly into Haskell this way. http://www.haskell.org/ghc/docs/latest/html/users_guide/temp...
For logic programming: you might find Control.Unification interesting -- and of course if you don't need unification then you can go pretty far with the plain old List monad.
Protocols/multimethods: Haskell typeclasses are really really powerful. Probably my favourite of all the approaches to polymorphism I've seen anywhere, although YMMV (they do lean heavily on the type system.)
Macros: there is Template Haskell, which is pretty interesting, although I can't claim to have used it myself. It doesn't quite share the simplicity of macros in a homoiconic dynamic language, but at the same time in Haskell it feels like you don't need to lean on compile-time metaprogramming as much as you do in a Lisp to achieve power. (Can't quite put my finger on why).
Monad transformer = a thing that can transform a monad into another one with additional primitive operations. Monads are more or less the Haskell way of achieving something akin to overloading the semicolon in imperative languages. In imperative languages the meaning of the semicolon is baked into the language. It seems so natural that thinking about what it means feels stupid; normally it's just "do this, changing the program state by assigning some values to some variables, then do this with this new state", but if you think harder, it can also mean "do this and skip the rest of the loop" (if the first statement is a break) or "do this and then skip a bunch of levels up the call stack until you find an appropriate handler, unwinding the stack in the process" (if it's a throw). Haskell doesn't have any of this baked in, but in a way allows you to define the semantics of the semicolon yourself.
So monad transformers then allow you to build up more complicated semicolon semantics by combining simpler ones (and getting the full set of their primitive operations, such as assignment, break or throw statements in the previous examples).
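As a small illustration of "defining the semicolon yourself" (my own sketch, not from the post): in the Maybe monad the semicolon means "do this step, and if it failed, skip everything that follows":

```haskell
import Text.Read (readMaybe)

-- Each "statement" may fail; a failure short-circuits the rest,
-- much like a break or throw in the imperative examples above.
parseSum :: String -> String -> Maybe Int
parseSum sx sy = do
  x <- readMaybe sx     -- "do this, then..."
  y <- readMaybe sy     -- "...do this" -- unless the previous step failed
  return (x + y)
```

If either parse fails, the whole computation is Nothing without any explicit error checks in between.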
Logic programming, in its simplest form, is concerned with deducing "goal" facts of the form C(X), from a bunch of rules of the form "A(X) and B(X) and ... imply C(X)" and other facts of the form "A(X)". One way you can do this is look for all the rules with your goal fact as their conclusion, then look how you can derive their premises, and so on. Which essentially boils down to a backtracking search.
So what Kiselyov et al did was to implement some primitive operations and overload the semicolon in a way which makes it easy to perform a backtracking search. Or more precisely, since it's a monad transformer, they figured out a way to add these primitives to any set of existing ones (such as assignment primitives for instance). Their implementation also provides some interesting backtracking operations which can be tricky to implement (the aforementioned fair disjunctions, negation as failure and pruning). And it is efficient since it's based on continuations (which are just functions of a certain format), as compared to other approaches which first have to represent the target program as data ("reify" it), then define an interpreter for that data and finally run the interpreter.
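The plain list monad already gives the flavor of this (a sketch of mine, far simpler than the continuation-based LogicT implementation described above): each bind enumerates choices, and the empty list means "fail and backtrack".

```haskell
-- Backtracking search for Pythagorean triples via the list monad.
pythagorean :: Int -> [(Int, Int, Int)]
pythagorean n = do
  a <- [1 .. n]                  -- choice point
  b <- [a .. n]                  -- choice point (b >= a)
  c <- [b .. n]
  if a * a + b * b == c * c
    then return (a, b, c)        -- success: yield a solution
    else []                      -- failure: backtrack to the last choice
```

LogicT adds the fair disjunction, negation and pruning operations on top of this basic search, and does so as a transformer over any underlying monad.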
One of my frustrations with learning Haskell is that everybody assumes that I am coming from an imperative programming background. I've toyed with Python a wee bit, but I've really only ever used functional languages (Scheme, R, Clojure) and have only ever programmed in a functional style. Needless to say, I have no clue what the semi-colon is supposed to do in imperative languages. As I have been told many times, knowing functional programming lets you start at 2nd base with Haskell...but no further.
Thanks for the attempt. I'll try again after I'm done with RWH.
I hope that helps decrypt the monad explanation part of my post somewhat. I wish I could give one which better relates to your background, but of the three languages you mention, I've only had very superficial exposure to Clojure. And you've also probably heard it before, but I would recommend LYAH over RWH as the first Haskell text.
What they're talking about is statements. If you have a completely pure functional language, no function should ever have more than one expression. If the function had two expressions that would mean one of them did something and the result was thrown away. But if you don't have side effects, why would you have an expression that does something and then throws the result away?
In Haskell any function can only have one expression. So if you need to do several steps (i.e. you need side effects) you have to use do notation. Do notation is syntactic sugar for turning what appear to be several statements into one statement.
If you use the "curly brackets" way of coding then you would separate these statements with semicolons (tada!), but most code is whitespace-significant so they just use one line per statement.
So what `do` does is take the code you wrote on those lines and determine how to combine it. If you're using `do` then you're using a monad of some kind (a container, basically). Given the container and the contained, you can do two kinds of operations: "side effect kinds" that appear to do a statement and ignore the result (these actually work with the container) and expressions.
The interesting thing about this kind of code is you don't keep track of the container. Sometimes you don't see any trace of it at all except in the type signature. There will be functions that talk to the container but they don't appear to take the container as a parameter (which is good since you don't have a handle to it anyway). Behind the scenes do is setting up the code in such a way that the container is actually passed to each of your expressions so your "side effects" aren't side effects at all, they're modifications to the invisible (to you) container.
And each line is fused together by using one of two functions the container defines. So this is where the power comes in. How the container chooses to put these lines together depends on what the container actually is. An exception container, for example, might provide a special function (called, say, "throw") that will call each expression (functions by the time the container sees them) unless one of them calls its special "throw" function at which point it doesn't execute any further expressions and instead returns the exception it was passed.
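A minimal version of that exception container can be written with Either (throw, safeDiv and calc are names of mine, for illustration):

```haskell
-- Either String plays the "exception container"; a Left short-circuits.
throw :: String -> Either String a
throw = Left

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = throw "division by zero"
safeDiv x y = return (x `div` y)

calc :: Either String Int
calc = do
  a <- safeDiv 10 2     -- lines are fused by the container's bind
  b <- safeDiv a 0      -- this one "throws"...
  return (a + b)        -- ...so this line is never executed
```

Note that calc never mentions the container explicitly; the plumbing that skips the remaining lines lives in Either's bind.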
I don't know if that makes things better or worse. :)
Much better! Thank you :-)
I think you've also managed to prod me a little closer to understanding monads -- I always feel most tutorials are too complex, or too theoretical. Relating monads and the semicolon gave my brain a nice "hint" towards understanding them better, I think.
Composition is key in functional programming, and when you have two functions a -> b and b -> c, there is no problem... You can just compose them and get a -> c.
Using the List monad as an example, consider the case when you have a -> [b] and b -> [c]... You have a function producing bs but you can't compose it with the other function directly because it produces a list. We need extra code to take each element of the source list, apply the function, and then concatenate the resulting lists.
That operation becomes the "bind" (>>=) of our Monad instance. Its type signature in list's case is [a] -> (a -> [b]) -> [b], which basically says "Give me a list and a function that produces lists from the elements in that list, and I will give you a list", but that is not so important. The point is that you start with a list and end up with another list, which means that by using bind, you can compose monadic operations (a -> [b]) indefinitely.
The "return" operation completes the picture by letting you lift regular values into the monad, and when return is composed with a regular function a -> b it transforms it into a -> [b].
The more generic form of a monad in Haskell is "m a" which just means a type constructed from "a" using whatever "m" adds to it (nullability, database context, possible error conditions etc.)
As you can see from the type signature, "a" is still there. Monadic bind and return allow composing existing a -> m b and a -> b functions, and this abstraction is made available through the Monad typeclass.
Note that for lists there's actually another sensible way to combine them that behaves differently (zipping the lists together instead of taking all combinations). Since you can't have two instances of a typeclass for a single type, it's defined for a wrapper type "ZipList" instead (and only as an Applicative, since the zipping version doesn't satisfy the monad laws).
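Here is the list bind from above in action (the example functions are mine):

```haskell
-- divisors :: a -> [b] and negations :: b -> [c]; (>>=) supplies the
-- "apply to each element and concatenate" plumbing described above.
divisors :: Int -> [Int]
divisors n = [d | d <- [1 .. n], n `mod` d == 0]

negations :: Int -> [Int]
negations x = [x, negate x]

both :: Int -> [Int]
both n = divisors n >>= negations
```

And return just wraps a value in a one-element list, so `return 6 >>= divisors` is the same as `divisors 6`.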
In general, I'd even say that Haskell provides very little syntactic sugar, and the stuff that it does provide is both quite useful and rather understandable. Examples being list comprehensions, or even list syntax to begin with, where [1,2,3] is sugar for 1:(2:(3:[])). Yes, the do-notation (which is what you normally use with monads) is also sugar, but it's not the difficult part of understanding monads. The difficult part is understanding the various bind operators (or equivalently, join or Kleisli composition) and how exactly they model different computational effects, and lastly forming the "bigger picture" from the examples.
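Written out by hand (a quick sketch of mine), those desugarings look like this:

```haskell
-- List syntax is sugar for cons cells; comprehensions are sugar
-- over the list monad.
sugarFree :: Bool
sugarFree = [1, 2, 3] == 1 : (2 : (3 : []))

squaresOfOdds :: [Int]
squaresOfOdds = [x * x | x <- [1 .. 4], odd x]

-- Roughly what the comprehension above desugars to:
desugared :: [Int]
desugared = [1 .. 4] >>= \x -> if odd x then [x * x] else []
```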
> So I guess you could call lack of semicolons syntactic sugar for semicolons.
(I believe that is fixed now, however. I still remember it -- contributing to a somewhat irrational fear of Haskell. :-))
Lazy evaluation. Much of what you use Lisp macros for is to control when/if arguments get evaluated. Haskell just works this way so you don't need macros for it.
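For instance, a control structure that would require a macro in a strict Lisp is just a function in Haskell (sketch of mine):

```haskell
-- The branches are only evaluated on demand, so passing them as
-- ordinary arguments is safe -- no macro needed to delay evaluation.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

safe :: Int
safe = myIf True 42 (error "this branch is never forced")
```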
If there are some people out there who are well versed in both Haskell + Clojure, I'd love to hear some insight into where Clojure shines, and where you find yourself saying "Clojure is better at this!" in some distinct and meaningful way.
Also I think it really is easier to get started with Clojure than with Haskell, even for the (very well thought out!) concurrency primitives, though obviously that doesn't matter if you're already up and running with Haskell.
I use Clojure exclusively at my job, but for some things I really miss something like Haskell's type system, even though I freely admit I don't really understand the type system at its higher levels (specifically related to GADTs, existential types, type families, etc.). Applicative functors would make some things so much nicer.
and you don't have to understand dependent typing to do it, either. I know there are heterogeneous list implementations for Haskell, but I can never understand how they work.
You can have heterogeneous collections in Haskell, you just have to mean to. :) Check out the "Obj" example here 
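For reference, the usual existential trick looks roughly like this (my own sketch in the spirit of that "Obj" example; the names are made up):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Each element may have a different type, as long as it supports
-- the promised interface (here, Show).
data Showable = forall a. Show a => MkShowable a

heteroList :: [Showable]
heteroList = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

render :: [Showable] -> [String]
render xs = [show a | MkShowable a <- xs]
```

Once inside the wrapper, the only thing you can do with an element is what the constraint allows, which is exactly the point: the heterogeneity is bounded by an interface.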
A few comments:
> Well, the Clojure repl is way better.
I've messed around with it, not enough to know, but it wouldn't shock me. `ghci` is pretty decent, but it's not really world-class at anything. IPython is probably the best repl I've ever used for any language.
> Not being pervasively lazy makes it easier to reason about many things.
Agree. This is, indeed, problematic at times.
> Not strictly boxing IO in the IO monad makes it easier to debug (debug by println is still useful!).
Well, Debug.Trace gets you most of what you want there (using unsafePerformIO under the covers), so there is an "escape hatch" for doing printf-style debugging. But it's not quite as smooth as printf in languages that don't sandbox purity so much. On balance, I'd still take the purity (because I do a lot less debugging!), but point granted.
> Macros are much easier to understand than Template Haskell
No experience with macros, but since lisp-like things are so big into macros, sounds plausible.
> and since you don't have real typing you can have heterogeneous collections, or values where the type of one part depends on the value of another part, easily.
I'm not sure I view this as a /virtue/, to be honest. I'd rather have the type checking, and use typeclasses or ADTs to put mixed types into a sequence.
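For example, a plain ADT recovers the mixed collection while keeping the checker happy (the names here are mine):

```haskell
-- A sum type as the element type of the "heterogeneous" list.
data Mixed = I Int | S String | B Bool

describe :: Mixed -> String
describe (I n) = "int: " ++ show n
describe (S s) = "string: " ++ s
describe (B b) = "bool: " ++ show b

bag :: [Mixed]
bag = [I 1, S "two", B True]
```

The case analysis in describe is exactly the part the compiler can now check for exhaustiveness.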
> Also I think it really is easier to get started with Clojure than with Haskell,
Yeah, I think this is undoubtedly true. Haskell veterans often say "what's the big deal?", but the deal is big. And it's not so much because of "math", in the classical sense, as much as it's about very high, very new, abstractions. Many of them without analogies to things you've done before or things in the "real world". I did OCaml for a while before I did Haskell, and Haskell was still a pretty big leap.
Anyway, thanks again for the constructive feedback.
With Clojure I have access to Swing (yeah I know...Swing isn't everyone's favorite) without installing any additional libraries. Then there is seesaw which is a very nice layer on top of Swing. To install seesaw I simply edit a text file in my Clojure project and the project management/automation system installs it for me. Very nice.
Discussions like this one convince me that we have advanced our programming techniques over the last decade or so. After all, we could be arguing about semicolons, the GOTO statement, or other similarly important issues.
It's a nice feeling.
In the abstract, possibly, but not in practice. You can't ignore the effect the general bent of the community has on the language and the code that is idiomatic in the language. People tend to describe Haskell code in mathematical terms, and libraries use operator overloading and the like in mathematical analogies.
Many of the people in the community do not know much of the math beyond the terminology they learned in Haskell. That's certainly what I gathered talking to people at a local Haskell meetup, and it matches my impression of many of the people online (e.g. in the mailing list or on StackOverflow).
Basically, Haskell itself is pretty easy. The language is small, simple and elegant. But the libraries are massively, massively complex and do things in ways that are very often not at all intuitive if you're not familiar with the math behind them. That's what people mean when they say it's too heavily based in math.
This is not to say that you can't use Haskell in the real world, but it has some attributes that make it daunting for many people. I don't think this a bad thing, either — not everything should be for everyone. Variety is the spice of life.
I'm just a beginner Haskeller, but this seems like a pretty bold (and false) claim. I've used Parsec for writing a Scheme (while reading the book), Shelly and Shake for build and test automation, and some statistics libraries for, well, statistics. They were all intuitive to use, and there was no "impenetrable morass of esoteric abstractions". I'm not sure what you're on about. Maybe provide examples?
you <*$%^> everyone %%<#>%% experience
I believe the canonical example of a weird math-based library that makes beginners cry is Control.Arrow. Reading arrow-based code without having fully grasped both the idea of arrows and the API itself (do you think <&&&> could use a few more ampersands?) is an exercise in frustration. Even the humble monoid — simple as it may be — is hard for many people to grasp, because they're such a frustratingly generic idea with horribly unhelpful terminology attached to them.
Want more? Here's one of the top Haskell projects on Github — a library that does lenses, folds, traversals, getters and setters: https://github.com/ekmett/lens
With that library, you can write things like
_1 .~ "hello" $ ((),"world")
atom = many digit
<|> between (char '(') (char ')') expression
atom = many digit
`or` between (char '(') (char ')') expression
The operators let you see the structure of an expression at a glance. You don't have to read the whole thing to figure out what's going on. Moreover, the <|> symbol is pretty intuitive--it's clearly based on | which usually represents some sort of alternation.
> sanderjd: I do think Parsec would be better off if "<|>" were something like "parsecOR"
Do you think perhaps some people think about functions visually while others think about them auditorily?
An IDE could easily render function names however the human reader wants. The option whether to render function names as names or symbols would be customizable, just like the display colors for various element types are.
Within each option, there's further choices. For names, there could be a choice between various natural languages. For symbols, it could be restricted to ASCII e.g. <|>, or full Unicode, e.g. | enclosed by a non-spacing diamond.
\ a b c -> a >= b && b <= c || a /= c
λ a b c → a ≥ b ∧ b ≤ c ∨ a ≢ c
> Doing it more generally would require the author of the initial function to come up with multiple different names for it, which sounds unlikely.
Unlikely for now, but could become more likely in the future.
I will say that in my opinion, the alternative in the reply above this (which I can't reply to) looks fairly nice, is clearer, and not even a little bit harder to follow. But I'll allow that as a very casual Haskell programmer, my opinions about Haskell are probably not particularly valid.
Haskellers like to use symbols for two-argument functions because of precedence rules. You learn them for the modules you're using just as you would have had to learn the function names. But usually seeing how the symbol is named combined with its type signature is enough.
As far as lenses, I personally don't like them so far. But not because of the (admittedly, questionably named) functions but rather that doing anything in lenses seems to require template magic.
"Frege is a non-strict, pure functional programming language in the spirit of Haskell ... Frege programs are compiled to Java and run in a JVM. Existing Java Classes and Methods can be used seamlessly from Frege."
For large projects I would go with Scala/Haskell, and for front-end systems I would use Clojure.
Why do it this way? With Clojure, you can easily modify data or small pieces of logic. Simply edit the script, no recompile needed. With the Scala/Haskell API or library, you probably need something that changes less frequently. That backend system may act as a core piece of your library.
And if you don't like that, you can do Python and Java/C#, which can give you the same effect.
This is where Clojure has a slight advantage because you can tap into the Java ecosystem for stuff that isn't currently in the Clojure world.
Database libraries are a good example of this; database support in Haskell is still pretty flaky and cumbersome to set up in comparison to JDBC.
Haskell does have a bunch of advantages though. A pretty petty one is syntax: Haskell's is simpler, prettier, more consistent and yet more flexible. They're very similar, of course, but I think Haskell gets all the little details right where OCaml often doesn't.
The single biggest advantage Haskell has--more than enough to offset the module system, I think--is typeclasses. Typeclasses subsume many of the uses of modules, but can be entirely inferred. This moves many libraries from being too awkward, inconvenient and ugly to use to being a breeze. A great example is QuickCheck: it's much more pleasant to write property-based tests in Haskell because the type system can infer how to generate inputs from the types. Being able to dispatch based on return type is also very useful for basic abstractions like monads. Beyond this, you can also do neat things like recursive typeclass instances and multi-parameter typeclasses.
Honestly, if I was forced to pick a single Haskell feature as the most important, I would probably choose typeclasses. They're extremely simple and a foundation for so much of Haskell's libraries and features. Most importantly, typeclasses are probably the most obvious way the type system goes from just preventing bugs to actually helping you write code and making the language more expressive, in a way that a dynamically typed language fundamentally cannot replicate.
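Dispatch on return type, mentioned above, looks like this in miniature (a hypothetical Default class of my own, not a standard library one):

```haskell
-- The instance is selected purely by the type the caller expects --
-- something a dynamically typed language has no information to do.
class Default a where
  def :: a                        -- no arguments: chosen by the result type

instance Default Int  where def = 0
instance Default Bool where def = False
instance Default [a]  where def = []

orDefault :: Default a => Maybe a -> a
orDefault (Just x) = x
orDefault Nothing  = def
```

QuickCheck's Arbitrary and the Monad class's return work on the same principle, just with richer instances.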
Laziness in Haskell is not really a big deal. It can make much of your code simpler, but it also makes dealing with some performance issues a bit trickier. It also tends to make all your code more modular; take a look at "Why Functional Programming Matters". I am personally a fan of having laziness by default, but I think it's ultimately a matter of philosophy.
And philosophy, coincidentally, is another reason to learn Haskell: it takes the underlying ideas of functional programming further than OCaml. In Haskell, everything is functional and even the non-functional bits are defined in terms of functional programming. Many "functional" languages are actually imperative languages which support some functional programming features. OCaml is one of the few that goes beyond this, but there is really a qualitative difference when you go all the way.
Once you know that everything is functional by default, you can move code around with impunity. You no longer have to worry about the order code gets evaluated in or even if it gets evaluated at all. You also worry far less about proper tail calls, which mostly compensates for having to deal with some laziness issues.
Haskell also embraces the philosophy in another way: you tend to operate on functions like data even more than in OCaml. The immediately obvious example is that Haskell has a function composition operator in the Prelude. There is no reason for OCaml to not have this, but--as far as I know--it doesn't. It's an illustration of the philosophical differences between the two languages. On the same note, Haskell also tends to use a whole bunch of other higher-order abstractions like functors and applicatives.
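For the record, that composition operator is (.) in the Prelude; a tiny example (the function names are mine):

```haskell
import Data.Char (toUpper)

-- Pipelines built by composing functions, point-free style.
shout :: String -> String
shout = (++ "!") . map toUpper

firstWordLength :: String -> Int
firstWordLength = length . head . words
```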
So I think the most compelling difference is ultimately fairly abstract: Haskell just has a different philosophy than OCaml. This ends up reflected in the libraries, language design and common idioms and thus in your code. I think that by itself is a very good reason to learn Haskell even if you know OCaml; and since you do, picking Haskell up will be relatively easy!
Edit: It's great that you mention OCaml's module system too; my professor probably mentions the benefits of it in just about every lecture.
(IIUC because OCaml licensing is not very welcoming to open source efforts).
I'm very good at picking losers... (Ocaml, D, Factor..)
Also I've never seen Haskell's FFI work - it's just too much of a pain. Clojure's Java integration works right out of the box.
For what it's worth, I've found Haskell's FFI very pleasant for interfacing with C (and the reverse looks just the same, but I haven't tried). You then interface with any other language through C, since most can FFI through C anyway. I think it's a very reasonable solution!
I guess this just shows that Math PhDs don't know how to use the fine profiler: http://stackoverflow.com/questions/15046547/stack-overflow-i...
Which would have saved about 5hrs 50mins...
Then you go on to describe what you find good about Haskell? That sounded weird.
Custom data types are not always complex, but efficient and useful data types often are. Clojure's data structures have very good performance for immutable data structures. Clojure's maps, for instance, have what effectively amounts to an O(1) lookup, while Haskell's Data.Map is a size balanced binary tree with only O(log n) performance.
So O(log n) then. We obey gravity around these parts.
I presume you are referring to HAMT-like structures, such as found in http://hackage.haskell.org/packages/archive/unordered-contai... which are by no means unique to Clojure.
Besides simply having the data type, you still need a good allocator and GC optimized for immutable data, which is where GHC stands alone - http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...
Which is of no practical difference to O(1) if log n is always very small.
> I presume you are referring to HAMT-like structures... which are by no means unique to Clojure.
Of course they're not. My point was that it is not trivial to derive an efficient and useful data structure like a HAMT from Haskell's type system. The data structures in a library like unordered-containers are in practice just as opaque to the developer as the core data structures in Clojure.
This highly optimized data type is defined in 6 lines, easily accessible from the docs.
Really? Because SPJ, for one, seems to agree with it.
Just that it's not in the perfect practical form that a real world language would have.
It is, of course, difficult to argue with zealots, but I will try nevertheless.)
The cost of what is called an "advanced type system" is the inability to put elements of different types in the same list or tuple or whatever. It is not just a feature, it is a limitation. In some cases, when you are, for example, dealing only with numbers, say, positive integers, it is OK, but then what is the real advantage of such type checking?
On the other hand, the concept of a pointer, which all those "packers" are trying to throw away, is very mind-natural. When we have a box we can put anything in it, as long as it fits. Imagine what a disaster it would be if, when moving from one flat to another, you had to pack stuff only into special kinds of boxes. This one is only for such kinds of shoes, this one is only for spoons, that one for forks. So, with a pointer we can "pick up" anything, and then decide what it is and where it goes.
Another mind-friendly concept is using symbols and synonyms to refer to things - symbolic pointers. It is how our minds work. Those who are able to think in more than one language know that we can refer to a thing using different words, but the "inner representation" is one and the same.
These two simple ideas - using pointers (references) and having data ("representation") describe itself (type-tagging is another great idea - it is labeling) - give you a very natural way of programming. It is a mix of the so-called "data-directed" and "declarative" and, as long as you describe transformation rules instead of imperative step-by-step processes, "functional" styles.
Of course, the code will look a certain way - there will be lots of case analysis, like unpacking things from a big box - oh, this is a book, it goes on a shelf; these are computer speakers, they go on the table, etc. But that's OK, it is natural.
The claim that "packing" is the best strategy is, of course, nonsense. Trying to mimic the natural processes in our mind (as lousily as we might do it) makes, in my opinion, some sense.
There are some good ideas behind each language and some not so good. Symbolic computation, pattern matching, data description (what an s-expression, or YAML, is) are good ones. Static typing, describing "properties" instead of "behavior" - not so.)
Also it is good to remember that we're programming computers, not VMs. There is something called the "machine representation" which is, well, just bits. That doesn't mean swinging to the other extreme and programming in assembly, but it is advisable to stay close to the hardware, especially when it is not that difficult.
Everything is built from pointers, whether you like it or not.) The idea of throwing them away is insane; the idea (a discipline) of not doing math on them is much better. The idea of avoiding over-writing is a great one - it is good even for paper and pencil, where everything becomes a mess very quickly - but avoiding all mutation is, of course, madness.
So, finding the balance is the difficult task, and Haskell certainly isn't it. Classic Lisps came close, but it requires some skill to appreciate the beauty.) So, the most popular languages are the ugliest ones.
I can't say I managed to completely understand the argument you're making here, but doing mostly Java/Python for work, I don't remember the last time I had to write a heterogeneous list. At worst, you can always go for existential types.
In my opinion, the singly-linked-list data structure as the foundation of Lisp was not selected by accident. It is also the most basic and natural representation of many natural concepts - a chain, a list. You can ask for the next element, and find out that there are no more. Simple and natural. Because of its simplicity the code is also simple.
Such lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence. As far as I know, Python's lists actually are dynamic arrays.
Sequences of elements of the same type (same storage size and encoding) with a marker at the end can also be viewed as homogeneous lists. A C string processed as a list of characters is the canonical example.
Now consider the UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes without reading, it is not a list either. But nevertheless it can be considered and processed as a stream, until an EOL marker is reached. This is why it was invented at Bell Labs - to keep things as simple as possible.
Now, you see, the concept of a homogeneous list from math is not enough for CS, and sometimes it is much better to keep it fuzzy. What a list is is a matter of point of view.
I think I will keep my dealer.)
Yes, that's a recursive data structure well-suited to functional languages, same as Haskell lists.
> Such lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence.
From a memory point of view, maybe, but that's really an implementation detail. If you take, e.g., Perl arrays, which you can access by index, push, shift, unshift, and pop, the actual implementation is invisible to the programmer, just as it should be in this sort of language. Java will happily store cats and dogs in an array of Object, for instance.
> Now consider the UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes without reading, it is not a list either. But nevertheless it can be considered and processed as a stream, until an EOL marker is reached. This is why it was invented at Bell Labs - to keep things as simple as possible.
You could very well store it as a list of bytes. This wouldn't be terribly efficient, but it's perfectly doable. Whether you process it as a stream or have it stored in whatever array/collection is orthogonal to the fact that your language of choice supports heterogeneous lists. You can do stream processing in Haskell too (with, e.g., pipes). You also have access to other data structures which are not lists, but which do expect to be homogeneous.
As for lists, as long as your next element is not always in the next chunk of memory, you need a pointer. A chain of pointers is a singly-linked list. This is the core of Lisp and it is not an accident. Together with type-tagging, you can have your lists heterogeneous, as simple as that.)
This is part of the beauty and elegance of Lisp, in my opinion - a few selected ideas put together.