Blub Paradox (2014) (c2.com)
39 points by tosh on Jan 8, 2019 | 61 comments



This article discusses the Blub Paradox essay and some of its shortcomings.

The original essay is one of the few texts that stuck with me. I read it not as an affirmation that some languages are strictly superior to others (in particular lisp) but as a push to try new paradigms and ideas even if I cannot see, from my current point of view, how they could improve on my current practice. It is one of the reasons that had me trying Rust this year.


If iOS had required Lisp instead of Objective-C, then Lisp would be popular right now. Ruby really needed Rails to get people to try it.

I think the killer use is more an explanation than anything else. Java also showed that a good, well-funded marketing campaign does wonders.


> If iOS had required Lisp instead of Objective-C, then Lisp would be popular right now.

It could have dragged iOS's popularity down with it.

Pushing, promoting or forcing something is not enough for it to become popular and widely used.


Given the hype and the company, there was no way Lisp would have dragged it down. The only way for it to affect iOS would have been to provide a worse environment than web apps. People would have found a way just as the reluctant found a way with Objective-C.

Ask Sun about Java and their promotional campaign. When a teacher at a community college in North Dakota gets a call and is sent a book to help him decide to teach their language, you can be very effective at pushing.


> People would have found a way just as the reluctant found a way with Objective-C

But the people most likely to develop iOS apps at the start were probably already familiar with Objective-C, I think, due to it being the favoured app language on OS X, no?


I would have thought the same thing, but if you look at a lot of the early app developers, they did come from other platforms. It was easier for an OS X developer, but there were a lot of new people who complained about Objective-C but just got on with it. A lot of early tutorials are really geared to the bigger group of new developers.

On a side note, I really do believe the people behind Swift hate Objective-C and a lot of the old complaints are "solved" by Swift.


> If iOS had required Lisp instead of Objective-C, then Lisp would be popular right now.

Probably. I wonder if we wouldn't have ended up with more Xamarins instead.


Maybe, but one shouldn't discount the amount of documentation that Apple pushes. Look at Objective-C programmers in the age of Swift. It's rather hard to be on the outside of the preferred language.


Well, I think that's largely due to the ability to use cute emojis as variable names.


I think a more believable alternative history would have been Dylan, which Apple had already created for the Newton, and which was a Lisp under a more traditional syntax layer.


Or iOS would never have taken off, because no one would have wanted to write apps for it.


Yeah, there is this weird idea on HN that any language can become popular if some big ecosystem adopts it early enough. The reality is more complicated, of course. Familiarity, simplicity, and learning curve all matter much more than people want to believe.


If Steve Jobs, when announcing the iPhone SDK, had said that Apple's way forward was Lisp and how great the Meta-Object Protocol[1] was for the future, developers, regardless of their beliefs, would have gone and made Lisp popular. Some things have the killer use.

Look at JavaScript: it is not a good enough language to have caught on without Netscape putting it into the browser. It had the killer application that you were basically forced to use. I love prototype-based inheritance, but even NewtonScript was a better language.

[1] Steve Jobs's videos for NeXT explaining objects were pretty amazing


> C++ and Scheme are examples of (near) parity, in Graham's terms, anyway because of C++ templates and Scheme macros doing nearly the same things

This one sentence to me is an indication that the author has completely missed the point. Templates and macros do the "same thing" only in the sense that all languages do the "same thing" at some level of abstraction, that is, model Turing machines. What matters is not what languages "do" when they are compiled, but in how they bridge the "impedance mismatch" between TMs and human brains. And in that regard, templates and macros are about as different as two language constructs can possibly be.
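To make the difference concrete, here is a minimal sketch of my own (not the article's example) of what a macro does that a template cannot: it receives the caller's unevaluated source expressions as plain list data and rebuilds them before compilation.

  (defmacro swap! (a b)
    ;; a and b arrive as s-expressions, not values; we splice them into
    ;; freshly built code. gensym avoids capturing a caller's variable.
    (let ((tmp (gensym)))
      `(let ((,tmp ,a))
         (setf ,a ,b)
         (setf ,b ,tmp))))

(swap! x y) expands at compile time into the let/setf dance above; the macro manipulated the forms x and y as data. A C++ template can generate code parameterized by types and values, but it never gets to take apart and rearrange the expressions at its call site like this.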


I really hate these wiki-articles. (Not Wikipedia, articles like this one.) The conversational back-and-forth gets confusing when it goes for a while. And the article goes on long wandering digressions that make it hard to see any actual points that the article had.

If you want to actually say something, don't do it like this.


Is this some kind of Dunning-Kruger effect?

Anyway, I don't think it's the right mental picture, and thus the right context for the paradox. Language power, I believe, cannot be defined along one dimension. What is more powerful, Java or SQL? Will SQL benefit from encapsulation, or will Java benefit from window functions?


The linked article is not the original Blub paradox essay and does highlight this problem (the first element in the "A few problems with the BlubParadox" list).


The general concept becomes much more interesting when applied to spoken languages.


Am I the only one annoyed by the lack of scientific rigor in arguments like these?

PG: "It is more efficient to write software in Lisp." Others: "Nah, it doesn't matter."

But no empirical evidence has ever been presented...


There is some empirical evidence. I can't find it now, but I've seen research showing Lisp beating all languages in terms of speed of creating some programs from scratch, by both experienced Lisp developers and students new to the language. Obviously there is no research for large projects; those can only be studied through things like GitHub data, etc.


http://norvig.com/java-lisp.html

Start there; it has more links to details on the study and follow-ups.


Thanks for the links! Of those, the only one I can find dealing with development speed in Lisp vs other languages is http://www.flownet.com/gat/papers/lisp-java.pdf and that study is flawed due to the self-selection of test subjects. Given 50 years of http://wiki.c2.com/?SmugLispWeenie the amount of empirical results is very low.


Ulrich Stärk's empirical research (no Lisp):

https://web.archive.org/web/2017/https://www.plat-forms.org/

It is hard enough to get study participants for the common programming languages, I deem it nigh impossible for the obscure ones.


It's like... when the aliens in "Arrival" (or Ted Chiang's "Story of Your Life") speak Python but you only speak R. Different languages make you think differently.


It's kind of interesting that in the 2001 essay that coined Blub, the paradox was suggested as a hypothesis as to

>And if Lisp is so great, why doesn't everyone use it?

Suggesting that programmers in a Blub language don't realise its power. I get the impression, in the 18 or so years since, that the main reason for the low uptake of Lisp is that its power enables people to write clever code that other programmers have quite a job to understand, while languages like Python let you do much the same stuff while being easier to read.

So it's not that people don't realise the power so much as they don't want it.


I agree and will also add that pg seems to have missed some other ingredients of a language's power, especially libraries and tools.

Python and Ruby (and Perl before them) got big not just as “easy” languages, but as languages that get the job done quickly thanks to the massive amount of libraries.

And while Java, C, and C++ may not be the most elegant of languages, the amount of tools to inspect the code, find bugs, prove properties about the code, safely refactor the code, and test it from the unit level to the level of an ecosystem is what keeps some programmers interested in those languages.


Libraries & tools are not aspects of a language's power, but rather of an ecosystem's. A language can make it easier or harder to write libraries & tools, but libraries, properly speaking, postdate the language itself.


There is a large correlation between some language and ecosystem features, so you cannot separate them that easily.


Lisp isn't really a language so much as a language construction toolkit, because macros are like language features (e.g. any macro you're using needs dedicated support from any tools that you want to use on your codebase).

Effective lisp organizations will generally use a small handful of general-purpose macros throughout their codebases - but at that point you might as well standardise that handful of macros as a language in its own right and you'd have better tool support, better interoperability between libraries (since you don't end up with different libraries choosing slightly different macros for the same thing) and so on.

Also lisp is a blub language compared to a language with a modern static type system. It's not that people don't want lisp-like expressiveness, but it turns out other languages can offer more without compromising on that.


Static languages are still complete blubs compared to lisps as far as practical metaprogramming goes. And Common Lisp's type system is much less of a blub in that regard.


> Static languages are still complete blubs compared to lisps as far as practical metaprogramming goes.

Not convinced; many modern statically-typed languages offer macros or equivalents (and also offer alternative ways to achieve most or all of the headline use cases). Metaprogramming in those languages is substantially more code-level work than in lisp, certainly, but even lisp users tend to treat macros as something expensive (because even though custom macros are cheap in terms of code cost, they're expensive in terms of reader (and tool) comprehension): the standard advice is not to write a macro unless there is no alternative, and using lots of specific custom macros in each code area is regarded as poor style. So in practice developers in modern static languages use macros in much the same way as lisp users.

> And Common Lisp's type system is much less of a blub in that regard.

Disagree; if you don't have types that are reliably accurate and enforced at compile-time then you gain very few of the advantages, counterintuitive as it is.


I've yet to see a "modern" (statically-typed) language with random syntax whose macro-like facilities are actually usable by mere mortals. My observation is that in practice Lisp programmers, even with the good rule of thumb of not using macros where functions suffice, still use way, way more macrology. Good macros are just the opposite of reader burden - they tame complexity.
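Here's the kind of thing I mean - a hypothetical sketch, a few lines that fold a timing-and-cleanup pattern into one readable form at every call site:

  (defmacro with-timing (label &body body)
    ;; Wrap any body with timing plus guaranteed reporting; gensym
    ;; keeps the start variable from capturing anything in body.
    (let ((start (gensym "START")))
      `(let ((,start (get-internal-real-time)))
         (unwind-protect (progn ,@body)
           (format t "~a took ~a ticks~%" ,label
                   (- (get-internal-real-time) ,start))))))

  ;; usage: (with-timing "sort" (sort (copy-seq data) #'<))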

And (compile-time) macros are one thing, but what happens when you need metaprogramming at runtime (for example dynamic code generation)? With the Common Lisp compiler always present, one can easily generate and compile arbitrary code at any point in time. Here's a simple real-world example (shameless plug):

https://m00natic.github.io/lisp/manual-jit.html
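The gist, as a tiny self-contained sketch (illustrative names, not the article's code): build a lambda form from data that only exists at runtime, then hand it to compile.

  (defun make-poly (coeffs)
    ;; Generate source for a polynomial whose coefficients are runtime
    ;; data, then compile it into a native-code function.
    (compile nil
             `(lambda (x)
                (+ ,@(loop for c in coeffs
                           for power from 0
                           collect `(* ,c (expt x ,power)))))))

  ;; (funcall (make-poly '(1 2 3)) 10) => 321, i.e. 1 + 2*10 + 3*100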


> And (compile-time) macros are one thing, but what happens when you need metaprogramming at runtime (for example dynamic code generation)?

I'd agree that stage polymorphism is rarely used or appreciated in languages that don't have it (though I see no reason it should be incompatible with typing; as far as I can see your example is much the same as e.g. Scala's LMS; as with macros of course it's substantially more cumbersome to do this in a language with much more syntax than lisp[1]). I'm not sure that it qualifies as a "blub paradox" since to my mind it's a performance optimization rather than something that fundamentally changes language expressiveness - having AST-like datastructures that are gradually transformed/interpreted at runtime is very much a standard technique in ML-family languages, wider adoption of stage polymorphism would mostly lead to programming the same way and having it run faster, I think.

[1] I'm not convinced that it would be impossible to make a language with ML-style types but a very lightweight syntax that made metaprogramming easier. Personally even looking at e.g. Haskell I find myself wishing for a more visible syntax more often than I'm wishing for better interpreter tower performance, shrug.


I think it's more that Lisp is really great for certain things. Those certain things aren't all of programming, though. People use Lisp for a problem that's a good fit, and it's like a revelation - programming is insanely fun and productive. So they become a convert, and think that Lisp is The One Right Way To Program.

But in another problem domain, Lisp is terrible. I wouldn't want to program an embedded system with memory-mapped I/O in Lisp. I wouldn't want to do high-performance computing, where I had to get the memory alignment right to get the best performance.

Lisp might be like a Formula 1 racer. If you ever drive one, it will probably revolutionize your idea of what a car can be. Just don't try to drive it on a four-wheel drive road - a Jeep will absolutely dominate it there. For that matter, a Formula 1 car is not even a very good car for commuting to work - not street legal, lousy gas mileage, too high maintenance.

And this is what's wrong with the Blub Paradox idea. Power? Power for what? A language has a certain power for a particular kind of problem, and a different power for a different kind of problem. Pick the language that is the most powerful for your problem, not the language that is "the most powerful".


Code readability and ease of modification are also part of a programming language's power. LISP scores lower on those than Python, and so Python is preferable to LISP (assuming all other factors balance each other out - in reality it is the tooling).


I'd love to know Lisp, just like I'd love to know Python. I have no experience in either (I come mostly from Java and JavaScript, with some Ruby and some ancient C/C++ experience). When I look at Python code, I understand what it does. I don't think I've ever written any Python, but I think I could sit down and be productive within a day.

Lisp looks like complete gobbledygook to me. I'd love to be able to understand it and write it, but the syntax doesn't look like it's meant to be human readable. It's almost as much work as reading assembly.

I should probably give it another try, though.


Little Schemer is a good introduction to lisp (Scheme in particular) and may help you get past the "complete gobbledygook" stage. Paradigms of AI Programming is a rather advanced text, but a good one since Norvig writes very clean and documented code (even the code snippets in the text include docstrings). It's also free now (Little Schemer isn't, but it's not expensive either).

Steve Losh has a good [0] starting article on Common Lisp as well, which might be a better starting point than the books I mentioned, and could lead to them.

[0] http://stevelosh.com/blog/2018/08/a-road-to-common-lisp/


But that's because all the languages you know are basically dialects of each other - they all descend from ALGOL - not because they are inherently easier to understand. Similarly, someone trained in Lisp would find things like Scheme and Clojure easier to understand than something like Python, because those likewise are dialects of Lisp. I'd strongly recommend studying languages different from the ones you know -- they are mind-expanding even if you never use them in practice. Not just Lisp, but things like Haskell, OCaml/F#, and Prolog will change the way you think.


> not because they are inherently easier to understand

I'm honestly not sure that that's true. I would bet money that someone who primarily works in Lisp would still have an easier time sitting down and reading Python code than someone who works in Python sitting down and reading Lisp, because I think it might truly be inherently easier to understand.

There is intrinsic meaning in:

    x = 5
    let x = 5
etc., insofar as these translate character for character, word for word, into human language, whereas this simply doesn't:

    (let (x 5))


> There is intrinsic meaning in:

No, there isn't. That just happens to be closer to the notation you were taught in elementary school, so you're more familiar with it. If you were taught s-expressions in elementary school (and this would actually be a really good idea) you would say the exact opposite. In fact, once you understand s-expressions, all other notations seem awkward and arbitrary. That's actually the reason that Lisp persists, because it's a kind of natural "local maximum" in notational idea space. It just happens to be one that very few people ever reach because of the damage done to them in their primary school education.


I have created non-trivial projects in PHP and Ruby and used a lot of other languages before moving to Clojure... probably about 5 years of programming before using my first Lisp.

For me, s-expressions are the easiest thing to comprehend. Every other language is loaded with syntax; with Lisp it's mainly just (function args).

I wanted to test your argument. I had never looked at Common Lisp before, but I just skimmed some source code from this project https://github.com/stumpwm/stumpwm and it's extremely easy for me to parse.

Comparing this to new variants of JavaScript that seem to come out biannually with some new syntax: I have no idea what's going on and have to look up syntax references to begin to understand anything.

I guess at the end it's subjective, but lisps have ruined other languages for me.


There is no such thing as "intrinsic meaning", and I think you might be confusing "human language" and "english-like". There is astonishing variation in human language.


I really like Lisp and its macros and I don't even mind the sea of parentheses. But I do think requiring everything to be done in prefix notation is a weakness. This is an argument I have seen -- it may not be the one you are trying to make, but your statement reminded me of it:

"No programming language is more or less natural than Lisp. It is merely a matter of familiarity and the way most programmers were taught."

One obvious problem is that familiarity does matter. Even ignoring all other programming languages, there is a barrier to climb for the vast number of beginner programmers who know English and common math notation like f(x) instead of (f x) and x < y instead of (< x y). I've also seen (- x y) be confusing, since one incorrect way to read it as English in your head is "subtract x from y".

The other is that for some things, it really does seem like infix notation is inherently preferred by humans. Is there a natural language where conjunctions don't go between terms, i.e. one says "und x y z" instead of "x und y und z"?


Functions are more like verbs than conjunctions. And Verb Subject Object (which is rather Lisp-like) is not uncommon among natural languages. Arabic and Biblical Hebrew (Modern Hebrew not so much, due to influence from Western languages) use that order a lot.


That's just fine for things that are function calls in any language. I have no preference between log(x, 10) or (log x 10).

But I do want to read some functions like conjunctions, and in Lisp, I am forced to call `and`, `or`, `<`, `/`, and pals the same way as log, print, etc. I just don't think I'll ever find:

    (and (not (is-blocked door))
         (< (height player) (height door))
         (or (is-unlocked door)
             (has-key player door)))
as readable as:

    not is_blocked(door)
    and height(player) < height(door)
    and (is_unlocked(door) or has_key(player, door))
Edit: to expand, I don't think it's a fundamental flaw of Lisp. There could easily be some syntax sugar that lets you specify precedence rules for functions and write them infix, just like Haskell lets you write either `(+) 1 2` or `1 + 2`. It would de-sugar to the same sexpr.

I know this can be done as a macro, in fact I did it for elisp several years ago ( https://github.com/rspeele/infix.el ) but it is really just a toy for personal use. If an infix operator system came out of the box and its use was encouraged for certain common operators, I think that would be a net positive, but most Lispers disagree with me there (if I had to guess).
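For the record, the naive version of that de-sugaring is only a few lines in Common Lisp - strictly left-to-right, no precedence, and $ is a made-up name:

  (defmacro $ (first &rest rest)
    ;; ($ 1 + 2 * 3) expands to (* (+ 1 2) 3).
    (let ((expr first))
      (loop for (op arg) on rest by #'cddr
            do (setf expr (list op expr arg)))
      expr))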


The Lisp expression is a nice sideways tree, where we immediately see that three conditions are joined by and. When I look at it, I see this:

     and --.-- not (is-blocked door)
           :-- < (height player) (height door)
           `-- or --.---(is-unlocked door)
                     `--(has-key player door)
The other one looks like a bit of a lorem ipsum paragraph to me.


I guess it comes down to different strokes for different folks. It would be interesting if there was a website that quizzed you on randomly generated expressions like these, giving you S-expr forms sometimes and ALGOL/infix operator forms other times, and asking you whether the expression is true or false given a set of variable values.

Naively I would guess that I would do best on infix exprs and you would do best on s-exprs.

But I could imagine other results, like s-exprs giving me a speed disadvantage but an accuracy advantage (either because I had to read them more carefully or because they eliminate reliance on precedence rules). Or there may be other quirks, like maybe S-exprs read better for larger, multiline conditions while infix reads better on short one-liners with <= 3 operations.

It reeks of effort so I'm not going to make such a site, but I do wonder what the results would be.


That may be fine for ancient Jews and Arabs, but it still presents a bit of an obstacle to speakers of Germanic languages, including English. With the exception of Yoda, I suppose. I bet Lisp would be totally natural to him.


An objective test of "easy to understand" would be: how close does the code look to how a programmer might pseudocode it on a whiteboard? I don't recall seeing much significant difference between how Java, Python, or JavaScript devs pseudocode on a whiteboard, so it's approximately how a human thinks about a problem in the absence of syntax issues.

I think Python code looks closer to this pseudocode than Lisp does. Once you become an experienced Lisp programmer, it probably takes little effort to convert that mental pseudocode into Lisp, but I believe it does take intrinsically less effort in Python.


I disagree. People pseudocode in ALGOL style if they are used to programming in e.g. Python, or are discussing an imperative algorithm, or are communicating with ALGOL-style programmers. But if you pseudocode in a Lisp context you write s-expressions, the same way that if you are discussing SQL you may draw an ER diagram, or if you are talking math or abstract topics you write mathy things in a declarative style (such as sigmas or inverted As and Es). No one writes imperative pseudocode if they are going to program in Prolog.

In sum, using pseudocode is just temporarily liberating yourself from pesky syntax adherence in order to iterate more quickly over an idea, but in that discussion you always have the target in mind from the onset.


I did Prolog back in university. That was really cool. Miranda was our introduction in functional programming there. I don't know Haskell, but I really like some aspects of Scala. But Scala is still not all that different from what I know. Lisp is way more alien than that.


> Lisp looks like complete gobbledygook to me.

Might that be Clojure, rather than Lisp, that you're looking at, without an extensive functional programming background?

Some classic Lisp code, too, will look like gobbledygook if you've never had a tutorial, due to unfamiliar symbols: if you don't know what a "cons cell" is, or the functions car and cdr, for instance.

Surely, you can guess what this is doing:

  (let ((x 3) (y 4))
    (print (+ x y)))


> unfamiliar symbols: if you don't know what a "cons cell" is, or the functions car and cdr, for instance.

Those short, cryptic names are definitely part of it. It's unfamiliar syntax, an overdose of brackets for added confusion, and, in the middle of it all, words that mean nothing to me.

Stuff like cout, puts or the printf syntax also take some getting used to if they're new to you, but you get more context with them.

And yes, I can guess what your let and print do, but still, the excessive brackets, unusual assignments and prefix notation give the whole thing a lot of noise.


But, on the bright side, that expression gives you a synopsis of all the syntax you will ever have to know - at least the principal organizing syntax for structuring the bulk of the code. What you don't see there are examples of various minor notations, like the various kinds of literals.

The good news is that parentheses disambiguate everything; if we remove some of them, we have to introduce hidden rules that determine what is a child or sibling of what.

let can have fewer parentheses, and some dialects (like Arc, in which HN is written) have tried that:

  (let (x 1 y 2) ...)
I've seen variations on a let1 operator which just binds one variable:

  (let1 x 1 ...)
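Defining let1 takes only two lines over plain let - a sketch:
  (defmacro let1 (var val &body body)
    ;; (let1 x 1 (print x)) expands to (let ((x 1)) (print x)).
    `(let ((,var ,val)) ,@body))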
In Common Lisp, the inner parentheses are there because the initializing values are optional: (let (x y (z 3) w) ...) gives us four variables. x, y and w are initialized to nil; z to 3.

Parentheses are needed around the variables so that they form a single argument position in the let syntax. Without that, we don't know where the body of the let begins. That is to say, if we make it (let (x 1) (y 2) (print x)), how do we know (y 2) isn't supposed to be an invocation of function y on argument 2, rather than another variable binding?

The parentheses are great for code layout; there are formatting rules to make all complex expressions look consistently good. The parentheses also facilitate good editor support.


This actually sounds far more imperative than I imagined Lisp to be. I thought it was mostly a functional language.


Common Lisp is a very pragmatic language, complete with goto, if you want it. This is for the purposes of efficiency rather than because you should use it yourself.

Suppose you wanted a really efficient numerical library written in Common Lisp; you could do it. But you wouldn't want to use map/reduce/remove/etc., which are all linear time on the sequences they operate on (which can be lists or vectors). So clean-looking code turns out to have poor performance because you have multiple O(n) operations, introducing large constants, or, if they're nested, turning them into quadratic or worse operations. For instance, if we wanted to compute a dot product, we could, in CL, do:

  (defun dot-product (v1 v2)
    (reduce #'+ (mapcar #'* v1 v2)))
That's O(n) which is the "best" we can do for dot product. But it actually contains two iterations. In C, you'd have:

  double dotproduct = 0.0;
  for(int i = 0; i < vector_length; i++) {
    dotproduct += v1[i]*v2[i];
  }
While for this small example the difference in performance is minor, in the case of a library making many calls to these functions that double loop would add up. And for more complex operations, you're introducing higher constants (or worse) by using the clean-looking Common Lisp code.
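You could, of course, write the single-pass form by hand - a sketch with loop, which reads about as imperatively as the C:

  (defun dot-product (v1 v2)
    ;; One pass over both lists, like the C loop above.
    (loop for a in v1
          for b in v2
          sum (* a b)))

But then you've given up the functional reading.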

Enter something like Series

Series: https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node347.html

See here for a demo of how it transforms functional code to imperative: https://malisper.me/loops-in-lisp-part-4-series/

So you still (as the user) get the illusion of working in a functional language, but under-the-hood it happily transforms itself to an imperative form. And if you wanted to make something like Series, you could present that functional form to your users and hide the imperative, optimized form.

Main takeaway: Common Lisp has high-level functional aspects, OO aspects, but also low-level systems-oriented and imperative aspects. You get to pick and choose which level you want to operate on based on your domain. And with the macro language you can present clean, functional interfaces to the end-user while using the lower-level features to optimize things for them.


ANSI Lisp and related dialects, including most of its predecessors, are thoroughly imperative. If you have a program design in your head intended for Algol, you can probably implement it in Lisp with essentially the same organization.

In my own dialect of Lisp, TXR Lisp, to demonstrate/test the foreign function capabilities, I took a C code sample from MSDN (a minimal Win32 program to create a Window) and translated it, almost expression for expression:

  (deffi-cb wndproc-fn LRESULT (HWND UINT LPARAM WPARAM))

  (defun WindowProc (hwnd uMsg wParam lParam)
    (caseql* uMsg
      (WM_DESTROY
        (PostQuitMessage 0)
        0)
      (WM_PAINT
        (let* ((ps (new PAINTSTRUCT))
               (hdc (BeginPaint hwnd ps)))
          (FillRect hdc ps.rcPaint (cptr-int (succ COLOR_WINDOW) 'HBRUSH))
          (EndPaint hwnd ps)
          0))
      (t (DefWindowProc hwnd uMsg wParam lParam))))

  (let* ((hInstance (GetModuleHandle nil))
         (wc (new WNDCLASS
                  lpfnWndProc [wndproc-fn WindowProc]
                  hInstance hInstance
                  lpszClassName "Sample Window Class")))
    (RegisterClass wc)
    (let ((hwnd (CreateWindowEx 0 wc.lpszClassName "Learn to Program Windows"
                                WS_OVERLAPPEDWINDOW
                                CW_USEDEFAULT CW_USEDEFAULT
                                CW_USEDEFAULT CW_USEDEFAULT
                                NULL NULL hInstance NULL)))
      (unless (equal hwnd NULL)
        (ShowWindow hwnd SW_SHOWDEFAULT)

        (let ((msg (new MSG)))
          (while (GetMessage msg NULL 0 0)
            (TranslateMessage msg)
            (DispatchMessage msg))))))
TXR Lisp benefits from featuring the referencing dot operator a.b.c.d for accessing structure fields (which is a syntactic sugar for a (qref a b c d) form). This is much more important than having infix math, because the notation supports the expressivity of object-oriented programming, which strikes at the heart of larger scale program organization.

The full program with all the boilerplate to define Win32 types and bind to the foreign functions is here: http://nongnu.org/txr/rosetta-solutions-main.html#Window%20c...

The original C is here: https://docs.microsoft.com/en-us/windows/desktop/learnwin32/...


> Surely, you can guess what this is doing:

Only if you got far enough in your background/education to understand prefix notation. Most people aren't there.


All the other languages OP mentioned make a good deal of use of prefix notations.

f(x,y) is a prefix notation and so are command languages: "command arg ...". Statements like "while (x) { y }" or "return z" are also a prefix notation, as well as declarations like "struct foo" or "int x, y".


You're the typical blub programmer. Python and Java are both readable to you because they are essentially languages with the exact same concepts.


I would say more people don't know lisp and find the syntax foreign than people who understand lisp and can't read someone else's code.

Golang was designed around the philosophy of blub.



