gordianknot's comments

And authoritarians, believing in criminalization of both.

Gun laws reflect a society's general property rights, and drug laws their intellectual freedom.


Humans are social animals, and awareness of a negative state of our community makes us sad or mad, which seems meant to drive us to action. Evolution appears to be mostly about the code (genes), not the client (mind) or machine (body). From pre-history to the present, communities have been composed of genetically similar members. Over time this has become less and less true, corresponding with the accelerating decrease of travel costs. Now data (memes) have emerged as another evolutionary layer, at a higher level of abstraction.

Contemporary "political news" is propaganda carefully crafted by concerned corporations to misinform the viewer, both targeted "reverse lobbying" on specific issues and general-purpose anti-participation tactics. The idea seems to be to get voters to act against their economic and/or ethical interests either by convincing them to mis-vote (compared to how they would if they were more informed) or not vote, via manufactured dissent, character assassination, blurring of issues and facts, and irrelevant/impossible campaign promises (e.g. archaic issues like gender and reproductive rights and "border control" that's thinly veiled racism (and won't happen because companies want the illegals and they pay better)).

Call it reverse advertising. We've gotten so good at PR techniques that they've become invisible. Everything stays the same, but we rebrand it and view the world through the lens of the internet, detached from empathy with the bitter natural world, in a cocoon suckling the nectar of porn-- over-processed information.

This is why startups have so far caused only negligible political change. Software, and by extension most programmers, is effectively a "brain in a vat." It shows how machines have already "taken over" the world without displacing us in the process. The singularity already happened, but we're still large, hairless bipedal rodents-- and computers are not. Human bodies need different stuff than machines, so we don't have to be competitive.

Likewise, hackers just want to have fun, and -- as this comment bears witness -- we just talk about it online instead of doing anything about it. Hackers aren't competitive with suits any more than computers are with biology. Suits want software, so nerds get paid, but then turn around and re-invest their money, move to Singapore and live it up, go on perma-vacation, etc. There are many brilliant hackers, many of them also more than charismatic and wealthy enough to get into national office. Instead they build the programming language they've always wanted, or take up cycling, or reconnect with nature.

When we do look beyond the bubble of hedonism, we're comparable to people who pray for things out of their control, i.e. asking for something, instead of either making adjustments to gain control or accepting that control is ultimately undesirable or impossible. Self-directed prayer (meditation), where one looks inward and develops a dialog with 'eir layers of consciousness, is something different, but other-directed prayer is the single-player version of "happy news" (porn), be it cat pictures or startup drama or gadget announcements or sports or esoteric programming language design.


Lighten up. We're not living in a literal 1984-meets-The-Matrix world just yet. Your perspective seems as narrow as you criticize others' for being.

Silicon Valley may yet reshape the world, even if some of us take scenic bike rides. (That was one of the bad things, if I'm keeping your argument straight, while empathy with the natural world in general is good. The problem with bike rides is they take time that could be spent running for president.)

I understand the point that many sectors of society could use some new life breathed into them, but I think the oxygen needs to come from where it will, and it will come.


No, but it is a Brave New World. We're the ones out on that island with the artists and scientists. We make tools of change, but sell them off so that we can go on bike rides. It's hedonism; this is the New Gilded Age.


I use Python at work because I have to, but I'd rather be writing JavaScript.


To me that's the value proposition, actually. Invest in learning and collaborating now, then be ahead of the curve in a few years (if you're interested in getting hired), or be ready to capitalize, knowing how to hire and how to build software that could only have been made with it, when it eventually reaches some level of maturity. Those who grow it reap the best rewards.

Software is a product of the platform(s) it was built with, in the same way a novel is a reflection of the writer's language more so than of the writer. Microsoft's software is what happens when you use C and .NET; Google with C++, Java, and Python; Facebook with PHP; 37signals with Rails; etc. There are apps that will be built because they could only be built with a unified JS platform. There will be new kinds of software that directly result from the new possibilities of Node, CoffeeScript, Meteor, Firebase, Parse, etc.

Different tools yield fundamentally different results. Once a platform is "mainstream", it's too late: the big opportunities have already passed and the innovation is elsewhere. There's still a chance to be the "DHH of Node"; that spot's obviously already been taken in the Ruby world.


Learning new technologies is something I do, not because I'm interested in getting hired or in becoming the DHH of node. I would do it even if I were by profession a truck driver.

This may sound "romantic", but when your love of technology interferes with you actually being productive and getting things done, ... it's frustrating.

If you are a student, then jumping on a new platform and making a name for yourself is a great thing to do. But I'm an old fart, I was there when Java applets were "the thing that will change the world".

>There will be new kinds of software that directly result from the new possibilities of Node, CoffeeScript, Meteor, Firebase, Parse, etc.

Like what? (not rude, just curious)

As far as I can see there's nothing new under the Sun, so to speak.


I hear you, and can relate to the romance and frustration! I was in grade school when JS was released... but I've been writing it for over 10 years now. I tried "everything" else, and it's made me a better JS hacker, but I feel like I'm doing it wrong when I use anything else.

I didn't mean to suggest riding the wave of new, distinct, non-progressive technologies. There is a progression; it seems clear to me that PG was right, that we (as an industry) are slowly moving toward Lisp. He also said that he felt Lisp and C represent the two "clean, consistent" programming models, and I agree. And that's the reason that JS isn't just another passing wave (although my previous comment did seem to make that suggestion).

JS is something different; it's (1) a good-enough balance of C and Lisp and (2) available on every platform. To me, the situation is clearly that JS will form a solid, durable layer over C. Then the language designers and industry hype machine will shift to langs that compile to JS. This is already happening, naturally, it's just not evenly distributed.

> Like what?

We're still in the early days of compiling to JS, and being able to use it as a modern server-side environment. I don't know what the results will be, but I think the difference will stem mostly from development time. Even though it could be done, in practice you don't end up with the same app if using Fortran, Java, and CoffeeScript, because doing so would take a month versus a week versus a day. To paraphrase Linus from his Tech Talk on Git, speed doesn't mean you do the same thing faster, it changes behavior.

Apps will get written that otherwise wouldn't have. I'd argue that Facebook succeeded mostly because of PHP, and in turn, that they'll eventually fail because of it, too. They beat MySpace because MySpace used the MSFT stack; it's like England's victory over Spain due to more nimble warships. The same thing'll happen to FB unless they evolve when necessary.

This is what Yegge was getting at back in '06 [Dreaming in a Browser Swamp]. He mentioned "Scheme on Skis" and "JavaScript on Jets", which might turn out to be ClojureScript and Express or Railway. Well on our way, and he was entirely right in retrospect (even though a shocking number of smart hackers don't want to accept it and keep on with archaic tech that's becoming rapidly endangered, but that's life).

Light Table is an early example of an app that's happening because of ClojureScript. And many existing CoffeeScript apps are quite impressive: https://github.com/jashkenas/coffee-script/wiki/In-The-Wild

I think it comes down to being able to focus on design and make fast changes, without needing to worry as much about the lower layers of abstraction.


Adoption isn't the problem. We don't need Congress to use such a system initially; we need bills, the US Code, etc. mirrored on Github. When it's there, people will get it. The information is out there, it just needs to be processed into a usable form so that it works with Git. And it'd take millions of dollars, and have no conventional ROI, so no one's going to do it.


>And it'd take millions of dollars, and have no conventional ROI, so no one's going to do it.

Your pessimism is unwarranted. There is already one user on GitHub that scrapes the US Code and mirrors it. He even tags the changes so you can diff them quite easily.



It also wouldn't have $0 ROI because there are people who will pay for advanced services built around the law-making process.

Here's an example of a service built on Ontario laws (disclosure: I made it): www.ontariomonitor.ca. It emails people when a bill passes a committee or when new laws are introduced (+ lots of other stuff).


There's Thomas (http://thomas.loc.gov/home/thomas.php). Bill text is available in PDF, XML, and printer-friendly HTML.

The question is how up-to-date the bill texts are. I doubt it tracks in real time with changes/amendments as they're voted on, etc., but then again any system bolted onto the process, as opposed to being fundamentally integrated into it, wouldn't be.


Adoption is the problem. The US Code already is on GitHub — https://github.com/divegeek/uscode — but it's only a mirror. There's also the Sunlight Foundation's OpenCongress http://www.opencongress.org/ which provides a nice way to track a bill and its participants. But without the authors actually using something like git in their process of writing the laws, this doesn't really help people participate directly.


Aren't bills formatted similarly? Couldn't you then just OCR them?


"Consider the source" is always relevant, and always a weak argument. Regardless of who wrote it, the main point of the article seems to be that hackers working in essentially utopian communities risk being out of touch with their users. It's basically Gibbon's thesis on Roman decline.

Becoming complacent is a risk. OTOH, most people are alienated, synchronous workers. They don't have control over the means of production, don't have equity or profit-sharing, and earn a wage for their time itself rather than their products. Many HN'ers are post-capitalists, and are naturally becoming more and more distant from that world. I know I am. I visit family, or go see old non-technical friends, and fuck, so much complaining about things they wish they could control, lusting for things they wish they could afford, and hyper-attachment to their existing life and possessions. I remember the onset of that feeling, when I had stayed too long at a non-software job after college.

I don't think working remotely and earning a decent salary makes me out of touch, just ahead of the curve. Same for Google and Facebook. We don't need to "get in touch with the common man"; it's like suggesting Rome should've reverted to a violent warrior culture to combat the tribes.

What Rome actually did is what Google is doing. Rome didn't fall; as they had gone from Kingdom to Republic to Empire, they continued evolving, into Church. The Roman Church was able to achieve way more than the Empire, since they got an information advantage over the tribes and could conduct invasions non-violently (and way more successfully) as "missionaries" instead of military. Christianity proved to be a cultural advantage over paganism.

In turn, Google is making self-driving cars. Your own traveling mini-Googleplex. They'll buy an airline in the next few years. Facebook will start building places for more people to get online, to breed new hackers. They're starting a HS internship program in Menlo Park. They'll realize, if they haven't already, that it's cheaper to make smart kids into engineers than hire them out of college. And so on.

As the merchants overthrew the nobility, we're watching the hackers overthrow the capitalists.


Or we just migrate to self-driving electric cars, with the batteries serving as Intergrid energy storage.


I've learned to never get too attached to anything that comes after "Or we just ..." :)


Also, let's add wi-fi hot-spots to all of them so that they'll create a giant mesh-network, providing cheap internet access for everyone. And ponies! Everyone gets ponies!


You should write What Made Basic Different.


Perl, PHP, Python, and Ruby are all interesting failures. They are important for their effect on languages that will last longer (Lisp, ECMAScript) though.

Edit: I don't mean these are "failures" now, just that they're doomed in the long run. Does anyone think they'll be able to stand any of these languages in 2018? I don't think they'll have evolved much by that point either. The older a language gets, the harder it is for it to evolve. And if it tries to make too big a leap, people simply don't go for it (PHP5, Perl 6).


Like it or not, I don't think you can call any of those languages failures. Certainly not PHP, Python, or Ruby. I would venture to guess that all three are more commonly used than Lisp.


Failure in the Grahammian sense of being a dead-end.

"I think that, like species, languages will form evolutionary trees, with dead-ends branching off all over. We can see this happening already. Cobol, for all its sometime popularity, does not seem to have any intellectual descendants. It is an evolutionary dead-end-- a Neanderthal language." - http://www.paulgraham.com/hundred.html


Though I disagree with most of the conclusions in that essay, and in particular the expectation that Lisp (or a direct descendant) is going to be the language of the future, I still think it's way too early to call the above languages failures, even in the "Grahammian" sense.

Python and Ruby especially are growing in popularity, and I think it's awfully premature to consider them evolutionary dead ends at this stage in the game.


I don't necessarily agree that those languages are dead either. I was just clarifying the point I thought he was trying to make. Plus I wanted to turn pg into an adjective :)


I mean failure like the tyrannosaurus rex. That is, from an evolutionary perspective, not at the arbitrary present. And I'd wager that Perl is still actually more used than Python or Ruby; it just doesn't get the love on social news sites and elite blogs.


I think Perl has been relegated to short scripts and legacy code bases. I don't see a lot of new projects being developed in Perl. That's what I meant by leaving it off the list.


The power of a programming language is proportional to its capability for innate abstraction. If that's true, it follows that list-oriented languages are inherently inferior to their hashtable-oriented brethren. (These orientations are often misguidedly referred to as "functional" and "object-oriented" paradigms, which I find to be useless, over-overloaded terms.) Basically, with list-oriented langs the primary abstraction is a tree, whereas with hashtable-orientation, it's a graph. I'm talking about the -primary abstraction- (i.e. what you "think in" when hacking); obviously you can implement any structure in any powerful enough language. If others don't see it this way please, illuminate me.

Lists confine one to rigid hierarchies, which have to be compensated for with dirty (but sexy) hacks like in-language macros. Meanwhile the index of hashtables is arbitrary, which allows you to do naturally the things you have to patch in Lisp.
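As a minimal sketch of what I mean by arbitrary indices (all names here are invented for illustration): plain objects let you wire nodes together by whatever keys you like, cycles and all, with no tree shape imposed.

```javascript
// Nodes as plain objects (hashtables); edges as arbitrary keys.
var alice = { name: 'alice' };
var bob   = { name: 'bob' };

alice.friend = bob;   // edge: alice -> bob
bob.friend   = alice; // edge: bob -> alice (a cycle, so not a tree)
bob.boss     = alice; // multiple distinct edges between the same nodes

// Any node is reachable by following arbitrary keys:
var reached = alice.friend.boss.name; // 'alice'
```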

Though hashtables are more powerful and easier to grok, for one reason or another nearly all (popular) hashtable-oriented languages are total crap (C++, Java, C#), but in some regard a step up from deformed languages like C (in all seriousness, how does anybody get by without first-class functions and hashtables?). My tentacles have only found two decent hashtable-oriented languages: JavaScript >~1.5 and Io.

Anyway, why is Lisp unpopular? Because it's harder for most programmers to think in lists than hashtables. Then, why does the lang have a cult following? Because it's well crafted and consistent, which seems to cause some people to overlook its shortcomings, even to the extent of seeing design flaws as features.

But as far as I'm concerned the Language War is over anyway. JavaScript won.


I think that your comparison of the relationship of lists to lisp and hashtables to OOP languages is rather superficial. Lists relate to lisp differently than hashtables (or full objects, actually) relate to languages such as Java or Ruby.

In those languages, everything is an object; that is, every construct you define, whether data or code, starts as an object: a set of behaviors and properties. (Assuming it is fully OOP, unlike Java.)

In Common Lisp, everything is an object as well, with its own set of properties and behaviors. Cons cells are basic, for example, but you can still add properties to them and define methods for them. Encapsulation is not enforced, but that's what closures and packages are for.

Lists are dominant in lisp for another reason: the language itself is represented in them, not just the data and the methods, but the pre-compiled language, so that you can use Lisp's meta-programming facilities to generate Lisp code itself. In most other programming languages, the code is represented to the compiler as text, and to use such meta-programming facilities would require string parsing. Macros aren't a 'hack' but an actual paradigm shift. Attempts to do the same thing in other languages have largely been very clumsy; witness C++ macros. (Template Haskell apparently has managed to do it properly, though.) Nearly Lisp's entire syntax is for defining the structure of the code; everything else is done with operators, functions, and macros.

If languages like Java or Ruby were represented in hashmaps the same way that Lisp is represented by lists, they would probably be incomprehensible. If you were to attempt to use a structure to represent the language, it would probably end up being something of a tree format. Code is naturally hierarchical, even class definitions, and if I were to do the same kind of thing that one does with Lisp in C++ or Java, I would end up using lists. So I don't think that this comparison is really correct.

(BTW, I love C. It's not deformed; it was designed like that to A- make it easy to implement and B- give the programmer as low-level access as he needed. You are meant to define your own data structures and implement them in an efficient way using algorithms that make sense for the usage. You are not confined to a preset, possibly inefficient implementation. This is a level of control not available in a lot of other languages, which is why C is so commonly used to write interpreters and compilers for other programming languages. Those highly efficient Python hashtables are implemented in C (and maybe a bit of assembler).)


My tentacles have only found two decent hashtable-oriented languages: JavaScript >~1.5 and Io.

I'm not sure whether or not you'd find it "decent", but you might consider adding Lua to your list. It's small, relatively fast, and uses hashtables as its composite data structure. It also has some other neat stuff, like tail-calls and coroutines.


Thanks for the tip. I've heard some good things about Lua, but never gone further than Wikipedia.


I will second the suggestion of Lua. Here's my raw beginner's introduction to Lua tables:

The basic structure is the hashtable. You can use any first-class value as a key (number, string, function, another table) and similarly any first-class value as a value. Creating a table is done with the table constructor "{ }" ('> ' is the repl prompt):

  > a = { }
So now we have an empty table named 'a'. Let's say we wanted to have a table containing a list of colors - this is represented in Lua as a table with ascending integers as the keys. (Starting at 1, rather than 0, which is a bit unconventional)

  > a = { 'red', 'green', 'blue' }
This is equivalent to saying

  > a = { [1] = 'red', [2] = 'green', [3] = 'blue' }
If we use strings as keys, we start seeing some of the syntactical sugar Lua offers. Let's look at favorite foods:

  faves = { bob = 'pizza', george = 'cake', mary = 'pie' }
Note that no quotation marks are needed around string keys. (Well, unless the key is a language keyword. That's a bit annoying, but is related to the single-pass compilation, which is valuable. { if = "can't do it" } fails. { ["if"] = "can do it" } succeeds.)

We can use numeric indices along with string ones in the same table:

  mixed = { 'a', 'b', 'c'; state = 'NC', city = 'Charlotte', county = 'Mecklenburg' }
If we want to access values from a table, we can use a subscript notation.

  > print(a[1])
  > print(faves['bob'])
Here's a winner though. If we want to subscript a string, we just use the dot notation.

  > print(faves.mary)
What happens if we subscript a nonexistent entry in a table?

  > print(faves.james)
No error is thrown, which is handy. Tables are defined as having the unique value nil as the value for all nonexisting keys. In fact, if you wanted to remove an entry from the table, you just set the key to nil:

  faves.george = nil -- the cake is a lie! And the key 'george' is removed.
One nice thing about Lua tables is that they are extremely regular. There aren't special cases in their behavior. They're easy to construct, inspect, and manipulate. They are the fundamental data type of the language, and everything is done in terms of tables. Objects are created out of tables. Namespaces are tables. Modules are tables. Configuration files are tables. It's an extremely clean and convenient design.

In addition to the tables, you get first-class functions:

  > function a() print 'hello' end
  > a()
Is syntax sugar for:

  > a = function() print 'hello' end
  > a()
These syntax sweeteners we've seen work together, too:

  > function a.foo() print 'world' end
  > print(a["foo"])
  function: 0x807f2e8
  > a.foo()
Ok, we're almost to objects. The next sugar we see is the ':' notation.

  > a = { color = 'blue' }
  > function a:fave_color()
  >   print('My favorite color is ' .. self.color) -- .. is concatenation
  > end
If a function is defined with the ':' notation then it has an implicit local value called 'self' which is set to the containing table.

  > a:fave_color()
  My favorite color is blue
At this point it's pretty easy to create simple prototype-based objects.

What brings even more power to the table is that we can define custom behavior on each one. We can set a 'metatable' which defines how the table responds to subscripting, the various mathematical operators, and being called in the functional position. In a quick script I worked on I implemented a prototype-based class system in under 20 lines of code, all with the power of metatables. Lua tables are very powerful and though similar to the ones in Javascript are even cleaner and more pleasant to work with.
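Since the comparison to JavaScript keeps coming up: for readers coming from JS, a rough analogue of the `__index` metatable lookup is prototype delegation. This is only a sketch (the `Animal`/`dog` names are invented), not a translation of Lua semantics.

```javascript
// A prototype object supplies fields missing from the instance,
// much like a metatable's __index supplies missing table keys in Lua.
var Animal = {
  speak: function () { return this.name + ' says ' + this.sound; }
};

// Object.create(proto) is loosely "setmetatable({}, { __index = proto })"
var dog = Object.create(Animal);
dog.name = 'Rex';
dog.sound = 'woof';

// The lookup misses on dog itself and falls through to Animal:
dog.speak(); // 'Rex says woof'
```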

Anyway, it's all nifty stuff. Other features you get: fast incremental garbage collection, first-class functions, asymmetric coroutines (isomorphic to one-shot continuations), lexical scoping, closures. Lua is also one of the fastest interpreted languages and has an excellent API for binding to C if you want to do something performance sensitive, or want to use a library written in C. The language itself is written in 100% pure ANSI C. It runs on everything from embedded microprocessors to mainframes, in lego robots and on space satellites. And the community is very friendly.

If you want macros, there are several community-run variants of the language with macro facilities... I haven't gotten very deep into those though.

Good documentation for getting started:

  http://www.lua.org             -- official lua home page
  http://www.lua-users.org       -- lua community wiki
  http://www.lua.org/docs.html   -- lua documentation
  http://www.lua.org/manual/5.1/ -- lua language reference
  http://www.tecgraf.puc-rio.br/~lhf/ftp/doc/hopl.pdf -- a fascinating article on the history and features of lua
Check Lua out and see how you like it. I find it to be a very pleasant language which is fun to work in.


And what about CLOS? Does not make Lisp, in your words, a "hash programming language"?

Really, I don't get the difference you state between list/hash programming languages. I've worked a little with ECMAScript and I don't know anything about Io, but I'll put it on my to-do list. Can you please expand on it? What's the main difference between them?

Because you say that the primary abstraction of a "list" programming language is a tree, which I think is not the case. Since a tree is a directed graph without cycles, you wouldn't have loops. So it's at least a directed graph. But then, what property does the "hash" language graph have that the "list" language graph has not? Is it an undirected graph? I think that's not possible, because if it's undirected, how do you make your program go "forward" and not "backward"?

I hope I'm not too obfuscated and my questions make some sense.


I've never looked at CLOS. But, what I mean is the "dominant metaphor," which I suspect is still lists.

I'll just go through my probably plebeian understanding. Arrays are to lists as hashtables are to objects. An array, in my mind, is a list that only contains one type and is indexed with enumerated integers. On the other hand a list can contain any type, but is also indexed with enumerated integers.

In JavaScript:

  array = [1, 2, 3]
  list = ["one", [[array], 3]]
  array[0] == 1
  list[20] = 23 // list indices aren't necessarily a linear enumeration
Of course, arrays and lists are both technically Arrays in JavaScript (a bad naming choice; I'd have called them Lists). Now a hashtable is typically just a list that uses strings for indices instead of integers.

  hashtable = {
    today: 5, // sample value, so the method below has something to read
    future: function(x) { return this.today + x }
  }
A "method" is just a value that happens to be a function. Usually hashtable-oriented languages choose to abstract away the string, and treat it as a variable.

  hashtable["today"] == hashtable.today
Like with lists/arrays, JavaScript gets hashtables/objects almost exactly right, but again is subject to some questionable naming choices.

RE: trees and graphs -- I was getting at the relationships between nodes, not the actual computations, but I'm not comfortable enough with the terminology to explain exactly what I meant.


Ok, I think I get you. But then I think you can't categorize programming languages this way; the default data structures provided by the language don't characterize it. In fact, it seems to me that you like JavaScript mostly because it's object-oriented, dynamically typed, and has first-class functions -- features that other languages lack -- and not because the "dominant metaphor" is the hash. Coincidentally, in JavaScript the objects are built with hashes (at least apparently), and can be easily extended.

It's a fun thread :D


Built-in data structures are a big point of categorization of langs on my end. To me, the defining feature of Java is its classical structure. If you got rid of classes, you'd have a different language (in the interface sense). And interface is really all I'm talking about.

I'm basically just asserting that objects (should) == hashtables. This is quite literal in JavaScript. Other languages bend the metaphor in different directions, and obscure it so that no one even knows what "object-oriented" really means beyond particular idiosyncratic syntax in this language or that.

PG of course talked about this before, in Why Arc Isn't Especially Object-Oriented:

> I've done a lot of things (e.g. making hash tables full of closures) that would have required object-oriented techniques to do in wimpier languages ...

I'd argue that he was employing genuine object-oriented techniques, but just didn't have classical syntax and didn't consider what he had an "object." Other languages make a point about it, and use special syntax, which fogs the whole thing. Perhaps some people in "OO" mindsets have the kind of naivete that C-only hackers I've met have about first-class functions.

Actually, I just realized the whole reason C++, Java, C#, and co. have "methods" in the first place is just compensation for not having first-class functions you can stick in a hashtable.
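To make that concrete, here's a minimal sketch (all names invented) of "methods" as ordinary first-class functions stuck in a hashtable, with closures standing in for private state -- no class syntax required:

```javascript
// A "class" as a function returning a hashtable full of closures.
function makeCounter(start) {
  var n = start; // private state, captured by the closures below
  return {
    inc:   function () { n += 1; return n; },
    value: function () { return n; }
  };
}

var c = makeCounter(10);
c.inc();
c.inc();
c.value(); // 12
```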


If objects == hash tables, any language with decent hash tables, including lisp, has objects that you're perfectly happy with. Lisp also has other data types, but their existence means that lisp has more power, not less.


> But, what I mean is the "dominant metaphor," which I suspect is still lists.

That tells us more about the basis for your suspicions than it does about lisp.

It's okay to like javascript more than you like lisp. It's also okay to be mostly ignorant of lisp. However, it's poor form to make up things to "support" those positions.


"And what about CLOS? Does not make Lisp, in your words, a "hash programming language"?"

I think in the "hash" languages, you usually consider the method to "belong" to an object in some way.

With multi-method dispatch, the relationship between objects and methods is more fluid. So I think there is a difference here, too.

It's not that "list" languages only use tree structures for everything, just that trees are the more "natural" choice in those languages.


While lisp code is represented as lists, other user data can be represented with hash tables, vectors, arrays, classes, etc, including lists. (Yes, the name "lisp" refers to lists, but the language has grown since the name was picked.)

Also, it's easy to represent arbitrary graphs with lists, in much the same way that you'd do so with hash tables, structs, etc. Yes, the way that a node refers to other nodes differs but there's no restriction on the relationship of the nodes or the overall structure. (The difference between different kinds of graphs has nothing to do with how one node refers to another.)

And, macros have nothing to do with any of this because they are "just" code that turns code into other code. Perhaps another code representation would be better than lisp's, but since few languages have one, and some of those that do break it with every release....

In short, Gordianknot's thesis and examples are wrong, he doesn't understand graphs, and he has no idea what macros do.


I was only talking about the code itself, the primary metaphor of the programming language, not what the language actually represents. Most langs that I've encountered are structured as semi-formalized strings (i.e. C-derived langs), whereas Lisp is structured in "physical" lists. I shouldn't have talked about "abstractions," because that wasn't what I meant. I was getting at the "concretions" of the actual lingual interface.
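A toy sketch of what "structured in physical lists" buys you, done with nested JS arrays instead of cons cells (everything here is invented for illustration): the program is ordinary data you can walk with ordinary code.

```javascript
// A toy s-expression as nested arrays, plus a tiny evaluator.
var ops = {
  '+': function (a, b) { return a + b; },
  '*': function (a, b) { return a * b; }
};

function evalSexp(node) {
  if (!Array.isArray(node)) return node;   // atom: a plain number
  var args = node.slice(1).map(evalSexp);  // evaluate operands recursively
  return ops[node[0]](args[0], args[1]);   // apply the operator at the head
}

var expr = ['+', 1, ['*', 2, 3]];  // (+ 1 (* 2 3))
evalSexp(expr); // 7
```

Because `expr` is just an array, a "macro" in this world is nothing exotic: any function that takes one of these arrays and returns another before evaluation.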


> I was only talking about the code itself, the primary metaphor of the programming language, not what the language actually represents. Most langs that I've encountered are structured as semi-formalized strings (i.e. C-derived langs), whereas Lisp is structured in "physical" lists.

Huh? Let's review.

>>>If that's true, it follows list-oriented languages are inherently inferior to their hashtable-oriented brethren.

Javascript code is semi-formalized strings or ASTs.

The careful reader has noticed that lists that represent code are ASTs with context-dependent field names. Since the nodes provide the context, said dependence isn't a big deal.

I don't know how many javascript programs manipulate their ASTs. (Lisp programs with macros are manipulating their ASTs.) The vast majority of javascript hash table operations are on data. (Yes, lisp code can be data, but not all lisp data is code.) In that, they're no different than any other language that has decent hash tables, such as lisp.


I don't understand why Java is "hashtable-oriented"...

What design flaws of Lisp are seen as features?


Really, what are classes other than sugary hashtables?


sets of related closures over common state?


Okay, and I would look to hashtables as the best way of describing these "sets."


