
We need less powerful languages - ingve
http://lukeplant.me.uk/blog/posts/less-powerful-languages/
======
TeMPOraL
Yeah, that's what we need for writing the same boring boilerplate CRUD all over
again, though note that those less powerful languages end up being so painful
to use that we eventually end up machine-generating most of the code (see:
Java, HTML, CSS).

Sometimes we also need languages for exploratory work...

But anyway:

> _In my years of software development, I've found that clients and users
> often ask for “free text” fields. A free text field is maximally powerful
> as far as the end user is concerned — they can put whatever they like in. In
> this sense, this is the “most useful” field — you can use it for anything._

> _But precisely because of this, it is also the least useful, because it is
> the least structured. (...) The longer I do software development involving
> databases, the more I want to tightly constrain everything as much as
> possible._

And that's _precisely_ why they want that free-form text field. That's why
people still use Excel instead of whatever database solution their IT
department bought or prepared - because Excel doesn't limit them to a
particular, poorly understood interpretation of their workflow that was
outdated last week. The real world changes, the requirements of the job
change, and there's nothing more annoying than being stopped dead in your
tracks because some smartass from IT thought that this particular field
should always be a number...

Use text fields unless you're absolutely, positively sure the constraints you
want to impose are valid and will never change.

~~~
jrapdx3
Free-form text input may be preferred by users, but possibly a nightmare for
the developer. As an example, I've been assisting a small non-profit arts
organization, constructing a database system to keep track of artists' work,
etc.

In the past there was essentially no ordered data-keeping. For several years
certain bits of info had been entered into a spreadsheet, like name, title of
art work, date created, medium, email addresses and so on.

Data in that spreadsheet was a total mess. Most fields were "free-form" text,
with no consistency at all in format of dates, email, URLs, crucial fields
left empty, data put in wrong fields, etc. It was a lot of work to clean it up
and quite clearly showed what motivates programmers to be "controlling".

To be sure, excessive rigidity can be problematic, but it's hard to make clear
to users the importance of maintaining data integrity. Frequently enough,
users complain about constraints despite efforts to demonstrate their
necessity and benefits.

Of course constraints evolve, which is often the challenging component of
creating and growing a system. If there's good reason to modify things,
because what we conceived of as a number turns out otherwise, then we must
indeed find a way to accommodate that reality. OTOH if it really is a number
and the user wants it different, we leave it as is and once again patiently
try to show the user our reasons.

Finding the right balance between user "freedom" and necessary limitations on
data input is hard to achieve. I suppose the right approach has something to
do with avoiding the illusion that data system development is actually ever
done.

~~~
geocar
This isn't always about data integrity.

I'm an American, but I live in the UK: My _home address_ is in the UK, but I
live in a flat and my neighbours are mail stealing cunts so I use a scan+email
service for my _mailing address_ which is based in the US.

Similarly, my _land line_ is a UK number, but my _mobile_ is a US number, and
when you refuse to let me type in a `+` sign I spend a lot of energy guessing
whether to give you a number in NANP or in international formats.

The thing is, capturing information is just that: capture. By doing the
traditional programmer thing of validating the field on input, you're pissing
me off as a potential consumer. I _guarantee_ that I know more about where I
live than you do, so I'm more likely to abort my transaction if you tell me
that my address is invalid. That means your non-profit arts association simply
doesn't get my donation.

Thing is, I actually accept that my situation is exceptional and not the rule,
so I think this is really about programmers being unable to deal with
exceptions; treating them as nothing more than dynamic escapes or nonlocal
goto, like there's only a choice of more complexity, or more rigidity.

This is nonsense.

Simply capture whatever the user types. That means all input fields are plain
text or blobs or whatever. You can try to validate it into your business model
when you have some business need: like mailings, or shipping, and then attempt
to extract and validate _the specific fields_ when applying the mapping. If
you have an array of failures, you can allow the user to review at that point.

I do this with two tables: An input table, and a data table. The input table
has forward pointers to the data that is extracted, and the data table has
backwards pointers to the inputs. The data tables might be used for business
logic like mailings, order processing, login management, or shipping things.
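
The two-table pattern described above can be sketched roughly in Python with
SQLite (the schema and sample data are hypothetical, not the commenter's
actual design):

```python
import re
import sqlite3

# Hypothetical sketch of the two-table capture pattern: raw input is
# stored verbatim, structured data is extracted later, and forward /
# backward pointers link the two.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE inputs (id INTEGER PRIMARY KEY, raw TEXT, data_id INTEGER);
    CREATE TABLE data   (id INTEGER PRIMARY KEY, zip TEXT, input_id INTEGER);
""")

# Capture: accept whatever the user typed, with no validation at all.
cur = db.execute("INSERT INTO inputs (raw) VALUES (?)", ("Springfield IL 62704",))
input_id = cur.lastrowid

# Extract and validate only when a business need (here, shipping) arises.
raw = db.execute("SELECT raw FROM inputs WHERE id = ?", (input_id,)).fetchone()[0]
m = re.search(r"\b(\d{5})(?:-\d{4})?\b", raw)
if m:
    cur = db.execute("INSERT INTO data (zip, input_id) VALUES (?, ?)",
                     (m.group(1), input_id))
    db.execute("UPDATE inputs SET data_id = ? WHERE id = ?",
               (cur.lastrowid, input_id))

print(db.execute("SELECT zip FROM data").fetchone()[0])  # 62704
```

The point of the design shows here: the `inputs` row is never rejected or
edited, and the `data` row exists only because a shipping need demanded a ZIP
code.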

Shipping is a particularly good example: I want to maintain four shipping
providers since they offer different rates. This allows me to offer "free
shipping" by simply selecting the cheapest provider and pushing that cost into
the product. To do this I need to know their shipping zip code for the US, or
the shipping country for international. _That's it._ I don't need anything
else, and three patterns (`/(\d{5})(?:-\d{4})?/`, `/\b([A-Z]{2})$/m`, and
maybe a list of common countries) should be enough to extract from most
orders.
Anything else I can punt to my fulfilment center who can either call the
potential customer, delete as spam, or manually extract.
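
Those patterns translate to a rough Python sketch like this (the country list
is a hypothetical stand-in, and the ZIP+4 suffix is treated as optional, which
the pattern appears to intend):

```python
import re

# Rough sketch of the extraction described above. COUNTRIES is a
# hypothetical stand-in for "a list of common countries".
ZIP = re.compile(r"\b(\d{5})(?:-\d{4})?\b")
COUNTRIES = {"France", "Germany", "Japan", "Canada"}

def shipping_target(address: str):
    """Return ('US', zip) or ('INTL', country); None means hand-check."""
    # Check for an international country line first, so a foreign postal
    # code (e.g. "75001 Paris") isn't mistaken for a US ZIP.
    for line in address.splitlines():
        if line.strip() in COUNTRIES:
            return ("INTL", line.strip())
    m = ZIP.search(address)
    if m:
        return ("US", m.group(1))
    return None  # punt to the fulfilment centre

print(shipping_target("123 Main St\nSpringfield, IL 62704"))  # ('US', '62704')
print(shipping_target("4 Rue X\n75001 Paris\nFrance"))        # ('INTL', 'France')
```

Anything that comes back `None` is exactly the residue the comment says can go
to a human.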

Or maybe I just ship everything UPS: I send it off to label-making, and the
0.003% that fail I have to hand-check anyway. (After all, are we verifying the
city names as well?)

What am I doing with these URLs? Am I visiting them? Am I verifying someone
has placed some widget on there? Or am I putting a link next to their name on
a bulletin board? Verification means different things depending on the use
case.

Missing that crucial email address field? Or maybe there's an extra space on
it? What exactly am I doing emailing them? What if the email bounces? What if
it gets marked as spam? Verification of an email address has less to do with
the characters in it than it has to do with the use-case: If this is for an
account recovery, I want to know you can email me and will work with your
system to do that.

This approach also means I don't need to "edit" things either, because edits
are simply new inputs. Logging is free. Users are happy.

The point is it's not a balance; avoiding the illusion is easier than you
think and the hardest parts of the problem of data validity are problems you
have to solve anyway.

~~~
DrScump

      My home address is in the UK, but I live in a flat and my neighbours are mail stealing...
    

Are you sure it's your neighbors? We get mail theft here (in Silicon Valley)
all the time; thieves harvesting mail for valuables, credit cards, tax data,
etc. for ID theft.

In my city, police will _not respond even if there is theft in progress_ and
they have idle units at Starbucks next door, claiming that there is no state
law against mail theft -- it's up to the USPS to deal with it.

I have a PO Box for everything but junk mail. The much-maligned USPS has a
really nice feature nowadays: you can sign a (free) agreement allowing them to
accept packages on your behalf from other carriers... so I have FedEx, UPS,
etc. all going to my PO Box, using the street-address format for the Post
Office proper. It's much less expensive than private services like the UPS
Store and such.

Your neighbors are probably more focused on stealing your newspaper. Or
spouse.

~~~
geocar
The USPS operates its own police force[1] because federal laws are handled by
the federal government.

[1]:
[https://postalinspectors.uspis.gov/](https://postalinspectors.uspis.gov/)

------
Tloewald
It seems to me that the author is attacking the wrong problem axes. I don't
think expressiveness or power are the problem here. I think the problem is
lack of clarity.

Consider the discussion of terseness. What is the best way to explain
something in English? Long-winded or terse? Wrong axis. Explain things
_clearly_. Expressing a concept perfectly with a small number of words the
listener doesn't understand is useless. Beating a dead horse is useless.

I have a huge problem with C because the same characters and keywords are
reused in multiple similar-but-different ways, which adds cognitive load to
things that should be simple (e.g. declarations) -- leaving me less able to
devote my limited mental faculties to the problem I am actually trying to
solve. (Not singling out C, just picking a well-known example.)

Languages that give you lots of redundant ways of doing things -- especially
if some ways are better than others but not obviously so -- aren't more
powerful, they're simply harder to understand. (This is not the same as
arguing for Newspeak — words that are similar but not the same are useful.)

What we want is clarity. Clarity isn't simple or easy but it is exactly what
we want in programming languages.

~~~
xorcist
Clarity is hard to define. You seem to argue that a simpler language such as C
gives rise to more convoluted code, while a more complex language such as Java
can deliver terser code.

That's not always the case. Take GUIs, for example. In pre-C++/Java times the
user interface declaration was often data. Now it is code. Whatever data is
still necessary to function, such as button texts, is now stored in object
attributes. That code can then grow functionality, and that's when you get
unresponsive UIs and hard-to-debug callback spaghetti. So things are done
differently in less expressive languages, but it's hard to argue they're less
clear in the general sense.

Another example is "application servers" and the rise of enterprise
programming, where object factories instantiate other objects in order to
keep dependencies unidirectional. In a simpler language none of that would
exist, and that particular problem would not be there to solve. But would it
mean more complex, and perhaps unique, solutions elsewhere in the software?
I'm not sure. Git won out over the more engineered and more complex
alternatives, for example.

~~~
Tloewald
I agree that clarity is hard to define. But so is power. Is C more or less
"powerful" than Java?

------
mbrock
Graham's essay is basically about what general purpose language to use when
you're a small technical startup of clever hackers who want to develop cool
stuff as fast as possible, using the language as a competitive advantage.

His conclusion is, bluntly, Lisp, since you can reprogram that language
arbitrarily and it won't get in your way.

It is also excellent for implementing DSLs, which can be as powerful as you
want. For example, Viaweb uses (still!) a Lisp-based DSL to provide the non-
Turing-complete customization language for their customers, and I suspect that
whole thing was a weekend hack.

(Edit: it probably is Turing complete though.)

The ease of creating such restricted DSLs comes from Lisp's power. At some
point you'll need a general purpose language, if only to implement your other
specific languages. If your team can be competitively productive with a
remarkably powerful language, Graham says you should use it.

~~~
seiji
The other side of massive isolated productivity is: nobody except them knew it
or could maintain it. Yahoo ended up doing the whole "rewrite the Lisp in C++"
thing because you can't hire "experts in Paul Graham's custom Lisp macros" at
scale.

But, it worked, didn't it? Grow, scale, cash out, dump code on non-founders,
become startup messiah.

~~~
braythwayt
Let’s see now. Paul Graham wrote Viaweb in Lisp and claims it was a
competitive advantage for a startup.

What is his track record with startups? Do we just count N=1? Or do we
include what Y Combinator has accomplished?

Now let’s look at what Yahoo has accomplished as a BigCo.

Under the circumstances, would you bet that Paul was right about Lisp, Yahoo
about C++, neither, or both?

I’d go with Paul being right about Lisp, and Yahoo, well... I’d call that N=1
at best. Clearly they started from a “We already have hordes of C++
programmers” point of view, and a “We apply hordes of programmers to every
problem” point of view.

It might be the only point of view for them. It could very well be that there
were hundreds of Lisp programmers available, but none might have wanted to
work for Yahoo under any circumstances. Your mileage may well differ, based on
the overlap in the Venn diagram of language, culture, and problem set.

~~~
S4M
One thing that people forgot to mention is that Paul Graham was a Lisp expert
before he started Viaweb - he had already written _On Lisp_ , which is still
considered one of the reference texts for learning Common Lisp - and Robert
Morris probably wasn't a bad Lisper either. They were most likely much better
programmers than their competitors or Yahoo's median employee, so while Lisp
may have been a competitive advantage for them, it wasn't the one that gave
them the strongest edge. We don't have data on how Lisp experts fare against
equally strong C++ (or other-language) programmers - although it may be that
Lisp magnifies a programmer's strengths.

~~~
braythwayt
If we take Paul’s “The Python Paradox” essay, and turn its conclusions up to
eleven, we might end up with this extreme perspective:

Perhaps great programmers are great in any language, and a company employing
great programmers will have great results in any language; it’s the
programmers that are the competitive advantage for a startup.

But although they’d be great in any language, they have taste, and prefer to
work in some languages regardless of whether that is a competitive advantage
or not. And therefore, they prefer jobs and startups with languages that match
their taste.

And taking that a step further, these mythical great programmers might prefer
working with other people they perceive to have taste, so they tend to clump
together, and they see the language choice as a kind of signal of what kind of
colleagues they would have working for a particular company.

If all this handwaving has merit, it could be that choosing to do your startup
in Lisp when Paul started Viaweb wasn’t 100% about having a competitive
advantage from Lisp itself; it was about having a competitive advantage from
having Paul and Robert, and anybody else they hired.

\---

With larger companies, they will never say it, but sometimes they don’t want
these “great programmers,” with their taste and their salary demands, and
their code that causes an army of middle-of-the-road workmanlike programmers
to stare thoughtfully for long periods of time working out how it does what it
does.

Even if they end up being more productive, perhaps they end up being less
predictable, because a smaller team of better programmers is more vulnerable
to poaching as the company goes from developing the next big thing to an
endless march of adding small enterprisey features. Even if you write in
Scala, Haskell, Lisp, or Clojure, when the work becomes maintaining Yahoo...
Maybe you don’t want people who relish a challenge, because they’ll quit.

So you want to anti-signal, by rewriting it all in a workmanlike language, so
you get workmanlike people. And if it takes a touch longer or costs a bit more
overall, it’s predictable, because the people are more fungible. They’re
easier to replace.

So it could very well be that when you’re launching a startup, you want to go
out on a limb and choose the language based on the kind of people you want to
hire for a startup.

And when you mature, you might rewrite it in another language based on the
kind of people you want to hire for a mature company.

And both choices might be right for their times. And neither might actually
have anything to do with the “power” or “expressiveness” or “abstraction” of
the language itself.

~~~
marktangotango
The vast majority of companies don't need these great/gifted programmers; they
need the workmanlike people who will suffer the constantly changing
requirements and priorities, and the lack of product management, or even of
product definition.

~~~
agumonkey
I still remember the day I realized that this five-year-degree engineering
title was a glorified cashier position.

------
charlieflowers
It's a great post, well worth reading.

However, I don't completely agree with the conclusion. I think that what we
need is to correctly identify the situations in which giving up power _in a
specific way_ will give more power elsewhere.

You can't always know this up front. And you can't just get all this benefit
for free by religiously following the principle "always go for the least power
possible".

Instead, we need to look at the status quo -- even parts of it we haven't
questioned for a long time -- and ask, "Where could I give up a little power
in exchange for much greater benefit elsewhere?"

~~~
angelbob
Indeed. HTML and CSS are great examples where, by giving up expressiveness for
the author, you gain more power for later reprocessors -- uncommon for
programming languages (because of the halting problem) but a common thing to
do for language-like things such as markup, Puppet manifests, etc.

~~~
nickpsecurity
Oh, you overstate it. You could always do that stuff in DSLs in languages
like LISP while maintaining the power. That would've been helpful to prevent
what HTML and CSS's lack of expressiveness resulted in: a whole mess of tech
for client, server, and transport to make up for what they couldn't do.
Languages like Curl or Opa were both more powerful and more understandable
than that collective mess.

Hence why we want powerful and versatile languages with optional reductions
via DSLs.

~~~
fabulist
Writing and using a DSL in LISP seems to be the definition of reducing your
power in a case you find it useful. I'm not sure what nuance you're drawing.

Changing requirements and the browser wars made the web a mess, not this
design decision. If the web is a set of linked documents available for public
consumption, do we really need to encrypt it? I would say yes (because
information about STD treatment may be public, but your interest isn't), but
it would be understandable to leave it in clear-text. But what about a
worldwide commerce platform? Oops, we'd better layer on TLS.

~~~
nickpsecurity
"If the web is a set of linked documents available for public consumption, do
we really need to encrypt it? I would say yes (because information about STD
treatment may be public, but your interest isn't), but it would be
understandable to leave it in clear-text. But what about a worldwide commerce
platform? Oops, we'd better layer on TLS."

That's a tangent I'm not really focusing on. I'm talking about the
presentation, efficient transport/updating, storage, and so on of content &
data accessed via the web - aka web sites and web applications. I'm saying
non-Turing-complete HTML doesn't cut it by far, which is supported by the fact
that hardly anyone uses it alone: server-side includes and JavaScript at a
minimum, going back to the '90s. Leading to...

"Writing and using a DSL in LISP seems to be the definition of reducing your
power in a case you find it useful. I'm not sure what nuance you're drawing."

It reduces power for that DSL specifically. This gives us the advantages of
reduced power. However, every other requirement can then leverage either DSLs
or the powerful language to solve it. Or, like Opa, one can just use a
powerful language with good attention to safety. In any case, you get a
language that solves your current problem, solves your [likely] next problem,
solves them well, is efficient, is compatible with HTML or whatever, and _is
consistent across the stack_.

Sounds better than the hodge-podge of crap that mainstream web apps are made
of. The mess certainly wasn't created by HTML's design, but HTML and its
partners aren't suited to dealing with the mess most effectively. Right tool
for the job, ya know.

------
colanderman
This, 100x. After learning to program in Coq (a sub-Turing-complete functional
language) and SQL, I've come to recognize that in two dozen years of coding I
have never needed to express something that wasn't expressible in some sub-
Turing-complete language.

Turing-completeness is overrated and is more often than not a cop-out excusing
poor language design. Sub-Turing-complete domain-specific languages simplify
program design, reduce bug surface area, and aid analyzability and
understanding.

~~~
icebraining
SQL is actually Turing-complete; using recursive CTEs, you can build a cyclic
tag system, which has been proven to be Turing-complete.

~~~
Retra
SQL in its original incarnations is not Turing complete. It's mainly the
bastardized, enterprised, heavily-extended versions built by people who have
to sell it that are Turing complete.

~~~
colanderman
To be fair, recursive CTEs are something of a blessing when you need to
compute transitive closures. It is just unfortunate that they were designed as
generative constructs (i.e. you can create and recur on rows out of thin air
rather than only on products of other finite relations).
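
The transitive-closure case can be seen in a few lines of SQLite (driven from
Python here); note this sketch's recursive step only consumes rows from the
finite `edges` relation, with no rows invented out of thin air:

```python
import sqlite3

# Transitive closure of a small edge relation via a recursive CTE.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE edges (src TEXT, dst TEXT);
    INSERT INTO edges VALUES ('a','b'), ('b','c'), ('c','d');
""")
rows = db.execute("""
    WITH RECURSIVE reach(node) AS (
        SELECT 'a'                       -- everything reachable from 'a'
        UNION
        SELECT e.dst FROM edges AS e JOIN reach AS r ON e.src = r.node
    )
    SELECT node FROM reach ORDER BY node
""").fetchall()
print([r[0] for r in rows])  # ['a', 'b', 'c', 'd']
```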

------
catnaroek
Bob Harper has also made this point before: “The power of type theory arises
from its strictures, not its affordances, in direct opposition to the ever-
popular language design principle “first-class x” for all imaginable values of
x.” [https://existentialtype.wordpress.com/2013/01/28/more-is-
not...](https://existentialtype.wordpress.com/2013/01/28/more-is-not-always-
better/)

~~~
colanderman
Completely agree. People think I'm weird for thinking that making functions
first-class is a dumb idea.

But think about it: how often do you need to perform some complicated
algorithm on a function, or store a dynamically generated function in a data
structure? Almost always, first-class functions are simply used as a means of
genericizing or parameterizing code. (e.g. as arguments to `map` or `fold`, or
currying.) (Languages like Haskell and Coq that are deeply rooted in the
lambda calculus are a notable exception to this; it's common to play
interesting games with functions in these languages.)
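
That claim is easy to make concrete. In a typical program, the "first-class"
uses look like the Python below, and none of them stores a dynamically
generated function in a data structure or computes on one:

```python
from functools import partial, reduce

# Typical parameterizing uses of functions: map, fold, partial
# application. The functions only ever appear as arguments.
def square(x):
    return x * x

print(list(map(square, [1, 2, 3])))                  # [1, 4, 9]
print(reduce(lambda acc, x: acc + x, [1, 2, 3], 0))  # 6

add5 = partial(lambda a, b: a + b, 5)  # currying-style use
print(add5(10))                                      # 15
```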

You can get the same capabilities by making functions _second-class_ objects,
with a suitably simple (non-Turing-complete) language with which to manipulate
them. That language can even be a subset of the language for first-class
objects: the programmer is none the wiser unless he/she tries to do something
"odd" with a function outside the bounds of its restricted language.
Generally, there is a clearer, more efficient way to express whatever it is
they are trying to do.

There is some precedent for this. In Haskell, typeclass parameters live
alongside normal parameters but aren't permitted in normal value expressions.
In OCaml, modules live a separate, second-class life but can be manipulated in
module expressions alongside normal code. In Coq, type expressions can be
passed as arguments, but they live at a different stratum and have certain
restrictions placed on them.

Unfortunately designing languages like this is _hard_. It's easy to just say
"well, functions are a kind of value in the interpreter I just wrote; let's
make them a kind of value in the language". This is the thinking behind highly
dynamic languages like JavaScript, Python, and Elixir: the language is modeled
after what is easy to do in the interpreter without further restriction. The
end result is a language that is difficult to optimize and analyze.

It's a lot more work to plan out "well, I ought to stratify modules, types,
functions, heterogeneous compounds, homogeneous compounds, and scalars,
because it will permit optimizations someday". But these are the languages
that move entire industries.

~~~
catnaroek
In general, language design requires a balance between first-class vs. second-
class objects. First-class objects are important because they let you write
the program you want. Second-class objects are also important because they let
you write programs that can be reasoned about.

For instance, Haskell's type language (without compiler-specific extensions)
has no beta reduction, because only type constructors can appear not fully
applied in type expressions anyway. This reduces its “raw expressive power”
(w.r.t. System F-ω, whose type level is the simply typed lambda calculus), but
it makes inference decidable, which helps machines help humans. It also helps
humans in a more direct fashion: Haskellers often infer a lot about their
programs just from their type signatures, that is, _without having to compute
at all_. It's no wonder that Haskellers love reasoning with types - it's
literally less work than reasoning with values.

So I agree with you that “first-class everything” isn't a good approach to
take. A language with first-class everything is a semantic minefield: Nothing
is guaranteed, sometimes not even the runtime system's stability. (I'm looking
at you, Lisp!)

\---

But, on the specific topic of first-class functions, you'll pry them from my
cold dead hands. Solving large problems by composing solutions to smaller,
more tractable problems, is a style that's greatly facilitated by first-class
functions, and I'm not ready to give it up.

~~~
colanderman
You misunderstand (as most do): I never said function composition was a bad
idea; I do it all the time.

My only claim is that you don't need a Turing-complete language to compose
functions. As a thought-exercise, consider replacing all uses of first-class
function composition in any given OCaml program with top-level module
compositions. You lose nothing: no one (who designs maintainable software)
uses the full expressive power of a Turing-complete language to manipulate
first-class functions.

(Of course there are more user-friendly ways to implement second-class
functions, which would make using them less of a burden, but no extant
language has such a system.)

~~~
catnaroek
When did I say “function composition”? What I said is “composing solutions to
smaller, more tractable problems”. If you divide a problem `FooBarQux` into a
sequence of subproblems, `Foo`, `Bar` and `Qux`, sure, these solutions must be
combined using the function composition operator. But there are other ways to
decompose problems. For instance, splitting a problem into a DAG of
subproblems with overlapping dependencies gives rise to dynamic programming.
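
As a minimal illustration of that DAG-of-subproblems shape (Fibonacci here,
chosen only for brevity):

```python
from functools import lru_cache

# Each fib(n) depends on two earlier subproblems; memoisation collapses
# the overlapping ones -- dynamic programming in miniature.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```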

And, no, an ML-like module system wouldn't make up for the lack of first-class
functions. Standard ML doesn't allow any module recursion whatsoever, and,
while OCaml allows it, it's quite unwieldy to use in practice. It would be
literally impossible to define a function like `fix`, which requires the
ability to call its function argument (let's call it `f`), supplying an
expression containing `fix` itself as argument.
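
In a language where functions _are_ first-class values, that definition is a
one-liner; a Python rendering of the `fix` just described:

```python
# fix calls its function argument f, supplying an expression containing
# fix itself -- possible only because functions are ordinary values here.
def fix(f):
    return lambda *args: f(fix(f))(*args)

# Factorial written with no explicit self-reference:
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```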

~~~
colanderman
Since when do you need first-class functions for dynamic programming?

How often have you had to define `fix`? It's not even possible in certain
strongly typed functional languages.

I stand by my assertion. First-class function manipulation is not a thing 99%
of programs need to do.

------
Shog9
Seems like an argument in favor of domain-specific languages: identify the
domain, and create something just powerful enough to suffice.

Indeed, most of the arguments made are the same that were used as
justifications for DSLs a while back:

> The problem with this kind of freedom is that every bit of power you insist
> on having when writing in the language corresponds to power you must give up
> at other points of the process — when ‘consuming’ what you have written.

A good DSL lets you create, for example, business rules that are easy to
maintain (for those familiar with the business) but don't prevent me from
swapping out the implementation that drives them for something better on down
the line.
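
One hypothetical sketch of that separation: rules written as restricted data
(no general computation available to rule authors), interpreted by an engine
that can be swapped out without touching the rules. All names here are
illustrative:

```python
# Business rules as plain data; the rule "language" can only compare a
# named field against a value -- deliberately less powerful than Python.
RULES = [
    {"field": "total", "op": ">=", "value": 100, "action": "free_shipping"},
    {"field": "country", "op": "==", "value": "US", "action": "sales_tax"},
]

OPS = {">=": lambda a, b: a >= b, "==": lambda a, b: a == b}

def apply_rules(order: dict) -> list:
    """One possible engine; a smarter one can replace it, rules unchanged."""
    return [r["action"] for r in RULES
            if OPS[r["op"]](order[r["field"]], r["value"])]

print(apply_rules({"total": 120, "country": "US"}))  # ['free_shipping', 'sales_tax']
```

Because the rules are data rather than code, they can be audited, diffed, and
re-interpreted by a completely different implementation down the line.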

~~~
cpeterso
What would a DSL for creating DSLs look like? And could it be self-hosted?
It's turtles all the way down. :)

~~~
sparkie
Kernel is ideal for creating small embedded languages, where you can
selectively expose parts of the parent language to perform general-purpose
computation without giving any access to sensitive code.

Kernel is like Scheme, except environments are first-class objects which you
can mutate and pass around. A typical use for such an environment is as the
second argument of _$remote-eval_ , to limit the bindings available to the
evaluator. If you treat an eDSL as a set of symbols representing its
vocabulary and grammar, you can bind them in a new environment with
_$bindings->environment_ ; passing a piece of code X to eval with this
resulting environment as the second argument ensures X can only access those
bindings (and built-in symbols which result from parsing Kernel, such as
numbers), and nothing else.

There's a function _make-kernel-standard-environment_ for easy self-hosting.

Trivial examples:

    
    
        ($define! x 1)
        ($remote-eval (+ x 2) (make-kernel-standard-environment))
        > error: unbound symbol: x
    
        ($define! y 2)
        ($remote-eval (+ x y) (make-environment ($bindings->environment (x 1))
                                                (make-kernel-standard-environment)))
        > error: unbound symbol: y
    
        ($remote-eval (+ x y) ($bindings->environment (x 1) (y 2) (+ +)))
        > 3
    
        ($define! x 1)
        ($define! y 2)
        ($remote-eval (+ x y) (get-current-environment))
        > 3
    

_$remote-eval_ is a helper in the standard library which evaluates the second
argument in the dynamic environment to get the target environment, then
evaluates the first argument in that. The purpose of this is to provide a
blank static environment for _o_ to be evaluated in, so no bindings from the
current static environment where _$remote-eval_ is called are made accessible
to the first argument.

    
    
       ($define! $remote-eval ($vau (o e) d (eval o (eval e d))))
    

If you're familiar with Scheme, you may notice the absence of quote.

And contrary to the opinions in the article, Kernel is the most expressively
powerful language I know, and I'd recommend everyone learn it. You have a
powerful but small and simple core language, from which you can derive your
DSLs with as little or as much power as you want them to have.
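
For readers without a Kernel handy, here is a very rough Python analogy of
the environment restriction (not Kernel semantics, and Python's `eval` is not
a real security boundary):

```python
# Evaluate an expression against an explicit, minimal environment, so it
# sees only the bindings deliberately exposed -- the same idea as giving
# $remote-eval a restricted environment.
env = {"__builtins__": {}, "x": 1, "y": 2}
print(eval("x + y", env))  # 3

try:
    eval("x + z", env)  # z was never exposed
except NameError:
    print("unbound symbol: z")
```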

~~~
cpeterso
Thanks! I will check out Kernel.

The ability to restrict the environment is key. Creating a DSL in standard
Lisp would create a _more_ powerful environment: the DSL would include all
Lisp features plus the domain features.

------
tel
This is a big part, if not _the_ most important part, of why I like Haskell.
"Purity" is a giant stroke in the direction of using the least powerful
language possible at all times (and the advanced types let you both declare
the level of power you want and compose sub-languages of different powers
together). You also often go deeper due to the large prevalence of deep or
shallow embedded DSLs in the typed-functional languages.

In short, this is, to me, one of the most important principles of programming.

~~~
eatonphil
So why Haskell and not, for instance, SML? I guess SML is not necessarily
pure, but it is certainly a much simpler language than Haskell.

~~~
Peaker
Without purity, you have the power to express a lot more, burdening the
reader.

Purity lets you easily and arbitrarily constrain a function or program --
easing the burden for the reader who is free to assume so much more about what
they read.

------
norswap
Sometimes, yes. But sometimes, you're just going to chafe at the restrictions.
And then you start adding to your restricted language and you've created a
monster.

Case in point: every build system targeting the JVM. Most build systems in
general.

------
vezzy-fnord
I wouldn't read that much into what TimBL is saying about the "principle of
least power". It seems more like a _post facto_ realization, and particularly
in light of expectations for a transition into a Semantic Web. The 1990 WWW
proposal [1] reveals the web to have been more-or-less the bare minimum glue
to unify CERN's document troves, and to do so in a highly expedient timeframe.
A far cry from TimBL's earlier ENQUIRE, and with no in-depth design principles
per se.

[1] [http://www.w3.org/Proposal.html](http://www.w3.org/Proposal.html)

------
hannob
It's kinda interesting that he writes a long blogpost with many ideas that are
quite similar to the Langsec ideas; however, in the comments he says that he
wasn't aware of Langsec.

I am not surprised. Langsec could really be a gamechanger in many aspects of
IT security, yet many people in the Infosec community are not aware of these
concepts.

If you're curious, this older talk by Meredith Patterson is still very much
worth watching:
[https://www.youtube.com/watch?v=3kEfedtQVOY](https://www.youtube.com/watch?v=3kEfedtQVOY)

------
mikekchar
Pick the most expressive language that you can get. Constrain it with a coding
standard. As people chafe against the coding standard, alter it as needed.

There is no benefit to restricting yourself arbitrarily based on what someone
who isn't even involved in your project thinks. I think that most people have
difficulty enforcing a coding standard. This is because they don't have a
cohesive team. This is a people problem, not a technical problem. If you try
to fix the people problem with a technical solution, the problem will just pop
up some place else.

~~~
nhaehnle
For most practical purposes I agree, I would just like to add that restricting
yourself on top of a non-restricted language is something where technical help
is useful: that's basically what lints are about. This is not to say that the
people aspect is irrelevant; after all, you need to get the people to agree to
the lints.

------
gue5t
See also: [http://langsec.org/](http://langsec.org/)

------
nickpsecurity
Tloewald is right that clarity is the real issue. I've seen powerful, BASIC-
like 4GLs whose operation was clearer than simpler languages doing equivalent
things, because they were designed for that. The ideal language gives you just
enough features and power to let you productively express what you need
without being too complex to understand.

DSLs are in a special category here. You start with a powerful host language
such as LISP/Racket or REBOL. The language is usually simple enough to
understand, plus it has something like macros for extension. Each domain that
we run into a lot, and/or that has plenty of boilerplate, can have a DSL that
lets us concisely express those operations, both for productivity and clarity.
The underlying language is powerful and still comprehensible for times when
the DSLs aren't enough.

So, the author is a bit off. What we need isn't less powerful languages. It's
powerful languages maintaining clarity and avoiding the everything but the
kitchen sink philosophy.

Note: HTML/CSS wasn't the best choice to support the author's point. It was
good for static pages. Yet people needed more power. So they added a crappy
client-side language, all kinds of server frameworks, then all kinds of client
frameworks, and so on and on. Languages like Curl and Opa handle the situation
with a unified, powerful approach that was clearer than the web ecosystem in
general, with fewer security and maintenance issues besides. So the author's
example works against them in practice.

------
platz
It’s well known that there is a trade-off in language and systems design
between expressiveness and analyzability. That is, the more expressive a
language or system is, the less we can reason about it, and vice versa. The
more capable the system, the less comprehensible it is.

[http://blog.higher-order.com/blog/2014/12/21/maximally-powerful/](http://blog.higher-order.com/blog/2014/12/21/maximally-powerful/)

------
malisper
I would guess the canonical example of this is goto. By the definition the
author uses, goto is extremely powerful. At the same time, code written with
goto is generally a complete mess. Instead, languages provide for loops and
while loops, which are more restrictive than goto but much easier to reason
about.

(I'm working on a blog post about this.)
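Python has no goto, but the contrast can be sketched by simulating jump-style control flow with an explicit index (an illustrative analogue, not real goto):

```python
# Jump-style search: control flow lives in manual index updates, so the
# reader must trace every path to convince themselves it terminates.
def find_jump_style(items, target):
    i = 0
    while True:
        if i >= len(items):
            return -1
        if items[i] == target:
            return i
        i += 1

# Structured search: the for loop is strictly less powerful, and
# therefore easier to reason about -- it visibly visits each item once.
def find_structured(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```

Both compute the same thing; only the second makes the bounded iteration obvious at a glance.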

------
chipsy
The biggest change to my coding style in the past few years has been to opt
out of the most expressive solution if it will fall apart in a cut-and-paste
situation.

Many new features are variations on old features. The incremental cost of
implementing one so that I can cut and paste the next half dozen is small. The
cost of refactoring to a generalization after that is also small. But the cost
of poorly generalizing early and having to reimplement feature one and
everything it touches in order to make feature two possible is quite painful.

This becomes more obvious when you have different codebases with similar
features that might want to share from time to time. A formally generalized
solution becomes a big dependency, as it relies on a heavily specified
problem. A slightly too primitive solution flows between codebases freely -
plug in new primitives and it runs.

------
snarfy
This reminds me of the philosophy behind Forth, where you build up 'words'
enough to express your program, but by itself it doesn't have much built in.

~~~
Frondo
That's something that always kinda bugged me about Tcl: the language is pretty
bare-bones in itself, and you end up writing some pretty basic stuff in any
given project.

Of course you end up building your own little standard library over time, but
then you've got to import it into any given project... just an irritating
extra bit of friction. And reading someone else's code means learning their
standard library... (is it a standard library when everyone has their own?)

~~~
mannykannot
Is this simply a consequence of Tcl not shipping with the sort of standard
libraries that are expected nowadays, as opposed to being a problem that
follows from having a terse language?

~~~
Frondo
I like that question and that distinction, and I think the answer I'd give is
"yes".

Tcl's flexible, to the degree of being able to define new language constructs
or redefine built-ins (e.g. you can redefine the basic if-then if you want),
which is great in some ways.

But it always seemed to me like the language culture is so deep in the
mentality of "if you need, implement it yourself," that the flexibility
becomes a crutch for the language developers.

What I mean is.. well, the basic data structure is the list, and you get
built-ins for adding to a list (lappend), reversing a list (lreverse), but
removing an element from a list (what would be 'lremove')? You've got to code
it yourself.

Hey, the man page for 'lreplace' even gives an example!
[http://www.tcl.tk/man/tcl/TclCmd/lreplace.htm](http://www.tcl.tk/man/tcl/TclCmd/lreplace.htm)

 _Why isn't that built in?_ The common answers are: it's so simple you can
code it yourself, or you might want an lremove with different behavior than
mine. To the second answer: realistically, if the language had a built-in
lremove, most people would just express their programs to use it, and those
who needed a different one could still write it. To the first: yes, it's
simple, but it's just a little bit of sand in the underwear--unpleasant
friction you'd like to ignore but that keeps popping up.
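For comparison, the missing operation really is tiny -- here's what a hypothetical lremove would do, sketched in Python rather than Tcl:

```python
# Illustrative analogue of Tcl's missing lremove: return a copy of the
# list with the first occurrence of `value` removed.
def lremove(lst, value):
    i = lst.index(value)          # raises ValueError if value is absent
    return lst[:i] + lst[i + 1:]
```

Which is exactly the point: "simple enough to write yourself" is precisely why everyone ends up writing it, over and over.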

Lack of built-in OO was the same way until relatively recently in the language
history. Lots of people made home-brew ones, and there were several common OO
systems, but what's a newbie to do? Research all the different OO systems to
make a decision first? More sand in the underwear. Run into enough of that
stuff and it just drives you crazy and makes you want to stop using the
language.

------
luckydude
Optimize for writers or optimize for readers. The more powerful languages
optimize for writers. The less powerful optimize for readers (aka the code
reviewers, the people who come after you to fix your bugs, etc).

I'm a huge fan of less is more. I review a lot of code.

~~~
nickpsecurity
That's just what's common. General-purpose 4GLs are both very powerful and
easier to read than most languages. There are similar examples in the LISP and
REBOL communities. Power != lack of comprehension.

------
tomp
I don't really see an issue here. Regular expressions, HTML, configuration
files... these aren't _programming_ languages, they're _data_ languages.

They don't describe an algorithm or a behavior, they just describe data. For
that purpose, it's preferable to have a format that's as simple to read,
process and analyze as possible. On the other hand, actual _programming_
languages are meant to describe an algorithm, and "power" usually means either
high-level expressiveness (Scala) or low-level adaptability (C), both of which
are very important in some contexts.

~~~
catnaroek
The language of regular expressions _is_ a programming language. It's just not
Turing-complete.
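For instance, this small Python regex is a complete (non-Turing-complete) program that decides membership in a regular language:

```python
import re

# A regex is a program for a finite automaton. This one decides the
# language of non-empty binary strings ending in 0, i.e. even binary
# numerals.
even_binary = re.compile(r"^[01]*0$")
```

Running `even_binary.match(s)` executes that program on input `s`; it always halts, which is exactly the kind of guarantee you get from a less powerful language.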

~~~
mreiland
it's a programming language in the same way that a flashing light can be a
programming language. If someone on the other side is using it to make
decisions then you're technically programming something so it's a
"programming" language.

I mean sure, technically that's true, but it's also not interesting.

The interesting thing about regexp is that it's really damned close to the
mathematical notations used for describing regular grammars, and therefore
it's a good, tight way to specify pattern matches on regular grammars.

No one ever really answers with "I've been programming the regular grammar
parser". Sure, maybe technically that's what they've been doing, but the
context isn't really right.

~~~
catnaroek
> it's a programming language in the same way that a flashing light can be a
> programming language. If someone on the other side is using it to make
> decisions then you're technically programming something so it's a
> "programming" language.

You're totally missing my point. A regular expression describes a
_computation_ - the series of state transitions a nondeterministic finite
state automaton undergoes in order to decide whether an input string belongs
to a language. (The fact that this computation is typically optimized by regex
compilers is another matter.)

A programming language doesn't need to be general-purpose to be a programming
language.

> The interesting thing about regexp is that it's really damned close to the
> mathematical notations used for describing regular grammars, and therefore
> it's a good, tight way to specify pattern matches on regular grammars.

Are you saying that what matters most about a language is its surface syntax?
Seriously? Wow.

~~~
mreiland
> A regular expression describes a computation

So does the flashing light, it also has the cool advantage of being
parameterized by time.

> Are you saying that what matters most about a language is its surface
> syntax? Seriously? Wow.

If that's what you took from that then there's a knowledge gap and it will do
us no good to continue this conversation.

------
WalterBright
> •compilers — with big implications for performance

D is one of the most powerful languages available, and it compiles faster than
just about any other.

In any case, in the hands of an expert, using a powerful language results in
simpler user code. This is because the program can be crafted to look like the
problem being solved. In fact, it's why we have programming languages in the
first place rather than writing assembler in hex bytes.

------
AlexeyMK
This topic seems less about power and more about invariants. I could imagine a
powerful, fully expressive language that does a better job of explicitly
stating its invariants.

In the `urlpatterns` example,

    
    
      urlpatterns = [
          url([m('kitten/')], views.list_kittens, name='kittens_list_kittens'),
          url([m('kitten/'), c(int)], views.show_kitten, name="kittens_show_kitten"),
      ]
    

I could see `m` and `c` being used as a more-specific-than-regex DSL
specifically for routes. Reversing routes would be even easier because of the
inherent limitations of the DSL - you have to pass a string to `m`, and you
have to pass a type (and a variable name, ideally, so it would be
`m('kitten/'), c("id", int)` or just `m('kitten/'), id()`).

The lesson I gain is "figure out the implicit invariants from your design
decisions, make them explicit and try to keep as many of them as possible,
only letting go once you have no other choice."
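A rough sketch of how such an `m`/`c` matcher could work (only the `m` and `c` names come from the example above; the matcher itself is invented here):

```python
# Hypothetical route-DSL pieces: m matches a literal path segment,
# c captures a typed path parameter.
def m(literal):
    return ('m', literal)

def c(name, typ):
    return ('c', name, typ)

def match(pattern, path):
    """Match e.g. 'kitten/7/' against [m('kitten/'), c('id', int)]."""
    params = {}
    rest = path
    for part in pattern:
        if part[0] == 'm':
            if not rest.startswith(part[1]):
                return None
            rest = rest[len(part[1]):]
        else:
            _, name, typ = part
            seg, _, rest = rest.partition('/')
            try:
                params[name] = typ(seg)   # the invariant is explicit here
            except ValueError:
                return None
    return params if rest == '' else None
```

Because every pattern is just a list of `m`/`c` values, reversing a route is a simple walk over the same data - no regex introspection needed.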

------
DrNuke
From a productivity & maintenance perspective, very many coding jobs/tasks can
be executed more efficiently using purportedly limited tools. Are you HNers
really surprised to hear this? The drive is time-to-market reduction, improved
communication and, overall, domain-specific standardisation.

~~~
p4wnc6
One objection I have to this is that no matter how much someone believes they
can guarantee a certain job or task will be confined within a subset of
general computing, inevitably the software gets pressured by business use
cases to 'break free' and do more general purpose computing.

An example might be saying something like "Things you do in a spreadsheet will
never benefit from object orientation to such a degree as to justify
implementing that ability for the sake of the business's bottom line."

As a former analyst in a financial firm, I encountered this idea all the time.
You can use Excel's "slope" to do regressions. You can even do matrix
arithmetic if you're willing to deal with the syntax. You can program basic
functions.

In a strictly time-to-market sense (the time from some ad hoc financial
analysis in a spreadsheet to 'market' -- either producing a report for your
boss, a strategy implementation, or some other deliverable based on what your
spreadsheet calculations showed you), throwing stuff in Excel, copy/pasting
code from templates, etc., can't be beaten.

But this is a very narrow view. For example, data provenance is extremely
difficult if your work is a directory of spreadsheet files. Even if they are
version controlled, there is no automatic way to understand how the
spreadsheet programmer intends to copy/paste some functionality out of one
template and into another at the moment a new analysis is performed. The
dependencies, if documented at all, are documented only in natural language
descriptions, rather than overt 'import' or 'include' style statements, or
static analysis of what gets used where.

Unit testing is often ignored in a spreadsheet environment and it's a huge
pain to do in the rare cases when someone actually tries to do it. The focus
on superficial aspects of 'time-to-market' also puts pressure to avoid version
control, even if the spreadsheet paradigm as a whole doesn't necessarily have
to reduce use of version control.

But far beyond any of these items, there are (for example) tools like Pandas,
or even xlwings, in Python, or HFrame/HMatrix in Haskell, and I'm sure many
other things in many other languages, which present you with what is
effectively a spreadsheet as an abstract data type.

You can programmatically perform the spreadsheet interactions that would
otherwise have been manually reproduced, and you can include more advanced
software techniques, like logging or unit testing, since you are simply
working in a full-featured programming environment, rather than an environment
where the feature set is purposely limited.
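As a minimal sketch of the idea (plain Python standing in for Pandas here; the column names and figures are made up):

```python
# A "spreadsheet" as plain data plus functions, instead of a grid of
# opaque cell formulas.
rows = [
    {'price': 10.0, 'qty': 1},
    {'price': 20.0, 'qty': 2},
    {'price': 30.0, 'qty': 3},
]

def with_totals(rows):
    # The "formula column", expressed as code: total = price * qty.
    return [dict(r, total=r['price'] * r['qty']) for r in rows]

def portfolio_value(rows):
    return sum(r['total'] for r in with_totals(rows))
```

Unlike a cell formula, `portfolio_value` can be imported, version-controlled, logged around, and unit-tested like any other function.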

In this case, I've never heard any compelling argument for why a direct
spreadsheet is better than a spreadsheet-like abstraction in a full
programming language.

The arguments I have heard to justify continued direct use of a spreadsheet
are:

(a) the people using it don't know how to write code and the business doesn't
believe it's worthwhile to hire/train programming talent for such a role.

(b) someone who makes the decisions is using a hyperbolic discounting function
when they assess the future returns of compounding automation through proper
software design vs. a just-get-it-done-today-by-copy/pasting-in-a-spreadsheet-
if-you-have-to attitude.

(c) "Programming" is a low-status activity, and so
business/marketing/financial analysts must perform some activity that is
plausibly different than "programming" so that "programming" doesn't
experience a status rise within the organization, and potentially affect
people's raises/bonuses/promotions or project allocation.

Let's take a step back from this example of spreadsheets. What is the general
phenomenon going on? I would argue that it is about automation and
productivity. But then, general purpose computing is also about automation and
productivity. There can be counterexamples for sure, but most often newer
layers of abstraction are introduced because they create genuine value in
terms of making it easier to automate something or making a more generalized
lever that lifts whole categories of heavy things instead of just lifting this
or that specific heavy thing.

No matter what limited-scope tool you start out with, over time lots of the
tasks performed within the scope of that tool will be repetitive and/or will
agglomerate into clumps of similar work that can be factored out into an
abstraction.

When the tool also has general purpose computing abilities, you can take
advantage of these opportunities to automate or factor out clumps of work and
write generic, reusable solutions. The process of doing this almost always has
huge positive effects on productivity, especially as it accrues and compounds
over time. It's overwhelmingly worth it to pay generally modest short term
costs to work in this manner, rather than trying to hack in ways of coping
with bottlenecks in a tool that can't support general computing.

Part of the problem, though, is that many middle-managers and up within any
given organization do not understand how this works. The syntax of their
brains only manipulates the "programming language" of the business. Deliver X
to Y; ship Z by Friday; give me a forecast of W. They don't unpackage "Deliver
X to Y" down into its atomistic components and ask to what extent general
programming can help, and whether or not it will have returns on "Deliver X1
to Y1" next week and "Deliver X2 to Y2" the week after that.

When programmers throw an exception within the business's programming
language -- Exception("We can't ship Z by Friday if we also write sufficient
unit tests this week.") -- what happens? Let's just use unit tests as the
example of a "general computing" behavior that might sometimes be axed in
favor of justifying a narrow-scope tool environment for business reasons.

In some organizations, this is considered very carefully, and a lot of
attention is paid to the engineering assessment. The managers may ultimately
come back and say, "You know, we really looked it over and did some careful
thinking, and we still must ship Z by Friday, so skip the tests." In that
case, probably the managers and engineers alike both agree that you don't want
a limited programming environment for the task. Unit tests mattered to the
engineers, and they also mattered to the managers even though they had valid
reasons to skip it this time. But all parties probably agree that, in
principle, unit testing would have been better and should be at least a
possibility.

In other organizations (a lot), the Exception is simply caught and never
handled. Managers don't like hearing about something that sounds like a whiny
and low-status issue ("programmers want unit tests"), and so they mandate that
such things be skipped, and support using tooling environments where such
things are not even a possibility. And at the end of the day, they justify
this as a necessary business reality, when it's pretty questionable whether
they truly ran any numbers to decide if the longer term gains from unit
testing more than offset any short term slowdown to write the tests.

In practice, it's always some gray-area mixture of all of these things, and
sometimes there definitely is a legitimate reason to forego general computing
options for business reasons.

But I am very skeptical of the more far-reaching claim that it helps bottom-
line business productivity to reduce _the possibility of_ automation and
software productivity practices enabled by general computing capabilities.

~~~
DrNuke
Great comment, thanks. We both know that corp managers prefer to strain
systems, reap the rewards, and then disband: no legacy, no reuse, new
projects. I have one question, though: would you suggest a 2016 startup focus
on best practices or on product-market fit? That's the value we're considering
here, imho.

~~~
p4wnc6
My feeling is that it is indeed worth it to be pedantic about these kinds of
best practices in a start-up. In fact, when I left finance to join a start-up,
literally the whole reason for doing so was that the start-up was supposedly a
place where engineering best practices are valued more than incremental short-
term business progress, in contrast to the stodgy bureaucratic finance firm
where political in-fighting prevents best practices from materializing.

I don't personally see any reason to believe working for a start-up could
possibly be a good idea otherwise. (And since most start-ups only pay lip
service to supporting best practices, while really supporting only short term
incremental business gains like any other kind of organization, this is
generally a good reason to default to believing that it's a bad deal to work
at any given start-up.)

For me, this also has a lot to do with vision and consistency. If you create a
start-up and your only goal is to sell it (or, more realistically, you are
expected to do this because it is the goal of the VC firm you got into bed
with), then you're not only not going to focus on best practices, but you're
also going to jettison any part of your mission or vision any time it's not
convenient to some incremental short-term growth opportunity. By the time you
reach the point of selling the business, it may be completely unrecognizable from
the vision you started with.

If your only goal is to make money, this might not be a problem. But probably
not very many people who agreed to work with you will share the feeling -- and
they certainly won't stand to make meaningful amounts of money -- and so they
are unlikely to be happy workers through most of the process. That means you
haven't been getting their best effort all along, and the product is probably
shoddy.

Generally (but not always), good engineers don't want to work for something
that is explicitly a hype machine where _maybe later_ quality will be hacked
back into it, but probably not. So start-ups that fundamentally take an
approach of short term product-market fit are pretty much by definition
staffed by bad and/or unhappy engineers.

Contrast this with something like 37Signals/Basecamp. Of course they had to
make engineering sacrifices along the way and didn't always do everything in
some pedantically best-practices-adherent way. But their ethos/vision was to
be much more about best practices than about unreasonable growth or winning an
acquisition lottery. Yes, they were driven by succeeding with product/market
fit, but they didn't ever become a slave to it or turn into zombies about it.

Given that there's such a poor success rate among start-ups (a large base-rate
bias towards failure mode), it's hard to draw much from outliers of any kind,
whether they are like 37Signals/Basecamp, or they are like some always-
pivoting shop that always favored short term business gains but still
succeeded despite it. But my perspective is that it's way, way better to be on
the side that pedantically clings to an engineering vision and has a general
ethos of turning down short-term business gains as a means of investing in
longer-term best-practices-focused infrastructure.

------
gscott
I have been looking into quicker languages for scripting and scripting desktop
applications. I found [http://www.rebol.com/](http://www.rebol.com/) which is
fantastic and powerful without being overcomplicated.

------
agentultra
_Burdening the reader_. That's a good take-away for anyone reading this
article. Every abstraction you introduce is a burden on your reader, user, and
customer. They will be more difficult to maintain without passing on your
specialized knowledge. They will be slower than they otherwise would be. They
come with a cost.

These are machines we're programming. They have instruction sets and memory
architectures. We need more languages like Jai, IMHO, that give more power to
the programmer to exploit the target platform underneath and provide
interesting patterns for controlling resources and transformations.

------
jonpress
That's why I'm not a huge fan of Coffeescript or even convenience utility
libraries like underscore/lodash. They save you a couple of lines or a few
opening/closing braces, but they make code more difficult to understand (to
the average programmer) and (in the case of utility libraries) increase your
reliance on reference documentation.

I generally prefer simple constructs that require multiple lines over complex
one-liners. Although it depends on who is on the team. I prefer to write code
that any polyglot programmer with a modest level of mastery in any specific
language can understand.

~~~
jonahx
> They save you a couple of lines or a few opening/closing braces but they
> make code more difficult to understand

There is a massive confounding factor: popularity, in the sense of what is
familiar to the largest number of people. And this, in turn, is largely an
accident of history, politics, and marketing.

Which is to say, sure, more people will understand a "for" loop written in the
familiar C/Java style than a "map" one liner. But that does _not_ mean the
"for" loop is easier to understand, in the sense of expressing a concept
naturally and declaratively.

~~~
progmal1
On the other hand having a filter construct is way nicer than having to manage
the two arrays on your own.

Or worse yet, when someone is trying to remove items from a list that is self
resizing, it is very easy to make bugs (by not adjusting the counter in the
for loop when one is removed).

------
justaaron
I take exception to the notion that one cannot "guarantee" the presence of a
specific key when one is using a key:value store without an explicit schema.
(Sounds like someone has been believing anti-Mongo FUD and living in SQL-land
too long...) If you built a structure and put it IN the database, you can use
that same structure to get it OUT. The presence of a key can be checked, etc.
All one is doing is moving the schema into the codebase (where I argue it
belongs... the opposite being an all-powerful DB with stored procedures as a
kind of remote-remote-procedure-call).

------
raspasov
I think the author is mixing two separate issues - data (HTML, JSON, database
records, etc) and the tools to manipulate it (languages).

IMHO, we need simplicity and clarity in our data structures, APIs, and JSON,
and powerful (but still simple) tools to modify that data. If you're using
Clojure, this library has been great for me recently:
[https://github.com/nathanmarz/specter](https://github.com/nathanmarz/specter)

------
mcnamaratw
So, great, this is the mainstream view. It won. We got Pascal, that philosophy
mutated into Java and here we are. It takes lots of code to do stuff, and
that's universally scored as a maintainability win. All right. Who disagrees?
Little pockets of Lisp hackers up in the hills?

Unless the argument is that we need another big wave of even weaker tools, it
feels unnecessary to give this sermon in 2015.

(I invite people who disagree to express something concrete in reply.)

~~~
alricb
One example would be a language to deal with TLV data, like ASN.1 DER (e.g.
TLS certificates). Currently most TLS libraries use C, which is insane, but in
principle, you don't even need recursion to parse DER data; so even vanilla
Haskell is overpowered for the purpose.
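A flat TLV scan really is loop-and-index territory -- here is a minimal Python sketch (short-form one-byte lengths only; real DER also has long-form lengths and nested constructed types, so treat this purely as an illustration):

```python
# Parse a flat sequence of TLV records: one byte of tag, one byte of
# length (short form), then `length` bytes of value. No recursion needed.
def parse_tlv(data):
    items = []
    i = 0
    while i + 2 <= len(data):
        tag, length = data[i], data[i + 1]
        value = bytes(data[i + 2:i + 2 + length])
        if len(value) != length:
            raise ValueError("truncated TLV record")
        items.append((tag, value))
        i += 2 + length
    if i != len(data):
        raise ValueError("trailing bytes")
    return items
```

The point is how little language this needs: a loop, bounded indexing, and explicit error cases -- well within reach of a deliberately weak parsing language.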

~~~
mcnamaratw
Yeah, for sure, in a lot of applications you really don't want people chanting
"code is data, data is code" and building impossible shit that nobody can
figure out.

A finite state machine can easily be made basically un-debuggable by human
beings.

It's just ... maybe it's the crowd I hang out with, but I feel like these
ideas have already totally won.

------
johncolanduoni
I think the problem here isn't power or expressiveness per se, but a large
number of fundamental concepts. For example, vanilla Haskell has a lot of
expressiveness built around a few core concepts (first-class functions,
parametric polymorphism, typeclasses). However, Haskell + 101 extensions or
Scala derive a lot of their expressiveness from a plurality of fundamental
language features. _That_ is what creates the real problem.

~~~
Peaker
Note that Haskell extensions all compile (straightforwardly) down to a
relatively bare (typed) core language.

That is, the set of fundamental concepts exposed is no greater than the bare
language, except the higher abstraction allows controlling which subset of the
power is exposed so that type inference can be kept, and various syntactic
sugar can be used.

Haskell (with virtually all extensions used) doesn't have that many
fundamental concepts, and only a small minority of extensions are actually
fundamental at all.

------
Qwertious
Well, yeah. Wasn't this really obvious from "goto considered harmful" and the
concept of functional programming (a 'pure' function being a function that's
constrained from having side-effects, i.e. explicitly less powerful)?

Basically, YAGNI.

~~~
sparkie
But how do you implement the small, concise languages without the gotos and
side-effects to begin with? You're gonna need a language with sufficient
expressiveness and the capability to perform side-effects to do so.
Implementing a new language entirely without side-effects is infeasible.
Replacing GOTO with some structured variant is trivial, though.

------
merb
Funny that he links the video of Paul Phillips. He never said 'the language is
bad because it's so complex'; he said the internals are bad because they're so
complex. That's a totally different thing.

------
mud_dauber
This Forth alumnus just sits back and smiles...

------
vatotemking
Another advantage would be a lower barrier to entry, and therefore bigger
ecosystems due to a larger dev pool.

------
DHJSH
That's why I love Erlang, Scheme and other FP languages. They're so _simple_.

------
dbpokorny
1.

> I also need a word about definitions. What do we mean by “more powerful” or
> “less powerful” languages? In this article, I mean something roughly like
> this: “the freedom and ability to do whatever you want to do”, seen mainly
> from the perspective of the human author entering data or code into the
> system. This roughly aligns with the concept of “expressiveness”, though not
> perhaps with a formal definition. (More formally, many languages have
> equivalent expressiveness in that they are all Turing complete, but we still
> recognize that some are more powerful in that they allow a certain outcome
> to be produced with fewer words or in multiple ways, with greater freedoms
> for the author).

I understand that adding

- automatic memory management
- exception handling
- object-oriented facilities (polymorphism, inheritance)

makes a language "more powerful" by allowing the programmer to think at a
higher level. For me, this means the language gets out of the way and I can
focus on the problem at hand, but everyone seems to have a different idea of
what "getting out of the way" means and what constitutes the ideal "default
toolbox" for a programming language.

For decades it was true that higher level meant more powerful meant a better
default toolkit for the developer, but I resonate with the sentiment that a
plateau has been reached in terms of designing the basic toolbox a language
makes available to the programmer.

A less powerful language means easier and more readily available tools to
parse and manipulate programs written in that language, which leads me to...

2.

> This happened to me recently, and set me off thinking just how ridiculously
> bad our toolsets are. Why on earth are we treating our highly structured
> code as a bunch of lines of text? I can't believe that we are still
> programming like this, it is insane.

With all due respect, JavaScript as implemented in node.js or in Chrome is a
brilliant language and a fine environment for implementing a static code
analyzer and transformer. The same is true of Python, Scheme, C, C++, Java,
and many other modern programming languages. It takes relatively little code
to implement powerful tools to parse and manipulate source code when
relatively few syntactic forms are accepted. See for example my autoclave.js
project below.

The one Wikipedia article I come back to again and again is the "Additional
Example 1+1" of
[https://en.wikipedia.org/wiki/LR_parser](https://en.wikipedia.org/wiki/LR_parser)
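
If I recall correctly, the grammar in that example is roughly E -> E + B | E *
B | B and B -> 0 | 1. A minimal sketch of an evaluator for it in JavaScript
(using recursive descent with a loop rather than the table-driven LR machine
the article walks through; the function name is mine):

```javascript
// Evaluator for the tiny grammar E -> E + B | E * B | B ; B -> 0 | 1.
// Both operators are left-associative with equal precedence, per the grammar,
// so "1+1*0" parses as (1+1)*0.
function evalExpr(src) {
  let pos = 0;
  function bit() {                  // B -> 0 | 1
    const c = src[pos++];
    if (c !== "0" && c !== "1") throw new Error("expected 0 or 1 at " + (pos - 1));
    return c === "1" ? 1 : 0;
  }
  let value = bit();                // E -> B
  while (pos < src.length) {        // E -> E + B | E * B
    const op = src[pos++];
    const rhs = bit();
    if (op === "+") value += rhs;
    else if (op === "*") value *= rhs;
    else throw new Error("expected + or * at " + (pos - 1));
  }
  return value;
}

// evalExpr("1+1")   -> 2
// evalExpr("1+1*0") -> 0, i.e. (1+1)*0
```

Whether you build the parse table by hand or cheat with recursion like this,
the point stands: the smaller the grammar, the smaller the tool.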

3.

There is a parallel between the instruction set architecture (ISA) of a chip
and the syntactic forms that a programming language recognizes. One measure of
the "size" of a language is the number of productions in the grammar
definition. So I think of a "highly expressive" language with a complicated
grammar as a "CISC language" and as you say a "less powerful language" as a
"RISC language". All other things being equal, it is easier and faster to
implement a toolchain for a smaller language than for a larger one. This is
why it is (or was, at any rate) reasonable to ask a first-year college student
to implement a meta-circular interpreter in Scheme (MIT 6.001). This is not
possible for most other languages (and it is certainly not possible in
Python).
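
To illustrate why that exercise is feasible: a Lisp-like language has only a
handful of syntactic forms, so the whole evaluator is a short dispatch. A
sketch of the idea in JavaScript rather than Scheme (so not truly
meta-circular; expressions are arrays like `["+", 1, 2]`, and all names here
are my own):

```javascript
// Tiny evaluator for a Scheme-flavored expression language.
// Forms: numbers (self-evaluating), strings (variable lookup),
// ["if", c, t, e], ["lambda", [params], body], and application.
function evaluate(expr, env) {
  if (typeof expr === "number") return expr;           // self-evaluating
  if (typeof expr === "string") {                      // variable lookup
    if (!(expr in env)) throw new Error("unbound: " + expr);
    return env[expr];
  }
  const [op, ...args] = expr;
  switch (op) {
    case "if":                                         // special form
      return evaluate(args[0], env) ? evaluate(args[1], env)
                                    : evaluate(args[2], env);
    case "lambda": {                                   // ["lambda", [params], body]
      const [params, body] = args;
      return (...vals) => {
        const frame = Object.create(env);              // new scope over env
        params.forEach((p, i) => { frame[p] = vals[i]; });
        return evaluate(body, frame);
      };
    }
    default: {                                         // application
      const fn = evaluate(op, env);
      return fn(...args.map(a => evaluate(a, env)));
    }
  }
}

const env = { "+": (a, b) => a + b, "*": (a, b) => a * b };
// evaluate(["+", 1, ["*", 2, 3]], env)                 -> 7
// evaluate([["lambda", ["x"], ["*", "x", "x"]], 4], env) -> 16
```

Try doing that for full JavaScript or Python and the case analysis explodes,
which is exactly the argument for carving out a small subset.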

This project is just one attempt to carve out such a usable "RISC language" as
a subset of JavaScript. If you have the time, I'd appreciate your feedback,
thoughts, and suggestions. Thanks!

[https://github.com/dbpokorny/autoclave](https://github.com/dbpokorny/autoclave)

------
pcunite
I'll give you a two character answer: C#

------
spo81rty
If you want a simple dev environment, language, and tools, you can start with
Microsoft WebMatrix and Visual Basic and then graduate up to more complex
stuff. Visual Basic's syntax is very similar to Python's in a lot of ways.
The WebMatrix IDE and ASP.NET make it simple to do. One-click deploy to Azure
and you are done.

