
Yep, Programming Is Borked - evincarofautumn
http://evincarofautumn.blogspot.com/2012/01/yep-programming-is-borked.html
======
SomeCallMeTim
The example given with acceleration, velocity, and position? How is a compiler
going to deal with that?

With an Euler integrator, you say? (Every frame: p = p + time_scaled(v),
v = v + time_scaled(a).) Note there's an implied time, as well as frames per
second, in there, but a compiler can know about time. Unless when you're
saying "acceleration" you're not talking about real time, but calculating
where something will be at a future time...but I digress.
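Concretely, the Euler step described above is only a couple of lines. Here is a rough Python sketch; names like `euler_step` and `dt` are mine, with `dt` standing in for the implied "time_scaled" factor:

```python
# Explicit Euler, exactly as written above: first p += time_scaled(v),
# then v += time_scaled(a). dt is the frame's time step; the names are
# illustrative, not from any particular engine.

def euler_step(p, v, a, dt):
    """Advance position p and velocity v by one frame of length dt."""
    p = p + v * dt   # p = p + time_scaled(v)
    v = v + a * dt   # v = v + time_scaled(a)
    return p, v

# Example: one simulated second of free fall at 60 frames per second.
p, v = 0.0, 0.0
for _ in range(60):
    p, v = euler_step(p, v, -9.8, 1.0 / 60.0)
```

The choice of `dt` here is exactly the interval-size question raised below: shrink it and the simulation gets more stable, at the cost of more steps per frame.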

What about a Verlet integrator? [1] Maybe you'll also want to add springs, and
Verlet works better for springs. Or maybe a Runge-Kutta [2] integrator? You
can get more accuracy out of one of those. Though some people claim that a
higher frequency Euler integrator can do as good of a job, possibly with more
stability.

And how many times per frame should the system run the integrator? Because the
stability of a system can depend a lot on the interval size. In fact, you
might want to experiment with different integrators, and different time
intervals, to get the result that's best for your application.

These are all choices that a PROGRAMMER typically needs to make, and they can
be different for every problem. More than that, unless your compiler can
DERIVE all of the above equations and more, at least one and probably several
would need to be built into the compiler.

This "problem" hasn't been solved because, short of creating strong AI, it's
not solvable. This can only work when you're writing something like a "game
builder" app: A domain-specific problem solver that is designed to deal with a
very specific problem -- and which is coded using a traditional approach.

[1] <http://en.wikipedia.org/wiki/Verlet_integration>

[2] <http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods>

~~~
drostie
I have to say, I don't know his actual response, but he gives a hint within
the article that perhaps, when one wants to get performance in algorithms, one
will start to look at the way the constraints are phrased.

So when you've got to write a sort, you probably start in our hypothetical
dream language by saying:

"sort permutes a list so that a < b implies list[a] < list[b]."

Now "permute", with predicates, is probably built into the system and the
constraint solver probably turns this into a variant of bubble sort, O(n^2).
[That is, when you now query list[0] it does a reverse bubble-sort by looking
for the least element rather than the greatest element, moving that to the
lowest position.]
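What that bracketed aside describes amounts to lazy selection. A rough Python sketch of the idea (my names; nothing a hypothetical constraint solver would actually emit):

```python
def least_first(xs):
    """Yield elements smallest-first: each query for the next element
    scans the remainder for its minimum and removes it. Answering all
    n queries this way costs O(n^2), matching the comment above."""
    xs = list(xs)           # work on a copy; the caller's list survives
    while xs:
        m = min(xs)         # one "reverse bubble" pass per query
        xs.remove(m)
        yield m

# Querying list[0] corresponds to taking the first yielded value:
# next(least_first([3, 1, 2])) == 1
```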

Now you come into this and say "hey, I've got a huge list, I need O(n log n)
power." What do you use? Perhaps merge sort.

"sort zips together, least-element first, the sorted first half of input and
the sorted second half of input -- unless len(input) < 2, in which case it
just returns the input." Zipping together on predicate p may or may not be a
fundamental design element of the language.
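Read imperatively, that constraint is just merge sort. A direct Python transcription (the function names are mine, not part of any hypothetical language):

```python
# "sort zips together, least-element first, the sorted first half of
# input and the sorted second half of input -- unless len(input) < 2."

def sort(xs):
    if len(xs) < 2:
        return xs
    mid = len(xs) // 2
    return zip_least_first(sort(xs[:mid]), sort(xs[mid:]))

def zip_least_first(a, b):
    """Merge two sorted lists by repeatedly taking the smaller head."""
    out = []
    while a and b:
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b

print(sort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```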

If you can establish a consistent syntax for these sorts of claims which a
constraint solver can follow, then the simplicity of the constraint solver,
and your ability to guess what it will do, will allow you to determine which
algorithm you use to perform the same task.

So it doesn't require strong AI and the programmer _is_ still making the
choice -- that's what I'm trying to say. The programmer is merely making the
choice in a different framework: rather than making the choice in some wrapper
for blocks of assembly language, you are making the choice in some wrapper for
a constraint solver.

Now let me turn from where I think you're wrong to where I think you're right:
I have the feeling that you're going to see something less revolutionary than
claimed, because it will be like C's inline assembler support; in this
hypothetical language you can probably "drop back down" to the pre-constraint-
solver level when you can't figure out how to articulate the problem with
constraints. (Something like "The constraint is, it has to come from applying
this function to those lists!")

~~~
SomeCallMeTim
> "sort zips together, least-element first, the sorted first half of input and
> the sorted second half of input -- unless len(input) < 2, in which case it
> just returns the input."

If you're defining the algorithm to that level of detail, then I submit that
you're writing the algorithm. What you just described looked almost exactly
like how it would be written in one of the more advanced current functional
languages, and the OP and previous OP that he was responding to both
considered functional languages to be Not Good Enough. What they're asking for
is pretty much just Sufficiently Advanced Technology to Do What They Want
(i.e., Magic, or strong AI).

Aside from that, sort IS something that's so common that it tends to be
implemented in every high level programming environment in one way or another.
Baking several sorts into a language isn't odd, so I don't think "sort" is a
good example, because I ALREADY can say "take this list and sort it" in any
language I use.

An Euler integrator isn't built into anything but a DSL for animations or
games, though. And it's one of THOUSANDS (millions?) of algorithms that a
program might need -- most of which are more easily described (by the
programmer) in a traditional language than by trying to jump through hoops to
describe what you want in a way that you'll actually get what you want.

So yes, the trivial problems could be solved by such a language -- but they're
already solved by CURRENT languages. It's the hard problems where it would be
hard to know how to even start to create a "describe the results" language. I
think all you'd end up with is a DSL for each of the cases you thought to
describe -- which, depending on the domain, could be useful for that domain.
DSLs are great when they're well designed. But as someone else pointed out,
you don't write a game in SQL.

------
crazygringo
The problem is, simple examples like a trivial sprite-based game are not
generally found in the real world, or else already have libraries built for
them. (Or you should build a new library yourself.)

In the real world, you write programs that connect to databases, transform
parameters, perform a bunch of linear algebra calculations, invalidate caches,
and output results to a webpage (for example).

The overall specification of what the webpage is, from start to finish -- is
simply the program you wind up writing! You have to specify what information
you're getting from the DB (the query), with what parameters, the equations to
perform after it returns (using a numerical library), how to invalidate which
caches (a bunch of specific function calls), and how the webpage should be
constructed (the HTML template).

Writing all this in a "declarative" way seems to make as much sense as
throwing away your cake recipes and just using cake photos instead, because
that's the "result" you want.

The cake photo is useful for two things only:

1) Mentally, having an overall idea of what it is you want to make (lacking
many details, however)

2) After you've baked the cake, making sure it looks like the photo

Analogously, declaring things at a higher level in a programming project works
marvelously in two areas:

1) Designing what you want to build, in the big picture

2) Testing that what you built satisfies the top-level constraints

When artificial intelligence is as smart as humans, we'll have
computers that can write our programs for us, and we'll have "declarative"
programming. But of course, we'll blame the computers when the programs don't
work, just like users blame programmers now... :)

~~~
evincarofautumn
That’s a fair assessment, but it misses the point somewhat. All I’m saying is
that we should be able to write programs that clearly express our intent, and
a result-oriented language would make that easier than a process-oriented one.

“The overall specification of what the webpage is, from start to finish—is
simply the program you wind up writing!”

This is not true; the implementation is _very_ far removed from the
specification. It is not easy to look at a bunch of SQL and PHP code and
deduce precisely what the requirements of the site are. But in a different
sort of language, it could be.

~~~
derekp7
The problem is that a results oriented / declarative language would by
definition not be Turing complete. And in order for a language to be able to
express any problem type, it would have to be Turing complete. SQL is an
example of this -- it is declarative, but you can't write a web server or an
arcade game in just SQL.

~~~
evincarofautumn
Your first statement is patently false. You miss the meaning not only of
“result-oriented”, but also “declarative”. Besides, Turing-completeness is not
a difficult criterion to meet in a language—in fact it’s almost more difficult
to avoid it.

~~~
tadfisher
I'm reminded of TeX, where Knuth struggled to keep it declarative-only for
years until the macro system added tail recursion, making it a Turing-complete
language in its own right.

------
InclinedPlane
Translating specifications from fuzzy human-friendly language into an ultra-
precise implementation that runs on computer hardware is the core, irreducible
complexity of software engineering. The compiler isn't going to do it for you,
ever.

With DSLs and advanced programming techniques (FP, macros, AOP, well composed
OOP, etc.) you can reach a state where the intention of high-level code is
stated clearly, but you're never going to be saved from getting your hands
dirty with the details.

------
techdmn
"You'll never find a programming language that frees you from the burden of
clarifying your ideas." [0]

0: <http://xkcd.com/568/>

------
drblast
Warning: I'm about to get all opinionated on you.

This "programming sucks" thing could get old quickly. Most of the problems
mentioned in these articles are already solved in some language or another.
And many of them are too specific. Like, wouldn't it be great to eliminate for
loops? Well, yes, for the specific cases where the alternative would work.

More pressing problems include lack of a REPL as a primary development
environment, no decent way in most languages to organize and update libraries
in a seamless manner without breaking anything, no good way for multiple
programmers to collaborate in real time, lack of run-time interactivity and
in-place replacement of components of a running program, etc.

In other words, I applaud your motivation, but if you want to reinvent
programming, please don't reinvent Lisp, Prolog, or Haskell. It's just syntax,
and that's been done to death. Reinvent Smalltalk and Erlang instead.

Edit: By the way, Microsoft Excel has many of the features these blog posts
talk about. Type in Jan, Feb, Mar, then draw a box around those and drag to
the right. Magic! However, the utility of these things in a small number of
cases doesn't mean the idea can scale to a more general solution. The same
idea in MS Word is the awful numbered list creator that never does what you
want. I think you'll run into those edge cases much more quickly than you
think.

And now, as a programmer, I don't just have to remember foreach..., I have to
remember 1500 different ways to iterate through a list. I'd much rather have
the general foreach tool and write my own "pairs" function. It takes one
minute.

~~~
vonkow
If someone reinvented Smalltalk and Erlang (and it saw widespread adoption), I
think we would see a lot fewer "programming sucks" posts.

------
haberman
Bridge building is broken, I can't just say "I need a bridge from here to
there with 8 lanes across 2 decks that can carry 1000 cars weighing 2 tons
each and survive wind storms" and have a computer design it for me.

------
DasIch
Declarative constraint based programming in this fashion might be nice but
that doesn't make the "traditional" way borked.

Furthermore I think the author dismisses the performance problem too easily.
The code examples don't give any indication of the algorithmic complexity. I
suppose I could determine it, possibly quite easily, if I understood how the
compiler works but why would I want to know that?

I expect that this abstraction will make debugging performance problems a
nightmare, or it will leak the information. I will have to change perspective
from "what" to "how" quite frequently which would distract me from my real
goal.

I hope to be proven wrong though, this does look interesting.

~~~
evincarofautumn
First, thanks for the honest critique. It really helps me improve. :)

I confess to linkbaiting a little with the title. Where we are now isn’t
_utterly_ broken, but the way we think about programming can always improve,
and this is one direction I can envision improvement. Also, I wanted to tie it
in to the article I was referencing.

Performance is obviously important, but it’s a large topic and I didn’t have
strong enough examples on hand to make the point I want to make. I’ll
definitely go into more detail on that point in a future article. The gist is
that high-level languages don’t need to sacrifice performance if they are
constructed in such a way that the program contains enough _meaning_ for the
compiler to intelligently optimise it, a property which existing languages
tend to lack.

And if I may say so, I hope to prove you wrong too! ;)

------
dougws
Most well-written functional programs I've seen actually look like your
examples--or rather, the very top-level does. The rest of the code is devoted
to telling the language enough about your domain that it's possible to express
your goals.

------
snprbob86
To me, the fundamental issue with pure declarative or functional languages is
that they ignore the simple fact that both the problem and solution domains of
the set of all possible problems are, fundamentally, heterogeneous.

Sometimes, "what" I want is a program to do this, that, and the other thing in
this given order, i.e., the "how". Other times, I don't care, I just want these
properties to hold. Yet other times, I have no idea what I want at all and
have to experiment and see what happens.

I want a language that solves the composition problem between these distinct
solution spaces. The best programming language for any given task is the one
whose world view best matches the preconceived spec in the programmer's head.

Most research languages take a key idea (everything is a list! or no side
effects! or something like that) to a logical extreme. That's a great way to
study a set of phenomena in a particular little universe, but most practical
languages find a happy medium of thought-pure and just-fucking-works. We need
to get some of these wins from high concept languages back into just-fucking-
works languages.

~~~
tikhonj
I think you, like many others, don't understand the practical ramifications of
a completely pure language. Let's take Haskell as an example--it is, after
all, the poster-child of purely functional research languages!

And yet, from a practical standpoint, Haskell is not _pure_. The underlying
abstractions are pure, sure, but the language makes working with impure
computation _feel_ just like writing an impure program. The magic of Monads
and do-notation may sound complex, but in reality it's just a neat way to
write impure code in a pure way (sounds like a paradox, but it isn't).

Look at this snippet:

    
    
        main = do name <- getLine
                  putStr $ "Your name is " ++ name
    

This trivial program is technically purely functional. And yet it is also
imperative from the programmer's point of view! It looks just like something
you might write in Python with slightly different syntax.
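For comparison, here is one way that "slightly different syntax" version might look in Python. IO is passed in explicitly here only so the sketch is self-contained; in a real script `input` and `sys.stdout.write` would be used directly:

```python
import sys

def main(get_line=input, put_str=sys.stdout.write):
    # name <- getLine
    name = get_line()
    # putStr $ "Your name is " ++ name   (putStr adds no newline)
    put_str("Your name is " + name)
```

Same two steps, same sequencing; the difference is that Haskell's version carries the IO in its types.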

So you say, what is the advantage to writing a program this way rather than
using an impure language? The answer is simple: Haskell lets you mark impure
code using the type system. This is similar to the Scheme/Ruby convention of
marking destructive functions with a trailing '!', but actually enforced.

This sequestering helps you avoid bugs by not having implicit mutation and IO
everywhere and it helps the compiler do clever optimizations like running your
code in a different order. And yet, if you need it, you have IO and State and
fancy things like STM right there, with only a little bit of complication.

Of course, learning to think in this admittedly roundabout way is tricky. But
_learning_ is a one-time cost; using a less expressive or harder-to-maintain
language is a recurring cost.

In other words, there is no reason why a language focusing on one idea can't
be a "just-fucking-works" language as well. In my experience, Haskell and Lisp
are just as practical as others; the only difference is in the initial
learning period. Look at Common Lisp: you can't get more "just-fucking-
work"ing than that!

I think that one should usually ignore a one-time cost like learning in favor
of recurring benefits, but others naturally disagree.

~~~
snprbob86
I understand the practical ramifications quite well:

It's complicated as all hell to implement some very simple, and well known
algorithms which rely on mutation and explicit memory management.

Take, for example, QuickSort:

"sorta looks like quick sort":
<http://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell>

"actual quick sort w/ in-place memory mutation":
<http://www.haskell.org/haskellwiki/Introduction/Direct_Translation>

The short, few-liner Haskell version is beautiful. It's also _not the same
algorithm_. So then you whip out the larger "direct translation" version and,
suddenly, you're wishing for C.

This story repeats itself over and over again. For example, try implementing
the Fisher-Yates shuffle.
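For reference, the mutating Fisher-Yates loop is a few lines in an imperative setting; the in-place swap in the last line is exactly what a pure language has to re-express. Python sketch (the `rand` parameter is mine, to keep the sketch deterministic for testing):

```python
import random

def fisher_yates(xs, rand=random.randrange):
    """Shuffle xs in place, back to front, using O(1) extra space."""
    for i in range(len(xs) - 1, 0, -1):
        j = rand(i + 1)              # pick j uniformly from 0..i
        xs[i], xs[j] = xs[j], xs[i]  # the in-place swap
    return xs
```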

Now, in practice, you don't _need_ in-place swaps. And you can argue code
style and YAGNI and pre-mature optimization and practical implications and day
to day use and parallelization and whatever etc. Yada yada yada.

I'm not saying there is anything _wrong_ with Haskell. I'm saying that the
foundation of Computer Science lies in the study of data structures,
algorithms, and computability.

Software Engineering basically boils down to a search and optimization
problem: Find the data structures and algorithms which (1) minimize the
weighted average of costs and (2) maximize the value that the solution
generates.

Haskell's approach to Software Engineering presupposes the cost savings of
isolating mutation (and other purity concerns) as a requirement. I'm simply
advocating future language developers consciously address the problem of
finding those optimal data structures and algorithms in minimal time, with an
eye for the fact that data structures and algorithms are a fundamental law of
computation.

~~~
Peaker
Haskell makes some array access operations cumbersome due to syntactic issues
with the library's API.

It is not inherent that Haskell's quickSort is harder than quickSort in C.

Take a look at Augustsson's imperative-language DSLs in Haskell:

<http://augustss.blogspot.com/2011/07/impredicative-polymorphism-use-case-in.html>

Haskell is so versatile at this that Augustsson managed to implement an old
BASIC DSL!

<http://augustss.blogspot.com/2009/02/more-basic-not-that-anybody-should-care.html>

I think Haskell can be a great vehicle for all our current day imperative
programming needs (even if due to some library issues it's not quite as nice
for all tasks yet).

------
zackmorris
I've come to the same conclusion and wrote a series of posts on it a few
months back:

<http://zackarymorris.tumblr.com/post/10973087527/the-state-of-the-art-is-terrible>

I'm thinking that if you chop a program up into many small pieces, each part
is simple enough that it can easily be solved by the compiler with something
like genetic programming (or better yet, methods in languages like Prolog that
already work for small problems).

So much of what we work on now is a waste of time (I'd say well over 90%),
things like syntax errors, makefiles, DLL hell, code repo weirdness, managing
web servers, concurrent programming, just on and on and on, that I gave up on
working on real problems over a decade ago (the kind we learn in school in
lisp/Matlab/Mathematica etc).

I would really like to write an entire program sometime as a big tree, that
would be convertible back and forth to something simple like JSON. Then the
compiler would go through and convert my simple statements like "when this
sprite touches this sprite, give them opposite speeds" into the underlying
code so I don't have to waste my time with it. I know we think that we work on
more complex stuff than that, but I think if most programmers audit their
time, they'll find that very little of it goes into the mathematics of solving
problems (10%). I realize the logic may not be solved anytime soon, but maybe
the minutia can be. If we don't obsess on finding the perfect algorithms, but
allow ourselves a floor of say O( n^2 ), I think our productivity could go up
substantially.

Maybe it's time for a bunch of geeks like us to be willing to unlearn what we
have learned, basically scrap everything, and try working backwards from what
the solution will look like (what people will be using in a few decades).

~~~
gintas
> I would really like to write an entire program sometime as a big tree, that
> would be convertible back and forth to something simple like JSON.

That would be Lisp.

> convert my simple statements like "when this sprite touches this sprite,
> give them opposite speeds" into the underlying code so I don't have to waste
> my time with it.

That's function application (if at runtime) or macro expansion (if at compile
time).

~~~
zackmorris
Yeah, but neither of those works in the real world, at least not very well (or
beginners would be able to use them). I'm not trying to be negative, just
pointing out that existing options are not living up to expectations.

I think I'm talking more about readability than sophistication. I want to
write in a high level language like Hypertalk (from the HyperCard days) and
let the compiler create a series of permutations under the hood that I could
review and say "yes that one works, use it" and then maybe the compiler could
annotate my code with more precise limits on what I said. So for example I
tell it I want to sort a list, it shows me algorithms that sort numbers,
strings and objects, and I say "yes strings are good enough" and it shows me
the updated version of my code showing that it requires strings.

I know that sounds a little weird but this is 90% of the minutiae that I deal
with on a daily basis and I am thoroughly disgusted with how myopic and
restrictive tools have become today. They break when I forget a semicolon,
when I would much rather have them show me an edge case of my algorithm that
is incorrect.

~~~
pnathan
Actually, Lisp works fine for beginners. It's simply not the current fad.

I use Lisp by preference because it's so much easier to express my ideas
without playing whack-a-mole with the syntax.

------
growingconcern
There have been a couple of these types of articles on here lately. All of
them break down into "Programming is broken because I can't just say to a
computer what I want and have it created for me". We have to specify the
process that gets something done because it takes a hell of a lot of smarts to
do this. Even things that seem very easy and straightforward to say out loud
are filled with unknowns, assumptions and inconsistencies. Maybe all these
"programmers" can stop whining about how programming is broken when we've
created an AI that will understand what they want and just write the program
for them. And as for constraint satisfaction programming: if you've ever
actually programmed in Prolog you'd realize that properly defining the problem
such that you get a proper answer back is a hell of a lot of work. Prolog has
its place, but if it saved a huge amount of work and was easy to use, people
would be using it more often.

~~~
mjwalshe
The sort of people who write these sorts of posts don’t seem to have actually
worked in real technical computing. It’s interesting that the article mentions
Newtonian mechanics. Years ago (early 80s) I worked for an R&D organisation
and we were analyzing the efficiency and droplet dispersion of water
sprinklers used in fire suppression.

They had come up with a neat solution involving really tight depth of field
and doubly exposed film with two different colour filters a short time apart.
So we had a slice through the droplet cloud.

I was told: oh, we have bought an A0 digitizer (costing about twice my salary
at the time); work out how to interface it to that PDP and develop a system to
locate the droplets in the xy plane.

To solve that you actually have to know real engineering; the actual
programming is the easy bit. I also had to work out how to write an
interrupt-driven driver to interface the tablet to the computer. Luckily RT11
did have some basic multitasking functionality built into it.

PS: we also used Prolog on other projects, so it does have its uses

------
yason
In the last ten years or so, most big companies tried this new programming
language called "What".

First, they wrote their programs in the _form of what they want_ it to do.
Then they left the dreary _how part_ to a sufficiently smart new compiler
called "Outsource" that was generally available only in India and other low-
cost-but-technically-competitive countries. After sending the source code to
"Outsource", the first compiler pass would start. It might come back asking
for some clarifications and details on ambiguous cases, and they refined their
program as needed. Finally, after waiting for a few weeks they got back some
results.

The resulting program was tested and any problems written down, and more
refinements followed by a set of new iteration rounds to and from the
"Outsource" compiler suite.

Turns out, to describe the hard parts of a problem in sufficient detail equals
more or less writing the program itself. It's just that instead of the laid
off local programmers, the development managers and technical leads that
hadn't had to code much earlier had to do the programming. They could
describe their programs in English but they couldn't escape detailing the hard
parts of their problems. And then again, the easy parts of programming never
were a significant cost-factor in the first place.

Then some people figured that instead of having people "program" the smart
remote programmers without domain knowledge to do what was needed, they could
employ nearly as smart local programmers with domain knowledge to do the whole
programming. The upfront costs would be higher but on the other hand the hard
process of explaining the hard parts of a problem, which was needed regardless
of the programming scheme, was much easier because communication was almost
instant and the programmers both held local domain knowledge _and_ programming
skills.

While the local programmers still had to spend expensive time writing some
unavoidable boilerplate, it was left unclear whether using "Outsource" saved
any money at all because the terms of programming with it were also so
different.

------
oconnore
> "The world would be better with a SSC (sufficiently[1] smart compiler)"

Agreed, but there are good reasons that one hasn't been built yet, and they
have nothing to do with lack of motivation.

[1] Sufficient for all possible measures of sufficient.

------
gatlin
This is why answer set programming has caught my eye: I describe my problem
and the instance data and out come answers. Not quite what the author is
talking about, of course, but very cool nonetheless.

------
Symmetry
_this is orthogonal to the issue of “declarative versus imperative”_

Wait, you're saying that I could make an imperative results-oriented language
as easily as a declarative one? Perhaps the author meant "is not identical to"
rather than "orthogonal"?

~~~
evincarofautumn
I did mean orthogonal, though the difference between a declarative result-
oriented language and an imperative one is perhaps not immediately apparent.
In an imperative result-oriented language, _sequence_ would become less
significant—as in a declarative language—but imperative operations would still
be allowed, because sometimes “what you mean” is fundamentally imperative.

~~~
shasta
Your "results oriented" is what everyone else means by "declarative". It is
true that functional and logic programming are not fully declarative because
you end up having to worry about the way your declarations will be evaluated,
but the same issue will apply to your language. That's what you swept under
the rug in your remark along the lines of "if we're careful what constraints
we choose, we can get a good running time."

------
apsp
Could you support your claim that running times are irrelevant? I understand
you want to keep your blog post (or "blarticle") short. But maybe you could
elaborate here? I am genuinely interested in your argument.

I think there are other caveats to specifying only the results.

You posted earlier on the halting problem

<http://evincarofautumn.blogspot.com/2011/10/solving-halting-problem.html>

That post also contains a statement which I believe to be false. You state
that the "Haltability problem" (where you are given a program _P_ and want to
determine if there exists an input on which it halts) is decidable, but it's
not.

Basically, I can just hardcode any input _I_ of my choosing into _P_ so that
it always does the same thing (run _P_ on _I_ ) regardless of the actual input
_I'_ you provide it. Thus, I can use an algorithm for the Haltability problem
to solve the halting problem.

I know that does not pertain to this post directly but the halting problem
does. Given a program written in this new language, how can you determine if
what the programmer specified can even exist as a program?

In fact, suppose you specify the properties of this new language you currently
wish to construct (in, say, English, and assume that everything is interpreted
correctly). How do you know such a language could potentially exist (never
mind implementing it)?

I also don't even want to think about debugging in such a language. More
specifically, I would rather have a library for the example you described
because if I make a mistake in my specification, I have ways to find the error
in it. But I guess this goes back to my previous question: what does your
compiler produce on an incorrect (or outright impossible) specification (i.e.,
piece of "code")?

It's certainly interesting to read about thoughts on potential new languages
but I wish there were more solid theoretical foundations for posts like these.
Otherwise, I feel that the same energy would be better spent on improving
existing languages with some subset of the features you want.

------
jcromartie
We've got machines with memory, registers, instruction pointers, and such, and
they are here to stay. As long as the world runs on Von Neumann machines,
_someone_ will always have the job of converting intentions into machine code,
at some level.

If you come up with some sufficiently-liberating declarative language, you
still need an amazingly smart compiler to use it. People will invent new types
of these languages, as the style catches on, and there will be lots of people
writing the runtimes and the algorithms that you (the one taking the
declarative route) depend on to get a reasonably-performing program out of the
whole process.

------
83457
I propose MTPL, the mechanical turk programming language. It is a declarative
language where you can essentially write what you would like the program to
do, not how you want it done. When you click compile there is a call out to a
powerful computer that is a bit slow but in a reasonable period of time will
return a working program. This is an iterative compiler though where you may
have to adjust your code slightly and compile multiple times to get the right
result.

------
justindocanto
"I’m almost 21, and I’ve been programming for over a decade now"

~~~
evincarofautumn
What? It’s necessary context. I started programming early because it’s
interesting. It has taken many years of hard work to get to where I am, to
develop the skills and opinions I have. Don’t think I’m gloating, because
there is nothing whatever to gloat about.

~~~
Arelius
His point is that what seems like gloating to you, in fact, speaks of
inexperience to many. Having been programming since you were ~10 is nothing
new or unique around here.

Your post would be strengthened by omitting that detail.

~~~
evincarofautumn
I’m not sure what you mean—doesn’t seem like gloating to me. People make
mention of their experience in the field all the time in order to give weight
to what they say. And the only reason I bothered to mention it was to parallel
the article I was responding to. Anyway, I’ve removed it, to avoid the
distraction.

~~~
Arelius
My point is that mentioning you are 21 speaks more of inexperience than
mentioning you have been programming for a decade.

I'm not trying to judge your experience, rather just commenting on the
statement itself.

------
howeyc
How is your Prog different from lisp? This is what I see...

Prog : which (1..10) > 5

LISP : (remove-if #'(lambda (n) (<= n 5)) '(1 2 3 4 5 6 7 8 9 10))

Prog : each (1..5) **2

LISP : (mapcar #'(lambda (n) (expt n 2)) '(1 2 3 4 5))

~~~
jgeralnik
Um, two languages being able to do the same thing does not mean there is no
difference between them. Particularly when one is much less verbose than the
other.

~~~
gmartres
In these examples, Lisp's verbosity comes only from a lack of syntactic sugar,
which could easily be added.
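For what it's worth, Python's comprehensions are an example of exactly that kind of built-in sugar; a rough sketch of the same two expressions (the `which`/`each` readings are my own paraphrase of the Prog examples above):

```python
# List comprehensions are built-in sugar for the same filter/map
# operations spelled out in the Lisp examples above.
over_five = [n for n in range(1, 11) if n > 5]   # like: which (1..10) > 5
squares   = [n ** 2 for n in range(1, 6)]        # like: each (1..5) **2

print(over_five)  # [6, 7, 8, 9, 10]
print(squares)    # [1, 4, 9, 16, 25]
```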

------
Detrus
I haven't been programming long and don't get the nuances of functional and
declarative paradigms. But I have a hunch that even if people made some new
languages that balanced out the various concerns of complex compilers,
programmer control, LOC, boilerplate, ugly syntax etc, it wouldn't fix much.

 _So, using research into how people best solve problems, I’m making a tool
that helps people solve the problem of having an idea and not being able to
make it a reality._

Novices will still have lots of problems. They don't really think about how
sprites move in a game, they think they want to make a game kinda like
Asteroids or more like Mario. What will really help them is a well organized,
well documented template.

The problem is with documenting, organizing and making available the multitude
of libraries and solved problems. Making them accessible through visual
examples and natural language.

<http://blog.wolfram.com/2010/11/16/programming-with-natural-language-is-actually-going-to-work/>

The Wolfram approach of writing a query, seeing the output, confirming or
rejecting it is the right direction. Memorizing syntax is a big problem.

    
    
      [1,2,3].byPairs == [[1,2]];
      [1,2].byPairs == [[1,2]];
      [1].byPairs == [];
    

This doesn't solve syntax memorization.
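For what it's worth, the three example equations above do pin down an implementation; here's a hedged Python sketch (`by_pairs` is just my stand-in name for the hypothetical `byPairs`):

```python
def by_pairs(xs):
    # One implementation satisfying the three example equations above:
    # group consecutive elements into pairs, dropping a trailing odd one.
    return [xs[i:i + 2] for i in range(0, len(xs) - 1, 2)]

print(by_pairs([1, 2, 3]))  # [[1, 2]]
print(by_pairs([1, 2]))     # [[1, 2]]
print(by_pairs([1]))        # []
```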

    
    
      split array by pairs == arrayPairTransform([1,2,3]) == [[1,2]]
      make array of pairs == arrayPairTransform([1,2]) == [[1,2]]
    

This is closer imho. You don't really have to worry about the [1,2].byPairs
being prettier than splitArrayIntoPairs([1,2]). You can use your queries as
the pretty documentation. You can choose which lower level language to use in
the code column, without having to remember its particular syntax. You don't
have to worry about naming as much, you're using tags instead.

A lot of novice programming is looking up code samples and libraries and
getting them to run. Organizing existing code into libraries and samples for a
specialized natural-language search engine, like in Mathematica, seems like it
would change some design decisions of the lower-level language.

It's not as important if the lower level language is a bit more verbose. Your
priorities for it would be performance, control, parallelism/concurrency,
regularity. Expressiveness/verbosity can matter less if you offload it to
another layer.

------
jayferd
I see a problem with language here. At the end of the day, you have to make a
computer understand what you want. Sure, you can increase the expressiveness
of your computer language. You can even have it pick an algorithm for you
under lots of circumstances (f.ex. logic programming). But you still have to
explain what you mean.

------
pippy
I wrote a post a while ago detailing why C/C++ programming is borked:

<http://spottedsun.com/why-cc-sucks/>

I removed the fundamentals and posted only a core problem I found with C/C++:
prerequisites.

------
balloot
Same deal as the other post. The easy part of creating a declarative language
is describing it. The hard part is actually implementing it.

------
appamatto
This certainly would make homework easier.

------
nekomata
Why is this such a big deal? Sometimes I like to contemplate the monkey dance
that is coding, and I come to the same conclusions. Try coding a webpage in
Haskell: yes, it's possible, but programming is very compartmentalized; you
still need different languages for different objectives.

------
ThaddeusQuay2
It seems that every generation rediscovers the old stuff. Being only 20 years
old, you missed out on the 5GL craze.

<http://en.wikipedia.org/wiki/Fifth-generation_programming_language>
(Fifth-generation programming language)

It was led by Japan.

<http://en.wikipedia.org/wiki/Fifth_generation_computer_systems_project>
(Fifth Generation Computer Systems project)

------
funkah
Works on my machine.

~~~
evincarofautumn
Aw, really? I thought I found a bug. :P

