

Omega - Language of the Future - alrex021
http://lambda-the-ultimate.org/node/4088

======
humbledrone
The first thing I want to see when I hear about a new programming language is
an example. The best examples of Omega that I've found so far are part of the
test suite for the interpreter:

[http://code.google.com/p/omega/source/browse/#svn/trunk/test...](http://code.google.com/p/omega/source/browse/#svn/trunk/tests)

~~~
wkornewald
Reading code like that makes me want to puke. So, he wants to increase
productivity by making us think about and write theorems in addition to our
normal code (don't underestimate the time you invest in this), making our code
even more static and less reusable, in the hope that the reduction in time
spent on bug hunting and testing will make up for the time invested in
thinking about and creating more complex code? Hard to believe. Do those
language designers create this stuff to show off how intelligent they are, or
just for the sake of having fun, or are they seriously trying to make
development easier? I can't imagine building a website with this. But then
again, maybe those languages are designed for satellites, space shuttles, and
simulations (i.e. domains where correctness is far more important than
productivity)? Well, I think most of us have to make a compromise between
productivity and correctness. It's the same as having 100% code coverage:
nice in theory, but it makes refactoring and development too costly, and it
still doesn't guarantee that your software is 100% correct. Here's a nice
post if you still believe 100% code coverage is the ultimate #1 goal of your
project:
[http://blogs.msdn.com/b/cashto/archive/2009/03/31/it-s-ok-no...](http://blogs.msdn.com/b/cashto/archive/2009/03/31/it-s-ok-not-to-write-unit-tests.aspx)

~~~
loup-vaillant
A web site with no bugs (I mean, no errors) is a web site with no
vulnerabilities. Wouldn't that be interesting when your customers trust you
with their credit card numbers?

~~~
Groxx
You're implying all vulnerabilities are bugs due to coding error.

To take a high-profile recent case: this would do _nothing_ to prevent padding
oracle attacks. Or future crypto attacks using techniques unknown to us now.
Why? Because the encryption is working as it should. As advertised. As
_proven_. Or imagine you used Rot13 for encryption.

Your DB could hold people's credit cards in clear-text; how will this language
prevent that from happening?

Your website has no authorization tools, so you can access others' data by
simply going to their URL; this is still a provably-correct logic structure.

~~~
andolanra
How is any language going to fix the problem of bad programmers programming
badly?

And, more relevantly—how does the fact that a language can't fix those errors
make preventing other errors a useless endeavor? It's true that there's no
such thing as a programming language that can only write good/safe/acceptable
programs, and a lot of people (myself included) who advertise programming
languages as a solution to problems tend to overstate the relevance of type
systems, correctness proofs, pure functions, &c, but that doesn't mean you
can't eliminate a certain variety of error through good language design or by
enforcing programmer rigor.

~~~
Groxx
It's not, which is the point. loup-vaillant's comment implied that it _was_
possible to write a "perfect" program in a proof-oriented language, as it
would have no vulnerabilities, and people could trust your code.

I wasn't intending to imply that, because language X does not solve likely-
intractable problem Y, language X is worthless. Just that likely-intractable
problem Y is likely intractable in language Z as well as X, as it's an
inherent quality of the problem.

edit: I _will_ however explicitly imply that more syntactically complicated
languages are probably more likely to hide many kinds of logic errors. More
brainpower spent decoding your code is more brainpower _not_ spent analyzing
your logic. Each to their own, clearly; I most definitely side with highly-
readable code, and know several people who would likely feel right at home
with Omega (e.g.: my brother is a math + philosophy double-major).

~~~
wkornewald
Regarding "to each their own": programmers aren't necessarily the best source
for an objective view on productivity. Here's a nice example of CS students
subjectively claiming which programming style they found easier, contrasted
with a more objective comparison of the actual number of bugs in the code
written in each style: <http://lambda-the-ultimate.org/node/4070>

Studies like this tell me never to trust a developer's opinion about
programming languages and programming styles. In some cases not even my own.
;) Of course, the developers in this case weren't very experienced, but if we
look at what expert developers often claim, and at how widely their opinions
differ, I often get the impression that expert developers' fundamental
opinions about programming languages aren't much better than a student's.

~~~
Groxx
Did you mean to link to something else? 4070 brings me to "Is Transactional
Programming Actually Easier?", which deals with concurrency. I'd be interested
in whatever you intended to send me to, but I can't find it.

~~~
loup-vaillant
It was the right link. Here's the key point:

 _"On average, subjective evaluation showed that students found transactions
harder to use than coarse-grain locks, but slightly easier to use than fine-
grained locks. Detailed examination of synchronization errors in the students’
code tells a rather different story. Overwhelmingly, the number and types of
programming errors the students made was much lower for transactions than for
locks. On a similar programming problem, over 70% of students made errors with
fine-grained locking, while less than 10% made errors with transactions."_

Transactions are perceived as middling in difficulty, but in fact they lead to
far fewer errors. I bet we could find the same kind of misjudgement with
"map"-style higher-order functions vs. "for" loops.
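
For anyone who hasn't tried transactions: Haskell's STM gives the flavor (a
minimal sketch; the study used its own framework, not this one). Notice there
is no lock-ordering discipline for the programmer to get wrong:

    import Control.Concurrent.STM

    -- Move money between two accounts. The transaction either sees
    -- and updates both balances or neither; if another thread
    -- interferes, it is transparently retried.
    transfer :: TVar Int -> TVar Int -> Int -> IO ()
    transfer from to amount = atomically $ do
      balance <- readTVar from
      check (balance >= amount)   -- block/retry until funds suffice
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)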

------
jfager
_It seems somewhat obvious that sooner than later all code will be written in
this (or a similar) way._

LtU is just outright adorable sometimes.

~~~
derefr
It makes sense when you parse "sooner or later" in the way people use it
outside the technological domain:

- Sooner or later we'll go back to the moon.

- Sooner or later we'll understand how the brain works.

- Sooner or later the earth will crumble to dust.

...and so on.

But seriously, we've only been programming for 50 years now. There's still
plenty of time for some revolutions and paradigm shifts—just because _The
Mythical Man-Month_ has been heart-rendingly applicable for the last 50 years
doesn't mean that things can't change.

For example:

1. If everyone agrees to target some highish-level bytecode or another, every
language will basically have access to every other language's libraries
(let's hope it doesn't end up being the JVM, though things seem to be heading
there).

2. If we stick content hashes in function and class names, there will no
longer be any such thing as an API versioning conflict, because you'll always
know exactly what you're calling—and you can safely import random code from
remote sites (i.e. treating a URL as an identifier and loading whatever
bytecode is available at that URL), because you hash it and verify that it
matches its name before jumping to it (there's a sketch below).

3. If we can stop requiring that the source code we write be exactly the same
source code the compiler parses, we can treat code files as a serialization
format for an AST data structure (in XML or s-expressions, if you want to
keep the serialized source human-_readable_), and work with it however we
like in an IDE, even "rendering" it as one old-style "language" or another.

And so on. These are problems that just need to overcome 20 or 30 years of
inertia, and, ever-so-slowly, we're doing it.
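
A minimal sketch of point 2 in Haskell, with SHA-256 from the cryptonite
package standing in for whatever hash the hypothetical platform would pick
(fetching the bytes is left out; only the verification step is shown):

    import qualified Data.ByteString as BS
    import           Crypto.Hash (SHA256 (..), hashWith)

    -- Content-addressed code: the expected hash *is* the name, so a
    -- downloaded blob either matches its name or is rejected outright.
    -- There is no API version left to negotiate.
    verifyBytecode :: String -> BS.ByteString -> Either String BS.ByteString
    verifyBytecode expectedHex blob
      | show (hashWith SHA256 blob) == expectedHex = Right blob
      | otherwise = Left "hash mismatch: refusing to load"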

~~~
DeusExMachina
From what I've read, the JVM seems to be the state of the art for language
runtimes. Can you elaborate on why you hope it won't be the unified one in the
future? Just curious.

~~~
pwpwp
What the JVM has going for it is the incredible amount of resources thrown at
it in order to make it fast. Technically, the CLR seems more interesting (it
has runtime type information for generics, it allows structs, it allows
unboxed data, ...).

Personally, I'm not a fan of runtimes. They don't give me the same warm, fuzzy
feeling as compiling to native code and running directly on the OS.

~~~
Someone
Are you sure all of these make the CLR more interesting? One can argue that
some of them just make it more complicated. Real generics, I agree about, but
structs and unboxed data? I'm not sure that implementing those, instead of
investing the time in making the compiler & runtime smarter, is the right
thing to do.

~~~
pwpwp
Unboxed data (including structs) can be a performance win, because it removes
indirections, i.e. memory loads.
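
The same idea in Haskell terms, roughly:

    -- Boxed: each field is a pointer to a separately heap-allocated
    -- Double, so reading a coordinate costs an extra indirection.
    data BoxedPoint = BoxedPoint Double Double

    -- Strict + UNPACK: both Doubles are stored inline in the
    -- constructor, like a C struct; one memory load instead of two.
    data Point = Point {-# UNPACK #-} !Double {-# UNPACK #-} !Double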

------
rbanffy
On the subject of using "Omega" as its name: I recall a discussion I had, a
long time ago, about Zope.

Zope is an application server. At one point in its history, Zope Corp (then
Digital Creations) decided to "do it right" and fix, in version 3, each and
every design wart of version 2. The end result is that Zope 3 is more or less
incompatible with Zope 2, especially for the kind of app we were developing
with it in 2001 (IIRC).

A friend of mine joked that Zope 3 should no longer be called "Zope"; but
since "Z" is the last letter of the alphabet, the only reasonable choice
would be "Yope", and that would suggest a step backwards instead of the jump
forward it was.

With that in mind, I never built a product whose name started with Z.

~~~
ebiester
AAope?

~~~
mahmud
Aaope you're kidding.

------
GeoffWozniak
It seems nice, but it's more of the same: programming by specification.

There's a place for that, but the future of programming languages, in my mind,
lies in changing the metaphor of the process from specification to
experimentation. After all, the problem with the specification metaphor is
figuring out what the specification actually _is_. To me, the best route to
get there is experimentation.

Make programming more about experimentation and I think we'll get somewhere.

(I'm still going to read more about Omega. It looks interesting enough.)

------
kingkilr
> programs must be written for people to read, and only incidentally for
> machines to execute.

------
baguasquirrel
The post is a bit of fluff, but the paper is better. Haskell already has
GADTs, as the paper points out. Those of you crying out for examples: just
read the paper. The real gem is the extensible kinds, which I haven't seen
mentioned for Haskell before.

I don't mean to keep throwing water on the fire, but the research community
has already made trees that can't be unbalanced, because an unbalanced tree
wouldn't typecheck, by encoding the depth of the tree into its type. Okasaki
demonstrated the concept in Haskell, I believe. Getting it to work for
red-black trees sounds new, though.
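
The encoding is small, for the curious; a sketch in plain GHC Haskell
(perfect trees, which are much easier than red-black trees):

    {-# LANGUAGE GADTs, EmptyDataDecls #-}

    -- Type-level naturals as empty phantom types.
    data Z
    data S n

    -- The depth is part of the type: both children of a Node must
    -- carry the same depth index, so an unbalanced tree simply
    -- doesn't typecheck.
    data Tree d a where
      Leaf :: a -> Tree Z a
      Node :: Tree d a -> Tree d a -> Tree (S d) a

    balanced :: Tree (S Z) Int
    balanced = Node (Leaf 1) (Leaf 2)

    -- unbalanced = Node (Leaf 1) (Node (Leaf 2) (Leaf 3))
    --   rejected: the children have depths Z and (S Z)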

~~~
thesz
<http://hackage.haskell.org/package/she> - _"The Strathclyde Haskell
Enhancement is a somewhat inglorious bodge, equipping ghc with automatic
lifting of types to kinds, pattern synonyms, and some kit for
higgledy-piggledy literate programming."_

From one of the authors of Epigram: <http://e-pig.org>

I've heard they intend to include SHE's functionality in GHC.
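
"Lifting types to kinds" sounds abstract, but the payoff is easy to show. A
sketch in promoted-datatype style (assuming a DataKinds-like extension, which
GHC did eventually ship under that name):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Nat is lifted: the type Nat becomes a kind, and its
    -- constructors become the types 'Z and 'S of that kind.
    data Nat = Z | S Nat

    -- The index is now well-kinded: n can only be a type-level Nat,
    -- whereas with bare phantom types nonsense like "S Bool" would
    -- kind-check.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a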

------
mahmud
There is a really nice prototype-based language by Gunther Blaschek, also
called Omega, and it predates this one by at least 15 years.

It was Self with mathematical rigor and some MOP elements, IIRC.

~~~
silentbicycle
Yes, Blaschek has a book on it, _Object-Oriented Programming with Prototypes_.

