
The Rise and Fall of CORBA (2006) - ptx
http://queue.acm.org/detail.cfm?id=1142044
======
todd8
As chief scientist at a start-up company in the '90s, I occasionally had to
attend standardization meetings to protect our products. This included some of
the very early CORBA meetings. Without participating, we risked being non-
compliant with evolving standards in our space, so I had to attend and keep
up with the direction things were going.

Companies like DEC, HP, and IBM might send five or six people to the important
meetings and consequently it was easier for them to influence the direction of
standards in a way that favored the architectures of their products. While I
met a few very bright systems architects actually interested in coming up
with a design that would provide the benefits of industry-wide standards, many
of the participants couldn't grasp good design or simply acted in a partisan
way for the sole good of their own company. What struck me was that it was
mostly "goers," not "doers," attending these meetings. The organizations' real
developers and architects were busy working on real products, too busy to
attend the excruciating standards meetings. The goers, on the other hand, might
be from the "planning" organizations in these large companies. These planners
often had very little background as developers and little insight into
realistic requirements.

At the CORBA meetings, powerful members couldn't agree on the lowest levels of
the protocol. Would it run over UDP or TCP? What about IP vs. Token Ring vs.
OSI networking architectures? At a slightly higher level, some companies had
their own, completely incompatible, RPC, data marshaling, and security efforts
well under way and they simply wouldn't sign off on any standardization below
the Application or Presentation layers of the protocol. This was crazy; CORBA
was burdened with an architecture requiring standardization at the application
layer between products that couldn't actually communicate with each other.

Hopelessly deadlocked at the most fundamental communication levels, the key
people involved in CORBA needed progress to keep CORBA alive, so CORBA moved
forward on standardization of things like nested distributed transactions--
trying to run before they could walk.

------
billyhoffman
This sentence shows a trap most technical people keep falling into:

 _" These arguments cannot fully account for CORBA’s loss of popularity,
however. After all, if the technology had been as compelling as was originally
envisaged, it is unlikely that customers would have dropped it in favor of
alternatives."_

Wrong. Those arguments more than explain the failure.

CORBA, like most technology, doesn't succeed or fail because of how
"compelling" or repulsive it was. People use terrible technology all the time.
Major applications are written in terrible technology. Businesses make huge
amounts of money off terrible technology. Great technology is skipped all the
time.

As engineers we care about this kind of thing. The problem is that the vast
majority of people in a business don't give a shit what technology stack you
use.

Technology does not exist in a bubble, where it lives or dies in some kind of
hippy meritocracy. Technology exists in the real world where other factors
tend to dominate far more than we think they should. This is also why when
engineers start companies, they quickly learn that 90% of running a business
has nothing to do with technology or development.

~~~
rwmj
I agree with your points, but I think the paper was probably referring to
actual problems in CORBA which were very obvious to everyone - technical and
non-technical - and caused a lot of pushback.

For example, the original C++ bindings were a train wreck: All C++ types
including strings got a second CORBA::.. definition, so you couldn't
interoperate with other C++ code easily. You had to write pages of boilerplate
to do even the simplest thing. There were lots of traps which could cause your
program to crash or leak memory, causing troublesome bugs in production
systems. This affected schedules (and hence marketing) and field support.

Another example was we couldn't get different suppliers' ORBs to talk to each
other - which is, like, the whole point of a standard communication broker
(right?).

------
hyperpallium
Since then, XML webservices have Risen and Fallen; JSON seems to have just
peaked, with the complex, XS-like JSON schema seeming to finally gain
traction.

The cynical common wisdom was that people happily adopt a new simpler
technology, until they finally understand the problem... and to address it,
the new technology must become just as complex as the old one.

The article's main point, about non-competitive standards-setting, is a good
one, but can only work for non-competitive technology. This is happening to
some extent, as infrastructure is commoditized as a complement.

~~~
mands
I agree that getting just the right amount of simplicity is actually a hard
challenge. JSON over HTTP feels like it has won as it was so trivial to
implement in any language; however, experience has taught me that the lack of a
schema causes problems over time.

At StackHut (www.stackhut.com) we're using JSON with a lightweight schema. So
far this is working very well, providing simple yet 'typed' remote interfaces
into containers. However, I often wonder about switching to XML/XML-schema/XML-
RPC, protocol buffers/gRPC, or some custom system in the future, or whether to
just keep it simple and not over-complicate things.

~~~
hyperpallium
I think what happened is that people building a complex system, who knew they
needed schemas etc., would just use the XML/ws-* ecosystem, because as repugnant
as some might feel it is, it is all written, debugged and works. If people
didn't need complexity, they used JSON.

The XML option kept complexity out of JSON.

However, it's inevitable that people starting with simple systems (using JSON)
would see them _become_ complex - perhaps partly due to dramatic success. And converting
everything to XML would be time-consuming and error-prone and... repugnant.
So. This is where the demand for json-schema comes from. And it does seem
inevitable: the inertia of back-compatibility is one of the more predictable
features of software.

I can make the hopeful observation that things aren't quite as bad as the
cynical take above: people _do_ learn from some of the mistakes of previous
technologies. There is _some_ progress.

I dislike JSON-schema because it's like a JSON version of XML schema. I think
a simpler schema would better serve JSON and its typical uses.

Is your "lightweight schema" a simple schema written in json-schema? Or
written in a lightweight schema language?

~~~
mands
Sorry for the delay - was on a break from HN

Yes, I think you are right - the availability of XML/ws-* acted as a magnet for
people who required extensive schemas, etc.

I agree that JSON is a great starting place for the rest of us who don't have
such immediate needs for complexity and can get by with it. But I think
eventually software growth pushes them to a more complex interchange
format, e.g. JSON schema.

I think that there is progress, with JSON on the simpler side, and now newer
formats like ProtoBufs, Thrift and mechanisms such as RPC - we do seem to be
learning from the past. It does feel that perhaps we do swing from one extreme
to another - first RPC was great and CORBA came and went; following this, the
perception was that it was utterly unsuitable for anything, until perhaps the
introduction of ProtoBufs, Thrift, JSON-RPC and so on. I personally think it
can be incredibly useful, but deciding on just the right features to keep
things manageable is incredibly difficult (more so than the tech itself I
believe).

We're no fans of JSON-Schema either; I've thought about it a few times but it
feels over-complicated. Instead we've settled on/forked a criminally overlooked
system called Barrister RPC
([http://barrister.bitmechanic.com/](http://barrister.bitmechanic.com/)). This
just supports basic JSON types, structs created from their aggregate, and
optional nullability. It has worked great so far, although we may expand
shortly to add more numeric types. You can try it live at
[http://www.stackhut.com](http://www.stackhut.com) (source at
[http://www.github.com/StackHut](http://www.github.com/StackHut)) - would love
to hear your thoughts re the schema/RPC layer.

~~~
hyperpallium
I read through all those links, and even got your example working (on a phone
- no curl etc - so had to write a little java http client). Consequently, this
comment is long. I hope it's useful to you!

BTW: I personally would like to see your idea usable on a phone - without a
full local machine (iterating might be a pain, but your system sounds really
fast). Not just serverless, but machineless! A currently underserved niche.

> deciding on just the right features to keep things manageable is incredibly
> difficult (more so than the tech itself I believe).

I agree implementation is the easier part, though we've been stuck so long, I
think there must be a simpler way to look at the whole thing, involving some
mathematical or algorithmic insight (as relational algebra did for databases).

Barrister: I sometimes get lost in special cases, and forget the main point
that provides the fundamental help to people: Barrister seems feature-full,
but from reading the first paragraph or so, it seems to only output docs -
because that's all they say it does! If the reader already knows what Thrift
etc do (i.e. something of an expert, across the field), they could guess that
maybe Barrister does more... but the users who just want the main thing you do
are easier recruits. Perhaps this is partly why it's overlooked...

StackHut: I'm not sure you need a separate IDL, if it is generated
automatically - why confuse the user with it? It seems like a more
sophisticated customization tool that you could leave aside for later? The
IDL itself is pretty clear, and although I'd thought about (eg) java classes
as defining a schema, I hadn't made the connection that an IDL (as from CORBA)
also does that.

(1). RPC format. I'm familiar with OO serialization and schema languages
(unfinished PhD, book chapters, a library and business), but less so for RPC -
so maybe I'm off-base here. And standards - even nascent standards - may be
worth complying with. But why not omit the meta stuff, and make it even
simpler:

    
    
      {
        "stackhut/web-tools":
        {
          "renderWebpage": ["https://stackhut.com", 711, 393]
        }
      }
    

I'm really just wondering if there's a strong reason for the metadata. It can
be helpful to orient the reader, but here it is clear from context - the keys
and how it's being used. Maybe there are other optional fields you sometimes
need?
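
For contrast, the kind of metadata-bearing envelope I mean is a JSON-RPC 2.0-style
request; the same call would look something like this (the method naming here is
just a guess, not necessarily the actual format):

    
      {
        "jsonrpc": "2.0",
        "method": "WebTools.renderWebpage",
        "params": ["https://stackhut.com", 711, 393],
        "id": 1
      }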

But... for your use-case, perhaps it doesn't matter that much, as the bindings
hide it from users (but the point of text protocols is human readability, eg for
debugging; so the simpler the better). You could use XML, or a binary
protocol.

(2). JSON by example: This is my great idea for JSON schema, which I'm amazed
no one has done yet: instead of another meta-format, do it by example:

Because JSON primitive values are typed, you can use a value to signify type.
An object therefore also implicitly defines its type (like a java class). For
example, the above JSON can also be used as a schema, because it indicates the
two nested objects required, and the types of the primitives (string, number,
number). Though I suggest a convention of using "", 0 and false as values.

Those zero values help convey that it's a type, not a value. It's tempting to
want to encode information in the value itself (as opposed to the type), such
as a default value - but that may be a mistake, because there is so much more
that can be done with strings than with numbers. Keep it super simple.
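
As a sketch, the RPC payload from earlier rewritten as a by-example schema with
that zero-value convention:

    
      {
        "stackhut/web-tools":
        {
          "renderWebpage": ["", 0, 0]
        }
      }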

[ Arrays are usually not fixed-length, enabling the next trick to encode
optional values and polymorphism. ] The JSON spec allows duplicate keys: so
you include the same key with different types for all the polymorphic types.
Specifically, you include the null value to indicate it is optional. e.g.
version is an optional string:

    
    
      {
         "version": "",
         "version": null
      }
    

This is a bit dodgy, because all JSON parsers simply return one value if
duplicate keys are found. You need to write your own parser. But it _is_ valid
JSON - and more importantly, it _looks_ like valid JSON. That's the key idea
of "by example" - it looks like what it represents; there's minimal cognitive
leap from type to instance.

It's rare to want polymorphic primitive types (e.g. string and number), so
this is more for polymorphic objects. Unlike the common trick of a "type"
field for nominal polymorphism, this is structural polymorphism - where only
the different fields distinguish types. NB: there are some tricky cases
here, when the fields overlap, and I'm not sure that client code would want to
mess with it.

Finally those non-fixed length arrays: polymorphism is represented by the
types of a set of values in the array. In other words, the values aren't
ordered, but just represent the permissible types. eg:

    
    
      [
        { "image_url":"", "width":0, "height":0 },
        { "text": "" },
        { "link_url":"", "link_text":""}
      ]
    

That's a schema for an arbitrary-length list that can contain instances of
those three types of objects, with those mandatory fields, of those primitive
types.

NB: for a schema of the RPC "header" above, the array schema represents a
fixed-length array - a special case.

(3). Primitive datatypes: This idea can be extended, in a second level, with
explicit primitive datatypes - this is the next level of schema power that
everyone wants. Every value is now a string, and looks something like: "url",
"date", "email", and then gets closer to XS, with ranges like "int:1..31" and
even those ridiculous regexes defining valid values (great idea, awful in
practice, like the regex for email). The key thing is that it is still JSON and
_looks_ like JSON, since the datatype specification language is just a JSON
primitive value (a string), and the syntax and meaning are obvious and familiar.
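
For instance, a second-level by-example schema might look like this (the field
names are made up; the datatype strings are the ones above):

    
      {
        "homepage": "url",
        "registered": "date",
        "contact": "email",
        "day_of_month": "int:1..31"
      }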

BTW minor typo on your website: s/intergrated/integrated/ (like integer)

~~~
mands
Wow - thanks so much for going through all the links and getting an example
working - on a phone no less!! :) very impressive! Hmm, machineless, now that
is interesting - I imagine with phone processors such a thing will become
possible, if it isn't already.

Yes! I wish there was a way to really define and lock-down exactly what
communication/serialisation primitives are required to aid communication
between systems. Though I imagine you are right in that we've come quite far
with common practices.

Haha - yes the Barrister docs can be a bit confusing - we are getting around
to writing a simpler version of them with common examples and use-cases. We
really like the schema itself as it does map onto JSON semantics quite nicely.

1) Yep, the messaging format could def be simpler! We just felt that as we're
starting out it would be better to stick with standards, even if, as you say,
they are nascent. This reduces the amount of things we have to do, but also we
hope that people much smarter than us have thought about the issues involved!
Hopefully the client-side bindings will hide most of it, but if needed it's
nice to know you can drop down to the JSON format.

2) Hmm, JSON by example, this is super interesting!! I fully understand where
you are coming from and it seems so much simpler than JSON-Schema (I was never
a fan). Using values as types is quite elegant, and reminds me of some of the
type-level programming stuff in the functional circles.

The use of multiple entries in a JSON object seems like a nice way to express
sum types - something we are really keen to add to the Barrister IDL (you can
extend from an object but I believe there is only a single tree). As you note,
although this is valid JSON, it may require a custom parser to extract. I'd love
to hear more about this technique tho, has it been used elsewhere or is there
any further documentation?

3) Primitives - yes, it'd certainly be possible to specify other primitives by
encoding them in the JSON string value. Could be risky, as you suggest, when
you start looking at regexes and so on - I've not been the biggest fan of
defining these as types in the past. Also, how would one go about defining a
user-defined type at the equivalent level of a primitive? Super interesting
tho, if you have any more thoughts on this I'd love to read them.

~~~
hyperpallium
Rereading, my comment was utterly misleading about "machineless"! I meant no
local development. All in the cloud + browser (or another client). So a phone
just needs a browser. [see my other reply]

I missed the Barrister IDL-JSON binding... kinda important! I must check how
they bind choice/sum/polymorphism to JSON...

2) JSON by example schema: Thanks! It's never been used; the above comment is
the only documentation. Maybe I should write an RFC... or at least a first draft
of a spec. And a reference implementation.

Actually using this schema to validate JSON instances requires using all the
fields to determine which branch you have. They can have fields with the same
name, provided the entire set is distinct. So these are OK (letters as
fieldnames):

    
    
      {a,x}+{a,y}; {a,b}+{a}; {a}+{} (empty object)   
    

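Rendered with the array convention from earlier (using string and number to
stand in for the two primitive types), the first of those OK cases would look
something like:

    
      [
        { "a": "", "x": 0 },
        { "a": "", "y": 0 }
      ]
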
Thinking further, duplicate fields imply a different data model, and most
JSON parsers will use a hash. Maybe it would ease adoption to fake up a syntax
(eg addr-1, addr-2, addr-3 for the "same" field, with escaping rules for '-').
I wonder about usage: maybe apps don't use this approach (same field
can have different kinds/types of values); maybe they use a field for each
type, only having a value for one of them? (or an explicit "type" field). It's
important to model common practice.
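
A sketch of that faked-up syntax, reusing the earlier optional version field
(the '-1'/'-2' suffixes stand in for the duplicate keys):

    
      {
         "version-1": "",
         "version-2": null
      }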

But I think JS coders do use polymorphic behaviour (same method names with
different code).

3) Add more primitive types. Thinking further, although people end up wanting
more precise primitive types for storing data, if JSON is mainly used for
transferring values between languages, it really need only be as expressive as
the languages themselves, which generally don't go into details like syntactic
nature of primitive values and ranges of integers and lists etc. (They wrap
these concepts in objects; JSON can too).

I'm stepping back from the idea - I just liked that you could add richer types
without losing the property of looking like JSON - but you do lose the
property of representing types with the types of JSON values. Can leave the
extra sophistication to xml schema (and json schema) for those who need it.
Unless JSON itself adds more primitive types (which I doubt!).

PS: one more comment to go. I plan to reread your previous comments on CORBA
etc now I have a better idea where you're coming from.

------
late2part
Of particular note, Joel Spolsky's missive on Architecture Astronauts:
[http://www.joelonsoftware.com/articles/fog0000000018.html](http://www.joelonsoftware.com/articles/fog0000000018.html)

------
2sk21
"the simplicity of component models, such as EJB" Surely the author must have
been joking :-)

That aside, good points in this article. CORBA is emblematic of a time when
elaborate architectures were created without any thought of actual
implementation. At IBM in the early 1990s, we had an entire group of 40 people
working on the architecture for a broadband network infrastructure who had
never written a line of code.

~~~
AnimalMuppet
> "the simplicity of component models, such as EJB" Surely the author must have
> been joking :-)

Perhaps not entirely. If that's the kind of problem you have, there may not be
any possible solution that is much simpler than EJB.

There's a reason all these architectures wind up being horrible. The problem
is horrible, and there's no simple, clean solution for it.

------
wainstead
In the early 2000s I worked within a CORBA system. Daemons written in C++,
servers in Java, and the system was scripted via Python.

That you could instantiate an object from the C++ or Java sides in Python and
do things with those services was actually quite fun. I was insulated from the
difficulties of the CORBA implementation because the system came from a
vendor, but bugs and performance issues were perpetual. And this system ran
within the same local network behind the firewall.

One of the truly sucky things about this system was it had its own version of
Python server pages: Python and HTML intermixed within the same file, and woe
betide you if you got your Python indentation messed up between blocks of HTML.
It was ridiculously hard to debug.

And as the article points out, upgrading was a stop-the-world process where
everything had to go offline, usually for hours. One of the other crippling
"features" of this system was C++ objects persisted in the database as
binary... man those caused headaches. And half the data was in DB2 and half in
an LDAP server. This vendor never met a technology they didn't want to throw
into the mix.

------
Ygor
Particularly interesting is the part about design by committee.

Remove the CORBA references, and it might apply to many other past and current
pieces of technology:

"There are no entry qualifications to participate in the standardization
process. Some contributors are experts in the field, but, to be blunt, a large
number of members barely understand the technology they are voting on. This
repeatedly has led to the adoption of specifications with serious technical
flaws."

“Vendors respond to RFPs even when they have known technical flaws. This may
seem surprising. After all, why would a vendor propose a standard for
something that is known to suffer technical problems? The reason is that
vendors compete with each other for customers and are continuously jostling
for position. The promise to respond to an RFP, even when it is clear that it
contains serious problems, is sometimes used to gain favor (and, hopefully,
contracts) with users.”

~~~
smhenderson
I find this one to be very familiar as well. I'm sure we could each come up with
our own examples and, combined, have dozens or more.

"Vendors sometimes attempt to block standardization of anything that would
require a change to their existing products. This causes features that should
be standardized to remain proprietary or to be too vaguely specified to be
useful. Some vendors also neglect to distinguish standard features from
proprietary ones, so customers stray into implementation-specific territory
without warning."

------
ak39
CORBA's "failure" was the distributed part, and distribution is an essential
element of the architecture. Unlike COM, where you could easily separate the
OO (IDL)-based aspect of the architecture from its distributed implementation
(DCOM), CORBA was always assumed to be distributed.

The idea of implementation language agnostic binary IDL is still powerful
though.

~~~
_pmf_
> The idea of implementation language agnostic binary IDL is still powerful
> though.

IDL is a downright nice specification language when compared to alternatives
with a similar power (I'm looking at you, ASN.1!).

~~~
dcuthbertson
ASN.1 wasn't so bad. In the '90s, I worked for a little company, Gradient
Technologies, where we modified Kerberos to add authentication via Security
Dynamics' key fobs. I hadn't seen ASN.1 before, but found it wasn't much
effort.

------
markbnj
Nice walk down memory lane. We used Orbix in 1995 to build a production
banking platform, one of the first true web-enabled banking services, and it
was a nightmare. It wasn't necessarily the complexity. Our engineers could
understand proxies, stubs, and bindings well enough, and when it all worked it
worked well. But memory leaks on NT and mysterious performance issues sucked
all the life out of the project. Over the next few years I watched CORBA
wither away in the face of growing adoption of simpler web-oriented protocols,
and it never made me even a little sad to see it go :).

------
thorn
I still maintain the piece of backend which I wrote 8-10 years ago using
Python and omniORB. The performance of CORBA was critical back then. But I
started to use CORBA even for parts where performance and network overhead were
not that critical, which corrupted my code beyond any limits. I was not very
experienced back then. I still struggle and feel pain when I see that old
code. Also, omniORB has some weird memory-leak-related bug, which I have not
been able to figure out in many years. Sigh...

I have partially migrated most components from CORBA to beanstalkd (an MQ
server). This is a much simpler approach: testing is easier and the code flow is
simple. I cannot recommend this simple and robust MQ server enough:
beanstalkd.

For the parts where one component wants to call another component in Python and
there is no way I can plug a queue into the flow, I prefer to use Pyro.
Dirty and simple.

------
sepeth
How does CORBA compare to Apache Thrift or Google's protobuf? Or do they try to
solve different things?

~~~
dekhn
I consider CORBA's IDL/IIOP combo to be roughly identical to protobuf. CORBA's
distributed RPC capability is like gRPC. At a higher level there are some
differences but they are substantially isomorphic.

~~~
lobster_johnson
CORBA was terrible at versioning. If you tried to use a client with a
mismatching server (e.g., the server's IDL added a field that the client
didn't have), you would typically get segfaults.

Protobuf, on the other hand, builds in versioning right from the start. Every
struct field is tagged with an ID, so unknown fields can simply be ignored,
and marshalling/unmarshalling can be adaptive. It means that structs can be
preserved across version boundaries; in theory, a "v1" client can read a "v2"
struct, modify it, and pass it back with the "v2" fields intact, even though
the client didn't know about them. (This requires that the client doesn't
unmarshal the data into something that would lose the metadata, like a C
struct.)

Another big difference is that CORBA had IORs (Interoperable Object
References), a kind of smart pointer to a remote object. With CORBA, as with
DCOM, you could pass object references around and make method calls on them,
and the calls would be transparently routed to the correct server. You could
have client A get an object from B which got the object from C, and if A did a
.foo() call on the object, it would call C. Of course, this leads to all sorts
of issues, such as having to make sure objects stay alive for as long as any
client (or server, since it goes both ways!) has a reference to it, and
dealing with unresponsive clients/servers.

gRPC is much simpler in this respect, in that it's just RPC calls, pure data,
no objects. In that sense, gRPC is closer to DCE RPC (the basis for Microsoft
RPC, which was the underlying RPC technology of DCOM) [1].

[1]
[https://en.wikipedia.org/wiki/DCE/RPC](https://en.wikipedia.org/wiki/DCE/RPC)

~~~
dekhn
Yes, I know all of this. I never saw IORs as a truly necessary feature,
although I see the attractiveness of the idea. I'm sure you could, if you
desired, implement IOR-like behavior on top of other RPC systems.

I never really had problems with message versioning because I owned the client
and server, and created new messages when I wanted new versions.

~~~
lobster_johnson
Protobuf-style versioning gets pretty useful when you're developing
microservices — where there are potentially a whole bunch of apps that would
otherwise have to be upgraded at the exact same time, even for adding optional
fields.

------
ssaddi
Nice article. I remember reading about CORBA in uni days. The article
summarized it well: it became a bulky technology with very lengthy and
confusing documentation. Industry, on the other hand, was ready to adopt
simpler technology that focused on web-based protocols, like HTTP (HTTPS).
Then SOA came and changed the landscape completely with REST-based
services that relied on the HTTP protocol themselves. Finally, it comes down to
simplicity. A technology that is too complex to implement and understand will
not easily meet demands of constant change and innovation. Hence, as
mentioned, it is reduced to a niche technology.

~~~
xorcist
You skip an important part where "web based protocols" means web services,
which in turn was synonymous with SOAP. This architecture was based on remote
method invocations and was dominant for the better part of a decade (if it
isn't still, for some domains).

------
dekhn
I thought the CORBA IDL and the IIOP were both great. It made the transition
to protocol buffers and Google's RPC system pretty painless.

------
sgt101
well, web services were a complete bust too!

It's interesting (as other commenters have noted) that now people implement
REST interfaces and we don't seem to have any (other) standards for imposed
(controlled) shared state.

Agent standards like FIPA were supposed to enable consensual shared state by
agreement, but have never really been used to do that.

I guess the world really doesn't _need_ these things yet?

~~~
tracker1
In the end, simple, documented, defined (even if the documented/defined are
simply the source you have access to) is easier to deal with than overly
complicated systems that add little value, and a lot of cognitive overhead,
confusion and indirection. This is why today's JSON services rule the roost so
to speak.

Not that there aren't attempts to create some standards... It's just that a
lot of the time they don't add much value.

~~~
rwmj
CORBA (and SOAP) had type safety, something which is missing from "modern"
systems.

JSON is an ill-defined standard - integers, for example, are almost completely
undefined, and this causes real problems in real programs (eg [1]).

[1] [https://lists.gnu.org/archive/html/qemu-
devel/2011-05/thread...](https://lists.gnu.org/archive/html/qemu-
devel/2011-05/threads.html#02162)

~~~
tracker1
Integers don't exist in JS/JSON... You get IEEE754 double precision floating
point numbers, meaning whole numbers from -(2^53 - 1) through (2^53 - 1) are
supported without rounding issues.

If you need greater precision or fixed decimals, use a base10 string and an
appropriate library in your application.
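
For example (hypothetical field name), a fixed-decimal amount carried as a
base10 string rather than a JSON number:

    
      {
        "balance": "12345678901234567.89"
      }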

I'd say the lack of strict date/location types in the spec is probably as big of
an issue, but there are standards for how to handle those that have evolved as
well... (GeoJSON and ISO-8601 date-time strings).

~~~
rwmj
You've made my point with "If you need greater precision or fixed decimals,
use a base10 string and an appropriate library in your application". In other
words, it's not interoperable or type safe. BTW the JSON spec itself doesn't
define integers at all. "Numbers are really floats" comes from Javascript and
hence is just a convention for people who are using JSON from another
language.

The larger problem is the lack of schemas. In JSON they exist but no one uses
them. CORBA forced you to have a schema (IDL). SOAP had schemas, albeit very
complex ones which no one really understood.

~~~
tracker1
"JavaScript Object Notation" ... it makes perfect sense for JSON numbers to
match JS numbers, which follow the IEEE spec.

------
lukeh
Interesting article. CORBA is still hanging around in Avid's EuCon protocol
for controlling audio applications. Not always that reliable (but hard to say
where the blame lies there).

~~~
gmfawcett
CORBA also persists in the Ada community (e.g. see
[http://www.adacore.com/polyorb/](http://www.adacore.com/polyorb/)).

------
benaston
Those working on other standards should take heed.

------
j03m1
I thought this was the rise and fall of COBRA (G.I. Joe), and when I realized it
wasn't, I was sad.

