Andl, a relational language that is not SQL, is coming to Postgres (andl.org)
272 points by pjungwir on May 30, 2016 | 104 comments



Interesting coincidence for me that this hits the front page of HN today as I recently read this article [0].

The author thinks we should move away from using the database purely to persist data while handling all logic/validation in the server language (e.g. by using an ORM to map your server language to the database), as that approach has proven less robust over time.

Instead, he suggests pushing knowledge of your data into your database as this has proven to be more time resilient and takes advantage of the power of these databases.

I liked the argument (which is better expressed there than in my paraphrasing), but my largest reservation is that expressiveness is a big consideration when choosing a language, and in this case the alternative to using Ruby/JS/Python + ORM in your MVC framework is to use SQL functions, which are syntactically clunky/dated at the very least.

It seems like Andl is looking to bridge this to some extent.

[0] https://sivers.org/pg


The problem with this theory is programming in some databases (MS SQL Server) is extremely unpleasant.

For example, you can finally, in the year 2016, split strings natively in TSQL:

https://www.brentozar.com/archive/2016/03/splitting-strings-...

Compare that to this SO question:

http://stackoverflow.com/questions/2647/how-do-i-split-a-str...

Note there are 32 answers on that question, many of which fail under various edge cases. Also notice that no one finds this situation absurd in the slightest (Stockholm Syndrome is alive and well in the MSSQL community apparently). This is just one example of the nonsense you have to put up with on SQL Server, there are many others.
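For contrast, the operation everyone in that SO question is asking for, splitting a delimited string into a set of rows, is a one-liner in most general-purpose languages. A hedged sketch of the semantics (the function name and edge-case behavior here are illustrative, not a claim about STRING_SPLIT's exact contract):

```python
# A sketch (in no way a faithful T-SQL reimplementation) of the
# split-to-rows operation: one output "row" per delimited element.
# Adjacent delimiters produce empty elements, one of the edge cases
# where many of the hand-rolled T-SQL answers disagree.
def string_split(value, separator):
    for element in value.split(separator):
        yield element

rows = list(string_split("a,b,,c", ","))
print(rows)  # ['a', 'b', '', 'c']
```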

And while PG is getting yet another new language capability, it appears Microsoft is pulling SQLCLR support (in Azure, so far):

https://www.brentozar.com/archive/2016/04/breaking-news-lite...

...which probably doesn't matter at the end of the day because almost no DBA would ever let you use it in production.

While it would cause harm to my personal career, nothing would make me happier than to see Postgres crush MSSQL, Microsoft has earned nothing less with their absolute disdain for their users.


> The problem with this theory is programming in some databases (MS SQL Server) is extremely unpleasant.

Oh, forget about string splitting and the like. The language itself (T-SQL, and SQL in general) is a disaster. No modularization possibilities (wanna do this particular "SELECT" but with slightly different something? well, copy-paste the whole thing), sensible code autocompletion is next to impossible (type "SELECT _" wat? from wat?), don't get me started on type conversions, dumbest possible error messages, the "IN (@multipleValues)" disaster, and oh my.


I'm not saying this is ideal, but there are certainly possibilities for modularization. You can create views that cover cimmin query predicates and then write queries against those views.


s/cimmin/common/g


> This is just one example of the nonsense you have to put up with on SQL Server, there are many others.

The MSSQL equivalent of GROUP_CONCAT in MySQL or STRING_AGG in Postgres is ridiculous. Yes, I understand it's more versatile and extensible to use a subselect which can be mostly optimized away, output as XML, prefix each field with your delimiter, and then ignore the first character of the concatenation of those fields, but come on[1]. Throw your users a bone once in a while...

1: http://stackoverflow.com/a/5981860/210749
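For comparison, here is the same aggregation in a database that exposes it directly; this uses SQLite's group_concat (analogous to Postgres's STRING_AGG), and the table and data are invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tags (post_id INTEGER, tag TEXT);
    INSERT INTO tags VALUES (1, 'sql'), (1, 'postgres'), (2, 'mssql');
""")
# One aggregate call, versus MSSQL's FOR XML PATH + STUFF workaround.
rows = conn.execute(
    "SELECT post_id, group_concat(tag, ', ') FROM tags "
    "GROUP BY post_id ORDER BY post_id"
).fetchall()
print(rows)
```

(The ordering of elements within each concatenated group is not guaranteed unless you sort explicitly, which is one of the things Postgres's STRING_AGG handles with an ORDER BY clause inside the aggregate.)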


From that post:

"Or a version that works correctly if the data might contain characters such as <"

What other edge cases are you missing? Do you care? Microsoft certainly doesn't.

TSQL is an embarrassment, it is a perfect example of the arrogance of Microsoft that is luckily finally being corrected on many teams (C#, Azure) but persists in the MSSQL camp.


> "which probably doesn't matter at the end of the day because almost no DBA would ever let you use it in production."

Where I work, we have a DBA and use SQL CLR in production. In fact I'm in the process of writing a SQL CLR user defined function today. It's interesting that Microsoft have dropped support for them in Azure, but they're still supported in SQL Server 2016. Furthermore, starting from Visual Studio 2013 efforts were made to make them easier to deploy direct from Visual Studio, so clearly some people in Microsoft think they're worth supporting.

I'd say as long as you have a sane database structure where you don't try to do all your processing in a single database instance, SQL CLRs are fine. For example, use them in databases that handle scheduled processes rather than in ones that can't have any downtime, and it'll be easier to redeploy them without much ceremony.


Have you ever tried (and succeeded in) signing a SQLCLR DLL?

I wasn't able to accomplish it and neither could this guy:

http://stackoverflow.com/questions/7032872/what-are-specific...

And if your answer is RTFM then you've probably not used SQL Server before, or were fortunate to have always worked on a perfectly configured instance, because a lot of things (say, debugging) often don't work on a brand new install on a brand new Windows box.

There are tons of things like this where people have been crying for help from Microsoft for years, but the SQL Server team clearly does not give one fuck about their users' problems.


I've not needed to sign a SQLCLR DLL; all the SQLCLR code I've written has used SAFE assemblies.

That being said, I don't mind helping out. Here are the relevant notes I made when following the "Developer's Guide to SQL Server CLR Integration" on Pluralsight:

"To configure assembly signing, open project properties, SQLCLR, scroll down to find and select the 'Signing...' button, check the 'Sign the assembly' checkbox, choose or create a 'Strong name key file' (if select <New...>, set key file name e.g. CLRLogReader, and set a password, OK to finish), OK to finish, then close project properties tab."

From the troubleshooting section of the same course...

"If you want to deploy an assembly that uses EXTERNAL_ACCESS (or UNSAFE) permissions instead of SAFE permissions, you need to perform some extra steps in order to grant those permissions to the login deploying the assembly. There are a few different ways to set those permissions (including enabling the EXTERNAL ACCESS ASSEMBLY permission for the DBO and switching the TRUSTWORTHY property on, or signing the assembly with a certificate), but the recommended method is to sign the assembly with an asymmetric key (recommended because it is both easy to do and secure). In VS, load properties for SQL Database project, SQLCLR tab, scroll down to find and click on Signing... button. Check the 'Sign the assembly' option, create/select strong key file, OK. Build project, make sure this process completes without errors. Then in SSMS will need to set up a user that can use the key that was used in the build process. Example:

    USE master;

    CREATE ASYMMETRIC KEY CLRLog
    FROM EXECUTABLE FILE = 'Path to SQL CLR DLL' --note this DLL was the one created in the build process carried out in VS

    CREATE LOGIN CLRLogin FROM ASYMMETRIC KEY CLRLog

    GRANT EXTERNAL ACCESS ASSEMBLY TO CLRLogin

Note that you only have to configure this assembly signing once; subsequent deployments will reuse the key and login we configured to deploy the assembly, so deployment then becomes as straightforward as deployment of SAFE assemblies."

Hope these notes prove useful.


You have been able to split strings in t-sql forever. The link you are referring to splits a string by a delimiter and returns the result as a table.

I've also used CLRs in production since SQL Server 2005. Not sure which circles you run in, but it's quite different than what I've seen. I also use Service Broker, so maybe I'm just an odd duck?!


> You have been able to split strings in t-sql forever. The link you are referring to splits a string by a delimiter and returns the result as a table.

Generally one wants to either extract the Nth element from a delimited string, or convert a delimited string into a table; are you thinking of a different scenario?

Somewhere on an official Microsoft page, there is something to the effect that SQLCLR is "not recommended", which gives your typical DBA a perfect excuse to say "no", with no interest in the particulars of why Microsoft might have written that warning.


In my experience the fact that you even know the words "Service Broker" makes you an odd duck. (My company uses the heck out of it, BTW... It boggles my mind that it's so unknown to most developers using the Microsoft SQL Server platform.)


What's an example use case for Service Broker?


Asynchronous processing. Service Broker has a queue-based metaphor and it can execute stored procedures asynchronously when new messages arrive. I gnash my teeth every time I see problems "solved" by the "we'll just poll the database periodically for changes" pattern that could be better served by Service Broker.

Service Broker performs store-and-forward reliable message delivery inside a single SQL Server instance and between instances (on the same machine or over a network). The inter-instance delivery is particularly nice insofar as it can use certificates for authentication (which is very nice when you have disparate Active Directory environments in play).

We've built a couple of systems that use it and have been very pleased with the results. I can't figure out why it's so obscure. The documentation isn't terrible, but it definitely has a steep learning curve.
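The polling-vs-activation contrast can be sketched outside SQL Server entirely. Below, a blocking in-process queue stands in for a Service Broker queue and a thread plays the role of an activation procedure; all names are illustrative and nothing here uses Service Broker itself:

```python
import queue
import threading

messages = queue.Queue()   # stands in for a Service Broker queue
processed = []

def activated_worker():
    # Like an activation procedure: woken when a message arrives,
    # no "poll the table every N seconds for changes" loop needed.
    while True:
        msg = messages.get()
        if msg is None:       # sentinel: end of conversation
            break
        processed.append(msg.upper())

worker = threading.Thread(target=activated_worker)
worker.start()
for msg in ["order-1", "order-2"]:
    messages.put(msg)
messages.put(None)
worker.join()
print(processed)  # ['ORDER-1', 'ORDER-2']
```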


Interesting connection! :-)

As a web developer who has done mostly Rails for a long time, I default to ORMs. And it seems to me that the last 20 years of architecture theory boils down to "How do I put a layer in front of my database?" Once your schema becomes a public API, your app is frozen.

But on the other hand, I believe in using all the features your database has to offer. I remember doing J2EE in the early oughts everyone spoke about being able to swap out one RDBMS for another, and I eventually decided that kind of lowest-common-denominator approach was not worth the cost in preset-day productivity. Plus building on an open source project is less risky than something with outrageous licensing fees, so why not use it? So my own Rails projects have real foreign keys, CHECK constraints, some SQL functions (even some in C), views, and big queries. If I can express an invariant I trust Postgres more than myself to enforce it. I've also had some projects lately where building a JSON response for the front-end involved hundreds of ActiveRecord objects, and dodging that by using the JSON-construction functions in pg 9.4+ is a huge performance win. You can even still use the ActiveRecord DSL to build the query, using your scopes and associations and Ruby code, but then do a `to_sql` and wrap the whole thing in Postgres JSON fuctions. I feel like there could almost be a gem for that. Anyway, sorry for rambling. :-) I agree that Andl could make it much more appealing for people to "put the logic in the database," and I feel like I have been moving in that direction for a long time. Maybe we will see an ActiveRecord Andl adapter?!


> I remember doing J2EE in the early oughts, when everyone spoke about being able to swap out one RDBMS for another, and I eventually decided that kind of lowest-common-denominator approach was not worth the cost in present-day productivity.

Remember who was putting the money behind those ideas.

IBM, for example, had a vested interest in folks swapping out Oracle for another database: DB2 for Mainframe, or DB2 UDB for AIX/x86. Enterprise is a zero-sum game, and zeroing out the competition is a way to grow the business.


ActiveRecord looks on databases as flat files with a funny accent. I don't see how changing the query language will shift the entire design paradigm.


Do you feel that way about all ORMs or just ActiveRecord? How would you like it to be different?


No ORM ever hits a universally happy solution; graphs and sets are equally powerful but express the world in fundamentally different ways. ActiveRecord is particularly annoying because it is bad both to the database and the programmer.

To the database it is bad by making a great show of holding its nose while dealing with the lowly bit box, making scrupulously sure to avoid taking advantage of the fact that the database has locality and enforceability. I've encountered true production race conditions that Rails validations can't solve. It is a professional embarrassment when clients receive duplicate emails about the multi-umptillion dollar software we've sold them. Meanwhile, a single constraint in Postgres prevents it from ever happening again.

And it's bad to the programmer by exposing a massive interface that you are meant to inherit from for construction. Ironically, it's named after the Active Record pattern, which is about making a thin interface to a table on which you can then build your own business-specific domain translation layer. But in Rails that's not really how it works.

I used to think the mismatch could be solved by lurching towards sets[1]. But I don't think it would be a very easy transition, and more to the point, I don't have anything stronger to back up my argument than a collection of grumpiness about the ways various ORMs obscure the power of relational modelling.

[1] http://chester.id.au/2013/08/28/notes-towards-a-set-objectiv...


From what I've seen of it, LINQ to SQL (or other ORMs hooking up to LINQ on the .NET ecosystem, like Entity Framework) can avoid the feel of what Jacques is talking about. LINQ lets you express queries in C# (or what have you) syntax but still exposes the "feel" of an SQL database, not just using the database as a persistent object store.


New project is using node/sequelize, and ya, I miss LINQ.


In Postgres you were never really restricted to SQL, there's built-in support for Tcl, Perl and Python, and third-party support for others.

https://www.postgresql.org/docs/9.5/static/xplang.html


Article acknowledges this: "There are currently several language implementations for Postgres, including PL/SQL (similar to SQL/PSM), Python and Perl. The aim of this project is to add Andl to that list, but as more than ‘just another language’."


FYI, the name PL/SQL traditionally refers to Oracle's thing... Postgres has PL/pgSQL.


I did a double take at that. My heart skipped a beat thinking I could maybe suggest porting some old Oracle stuff over to Postgres. A quick Google shattered my dreams. Damn you, article.


You can, actually. If you want more out-of-the-box tools to migrate from Oracle, look at EnterpriseDB.


The trouble with this is that the database is the hardest part to scale. In a busy site the DB may have its work cut out just serving data. This particular pendulum swings back and forth every decade or so. That said, plenty of sites will never have a scaling problem, and Andl looks interesting.


It's not databases that are hard to scale, it's consistent state.

It's just that databases begin with the design assumption that you want state to be consistent.


This is a great point. I think the way forward is to unite these properties in an append-only schema that can be efficiently queried for a consistent state without requiring expensive transactions. Something like [1] works well enough for RDBMSs, but still isn't ideal.

[1] http://higherlogics.blogspot.ca/2015/10/versioning-domain-en...
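One concrete (toy) reading of the idea, not the scheme from the linked post: writers only ever INSERT new versions, and readers reconstruct a consistent current state by taking the highest version per entity. The schema and names here are invented for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account_versions (
        account_id INTEGER,
        version    INTEGER,
        balance    INTEGER,
        PRIMARY KEY (account_id, version)
    );
    -- Writers only ever append a new version; no UPDATE in place.
    INSERT INTO account_versions VALUES (1, 1, 100), (1, 2, 80), (2, 1, 50);
""")
# Readers derive the current state from the append-only history.
current = conn.execute("""
    SELECT account_id, balance
    FROM account_versions AS a
    WHERE version = (SELECT MAX(version)
                     FROM account_versions
                     WHERE account_id = a.account_id)
    ORDER BY account_id
""").fetchall()
print(current)  # [(1, 80), (2, 50)]
```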


IMO one of the great things that came out of NoSQL is CRDTs[1]. It would be interesting to see whether they could work with a relational database.

[1] https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...
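A minimal sketch of the flavor of CRDT being referred to: a grow-only counter, where each replica increments only its own entry and merging is an element-wise max. Because merge is commutative, associative, and idempotent, replicas converge no matter the order in which updates are exchanged (replica names are illustrative):

```python
# Grow-only counter (G-Counter) CRDT: state maps replica id -> count.
def increment(state, replica_id, n=1):
    new = dict(state)
    new[replica_id] = new.get(replica_id, 0) + n
    return new

def merge(a, b):
    # Element-wise max: commutative, associative, idempotent.
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

a = increment({}, "replica-a", 2)   # replica A counts 2
b = increment({}, "replica-b", 3)   # replica B counts 3, concurrently
assert merge(a, b) == merge(b, a)   # exchange order doesn't matter
print(value(merge(a, b)))           # 5
```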


Concurrent Revisions [1] are even better IMO. Much simpler to reason about. In fact, if you know how to work with git and other distributed SCM tools, then you already understand the model: branch, do your work locally, then deterministic merge.

[1] http://research.microsoft.com/en-us/projects/revisions/


Yes, I just found out about these recently and am surprised no datastores have been built around them. They are basically a usable version of fork-join causal consistency. They also subsume CRDTs.


Do they? The papers on CRDTs I've skimmed often mention concurrent revisions, but they've never claimed CR subsume CRDTs.


This paper does (in fact, it points out that any total function can be emulated by a query-update automaton, which can be used with concurrent revisions). At least, that's my recollection, it could be a generalization of CR.


What is the "this paper" you're referring to? The papers at the link I provided don't mention CRDTs much.


http://research.microsoft.com/pubs/155638/msr-tr-2011-117%28...:

"Our definition is strictly more general than CRDTs [17] in the following sense: From any state-based CRDT we can obtain a FJ-QUA by using the same state and initial state, the same query and update functions, a fork function that creates a new replica and then merges the forker state, and a join function that uses the merge. Note that the definition of strong eventual consistency in [17], just like ours, requires that updates can be applied to any state."


> Note that the definition of strong eventual consistency in [17], just like ours, requires that updates can be applied to any state.

My favourite gotcha.

It solves distributed statefulness!

... so long as someone else solves statefulness!


I don't think that's a fair summary. Stateful cells have non-local and non-commutative evolution, which is why they're hard to distribute. These frameworks eliminate the non-locality, which isn't negotiable if you want to scale, but at the expense of keeping the requirement that you manually ensure commutativity.

Not a bad tradeoff, and I highly recommend the CR papers if you haven't read them. I'd say most programs have a natural expression in terms of revisions, and you're left with only a small set of circumstances where state simply must evolve consistently and atomically (airplane seat reservations, for instance).

I would still like a framework for reasoning about these circumstances that integrates naturally with CR though.


I hear you, I do. But it irks me when people wave this stuff around like a magic wand, because it's just not.

It's like saying you have an O(1) solution to all TSP graphs with the teeny, tiny caveat that you start with a solution to the TSP graph you're working on.


I still think your last statement is unfair. At worst, the caveat is that the TSP solution is for any graph within a certain class of topologies.

Concurrent revisions in particular has been implemented to work in concurrent, parallel and incremental computing, with no change in semantics. So it's pretty darn close to a magic wand, but it can't violate the laws of logic.


They're saying that there aren't states which forbid updates. If you can forbid updates, it's trivial to enforce consistency.


This is also not as easy as it sounds though when you go distributed. Databases already handle the non distributed case pretty well. It's when you need the performance/latency benefits of distribution that you begin to run into problems that an append only schema doesn't help with.


The reality is that your site is nearly always going to be bottlenecked on the database anyway, though. It is generally the case that network latency completely overwhelms the cost of running your code on the database server, so executing everything requiring transactional consistency in a single round-trip communication to the database is usually the most scalable approach. It's also very much the case that centralization yields the best performance when it comes to both latency and throughput, assuming you can find reasonable partitions by datacenter, unless you can find a synchronization-free execution or are willing to tolerate inconsistency (usually, even then you're doing this for availability, not performance, but if you're Facebook or an ad server sometimes for latency too).

Moreover, while this isn't usually taken advantage of by contemporary databases or schedulers, in research databases knowing details about the transactions that are going to execute allows dramatic improvements in throughput compared to ad-hoc queries (for a variety of reasons, from being able to find a synchronization-free execution that preserves certain invariants to being able to optimize for data locality to trading latency for throughput with group commit... the list goes on).

My personal experience has been that for the vast majority of websites, there simply isn't enough computational work going on over the data to outweigh these benefits.

All that being said: I still hate stored procedures with a fiery passion for other reasons (mostly that they often fail to integrate nicely into the rest of the application's deployment mechanisms, especially if you're using a hosted database solution which prevents custom extensions) and try not to let them anywhere near a database unless there are performance reasons to do so.

(BTW, just to show I'm not talking out of my ass, here's the latest and greatest in serializable performance within a datacenter (aka centralized): http://research.microsoft.com/pubs/255848/SOSP15-final227.pd.... To quote:

"FaRM achieves a peak throughput of 140 million TATP transactions per second on 90 machines with a 4.9 TB database, and it recovers from a failure in less than 50 ms." It also performs 4.5 million TPC-C New Order transactions per second with a 1.9 ms 99% latency [this includes 10% remote partition transactions]).


Food for thought. But what you are describing sounds more performance-oriented than scalability-oriented. Are the speed gains from extra optimization enough to make up for putting more eggs in one basket?


Scalability is just a means to an end. Most often, that end is better performance. So it's pretty relevant if a non-scalable solution can deliver vastly better performance in practice than a more scalable one (and this is relatively common, especially when you start taking into account things like developer cost, hardware cost, and available network bandwidth).


I believe VoltDB only supports pre-compiled queries in order to statically optimise them together.


Historically, that was true, but recent versions also allow ad-hoc queries. That being said, VoltDB is no longer the state of the art in serializable OLTP performance (though maybe it is in the commercial space, but I think HyPer for example outperforms it on a lot of benchmarks).


> "The trouble with this is that the database is the hardest part to scale."

Probably depends on the plan followed for data replication, as some approaches seem to make it easier to add new DB nodes as you scale. I've not tried it myself, but I think Apache Samza has some promising ideas in this area:

https://www.youtube.com/watch?v=fU9hR3kiOK0


> Instead, he suggests pushing knowledge of your data into your database as this has proven to be more time resilient and takes advantage of the power of these databases.

Indeed, moving logic into a relational language is a great idea. If only SQL weren't such a poor pseudo-relational language.


Has a better replacement for SQL been created?


I don't know that one has ever been truly implemented, but it's certainly been discussed. Refer to just about any of Chris Date's work, primarily the third manifesto. I do recall reading about a database system that tries to implement the "true" relational model (mostly according to Date, with the exception of his type system), and has a language similar to Date's Tutorial D. I believe it was written in Haskell and showed up here not very long ago. But I can't find it now...


I don't know if Tutorial D and its brethren are actually better than SQL in the ways that SQL is bothersome to most people. I don't think most people actually care that SQL isn't a pure implementation of the relational system (and it goes further than Codd's original proposal, Codd called for NULLs, Date and Darwen want to throw them out). I think stuff like enabling code modularity and code reuse and otherwise acting like language design has advanced since the 1970s is far more interesting to most people than getting rid of bags in favor of sets.


For me, when I tried to do something more complex and reusable, templating/generics were really missing, and lambda functions, but those didn't get into production Postgres. Oh, and better array support is needed: converting between arrays, SELECT, and VALUES () is very tricky. So I was doing a lot of copy-pasting of my own tricks.


You might be looking for Project:M36.

https://github.com/agentm/project-m36


There are people who are convinced that QUEL[1] is better (or at least was better at the time).

I never used it myself, so I can't comment much about it. The examples on the wiki make it look as if perhaps it would integrate better with code, but that's just a guess.

[1] https://en.wikipedia.org/wiki/QUEL_query_languages


Define "created". There's Datalog, LINQ, and Dedalus at the very least, but these aren't available for any widely used data store (except perhaps LINQ on some object database, like NDatabase).


> but these aren't available for any widely used data store

You defined "created" for me in the context I was using it.


The article at the head of this discussion is about Andl.


I don't really like the idea of using a DB-specific language. First of all, current "DB" languages and tooling are terrible, and there's a place for a beautiful language with good SQL support embedded in PostgreSQL. But even with that theoretically good language, it's better to stay separate from your database, as you are (I hope) staying separate from the OS you are using or from the IDE you are using. Divide the program (and development processes) into clearly distinguishable parts with clearly defined interfaces, and avoid tying yourself too much to any one thing. It might be difficult, but it's definitely possible to swap databases if a program accesses the database mostly through standard SQL. And it's impossible to swap (or sometimes even upgrade) the database if a program uses its proprietary internal mechanisms.

I know a project which uses Oracle 9. The customer has wanted to migrate to another version for the last 5 years, but he can't: too many hidden obstacles, so he even has to support old hardware (because Oracle 9 doesn't run on newer Windows, and old Windows doesn't run on newer hardware). And it's because 50% of a huge system was written in Oracle PL/SQL, and it's just not realistic to rewrite it.


Yes, I too read that essay, but only after listening to Derek Sivers on the Tim Ferriss podcast (highly recommended):

http://fourhourworkweek.com/2015/12/14/derek-sivers-on-devel...


Interesting inversion of the usual argument to keep logic out of the DB in case you want to migrate from one RDBMS to another. I find Sivers' formulation much more persuasive.


I agree. How often do you need to change your RDBMS? Hopefully never. And how often do you see people wanting to rewrite everything in a new language or framework? A little more often. I think many developers are just not as comfortable working with databases.


> I think many developers are just not as comfortable working with databases.

It's because databases are seen as a liability. They are often seen as complicated to test, "un-agile", hard to manage... in fact, DBA used to be a job in itself. Now developers are asked to be both developers and DBAs, which leads to over-engineered application layers and under-engineered data layers, or to the rise of things like MongoDB.

In a world where people are trying to push Javascript everywhere, from top to bottom, SQL, schemas, migrations, stored procedures, and triggers are seen as relics of the past when in fact they are a guarantee of stability, robustness and integrity. Maybe it is SQL that should be pushed into the upper layers instead (LINQ, HQL, DQL, ...).


I was taught to use the database where it's appropriate - a well written stored procedure and something light like Dapper [1] will save you time over trying to do everything in the DB or everything in code.

Losing the DBAs from an organisation might look good on the balance sheet, but it's going to cost you big time in the long run *

[1]: https://github.com/StackExchange/dapper-dot-net

  * assuming competent DBAs


"In a world where people are trying to push Javascript everywhere, from top to bottom, SQL, schemas, migrations, stored procedures, and triggers are seen as relics of the past when in fact they are a guarantee of stability, robustness and integrity."

With PLV8 for Postgres you can write your stored procedures in JS, making your entire stack essentially the same language. I've run a couple of production apps like this and it's worked better than I expected. No DML is allowed, just executing of the stored procedures, which lets us refactor the underlying tables pretty easily.


I agree to an extent and in my current project have been making heavy use of PL/SQL to handle a lot of data logic having come off a project that tried to use Entity Framework to achieve the same thing.

PL/SQL, for all its problems, does make it pretty straightforward to write a data API - AddX, AddYToX, AssociateXAndZ. It provides a nice abstraction over whatever table structure you may have and also makes it very clear, especially in the case of Table per Hierarchy type structures, what fields are required in what cases.

The key thing is to limit yourself to what the language is good at - reading and writing data. You shouldn't need to do things like string manipulation, date conversions etc - that is what the calling code is responsible for doing. For me it makes sense to associate the data with the logic specifically relating to it as frontends can come and go without the underlying structure changing.


It sounds nice, but in my experience version control, testing frameworks, debuggers, and similar tools for managing code really don't play well with "let's put logic into the database".


It's a very interesting approach. I've always liked that PG made a lot of this possible internally, but leveraging Thrift is definitely a broader stroke. I like it.


While I do agree with you on ORM clunkiness, I cannot shake the outage paranoia caused by untestable code that is deployed in a non-straightforward manner, which makes it hard to roll back. Also, the error logging and alerting tooling around it is minuscule.

I'd be keeping ORMs just to feel more secure and be more robust on the testing side.


It's a matter of finding the right balance.

You want scalability? You have to drop your constraints and foreign keys in your DB; in fact, don't use a DB with a schema at all.

You want safe data? Use a good old relational DB with a well-designed schema and you're good to go.


Doing things in the database was the point of the xBase family.

And it worked very well!


Any news about commercial use?

"LICENCE

=======

This version of Andl is free for any kind of experimental use, especially helping to make it better.

For now, the licence does not grant rights for distribution or commercial use.

That will have to wait until I can choose the right licence, which depends a lot on who might want to use it."

https://github.com/davidandl/Andl


> For now, the licence does not grant rights for distribution or commercial use.

Thank you very much, goodbye, have a nice day.


    > That will have to wait until I can choose the right
    > licence, which depends a lot on who might want to 
    > use it
Anybody who introduces this into a project needs firing.


Firing from his own project?


I believe GP meant anyone who uses andl in their own separate project, not andl itself


Who is "his"? If you introduce this software into any project, regardless of your gender, you should be fired, as a direct consequence of the insane license.


That is of course correct. I did not even consider the possibility that someone would do this.


Thought I'd submit this after leaving a comment about it here: https://news.ycombinator.com/item?id=11802900

I'm not involved in any way, but I heard about it from the author's activity on the Postgres mailing lists, and I'm very intrigued to see where the project goes. I thought other HNers might be interested too. I'm glad to see people are experimenting with ways to take the relational model beyond SQL.


From the article:

> The language handler plandl is a DLL written in C, which calls a C++ function, which in turn uses COM to start up the CLR runtime and load the Andl Postgres entry point. Similar capabilities can be provided for Mono.

It's hard enough getting someone to install a non-core contrib module. This sounds like it'd be impossible to get running.


On Windows/CLR, that whole bit is auto-magic (IJW - "It Just Works") interop. It's no harder than just having the DLL around.

For Mono there's a bit of work involved to load it up but apart from that it's smooth enough. As far as the end-user is concerned, they should just need to install mono and they're good.

Source: I wrote a similar module for FreeSWITCH which has lots of users both on Windows/CLR and on Mono.


I'm sorry, I don't really understand what Andl could be used for, and Wikipedia in English and German doesn't contain anything (DDG's first hit is "paraphilic infantilism"?!), so I will ask here.

Is Andl only for stored procedures?

If so why would it be better than say python?

If I were to choose, I'd prefer a lispier SQL with support for the basic things like map, filter, reduce, ... Is there something alike?

If not, why?

UPDATE: Nevermind, I just found the other links (quite small on top of the headline), so most things are explained.

However, the article suggests people use Python or Perl for queries instead of SQL. Has anyone seen this in production (apart from stored procedures)?


Sounds like you would really enjoy Clojure's YeSQL.


> Andl code can execute on any platform for which there is an implementation (in memory database, Sqlite or Postgres), providing identical results.

Sounds like SQL.


Heh, that's my remark whenever someone tries to describe the point of an ORM to me.

Further to that point, if you ignore the ORM issue there really isn't much going for this Andl thing.

I'm hesitant to dump too much on it as I don't fully understand the implications of what the article is talking about just yet.

I think it's a good start, but I'm not sold on moving my logic there yet.

I like my storage to be somewhat unknowledgeable about the rest of the world; it only needs to know what my data is and where it is.

Just keep in mind, I understand fully that just because I find my current situation all fine and dandy that doesn't mean there isn't a better way of doing things.

I just don't see it with this andl thing yet.


What's the compelling argument for why we all need this? I'm unable to distill this from the article.


SQL was here before I was born, and I think it will still be here a long time. Like C.

Thanks to Edgar F. Codd (not Larry Ellison :) )

By the way, I'll look at Andl. Even if I don't like the ORM idea.


Codd gave us the relational model[1], but SQL was the work of Chamberlin and Boyce in System R[2]. It looks pretty familiar to modern eyes! But I believe Oracle was commercialised sooner, and the rest is history.

[1] https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf [2] http://people.eecs.berkeley.edu/~brewer/cs262/SystemR.pdf


Why the shot at Ellison? He embraced Codd's idea and did a hell of a job applying it, making it the de facto standard, making it commercially viable, and advancing it.


I'd like to hear a little more of Ellison's story. Jobs and Gates both have very well known stories, which intersect and which include partners Wozniak and Paul Allen.

I don't know much about Ellison's story. He certainly has had an unusually long and successful career. I think he had a partner but I can't remember his name now.

Did Ellison really have a visionary connection with computer science? (e.g. along the lines of Gates' "put a computer on every desk") It could be true, but almost nobody thinks of him that way. Certainly relational databases have had an enormous impact on computing and society. Did he ever make any technical contributions, the way Gates and Jobs did? (note: I'm not getting into an argument about Jobs' tech chops :) )

The core technology originated with IBM, and I think IBM had just as much of a role in commercializing it as Oracle, but I don't know much beyond that.

I think there is a meme that Oracle was more of a ruthless competitor, skilled in acquiring other successful companies whose names we don't remember anymore. But perhaps they didn't really move things forward on a technological level the way that Apple and yes even Microsoft did.


I had the same question. This Quora thread seems to settle it:

https://www.quora.com/Was-Larry-Ellison-a-good-programmer

tl;dr - yes, Ellison is/was a pretty good programmer, enjoyed programming, and could handle serious technical discussions. But there seems to be a consensus that the guy's real genius was in business. Sounds kind of like Gates in that respect, more so than Jobs, who I believe was never that strong a coder.


Very true re: IBM, they had standardized SQL long before Oracle was "the name" in DB tech.


It seems Andl is an implementation of the language that Date and Darwen referred to as D in their Third Manifesto.[1]

[1] https://en.wikipedia.org/wiki/D_%28data_language_specificati...


I hate SQL, so I was excited to see this, but there are two things that rule this out to me:

1. It's not open-source; the license on https://github.com/davidandl/Andl specifically forbids distribution and commercial use.

2. It only runs on Microsoft Windows.

Datalog https://en.wikipedia.org/wiki/Datalog is much more interesting as an alternative query language — it has better abstraction capabilities than SQL, and it's much less verbose, but as far as I know there's no implementation of Datalog in Postgres.


I am not sure I understand what this is. The documentation spends a fair amount of time referencing The Third Manifesto... is it a D? If so, how does it interact with valid Postgres tables that, say, have NULL values?


To be honest, I'd rather just use YeSQL (or any SQL template engine) to keep my SQL separate from my code, keep everything looking clean, and have no impedance mismatch.
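For readers unfamiliar with the pattern: tools like YeSQL, HugSQL, and Python's aiosql keep queries in plain SQL files annotated with `-- name:` comments and expose them by name to the host language. A rough, self-contained sketch of the idea in Python (the loader and query names here are illustrative, not any particular library's API):

```python
import sqlite3

# Contents of a hypothetical queries.sql file; the "-- name:"
# convention mirrors what YeSQL-style tools use.
QUERIES_SQL = """
-- name: create_table
CREATE TABLE langs (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

-- name: insert_lang
INSERT INTO langs (name) VALUES (:name);

-- name: langs_by_prefix
SELECT name FROM langs WHERE name LIKE :prefix ORDER BY name;
"""

def load_queries(text):
    """Split a '-- name:'-annotated SQL file into a dict of named queries."""
    queries, current = {}, None
    for line in text.splitlines():
        if line.startswith("-- name:"):
            current = line.split(":", 1)[1].strip()
            queries[current] = ""
        elif current is not None:
            queries[current] += line + "\n"
    return queries

q = load_queries(QUERIES_SQL)
conn = sqlite3.connect(":memory:")
conn.execute(q["create_table"])
conn.execute(q["insert_lang"], {"name": "SQL"})
conn.execute(q["insert_lang"], {"name": "Andl"})
rows = [r[0] for r in conn.execute(q["langs_by_prefix"], {"prefix": "A%"})]
print(rows)  # ['Andl']
```

The SQL stays SQL (reviewable by a DBA, runnable in psql), and the application code just names the query it wants - no ORM layer in between.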


"(...) written in C, which calls a C++ function (...)", well, memory bugs ahead. That would be great if they could move to Rust or another similar language.


>Andl code can execute on any platform for which there is an implementation (in memory database, Sqlite or Postgres), providing identical results.

Sounds like .NET LINQ.


The syntax reminds me of LINQ.


Also sounds similar to M (aka MUMPS); see InterSystems' Caché (commercial) or GT.M (open source). It is an old programming language with a persistent key-value store that is basically a NoSQL database with programming logic combined. From what I've heard, finance and health care software often uses this. It similarly puts the business logic in the database.


Is this limited to Windows? The source does not seem to define system requirements, or I am missing it.


No example of what it looks like?

> The language handler plandl is a DLL written in C, which calls a C++ function, which in turn uses COM to start up the CLR runtime and load the Andl Postgres entry point. Similar capabilities can be provided for Mono

Seems like a lot of boilerplate.


Good. SQL is getting very long in the tooth, but every "fix" throws the baby out with the bathwater.



