I'm the author of pgcli and mycli. I was tempted to write a universal cli that supported both postgres and mysql, but then I decided to keep them separate for the following reasons:
1. Psql comes with backslash commands that are implemented at the client level and not in the database. So you can't simply send `\dt` to the database; that will fail. The same is true for mysql as well (such as `\u`). (There's a sketch of what this means just after this list.)
2. There wasn't a huge overlap in users who use Postgres and Mysql at the same time. Even for the minority who do, I tried to make pgcli and mycli behave similarly so they can switch between them seamlessly.
3. The code complexity needed to support multiple database backends would make it hard for a newcomer to contribute to the project. This is just my guess and I don't have data to back this up.
4. I didn't think I could do a good enough job of maintaining parity with psql and mysql's default client if I had a universal client. I still don't have complete parity, but it is close enough that users don't feel like they're compromising a lot.
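To make point 1 concrete, here is a rough sketch (simplified Python, not pgcli's actual code; `cursor` is any DB-API cursor) of what it means for the client to own a backslash command: the client intercepts `\dt` and runs an equivalent catalog query itself.

```python
# Simplified sketch: backslash commands never reach the server.
# The client recognizes them and runs an equivalent catalog query.
def execute(cursor, line):
    if line.strip() == r"\dt":
        # Roughly what psql does for \dt (heavily simplified).
        cursor.execute(
            "SELECT schemaname, tablename, tableowner "
            "FROM pg_catalog.pg_tables "
            "WHERE schemaname NOT IN ('pg_catalog', 'information_schema')"
        )
    else:
        cursor.execute(line)  # plain SQL goes to the server as-is
    return cursor.fetchall()
```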
I'm happy you're taking on this challenge and I'll be glad to give you pointers on how the sql completion engine works in pgcli and mycli.
Man, what I really would love is some ipython console with SQLAlchemy stuff built in that auto-sets up the connections and session objects, and that has convenience functions for table / database info and dumping data out to CSV/TSV - that'd come close, I think.
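A rough sketch of what I'm picturing (all names hypothetical; assumes SQLAlchemy and pandas), loaded with `ipython -i sqlconsole.py`:

```python
# sqlconsole.py - hypothetical convenience module for an IPython SQL session.
import pandas as pd
from sqlalchemy import create_engine, inspect
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql://localhost/mydb")  # your DSN here
session = sessionmaker(bind=engine)()                  # ready-made session object

def tables():
    """List table names in the connected database."""
    return inspect(engine).get_table_names()

def q(sql):
    """Run a query, get a DataFrame back."""
    return pd.read_sql(sql, engine)

def dump(sql, path, sep=","):
    """Dump query results to CSV (or TSV with sep='\\t')."""
    q(sql).to_csv(path, sep=sep, index=False)
```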
I found out about this from one of the core-devs of pgcli. He made it possible to embed pgcli inside ipython. We wrote a blog post about how to use this: http://pgcli.com/embedding-pgcli-in-ipython.html
Using pandas you can do a lot of that with a query and a connection, dumping to CSV/Excel/JSON etc is pretty trivial once you have your data in a dataframe. It also supports writing data back to tables via SQLAlchemy as well, and any kind of data manipulation you can imagine in between. I'm currently using it to migrate a Mongo database to PostgreSQL.
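For example (connection string and table names invented for illustration):

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost/mydb")  # hypothetical DSN

# One query gets you a DataFrame; from there, dumping is a one-liner.
df = pd.read_sql("SELECT * FROM orders", engine)
df.to_csv("orders.csv", index=False)            # CSV
df.to_csv("orders.tsv", sep="\t", index=False)  # TSV
df.to_json("orders.json", orient="records")     # JSON

# And writing back to a table via SQLAlchemy:
df.to_sql("orders_copy", engine, if_exists="replace", index=False)
```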
Thanks for the input! I have already written the introspection SQL for all the current databases in my related project, xo, which I'm planning on putting into this, so I'm not really worried about the implied additional complexity. I also don't think that implementing the backslash commands is too difficult, and the real point/motivation is to have a standardized interface for all the databases. I've used both pgcli and mycli, which are nice (thanks!), but I need a single common interface that supports all the databases that xo supports (which is entirely why this was written, as there's no pgcli/mycli equivalent for mssql/sqlite3/oracle -- until now).
this exactly. I was about to comment that (probably) unless this uses prompt-toolkit and works similarly to pgcli or mycli (both of which I use and love), I'm gonna prefer them separate.
This is a new tool (literally only started in the last 24 hours), but it already provides a common, usable interface to PostgreSQL, MySQL, Microsoft SQL Server, SQLite3, and Oracle databases. There are plans to add support for other NewSQL databases (VoltDB, etc.) in the future, as well as to reach feature parity with (and the same interface as) Postgres' psql command line tool.
Is there a way to try this without figuring out how to set up a go build environment? I've never used go, but wanted to try this out.
I installed go 1.8, and then issued the prescribed command line. It tells me `# github.com/mattn/go-sqlite3 exec: "gcc": executable file not found in %PATH%`. This looks like a rabbit hole of unknown depth that I'd prefer to stay out of.
Is the goal of this project to homogenize syntax or homogenize the call level interface (api)? I can understand the need for the latter, not the former.
This is not about making a generic SQL implementation. This is simply about being able to execute SQL consistently, and to use the same CLI interface for doing so. The user would still need to know the individual SQL dialects for each database they are using. However, at least then if you become familiar with, let's say, mysql by using this tool, the same configuration/settings/etc. will be used for all the other databases.
Oh I see - so it is an adapter, not an abstractor. I'm curious what the advantages will be over the widely supported ODBC API. Will you add ODBC support?
I've already looked into adding ODBC support, and will try to add it. The real limitation here is that I am not working with an ODBC setup locally, so writing/testing anything for it is not really possible at the moment. Also, I believe all the ODBC Go driver implementations require the use of a C lib/dll, so it would need to be similar to the Oracle support and not "out of the box". The beauty of this is that at the moment you can simply do a `go get -u github.com/knq/usql` and have a working SQL client that is exactly the same for the 4 "natively supported" databases, and with Oracle support if you have the right local client libraries.
see GNU sql; it comes with GNU parallel (https://www.gnu.org/software/parallel/): "GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell."
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
I'm sure people have (I wrote a simple MySQL/Oracle wrapper many jobs ago to help smooth out the bumps when integrating Perl products with client databases). It's a shame no one has released a Perl tool like this yet!
What would really make a difference in how we perceive the SQL CLI world would be an IPython-like interface – complete with syntax highlighting, tab-autocomplete menus, easy multiline backlog entry editing etc.
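Something like this prompt_toolkit sketch (the toolkit pgcli and mycli are built on; illustrative only, not any project's actual code) captures what I mean:

```python
# Minimal interactive SQL prompt: live highlighting + tab-completion menu.
from prompt_toolkit import PromptSession
from prompt_toolkit.completion import WordCompleter
from prompt_toolkit.lexers import PygmentsLexer
from pygments.lexers.sql import SqlLexer

completer = WordCompleter(
    ["SELECT", "FROM", "WHERE", "INSERT", "UPDATE", "DELETE", "JOIN", "LIMIT"],
    ignore_case=True,
)

session = PromptSession(
    lexer=PygmentsLexer(SqlLexer),  # syntax highlighting as you type
    completer=completer,            # tab-autocomplete menu
)

while True:
    try:
        text = session.prompt("sql> ")
    except (EOFError, KeyboardInterrupt):
        break
    print("would execute:", text)
```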
Yes, I already have a working branch with syntax highlighting and autocomplete hacked together, but the issue is that it breaks for some of the databases. When I can get it stable, I'll make it available publicly.
Auto-completion will come in time. I don't know if I would allow "relative" pathing via @@; I'm not sure why you wouldn't just use general paths. As for the other options, such as `-f`, etc., yes, those will be put in place. I am hoping to make this a 100% drop-in replacement for psql, so whatever psql supports on the command line, as well as the various backslash commands (i.e., `\f`), will be added.
Relative paths are a must. I don't think I ever use non-relative paths.
Say you have a folder structure like this:

    sql/
      tables/
        foo.sql
        bar.sql
      main.sql

And the contents of main.sql are something like this:

    @@tables/foo.sql
    @@tables/bar.sql

Then you can run main.sql via: `usql -f sql/main.sql`
If script file resolution is not relative to the parent, it gets very fragile: you must invoke the parent from a specific location, or you have to hard-code full paths in your scripts (arguably even uglier).
Yes psql supports it. The syntax is "\i path/to/file.sql" or "\ir relative/path/to/file.sql". The latter is resolved relative to the parent script. Makes nesting scripts possible without losing your sanity.
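The resolution rule itself is tiny; a hypothetical helper (not psql's or usql's actual code) would look like this:

```python
import os

def resolve_include(parent_script, include_path):
    """Resolve an include the way \\ir does: relative to the
    including script's directory, not the caller's CWD."""
    base = os.path.dirname(os.path.abspath(parent_script))
    return os.path.normpath(os.path.join(base, include_path))

# e.g. inside sql/main.sql, "@@tables/foo.sql" resolves to sql/tables/foo.sql
print(resolve_include("sql/main.sql", "tables/foo.sql"))
```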
I had encountered sqlcl before, but it requires the JVM, and I don't believe it is open source. usql is open source (MIT), already works with more databases than sqlcl does, and very soon I will have prebuilt releases for Windows, OSX, and Linux, so you'll simply be able to download a single, relatively small (roughly 8 megs) binary and work with any SQL database using the exact same command-line interface.
It doesn't require Go any more than nginx requires C. Go executables are for the most part static, and depend on very little, making them highly portable across distros.
Yes, the plan is to add support for any database (relational or otherwise) that has the ability to be controlled through a "command and query" style interface. This project is only about 1 day old at this point, but I'll make a point of integrating as many of the other DBs supported by Go as fast as humanly possible.
Btw, it would be relatively straightforward to add support on your own. PRs are always welcome!
No. You need to use the native SQL dialects for each database. It provides a standard readline-style interface and uses various Go SQL database drivers to talk to the actual databases.