This project is inactive for two main reasons:
- SQLite is not a great general-purpose SQL engine. Poor performance of joins is a serious problem that I couldn't solve. The virtual table support is good but not quite good enough; not enough parts of the query are pushed down into the virtual table interface to permit efficient querying of remote tables. Many "ALTER" features are not implemented in SQLite, which is a tough sell for experimental data manipulation.
- T-SQL, the procedural language I chose to implement atop SQLite, is not a great general-purpose programming language. Using C# in LINQpad is a more pleasant experience for experimentally messing around with data. R Studio is a good option if you need statistical functions.
I think several good solutions in this problem space exist. A local install of SQL Server Express can be linked to remote servers, allowing you to join local tables to remote ones. That setup serves nearly all of SQL Notebook's use cases better than SQL Notebook does. LINQpad is also very convenient for a lot of use cases.
I appreciate the interest! I may spin off the import/export functionality into its own app someday, as I had a lot of plans in that area, but I think SQL Notebook as it stands is a bit too flawed to develop fully.
A web app for writing and running SQL queries and visualizing the results. Supports Postgres, MySQL, SQL Server, Crate, Vertica, and Presto. Written in Node.js.
T-SQL (I'm assuming the Microsoft variant) is not a procedural language; it's a declarative language.
> Using C# in LINQpad is a more pleasant experience for experimentally messing around with data.
This is not true. T-SQL is as close to the data as you can get. C#/LINQ/EF/etc are great for business layer/presentation layer. But for messing around with data, you can't really do any better than T-SQL. Trust me on this.
If you are a front-end developer, then LINQpad is great for building .NET code/CRUD libraries/etc. for dealing with data, but if you want to work with data, T-SQL is where it's at.
Developing expertise in T-SQL (it's not as easy as people think) will open up a new way of understanding data/programming/thinking. SQL Server allows importing .NET code/functions which you can run on the SQL server via T-SQL, so there isn't really much you can't do with T-SQL.
> A local install of SQL Server Express can be linked to remote servers, allowing you to join local tables to remote ones.
SQL Server Developer Edition is free now. That is a great starting RDBMS for learning databases in a Windows environment. It has almost everything other than enterprise-level features (clustering/replication/etc.) that most people don't need.
I think he's referring to the terrible terrible terrible procedural components of T-SQL - cursors, local variables, IF/THEN control blocks, etc.
Right Tool For The Job. SQL is good for querying/storing data, so use it for that. Then let C# handle manipulating the data and applying rules and whatnot.
Sure. It also has stored procedures/functions/etc. But just because it has elements of an imperative/procedural language doesn't mean it is a procedural language, any more than C#'s LINQ implementation means C# is a declarative language or lambda expressions mean it is a functional language.
And though I'm not a fan of cursors, they are a legacy of a time when they were somewhat needed. And nothing about local variables/control flow/etc. makes T-SQL "terrible".
All of it is necessary for database maintenance, t-sql programming, etc.
Compared to a full-featured programming language? It's good for its intent, which is to provide a minimal bit of procedural scaffolding for scripts that alter the database. It is terrible as a general-purpose procedural programming language, and I've worked with too much horrifying T-SQL and PL/SQL that pushes these tools way too far out of their intended workflow.
Yes, T-SQL is predominantly a declarative language. But it includes procedural components for writing procedural scripts. And unless we're talking about maintenance scripts or deployment scripts, don't use them.
That's all you're really claiming here. That some people write shitty code. And I agree. I've spent plenty of time with almost every flavor of SQL code that was incredibly shitty.
But I don't really blame the languages that much. I blame the people who didn't understand when and where to use one language or another.
And frankly, if you're seriously going to complain about T-SQL vs other "more full featured programming languages" and then make a blanket statement about not using any procedural components on SQL flavors . . . ummm, you might be your own worst enemy here.
You just said "I would shit a brick and beat a developer" upthread.
I'm constantly frustrated that the two options in the database world appear to be "use SQL and all its anachronistic warts" or "give up on the relational model". NoSQL throws the baby out with the bath-water.
But this is a tangent. The original post called T-SQL a "procedural language", which is inaccurate... but the author was talking about adding extensions to SQLite, and SQLite is already a SQL dialect. So if you're bolting T-SQL onto SQLite, what's the main thing it will bring to the table?
Its procedural components. So, in the context of comparing SQLite to T-SQL, using the term "procedural language" might be a misnomer, but it makes sense here.
I can appreciate the verbosity and "backwardness" argument you make, but can you cite some examples of its restrictions? (Serious question.)
Everything you are saying here sounds like a job for something other than a relational database.
I would shit a brick and beat any database developer with it that introduced the idea of a class as a primitive. Fuck that happy horse shit.
Also, graphs? Really?
You're looking for an ORM and blaming SQL for not being that.
There is no common database pattern that creates one table as a subset of another table. RDBMS is, by nature and design, not something you modify in that way on the fly. If you want that kind of object oriented functionality, push your data into a cache layer or column store or KV.
You said it yourself. Use the right tool for the right job. Nothing you've said makes even remotely any sense as a criticism of T-SQL or SQL in general. That's not the right tool for what you're talking about.
And the inability to pull down a graph as a single operation is the perfect example of SQL's limitations.
The relational model is great, right? Right. So why throw it out the window and give me a glorified Excel spreadsheet as a result set? If I want to pull down a thing and its related subthings in a single operation, I have to join, which is wasteful in that case. This actually hamstrings ORMs.
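The flattening complaint above can be sketched with plain sqlite3 in Python. The tables and column names here are invented for illustration: a join duplicates the parent columns on every child row, and the client (or the ORM) has to rebuild the nesting by hand.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_items (order_id INTEGER, product TEXT);
    INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO order_items VALUES (1, 'widget'), (1, 'gadget'), (2, 'widget');
""")

# The join returns one flat row per (order, item) pair; the parent
# columns repeat on every row of the "glorified spreadsheet".
rows = con.execute("""
    SELECT o.id, o.customer, i.product
    FROM orders AS o JOIN order_items AS i ON i.order_id = o.id
    ORDER BY o.id, i.rowid
""").fetchall()

# Re-nesting by hand -- the busywork an ORM does for you.
graph = {}
for order_id, customer, product in rows:
    node = graph.setdefault(order_id, {"customer": customer, "items": []})
    node["items"].append(product)

print(graph)  # {1: {'customer': 'alice', 'items': ['widget', 'gadget']}, ...}
```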
I wouldn't hit you with an actual brick, but if I worked with you, I'd certainly nerf dart you on a regular basis.
There are so many pain points in SQL that are defended because theory.
You might want to have a look at that as a significant enabler of code reuse on SQL Server.
(On PostgreSQL the equivalent is LATERAL JOIN.)
(P.S.: you should try to replace scalar functions with set-based logic. These can easily be ring-fenced into TVFs.)
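Given the mention of TVFs and the LATERAL JOIN equivalence, the SQL Server feature being recommended is presumably CROSS APPLY. A minimal T-SQL sketch, with invented table and function names, of set-based logic ring-fenced into an inline table-valued function:

```sql
-- Inline TVF: set-based logic the optimizer can inline into the
-- outer query, unlike a scalar UDF invoked once per row.
CREATE FUNCTION dbo.TopOrders (@CustomerId INT, @N INT)
RETURNS TABLE
AS RETURN (
    SELECT TOP (@N) o.OrderId, o.Total
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    ORDER BY o.Total DESC
);
GO

-- CROSS APPLY evaluates the TVF once per outer row,
-- roughly what a LATERAL join does in PostgreSQL.
SELECT c.Name, t.OrderId, t.Total
FROM dbo.Customers AS c
CROSS APPLY dbo.TopOrders(c.CustomerId, 3) AS t;
```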
Use your presentation layer for displaying your data once it's actually properly processed.
Depends on your definition of "working with data", I guess... T-SQL is fast, but beyond that almost anything beats it for functionality when messing around.
Also, SQL Server is awful at importing/exporting data directly. Almost all forms are broken in various ways and are a source of endless grief. The only reliable built-in way is to use SSIS which is like using a 200-ton tank for a paperweight.
Why choose a proprietary engine? Why not use PostgreSQL or even MySQL?
They probably meant SQL Server Compact, which is a smaller package and comparable to SQLite. It was meant to power WinFS back in 2006. But it's closed source and now legacy.
On the one hand, I find that highly annoying. On the other hand, even if you don't like Microsoft, SQL Server is fairly awesome, performance-wise and stability-wise.
But still, there are many applications where you cannot really choose what database engine you want to use.
I wonder if it would be easy to support both backends and be able to switch between them depending on the query and which features and integrations you want.
So much this. BCP is inferior to 5 lines of C# code, the tools for importing and exporting data in SSMS are buggy and cumbersome, SSIS is a gigantic monster. Simple "Export the results of my query into an INSERT statement" is a moronically obvious workflow and is non-existent.
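The missing "query results to INSERT statements" workflow really is a few lines in any scripting language. A sketch using Python's sqlite3 module (the table names and the naive quoting are illustrative only; a production version would need proper type handling):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
""")

def query_to_inserts(con, query, target_table):
    """Render each result row of a query as an INSERT statement."""
    cur = con.execute(query)
    cols = ", ".join(d[0] for d in cur.description)
    stmts = []
    for row in cur:
        vals = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"  # naive SQL quoting
            for v in row
        )
        stmts.append(f"INSERT INTO {target_table} ({cols}) VALUES ({vals});")
    return stmts

for stmt in query_to_inserts(con, "SELECT id, name FROM users", "users_copy"):
    print(stmt)
```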
I've read that the tools division of MSSQL was running on a skeleton crew for a long time, and it shows.
SSIS is a hellscape of unnecessary designers for an ETL framework, and generates ultra-brittle packages that crash if the slightest schema tweaks happen.
SSDT stores your schema and publishes diffs for easy deployment, but will gobble up all your RAM for half an hour to compile... if it doesn't crash.
And SSMS has the worst tabbed interface ever for managing queries.
My question is how other databases fare in these areas. I'm not familiar with any solutions for Reporting UI, ETL, and Schema Version Control in MySQL or PostgreSQL. What built-in tools do other databases have that outperform SQL Server's (obviously crumbling) functionality?
I've only used SSIS for a few months now, but wouldn't this just be a simple Data Flow Task? Now, I'm not saying I would use SSIS for that (because I frankly hate SSIS), but it is a trivial thing to do.
The day every .NET shop on Earth admits this is the day I'll be a happy man. A SQL server is not for writing business code.
I coupled that library with a CodeMirror editor and got a working web based environment very quickly.
Something from the first of these sites may be adaptable:
· https://pgexercises.com | https://news.ycombinator.com/item?id=6893333
The other great thing about this type of setup is that students can feel free to experiment with deleting records and doing all kinds of things in SQL knowing that they can immediately restore everything to the initial state by hitting refresh.
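The refresh-to-reset classroom setup can be sketched with an in-memory SQLite database rebuilt from a seed script on every request. This is a minimal illustration of the idea, not the commenter's actual implementation; the table and seed data are invented:

```python
import sqlite3

SEED = """
CREATE TABLE members (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO members VALUES (1, 'ann'), (2, 'bob');
"""

def fresh_db():
    """Each 'refresh' gets a brand-new in-memory database from the seed."""
    con = sqlite3.connect(":memory:")
    con.executescript(SEED)
    return con

con = fresh_db()
con.execute("DELETE FROM members")  # a student wreaks havoc
assert con.execute("SELECT COUNT(*) FROM members").fetchone()[0] == 0

con = fresh_db()                    # "hitting refresh"
count = con.execute("SELECT COUNT(*) FROM members").fetchone()[0]
print(count)  # back to the original 2 rows
```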
The last component that I think helped the class was adding an anonymous "share" button that allowed students in the class to submit the contents of their editor so that I could run it for the whole class and we could talk about the trade-offs of different approaches.
A single Mozilla dev, a die-hard NoSQL fan, prevented adoption in Firefox (he has since left Mozilla), and Microsoft couldn't decide which of its 10 SQL engines (no joke: MS Access, AD, Exchange, Outlook, WinFS, etc. all had/have their own embedded SQL database incarnations) to use in Internet Explorer. These days Microsoft actually ships SQLite with Windows. But WebSQL isn't supported by Firefox or IE/Edge, and the web moved on regardless: something like 99% of mobile devices have a WebKit/Blink-based browser, and Chrome and Safari have the largest user base on desktop too.
Yeah, it was super simple and useful, but nobody wanted to do the work to make sure interoperability was a thing, in part because SQLite has very complex behavior and lots of quirks to adhere to.
Some people from Google have developed https://github.com/google/lovefield on top of IndexedDB to replace it.
.tables (list tables).
.schema <table> (show table schema).
.help shows commands.
Or just enter some SQL to get results.
DataGrip from Jetbrains looks promising but the connect/disconnect/setup process is really painful.
Anyway, bookmarking for the next time I'll need to play with relational data.
It would be great if there were some kind of full-version trial so people could try it with full IntelliSense and package manager support.
- How about import from clipboard? (Useful for cut and paste from Excel.)
- It doesn't seem to recognize tab delimiters in a .txt file. Maybe the import window should have a delimiter selector?
- Does it have a crosstab/pivot tool? Most SQL dialects are lacking here because they make you explicitly define crosstab columns, which is a pain for exploration work.
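The explicit-column pain shows up in any dialect. A sqlite3 sketch with an invented sample table: every pivot column must be spelled out as a conditional aggregate, so adding a new quarter means editing the query, which is exactly what makes ad-hoc pivoting tedious.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, quarter TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 'Q1', 10), ('east', 'Q2', 20),
        ('west', 'Q1', 5),  ('west', 'Q2', 15);
""")

# Each output column is written out by hand as a CASE aggregate;
# a Q3 column later requires changing the query text itself.
rows = con.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()

print(rows)  # [('east', 10, 20), ('west', 5, 15)]
```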
Zeppelin has source available, or a binary package with a spark interpreter, or a binary package with "all interpreters". That one is 712MB. If I just want to use it, I still don't understand what to actually do. I'm downloading the big .tgz as we speak.
Following the installation guide,
Log dir doesn't exist, create A:\TEMP\zeppelin\zeppelin-0.7.1-bin-all\logs
Pid dir doesn't exist, create A:\TEMP\zeppelin\zeppelin-0.7.1-bin-all\run
The system cannot find the path specified.
A subdirectory or file logs already exists.
The system cannot find the path specified.
And now I'm completely stuck.
So the value in this compared to Zeppelin is that it's small, straight-forward, and simple.
The very first "terminal" like thing I learned as a really young kid was how to load a floppy disk, which usually just meant typing a:.
If you already have a trusted Zeppelin or Jupyter or other notebook, then this may not be needed, but if you want to keep all local, this may be a quick and easy alternative... assuming you like SQL.
I get something of this experience in Emacs via `org-mode`, `sql-mode`, and `ob-sql-mode` minus the data-importing functionality... though with babel it's probably doable in a code block using a script.
Bonus: org-mode lets you export to many formats which makes sharing results quite easy.
As a side benefit, it is easy to ggplot results. :)
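For readers unfamiliar with the org-babel setup being described, a minimal sketch of a SQL source block (the `:engine` and connection header arguments shown are placeholders; ob-sql supports several engines):

```org
#+begin_src sql :engine postgresql :dbhost localhost :database demo
  SELECT count(*) FROM users;
#+end_src
```

Running the block with `C-c C-c` inserts the result table right below it, which is what makes the notebook-style workflow and the export/ggplot steps above convenient.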
(scroll down to "Batteries included")
I was playing around a bit with it:
There are versions for linux and windows and iirc it supports any ODBC connection as well as out of the box SSL tunnelling for mysql/postgres/etc.
This tool looks like it might be a useful replacement for this purpose, especially if it can handle CSV data, as well.