Hey @ryanworl: it wasn't aimed at your comment; you're right, they should watch and learn it all.
It's not suitable for big datasets, since it stores everything in memory (or maybe it is?), but as a learning experience it has been amazing to think through and develop a real database: planner, executor, choosing algorithms, implementing features such as lateral joins, and so on.
I will definitely listen very carefully to these talks.
Request to any video filter expert
I started watching this. The slides are unreadable, but the camera is perfectly still and each slide stays on screen across several "keyframes", at which the compression algorithm replaces one set of compression artifacts with another.
For example try to read the first keyword under "Translates into:":
The keyword is unreadable at the start but as you keep looking at it over 50 keyframes it becomes readable to me.
Since the camera is in a fixed position, it should be possible to combine the data from those artifacts into a single super-resolution video under very small assumptions (e.g. that the image stays the same until at least a 5% change, or something like that). There's not even anyone moving in front of it.
-> Can someone who actually knows this stuff apply a superresolution interlacing filter to this video and post the superresolution version somewhere?
I hope this is not too much work, and I am sure we would all appreciate the results since the slides are not human-readable before applying some kind of superresolution!
https://arxiv.org/abs/1801.04590 - "Frame-Recurrent Video Super-Resolution"
but if you look at p. 8, I think many of the algorithms still wouldn't end up with readable text. This paper is from this year, so it is an area of active research.
I wrote a quick mail to the authors to see if they would put the video through their setup (since the last paper update was just 3 months ago) and share their results.
2. Trying it myself...
After my downvotes I tried this small piece of software:
Which shows a before/after. Here is their page on their super-resolution algorithm:
I used their plugin on virtualdub on a sample of the video. The results weren't useable. Here is a picture which shows the before and after:
(The diagonal lines are a watermark because I didn't pay to register video enhancer.) Also note that though it might look like a sharpen mask was applied, in fact it was not: this is just the superresolution that video enhancer came up with.
Now granted I don't think that this particular site uses state of the art algorithms (its references on the page I linked are decades old) but it's the first one I found.
The site also has a page explaining when it doesn't work:
It specifically calls out "If your video is compressed to a low bitrate, in many cases this is very bad for super-resolution."
This certainly seems to be the case here. In my comparison picture above you can see that it certainly is an improvement; it's just not enough. I still can't read most of the lines. I also think this doesn't use as many keyframes as it could. (Which makes sense: it is rare that a static image stays up for, in this case, 25 full seconds!)
There are at least 14 full keyframes there so I think there is more detail to be extracted, but it would, obviously, take longer analysis. I'll let you know if I find anything better or get an answer from the paper authors.
If it's really about the given talk, and not about exploring the state of the art in algorithms for recovering (textual?) information lost to video compression, you'd be better off communicating:
1) with Dr Hipp who gave the talk -- he published the more recent original slides on the sqlite site:
so he might reasonably be willing to publish these older slides (which he probably considers outdated in some respects). Then, only if that fails:
2) with the author of the video who possibly still has a higher quality version of the video, if the quality was dropped during the video compression or preparation for youtube, e.g. while trying to reduce bandwidth or reencode from the native recording format.
Still, I think more information is recoverable. For example I could tell Video Enhancer wasn't using that many frames (from the encoding progress and framerates), maybe just a few adjacent frames.
Yet you could see a very clear improvement:
(As though a sharpen mask was applied, but it isn't.)
I did see if I could get an improvement if I forced it to use more frames by running through a few times, each time doubling video speed so I end up with 1 frame from the whole segment - but it didn't end up better. Anyway, now I have the original.
In SQL, indexes are implicit: they are used if available, but it's easy for a large query to fall back to a scan when it shouldn't. What if there were a different query language with explicit index syntax? I think you'd get much more predictable performance.
1. A query expresses the result it produces, not the method used to obtain it. Semantics vs. implementation. It may be a pain to write, but it will be easier to read later.
2. A DBA can add/drop indexes on the fly to tune the performance of a live system without making any changes to the application code, while being 100% certain they are not changing the semantics of what's going on.
As others noted, if you must, you can use query hints to force a particular index to be used for a particular operation. MSSQL also lets you pin down a query plan you like for a given query so that it doesn't drift away later due to environment changes.
I agree it is sometimes a pain to force SQL to use the index you wanted it to use.
In my experience, the most common reason for an optimizer choosing not to use an existing index, is out of date statistics. For those who aren’t aware, the database collects table statistics for things like cardinality, number of distinct values, etc... This is the information the optimizer uses when it’s building a plan. If they get out of date the optimizer will start to come up with nonsense plans. Even worse, if your stats get too out of date, you can become scared to update them, because a new set of stats can potentially change the plans built for every single query in ways that are hard to predict.
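To make the statistics point concrete: SQLite exposes the same machinery through its ANALYZE command, which fills the sqlite_stat1 table the planner consults. A minimal sketch (the table and index names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT)")
con.execute("CREATE INDEX users_country ON users (country)")
con.executemany("INSERT INTO users (country) VALUES (?)",
                [("DE",)] * 990 + [("NO",)] * 10)

# ANALYZE gathers per-index statistics (total row count, average rows
# per distinct key value) into sqlite_stat1 for the planner to consult.
con.execute("ANALYZE")
rows = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(rows)
```

Re-running ANALYZE after the data distribution changes is exactly the "update your stats" step described above, in miniature.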
As others have stated, you can put index hints directly into your queries, but this should be avoided as they’re hard to maintain. Most ‘enterprise’ RDBMS also have some form of plan management, but this should be avoided even more, as managed plans permanently bypass the optimizer, which is even harder to maintain.
Something like (in some terrible pseudocode):
q = parse_query("...");
q.hint(INDEX, "b", "b_idx1");
c = q.compile();
r = c.run(...);
It was useful occasionally "back in the day" when MySQL's query planner was really bad, but today it will mostly appear useful if there's something subtly wrong with your config. I don't use MySQL much any more, but on Postgres one typical mistake is that the costs configured for the query planner don't match your hardware (e.g. if your random-access cost is configured high enough relative to sequential reads, sequential scans start to look really good even when you only need a small portion of the data). The principle will be the same on MySQL, but I don't know whether it gives you the same control over the planner's costs.
There is, it's a feature in MySQL called Index Hints.
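SQLite has a comparable escape hatch, INDEXED BY, which (unlike a soft hint) makes the statement an error if the named index can't be used. A small sketch with an invented schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
con.execute("CREATE INDEX t_b ON t (b)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(i, i % 10) for i in range(100)])

# INDEXED BY demands that the planner use this specific index;
# the statement fails outright if the plan cannot honour it.
rows = con.execute(
    "SELECT count(*) FROM t INDEXED BY t_b WHERE b = 3").fetchone()
print(rows[0])  # 10
```

The fail-fast behaviour means a dropped or renamed index surfaces as an immediate error instead of a silent performance regression.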
A DBA can even sit there as queries fly past, and add hints on the fly.
And then you change a query from "select" (lowercase) to "SELECT" (uppercase), and query plans break and you break production.
Just a guess.
Part of this is because of the way a well-normalized database is organized. Most databases have a few large tables and many smaller tables. So in the general case, most of your joins will be against smaller tables. Joins with larger tables are usually very fast, as long as the fields you join on are indexed (and you're not doing a CROSS JOIN or something.) The other thing that helps (which it sounds like you did by "nesting related joins") is to always think about limiting (filtering) the datasets you're joining against at as many stages as possible; that way you're always doing the least amount of work necessary, and it's usually conceptually simpler to read and understand.
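The "filter at every stage" idea can be sketched like this (tables invented for the example): restricting the large table in a subquery before the join keeps each step working on the smallest possible set:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, total REAL)")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE INDEX orders_customer ON orders (customer_id)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Ann"), (2, "Bo")])
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(1, 10.0), (1, 20.0), (2, 5.0)])

# Filter the big table first, then join the small one against the result.
rows = con.execute("""
    SELECT c.name, o.total
    FROM (SELECT customer_id, total FROM orders WHERE total >= 10) AS o
    JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(sorted(rows))  # [('Ann', 10.0), ('Ann', 20.0)]
```

Good planners often push such filters down on their own, but writing the query this way also makes the intent easier to read.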
As others have said, most databases do have index hinting as part of the query language. However, in my (long) experience, you should almost never use it. Index hints should be a huge code smell.
Even better, try to add a basic feature to H2 (eg. a new built-in function). It is surprisingly easy, and you come away with a decent understanding of the basics of building an SQL database engine.
I was writing database systems professionally, back in the days before the RDBMS concept was even a thing. So here's my (enormously long and convoluted) 2 cents worth. Please be sure to pack a sandwich and get hydrated before you continue.
Say you were dealing with doctors and patients, and needed to store that information in a database. Back in the day, you'd typically use a so-called hierarchical database. To design one of those, you need to decide, what is the most common access method expected to be: getting the patients for a given doctor, or the doctors for a given patient? You'd design the schema accordingly. Then, the preferred access method was easy to code, and efficient to run. The "other" access method was still possible, but harder to code, and slower to run. The database schema depended on how you thought the users would access the data.
But that is absolutely what NOT what to do with an RDBMS. Certainly you look at the users' screens, reports, and so on - but that's just to determine what unique entities the system must handle - in this case, doctors and patients. Then you ignore how the users will access the data, and work out what are the inherent logical relationships between all the entities.
Your initial answer might be this. A doctor can have many patients, and a patient can have many doctors. As any competent relational designer will instantly know, this means you need a resolving table whose primary key is a composite key comprising the primary keys of the other two tables. So if Mary was a patient of Tom, you'd add Mary to the patients table (if not already there), Tom to the doctors table (ditto), then add a Mary/Tom record to the resolving table. By that means, a doctor could have any number of patients, a patient could have any number of doctors, and it's trivially easy to write simple, performant SQL to access that data however you want.
But then you'd have a ghastly thought: patients can also be doctors, and doctors can also be patients! Say Tom was also a patient of Mary! Now you need a Tom record in the patient's table, but that would inappropriately duplicate all his data from the doctors table! Something's clearly wrong. You'd soon see that from a data modelling viewpoint, you don't want doctors and patients as separate entities - you want a single entity Person, and a resolving table to relate arbitrary people in specified ways.
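A sketch of that final design, using SQLite and invented names: one person table plus a resolving table whose composite primary key pairs a doctor with a patient:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
-- Resolving table: composite primary key made of two foreign keys.
CREATE TABLE treats (
    doctor_id  INTEGER NOT NULL REFERENCES person(id),
    patient_id INTEGER NOT NULL REFERENCES person(id),
    PRIMARY KEY (doctor_id, patient_id)
);
""")
con.execute("INSERT INTO person VALUES (1, 'Tom'), (2, 'Mary')")
# Mary is a patient of Tom, and Tom is also a patient of Mary:
con.execute("INSERT INTO treats VALUES (1, 2), (2, 1)")

patients_of_tom = con.execute("""
    SELECT p.name FROM treats t
    JOIN person p ON p.id = t.patient_id
    WHERE t.doctor_id = 1
""").fetchall()
print(patients_of_tom)  # [('Mary',)]
```

No data is duplicated: Tom exists once, and both of his roles live in the resolving table.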
So this. IMHO, many developers using relational databases have absolutely no idea about any of that. They design hopelessly unnormalised schemas, which then need reams of ghastly SQL to get anything out. The query planner can barely parse all that crap, let alone optimise it. The database has to stop every five minutes to wet its brow and take a drink.
So here's my advice to any inexperienced relational database designers who have actually managed to get this far!! If you can answer the following questions off the top of your head, you're probably on the right track. If you can't, you're lacking basic knowledge that you need in order to use an RDBMS properly:
- what is a primary key?
- what is a foreign key?
- what is an important difference between primary keys and foreign keys?
- what is a composite key? When would you use one?
- what are the three main relations?
- what is normalisation?
- what is denormalization?
- what is a normal form?
and so on.
Just my 2c! :-)
It seems to be called OpenEdge Advanced Business Language these days:
And yeah, the terms you mentioned are foundational RDBMS terms, common to all reasonable implementations of RDBMS's. :)
It doesn't even compute. It doesn't even make any sense, in how they use it in relation to the topic.
Are the issues at right angles of one another? No.
Are the issues statistically independent of one another? Perhaps.
I suggest using a more appropriate word to describe the situation.
You folks should read the urban meaning of orthogonal, to understand how people roll their eyes at you, when you inappropriately use the term.
Just another friendly PSA.
For a word that's gone through that exact cycle, have a stare at "artificial," which was the adjective for "artifice," which at one time meant craftsmanship. When St. Paul's Cathedral was first shown to King Charles II, he praised it for being "very artificial" -- a compliment. 
In the meantime, I agree that it can be frustrating to see words apparently misused. But I think this is hardly the mark of an "idiot," as you put it.
There's a euclidean geometric answer to that statement, but it's hardly the only correct answer.
When people use it to mean that they're speaking of two issues that have a range of independent possibilities, it's not wrong to invoke linearly independent bases.
> You folks should read the urban meaning of orthogonal
nope, nope, nope. that site's a hive of scum and villainy, and a massive number of entries are just random nonsense.
i'd rather go to wiktionary, which includes:
"Of two or more problems or subjects, independent of or irrelevant to each other."
If that mattered at all, then we'd have stopped using other remapped words first, like "tree".
Who really understands how a SQL engine works? Don't you usually need to understand how something works before you start using it? Which SQL analyst or DB architect really knows the internals of a SQL engine? Do they know about basic data structures? Advanced data structures? Backtracking?
That's why I tend to avoid systematically using a SQL engine unless the data schema is very very simple, and manage and filter the data case by case in code. SQL is good for archiving and storing data, and works as an intermediary, but I don't think it should drive how the software works. Databases can be very complex, and unfortunately, since developers like to make things complicated, it gets hairy.
I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, have data that cannot fit in RAM, apart from big actors.
I tend to advocate for simple designs and avoid complexity as much as I can, so I might be biased, but many languages already offer things like sets, maps, multimaps, etc. Tailoring your data structures might yield good results too.
Databases still scare me.
Databases are not very complex and use pretty much only textbook data structures and algorithms. Understanding how they process a given query and how a query will probably perform/scale (even without EXPLAIN ANALYZE) is not hard to learn. You do need to learn it (at some point; you don't for small data, which is most). But it's far from difficult.
> That's why I tend to avoid systematically using a SQL engine unless the data schema is very very simple, and manage and filter the data case by case in code.
And that's the mentality that gives us webshops where applying a simple filter results in a couple of seconds of load time and uses hundreds of MB of RAM per request, server side.
What you can't do is start out with a schema and a query and understand the execution profile. Query planners take statistics into account when determining which indexes to use or to fall back to scanning, and can normally only examine a fraction of possible join orders. So you can usually only fully predict the performance profile of simple queries. Databases can run perfectly well one day and fall off a performance cliff the next, when statistics change; doesn't happen often, but it can happen.
More likely, something works perfectly well in test and in the early days, but rapidly becomes untenable as the data grows and superlinear slowdowns emerge.
Thus you need to put run-time monitoring into place, and have a routine process of examining slow-running queries, and fixing things that are slow: rewriting queries, adding indexes, possibly denormalizing data, materializing views, etc. It's an ongoing process in any application that works with lots of data.
I might also add that a basic understanding of data modelling and what an index can and can't do is sufficient to avoid many, many performance pitfalls (this again has mostly nothing to do with SQL per se). Any undergrad course on databases teaches these basics.
Sqlite for example is conceptually very simple, and it's fairly cleanly layered, so you can aim to understand the parser, the compiler and the bytecode execution engine separately.
It's also exceptionally well documented. Look at the "Technical and Design Documentation" section here.
It won't tell you everything about how a more complex database like e.g. Postgres works, but it will give a very good overview of most of what is relevant for a database user that wants a better understanding of why a database does what it does with a given query.
Do you really think that replacing an "ORDER BY" statement with your own sort is going to be simpler?
Now it's true that most databases have a ton of features and even more optional features that can (and will) interact in interesting ways, but I thought it was fairly obvious that most applications use very few or none of these. They are the 90/10-sort of features; 10 % of a database vendor's customers need 90 % of the specialized features in a database, and every single one of them uses a different handful.
Obviously you don't need to understand all these specialized features to use a database; you only need to grasp the handful if any at all you actually need at a time. Applications striving for wide database compatibility tend to rarely use any of these, simply because they don't exist in all databases, or work differently, or have divergent interfaces.
So any time you have an application that runs on MySQL or postgres in production but is developed and tested on SQLite (an antipattern itself, but I digress), you can be assured that you'll only see fairly basic DDL and SQL.
(You also seem to be conflating understanding and building. I can use and understand how a typewriter works without having a clue how to build one. Yes, there are lots of hard problems solved by databases, but how they solve them is mostly a don't-care. I don't have to know how SQLite does power-fail-safe transactions; that it does, and what that means for me, is all I have to know as a user.)
Unless you're generating totally dynamic queries that's a moot point.
You can always try it and measure it -- just like you know, you would profile a program in any programming language. And you can trivially have the database show you the query plan as well.
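For example, SQLite exposes its plan through EXPLAIN QUERY PLAN; the last column of each plan row describes the chosen strategy (schema invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
con.execute("CREATE INDEX events_kind ON events (kind)")

# Each plan row's final column is a human-readable 'detail' string.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE kind = 'click'"
).fetchall()
details = " ".join(row[-1] for row in plan)
print(details)  # the plan names events_kind rather than a full table scan
```

Postgres's EXPLAIN (and EXPLAIN ANALYZE) serve the same role, with actual vs. estimated row counts when analyzed.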
Do you also not use APIs because you don't know a priori whether a call is O(1), O(N), O(N log N), etc.?
>I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, have data that cannot fit in RAM, apart from big actors.
That's really orthogonal.
Speed and indexes still matter today with big data (or plain "tens of thousands of web users" loads), where we often have to denormalize or use indexed non-sql stores just to get more speed for the huge data we still need to be able to query fast.
Besides, something indexed will be faster whether they are in disk or in RAM compared to something in the same storage that's not indexed.
So unless we're coding something trivial, server side we still want all the speed we can get from our data than plain having them as simple structures RAM provides.
You wouldn't use a linked list as opposed to a hash table just because your data "fit in RAM". Even in RAM, ~O(1) vs. ~O(N) matters.
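That in-RAM point is easy to demonstrate in Python: membership tests against a list scan linearly, while a set hashes, and both give the same answer:

```python
import timeit

data = list(range(100_000))
as_list = data
as_set = set(data)

needle = 99_999  # worst case for the list: the last element

# Same answer, very different cost profile.
assert (needle in as_list) == (needle in as_set)

t_list = timeit.timeit(lambda: needle in as_list, number=100)
t_set = timeit.timeit(lambda: needle in as_set, number=100)
print(t_list > t_set)  # the linear scan loses by orders of magnitude
```

The same reasoning is why an index on an in-memory table still pays off: it's the data structure, not the storage medium, that sets the asymptotics.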
SQL was invented and caught on because companies had tried, and were burned by, non-SQL stores with incompatible storage standards, lax reliability guarantees, no interoperability between client programs, no all-encompassing logical abstraction (compared to relational algebra), but ad-hoc reinventions of the square wheel, and so on. Ancient issues the wave of Mongo fanboys brought back all over again.
unless the linked list is so tiny as to fit in cache and avoid the hash table indirection, but I digress
There is a legion of literature on the implementation of databases that would assuage you, should you concern yourself with reading it. I don't think you need to. Trust that many, many smart people have engineered many, many decades of excellent software.
That is not to say that you won't need to peek below, or concern yourself with certain choices (indices, commits, columnar vs. row, etc.) as your performance or access patterns dictate.
More importantly, the relational model is still the gem that shines as a beacon to model logic and data and is too often undervalued due to its association with 'enterprise software' and the implementation language (SQL is a bit warty).
can you recommend some material?
Keeping all your data in RAM has significant problems, even if it all fits. For example, would you want to lose all your customers' orders and billing information if your code crashed?
In addition to the relational database model, SQL databases offer ACID transactions, which are useful if you want to have consistent and reliable data:
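A tiny sketch of the atomicity part, using SQLite's Python bindings: either all of a transaction's writes survive, or none do.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " total REAL NOT NULL)")

try:
    with con:  # the context manager commits on success, rolls back on error
        con.execute("INSERT INTO orders VALUES (1, 9.99)")
        con.execute("INSERT INTO orders VALUES (2, NULL)")  # NOT NULL violation
except sqlite3.IntegrityError:
    pass

count = con.execute("SELECT count(*) FROM orders").fetchone()[0]
print(count)  # 0 -- the first insert rolled back along with the failed one
```

Hand-rolled in-memory structures give you none of this for free; a half-applied update simply stays half-applied.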
You could summon Antirez I guess
There are things like WAL and snapshots. Having your dataset in RAM and querying it directly doesn't exclude persisting it to disk. Read Stonebraker's "The End of an Architectural Era". Basically the OP is right in that SQL DBs were designed assuming that RAM was scarce, and that assumption is no longer valid. They are inefficient for every common use case, by at least an order of magnitude.
I tried, but it lost me in section 2.3:
>It seems plausible that the next decade will bring domination by shared-nothing computer systems, often called grid computing or blade computing.
No, it doesn't seem plausible at all. This has been, by some accounts, the future of computing, since at least the 80s.
But shared-nothing is just too darned hard to program for.
Also, main memory is still scarce. We're just barely up to the 1TB of just some of the (small, by today's standards) databases the paper mentions. Ironically, it seemed to emphasize traditional business database needs over what might happen in the tech industry itself, which has turned out to be the main driving force behind database usage (and data creation).
As a non-native speaker, I think that preclude is the word you're looking for. Not disagreeing with what you say.
If that's not what you need, there are plenty of procedural languages with which you can describe how to get things.
A query plan will change based on statistics. It's the engine's job to decide if your query is best served by parallelizing it to multiple cores, or deciding if it's worth JITing it before execution.
The actual execution of the query is an implementation detail of the engine, and should be. That's the entire point of a SQL standard: To provide users with an interface to talk to engines, which then retrieve the data you asked for.
It is its core strength and the reason why it's so successful.
The entire archive of the pgsql users' mailing list disagrees. Everyone wants to know why their plan is suboptimal, or why it changed. People also want to know why the planner takes 100ms to generate the plan and only 1ms to execute the query, and so forth.
The idea that you just say what you want and you get the optimal result from your database is just ridiculous to anyone who has had to use them under any significant load.
This seems like mere selection bias.
As the sibling comment points out, users who have no problem aren't likely to post "Everything is fine!" to the mailing list. In fact, it would likely be rude to do so.
However, database engines aren't perfect -- I know I've encountered bugs in older SQL server versions where the query never finishes but making some trivial adjustments fixes it. This is a bug. And most mailing lists are filled with people encountering bugs. Saying what you want and getting the best result is exactly what you should expect.
And rather a lot of the archives of $scripting_language_of_your_choice are people confused about duck-typing/type system failures. That doesn't mean scripting languages should be replaced with statically typed ones; just that there are pain points in every system, and right (or wrong) tools for every job.
Don't believe me? Check how much of the FAQ traffic from first-time Rustaceans (or Swift/Java/etc. newcomers) has to do with how to satisfy their language's type system.
You pick your poison. SQL gives you a clearly defined set of tradeoffs up front. If that's not for you, no worries, move along.
That's like arguing you should be able to dictate the assembly that your compiler produces. The entire point of SQL (and most compilers) is that they can use their knowledge to optimize the result in ways that are too difficult or too involved for humans to do. Most people cannot out-optimize a compiler in the general case. And most people cannot out-optimize a SQL DBMS.
Also, with SQL, the result might be highly dependent on the data itself. A table with 1,000 rows yesterday might be queried entirely differently from the same table now with 100,000 rows. Are you going to constantly go back and dictate the query plan to the engine every few months as the data changes? Probably not. Use the tool as intended and you'll be fine. Anything else is premature optimization at best.
A straightforward Postgres database will almost certainly fulfill your performance requirements unless you have a pretty edge-case scenario. Under those circumstances, it would be foolish to start by assuming that you are doing something that the database cannot easily accomplish. After all, it’s much easier to migrate the parts of your stack that require custom code to a new infrastructure when you reach scale, rather than trying to constantly patch bugs and performance issues in your shitty wannabe SQL engine!
Millions of users beg to differ.
Presumably, at a minimum, all the people who work on such engines, including committers to the various open-source ones.
But also lots of other people.
> Don't you usually need to understand how something works before you start using it?
No. Very few programmers understand how compilers work before they start using them. I'd say it's more common to require working on something to really understand how it works than the reverse.
> I think SQL was designed when RAM was scarce and expensive, so to speed up data access, it has to be properly indexed with a database engine. I really wonder who, today, have data that cannot fit in RAM, apart from big actors.
Indexing is no less important for in-memory data access.
The incompetence and ignorance shown in this post is simply astounding.
It's also a question of price. Once you get above about 256 GB of RAM server prices start to go up really really fast. And while there are systems with dozens of TB of RAM they are stupidly expensive.
So even if, in theory, most databases could fit in RAM, most people cannot afford that. And at the end of the day, 100+TB isn't that large a database in the grand scheme of things and you're not easily fitting that into RAM.
I think it might be as high as 1TB these days, though with what's going on with DDR4 prices, the situation is strange at the moment.
Of course, I don't dispute your point that a 100+TB database isn't all that large, especially with indexes.
I suspect that this false dichotomy of "fits in RAM" vs. "big data" has resulted in many needless forays into distributed computing.
* Trivially Small
* Fits In RAM
* Fits on one server (CPU / Storage)
* Fits on one Big Hardware (Mainframe or other Specialized equipment)
* Requires Distributed Storage And/Or Processing
> data problems
I find there is also, occasionally, a lack of awareness of when a problem is data-heavy, encouraged by abstraction layers like ORMs.
This can lead to casual or naive (neither meant derogatorily) distributed computing, where the app hoovers up the data from database to be processed. This can be great for anything CPU-intensive but terrible for the I/O-intensive.
> Fits on one Big Hardware (Mainframe or other Specialized equipment)
I don't think this really exists today, unless you're including an otherwise commodity high-end (e.g. 8-socket) server that carries up to a 4x price premium in "specialized equipment".
I'm aware that mainframes still exist, but, for a variety of reasons , I'd consider them as being in a world of their own, rather than a step on this continuum.
e.g. inherently distributed architecture; not obvious whether storage scale is actually greater than high-end commodity; interop issues
Unless your data requirements are very specific (memory-only, or not suited to the relational paradigm), any SQL engine will provide you with the best and most efficient algorithms for manipulating your data in the most common situations.
I think that if any, databases should be used more.
Too bad that Michael Stonebraker, Turing Award winner, disagrees with you. SQL databases are not the best solution for every common use case from a performance perspective.
Never mind what they do to the design of an application. IMHO fewer people should default to using a database up front, at least while prototyping the idea.
Surely it should be the opposite? While you're prototyping you should use a DB by default, then switch to your own implementation if you find out that it will speed things up (and you need that speed). It's not like the code that you would replace a DB with is going to be trivial, using a DB is going to keep the code simple until you need it to be complex.
Modern RDBMS can handle millions of ops/sec and terabytes of data. They do the job fine 99% of the time and are constantly adding new features.
If you have the 1% need for another data store, there are hundreds of options, and interestingly many of them are also starting to implement SQL as an interface now.
I’m usually using PostgreSQL.
If less frequently used parts of it don't fit, offload to disk.
Ideally, the DB should guarantee data integrity after a crash even if the DB is served from RAM.
That's the ideal scenario: You have the best of all worlds.
Coincidentally, that's exactly what Postgres does.
On top of that, it's the best NoSQL database currently available.
Of course, you can use it with SQL if you ever feel the need.
Surely that depends on the load? From what I understand, Postgres isn't easy to set up when the data spans more than one server, which many NoSQL databases actually handle fairly well.
Aka leaky abstractions. SQL just wasn't designed for performance. A query language that takes performance into account should definitely ignore any ideas from SQL. Maybe have declared data structures instead of tables with operations that use known algorithms, explicitly chosen.
That is strictly true in the literal sense that SQL is just a textual representation of relational algebra and calculus, and no one says a mathematical notation is "designed for performance" or otherwise.
But in a more practical, useful sense, it's the language most designed for performance, since the query planner has so much leeway to perform optimisation. It can do more dramatic transformations of the parse tree even than a C compiler.
The other reason seems to be a failure to learn (or a belief in "this time it's different") the lessons from the 70s and 80s that informed many of the fundamental design decisions of RDBMSes.
The "NoSQL" world has had a remarkable number of incidents with ACID failures. Unsurprisingly, a fix involves sacrificing performance.
This isn't to say that the trade-off is never worth it. In fact, RDBMSes can and do offer such trade-offs as options. It's just not the default.
It may be accurate that RDBMSes are designed and "shipped" with default configurations that are ACID-first (to coin a term), whereas the "world outside" is performance-first.
However, it's nowhere near accurate, and maybe even disingenuous, to suggest that SQL or the relational model somehow prevents high performance. The reality of the actual tools contradicts your claim, as the sibling comment pointed out.
 with the exception of early MySQL which defaulted to MyISAM as a storage engine
 as the joke goes, so is writing to /dev/null and reading from /dev/zero
You keep using that word. I don’t think it means what you think it means.
Because people have been doing it since the 1980s...