
Did this change recently? I remember my database professor in college was adamant that when we talk about databases, we are to take some things as a given (going by memory, so probably not completely accurate):

that the data set is large enough that it cannot fit in memory

that storage is orders of magnitude slower than memory and memory is orders of magnitude slower than processor cache

Oracle has the “best implementation” given these constraints.

Is that not the case?




It's worth noting that under current conditions those assumptions may be unwarranted.

First, while storage used to be orders of magnitude slower than memory, SSD storage is now just a single order of magnitude slower.

Second, in many domains it's now often practical to ensure that your data set fits in memory. For example, if your system stores financial transactions (a prime market for Oracle), your enterprise has to be quite large to accumulate a terabyte of transactions, and you can put a terabyte (or much more) of RAM in a database server if you choose to.
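As a rough illustration (the per-row size and daily volume below are assumptions for the sake of the sketch, not figures from the thread), a quick back-of-envelope calculation shows how much transaction history a terabyte of RAM could hold:

  # Back-of-envelope sizing sketch; all numbers are illustrative assumptions.
  BYTES_PER_TRANSACTION = 1_000        # assume ~1 KB per row incl. index overhead
  TRANSACTIONS_PER_DAY = 1_000_000     # assume a fairly busy enterprise system
  RAM_BYTES = 1_000_000_000_000        # 1 TB of RAM

  rows_that_fit = RAM_BYTES // BYTES_PER_TRANSACTION
  days_of_history = rows_that_fit // TRANSACTIONS_PER_DAY

  print(f"{rows_that_fit:,} rows fit in 1 TB")                       # 1,000,000,000
  print(f"~{days_of_history:,} days (~{days_of_history // 365} yrs) of history")  # ~1,000 days, ~2 yrs

Under those assumptions, a terabyte of RAM holds roughly a billion rows, i.e. years of history for a system doing a million transactions a day, which is the sense in which "fits in memory" has become realistic.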


That's precisely the point: Oracle forbids publishing benchmarks and comparisons in its license terms.

So how would anyone (legally) know?


Well, you cannot legally publish a benchmark, but you can set one up for your own private use. It's not as if Oracle DB detects that it is being benchmarked and shuts itself off.



