
"This equity is provided in order to align incentives."

That leaves out the other, probably more significant, reason: it's provided because it can increase the (real or perceived) value of compensation packages without spending scarce cash.

"Lack of equity might cause an employee to feel left out and/or poorly compensated." ... uhh... or they might not have joined the company in the first place?

-----


The original version of the "10x" idea didn't require the productivity distribution to be bimodal - it could be a bell curve or skewed bell curve with high variance. I think it's contextual as well - if you're working on a project that's just at the edge of your present abilities, your productivity is going to be pretty marginal compared to someone who's done it before and has strong aptitude at it.

I think it's silly to simply sort programmers into two buckets.

It does make a lot of sense, though, to try to attract and retain good programmers - having a programmer who's at the 75th percentile of productivity instead of the 50th is going to give you a big boost if productivity is high variance. It also makes sense to try to match programmers to projects they have good aptitude and motivation for.

-----


You could probably conspire with another party to sell your shares back and forth at +10% every day. That way the official market price of your shares would eventually get to the point where you could sell them at their true value.

-----


Your analysis of the arbitrage is faulty - if there's a cost associated with time-shifting energy, then the gap will only narrow to the extent that it's still economically justifiable for people to invest in time-shifting. What you're describing (no peak/off-peak difference while people still buy batteries to time-shift energy) isn't a steady state - why would people keep buying the batteries when they're going to lose money on the deal? You'd actually expect the gap to settle at roughly the cost of time-shifting a unit of energy plus some additional margin for the capital and inconvenience.

If there's significant arbitrage, it will also reduce peak prices - because of reduced peak demand, and because less expensive additional capacity is needed. I.e. it can reduce energy costs even for people who don't use it.
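
To make the steady-state point concrete, here's a rough break-even sketch in Python; every number in it is an illustrative assumption, not a real tariff or battery spec:

    # Rough break-even sketch for battery time-shifting.
    # All figures are illustrative assumptions, not real prices or specs.
    peak_price = 0.30       # $/kWh at peak (assumed)
    off_peak_price = 0.10   # $/kWh off-peak (assumed)
    round_trip_eff = 0.90   # battery round-trip efficiency (assumed)

    battery_cost = 3000.0   # up-front cost in $ (assumed)
    kwh_per_cycle = 7.0     # energy shifted per daily cycle (assumed)
    cycles = 10 * 365       # assumed ten-year life at one cycle per day

    # Each kWh bought off-peak delivers round_trip_eff kWh at peak prices.
    saving_per_kwh = peak_price * round_trip_eff - off_peak_price
    lifetime_saving = saving_per_kwh * kwh_per_cycle * cycles

    # People only keep buying batteries while this stays positive, so the
    # peak/off-peak gap can't be arbitraged all the way down to zero.
    print(lifetime_saving - battery_cost)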

-----


Yes, what I meant is that each Tesla battery sold will contribute to reducing the peak price and increasing the non-peak price, effectively shrinking the gap that is the basis for the value proposition.

Because of this effect, the value proposition for existing Tesla battery owners will decline over time. Ironically, the ones who benefit in the long term are the non-owners, because their peak-time price will be lower.

-----


In special cases, multiple string concatenations can be optimized into StringBuilder calls. Not in the general case.

-----


I'd say that in the general case multiple string concatenations can be optimized - not just in special cases.

-----


Do you know for sure that RethinkDB can't work out what columns are being filtered by a function? In the examples it would certainly be possible with some analysis.

-----


It can't work them out because you compose the query in a third party scripting language.

RethinkDB has no access to the structure of the source in order to analyze it statically and work out an optimal I/O read plan. It interacts with the language runtime by providing an API and receiving callbacks to the API from the runtime.

SQL is parsed and analyzed statically at the server; a plan is created based on that analysis and executed. So with SQL this kind of optimization is possible.

With RethinkDB you compose your query in the script, basically, and all of the optimization opportunities end with the exposed API (no function source analysis).

It's not impossible to redesign the API to provide, or even mandate, static details like requested fields to RethinkDB - and it has a bit of that - but it allows freely mixing in client-side logic, and even the OP is confused about what it means to have a client-side mapping function.

If they allowed complex expressions to run on the server, it'd become quite verbose to compose them via an API in an introspective way, to the point that it'd warrant a DSL in a string... and we're back to SQL again.

-----


> RethinkDB has no access to the structure of the source in order to analyze it statically and work out an optimal I/O read plan.

Actually, this isn't true. One of the really cool things about RethinkDB is that despite the fact that queries are specified in third-party scripting languages, they actually get compiled to an intermediate language that RethinkDB can understand.

That being said, AFAIK RethinkDB doesn't optimize selects the way columnar databases do. I believe it can only read from disk at per-document granularity. But it does have the ability to optimize this in the future.

-----


I don't think that's true. From what I perused of the driver implementations, as calls are made the driver basically builds up an AST, and when you call run() it compacts it and sends it over to the DB. I.e., when you call filter() you aren't actually filtering, you're adding a filter operation to the AST.

I would think that would allow Rethink to analyze the structure of the query and perform appropriate optimizations.
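
A hedged sketch of that flow with the official Python driver (the import style and names vary a bit between driver versions; the table is hypothetical):

    import rethinkdb as r  # legacy import style of the Python driver (assumed)

    # Nothing executes here: each chained call just adds a node to the query AST.
    q = r.table('users').filter(r.row['age'] > 20).pluck('name', 'age')
    print(q)  # the driver can render the composed query for inspection

    # Only run() serializes that AST into the wire protocol and sends it over.
    conn = r.connect(host='localhost', port=28015)
    results = list(q.run(conn))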

-----


I'm talking about map(), and you're talking about filter().

Here's the code in question:

  .map(function(album){
    return {artist : album("vendor")("name")}
  })

If this is simply adding a node to an AST, it could be expressed without a function:

  .map({artist : ['album','vendor','name']})

Using a function for this would be quite superfluous.

-----


You can express it both ways in RethinkDB, and they'd both do the same thing -- add a node to the AST. The function is just a convenience syntax.
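
For instance, a hedged sketch of the two equivalent forms with the Python driver (table and field names taken from the example above):

    import rethinkdb as r  # legacy Python driver import (assumed)

    # Function form: the driver calls the lambda once at build time with a
    # placeholder row, so its body is captured into the query AST.
    q1 = r.table('albums').map(lambda album: {'artist': album['vendor']['name']})

    # Object form: the same AST node, just written without a function.
    q2 = r.table('albums').map({'artist': r.row['vendor']['name']})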

-----


> It can't work them out because you compose the query in a third party scripting language.

The restrictions on what language features you can use in lambdas inside queries exist because the query isn't executed on the client: the query in the client language is parsed into a client-language-independent query description, which is shipped to the server and executed there. So all the information about the query is available to the server (how much it actually uses for optimization, I don't know, but the query is not opaque to the server; what you compose in the scripting language has the same relation to what the server sees as when you use an SQL abstraction layer that builds SQL and sends it to the server with an SQL DB).
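
A hedged Python sketch of the practical consequence (table and field names are hypothetical):

    import rethinkdb as r  # legacy Python driver import (assumed)

    # The lambda runs once, client side, against a placeholder row; only the
    # ReQL operations it performs end up in the query description.
    adults = r.table('users').filter(lambda u: u['age'] > 20)

    # Plain Python control flow would also run once at build time, so
    # branching that must happen on the server uses ReQL's own operators:
    labels = r.table('users').map(
        lambda u: r.branch(u['age'] > 20, 'adult', 'minor'))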

-----


Right, I'd be a lot more forgiving of MongoDB if they had been bringing the product to market 10-15 years earlier.

-----


Why? It was stupid and unsafe 10-15 years ago when MySQL was doing it, too, and all the devs who had been using more mature DBs (Oracle, DB2, etc.) complained about how bad it was.

-----


I was nodding in agreement right up until the word "Oracle". Essentially any history of databases will say that for years Oracle was not an RDBMS even by non-strict definitions (the claim is that Ellison didn't originally understand the concept correctly), and it certainly did not offer ACID guarantees.

Possibly Oracle had fixed 100% of that by the time MySQL came out, but then we're just talking about the timing of adding in safety, again - and both IBM and Stonebraker's Ingres project (the Postgres predecessor) had RDBMSs with ACID in the late 1970s, and advertised the fact, so it wasn't a secret.

Except in the early DOS/Windows world, where customers hadn't learned of the importance of reliability in hardware and software, and were more concerned simply with price.

Oracle originally catered to that. MySQL did too, in some sense.

In very recent years, it appears to me that people are re-learning the same lessons from scratch all over again, ignoring history, with certain kinds of recently popular databases.

-----


I am curious as to why. The underlying systems have only gotten more reliable and faster than they were 10-15 years ago. Back then, writing to disk was actually _more_ of a challenge than it is now with SSDs that have effectively zero seek time.

-----


I don't think it's gotten any easier to verify that something was actually persisted to disk though.

The hard part has always been verifying that the data is actually persisted to the hardware. The number of layers between you and the physical storage has increased, not decreased - and so has the number of those layers with a tendency to lie to you.

For some systems, data isn't considered persisted until it's been written to n+1 physical media, for exactly these reasons. The OS could be lying to you by buffering the write, the disk's driver software could be lying to you by buffering the data, and even the physical hardware could be lying to you by buffering the write.

In many ways writing may have gotten more reliable but verifying the write has gotten way harder.
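
Even the easy half of the problem - asking for durability from the application - takes care. A minimal Python sketch, which still can't rule out a lying device cache:

    import os

    # Push a write toward stable storage from the application side.
    # Even after fsync() returns, a drive or controller with a volatile
    # write cache can still be holding the data; that's the lying-layer problem.
    with open('journal.log', 'ab') as f:
        f.write(b'record\n')
        f.flush()             # drain the Python file-object buffer
        os.fsync(f.fileno())  # ask the kernel to flush its cache to the device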

-----


'When you query with ReQL you tack on functions and “compose” your data, with SQL you tell the query engine the steps necessary (aka prescribe) to return your data:'

... what? ReQL and SQL are both declarative query languages: I don't really see what the author is getting at. Is the implication that SQL isn't declarative?

The only real difference is that the API is based around chaining function calls rather than expressing what is needed as a string - and there are many SQL query builder APIs that let you build SQL queries by chaining function calls.
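
For example, a hedged sketch with SQLAlchemy Core (the exact select() signature differs between SQLAlchemy versions): the same chained, declarative style, except the result is compiled to a SQL string for the server.

    from sqlalchemy import Column, Integer, MetaData, String, Table, select

    metadata = MetaData()
    users = Table('users', metadata,
                  Column('id', Integer, primary_key=True),
                  Column('name', String),
                  Column('age', Integer))

    # Chained calls that build a query object...
    query = select(users).where(users.c.age > 20).order_by(users.c.name)

    # ...which is ultimately rendered as plain SQL text for the database.
    print(query)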

-----


One real difference is that you get most of the benefits of an ORM framework right in the driver, without depending on other packages or frameworks. And the API is very consistent across different languages.

The biggest problem with composing SQL strings is that you have to be very, very careful about SQL injection, and if you deal with that in a slightly sophisticated, reusable manner you are halfway to an ORM already. As far as I can determine, the ReQL drivers make injection attacks very difficult.

-----


Composing SQL strings at runtime should be the very last resort in my opinion.

This is what stored procedures and parameterized queries are for. Even if I am going to do dynamic SQL, I do it in a stored procedure if I can.

-----


Then you are one of the rare SQL-enlightened beings.

I still don't see how you can pass user input from, say, a Python string into a stored procedure call without worrying about injections, or how you convert between your app's data structures and whatever string your stored procedure needs.

-----


Your driver should be able to handle parameterized queries for you.

    query('SELECT * FROM users WHERE id = ANY ($1::int[])', [1, 2, 3]);
    query('SELECT * FROM users WHERE lower(uname) = lower($1)', 'foo');

Where's the injection vulnerability?

-----


So I may not be an SQL expert, but why would it be difficult to produce an injection string for $1? Of course, if you supply it "guaranteed" integers, then you can't. Injections normally happen with user inputs, not constants.

-----


Because SQL query compilers generally don't execute the parameters. They don't just concatenate the given parameter strings into the query template and run that. Instead, parameters are always treated as parameters: the query template is compiled, and the parameters are passed into that compiled representation of the query, where they are simply regarded as variables, not eval'd.
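
A hedged sketch of what that looks like from Python with a DB-API driver such as psycopg2 (the connection string and table are made up):

    import psycopg2  # assumes PostgreSQL with the psycopg2 driver

    hostile = "x'; DROP TABLE users;--"  # user input, deliberately nasty

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string
    with conn.cursor() as cur:
        # Template and value travel separately: the value is bound as a
        # parameter, never spliced into the SQL text, so it is never
        # interpreted as SQL.
        cur.execute("SELECT * FROM users WHERE lower(uname) = lower(%s)",
                    (hostile,))
        rows = cur.fetchall()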

-----


Look up prepared statements.

-----


The biggest problem with a different query language for every project is that when they get around to implementing some of the more esoteric (but extremely useful) SQL features, those features may not match well with what they've designed so far - they may be hard to implement for the devs, hard to conceptualize for the users, or just tacked on in a weird way that causes a cognitive mismatch in how they're used (some side channel, for instance).

Using a query builder (or ORM) of some sort still allows the escape hatch of raw SQL to do those really crazy things that are sometimes needed for performance, or just because what you are trying to do is rather weird. SQL is a very mature language, it's unlikely you are going to run into something someone else hasn't before.
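
A hedged sketch of that escape hatch using SQLAlchemy's text() construct (the query and connection URL are made up); the hand-written SQL still gets bound parameters rather than string interpolation:

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql:///app")  # hypothetical URL

    # Something the builder doesn't express nicely, written as raw SQL.
    stmt = text(
        "SELECT name, percentile_cont(0.5) WITHIN GROUP (ORDER BY age) "
        "FROM users GROUP BY name HAVING count(*) > :n"
    )
    with engine.connect() as conn:
        rows = conn.execute(stmt, {"n": 20}).fetchall()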

-----


Datomic uses Datalog but also exposes lower levels of the data access API. This allows people with special needs to drop down and do their own thing, while others can use Datalog.

It seems like a good idea to allow different layers of data access.

-----


Raw SQL still needs to be parsed and converted into some data structure. The difference, I think, is that in ReQL you are just building that data structure on the client side and then sending it to the server.

-----


Another point of ORMs is to make queries reusable. In Django, for example, you can reuse the same queryset in views, forms, templating and the admin. You can process and edit a queryset much more easily than composed SQL strings.

This should also be possible to build on top of ReQL, though I don't know of any examples.
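
A hedged Django sketch of that reuse (the Article model and its fields are hypothetical):

    from myapp.models import Article  # hypothetical app and model

    # Querysets are lazy, so the same base queryset can be handed to views,
    # forms, template context, or the admin and refined in each place without
    # touching the database until it is actually evaluated.
    published = Article.objects.filter(status='published')

    recent = published.filter(year__gte=2014).order_by('-year')
    by_author = published.filter(author__name__icontains='smith')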

-----


> The only real difference is that the API is based around chaining function calls rather than expressing what is needed as a string

The query in RethinkDB is very much an expression. In the JavaScript driver you build this expression with function calls. There are other drivers which let you build the expression in a much more declarative way (like my Haskell driver).

-----


Let me give you an arbitrary, unknown sql string in a variable `s`, and I'll do the same with a ReQL query.

The challenge is to make sure the column/field `age` is more than 20.

My code is:

   query.filter(r.row['age'] > 20)

What's yours? (Hint: start by writing a compliant SQL parser)

-----


Let us compare this newfangled "auto-mobile" invention to my favorite form of transportation, the horse: Where would you mount your favorite riding saddle on an auto-mobile?

(Hint: start by learning metalworking)

-----


This is nice, and as others have noted there are similar APIs that can be used with SQL. But I find it bad practice to extend arbitrary queries with additional conditions. This is very likely to lead to poor performance and perhaps correctness problems. In practice you need to have more semantic understanding of your query, so having an opaque 'query' object is no more helpful than a SQL string.

-----


using ActiveRecord:

    query.where('age > ?', 20)

If I'm understanding you correctly.

-----


That's moving the goalposts. The original challenge was take an arbitrary SQL query stored in a string.

-----


I believe army's point was that when using an SQL query builder API, one does not start with a string, but something which allows them to do a similar check that you showed.

I'm also not sure how your comment replies to army's point. The point, as I understood it, is that it is not accurate to characterize SQL queries as steps that tell the engine what to do. SQL is declarative, and leaves the execution plan up to the database itself. army's comments about the API and strings were trying to point out the only perceived difference, which is not relevant to the question of declarative versus imperative.

-----


Wait what? Not sure what you are asking for

"SELECT * FROM table WHERE age > 20"?

-----


You already have an existing SQL query in a string variable. You need to add the age > 20 condition to that query.

-----


  SELECT * FROM ($S) AS FOO WHERE FOO.age > 20

-----


I think that will work for the specific case but won't generalize.

-----


Consider:

"You already have an existing ReQL query in a string variable. You need to add the age > 20 condition to that query."

Same problem. Comparing apples and oranges: strings and some "live" code. If you put ReQL and SQL into the same category (either as a string, or as a thing that represents some "live" running code that you can manipulate at runtime), then it is difficult for me, at least, to really grasp what the differences are between them. SQL is certainly not considered an imperative language, eh?

----

EDIT to respond to the comments below from TylerE and pests:

Oh, but you do have ReQL as a string: when you type it into the editor, and when it lives on disk as a file of source code. At some point that code becomes live and you can interact with it. The exact same basic transformation happens whether the syntax is ReQL or SQL, just in different ways and at different times, depending on how you choose to run it, not what syntax it's in. The issues are orthogonal, and it is certainly fair to demand that we compare the right things.

If you want to say that ReQL is a better syntax than SQL, well, I don't see it (yet.)

If you want to say that the product in question provides a nice way to run ReQL syntax queries in some fashion that is fundamentally better than the way that some other product allows you to run SQL queries, that is a whole different issue (and NOT the one I am addressing in my comment above.)

I hope that makes sense. ;-) Cheers!

-----


No, it's not, because there is no such thing as a ReQL query in a string. It's an object, usually built by chaining methods. There is no textual representation.

Edit to your edit: It seems you are fundamentally not getting it. The ReQL query is live code in your native programming environment. That means you can inspect it and manipulate it. SQL doesn't get interpreted (or whatever - it's a black box) until it hits the server.

Imagine you're in a world where there are no XML parsing libraries. SQL is a string containing XML. ReQL is a DOM object.

One is much more useful than the other.
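
Concretely, a hedged Python sketch of treating the query as a live object (table and fields are hypothetical):

    import rethinkdb as r  # legacy Python driver import (assumed)

    def build_user_query(min_age=None, sort_field=None):
        # The query is an ordinary object in the host language, so it can be
        # composed conditionally, inspected, and passed around before running.
        q = r.table('users')
        if min_age is not None:
            q = q.filter(r.row['age'] >= min_age)
        if sort_field is not None:
            q = q.order_by(sort_field)
        return q  # still unexecuted; call .run(conn) when ready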

-----


Your argument is so silly.

You are comparing a language (SQL is independent of the language you're programming in) to an API.

RethinkDB has APIs available for three languages: JavaScript, Python & Ruby. If you look carefully, while it tries to be consistent across them, there are still parts that are specific to a given language. If you wanted to use RethinkDB with a language that is completely different (for example a functional language), assuming RethinkDB supported it, the interaction with the DB would be guaranteed to be completely different, whereas you could still use the same SQL language[1].

If you want to compare RethinkDB's API to something similar you should compare it with something like JOOQ[2].

Just to preemptively respond to the argument about translating a DSL to SQL: modern drivers communicate with the database using a binary protocol, and the SQL is compiled on the client side. You could actually skip SQL altogether, but then you would lose the flexibility of being able to support many other databases.

[1] http://pgocaml.forge.ocamlcore.org/

[2] https://en.wikipedia.org/wiki/Java_Object_Oriented_Querying

-----


Try to understand the context in which what I am saying makes sense, it will blow your mind and make you a better programmer when you do.

(Expanding upon that: we are both correct, but not in the same context. There's a context in which what you are saying is true and sensible, and there is another context in which what I am saying is true and sensible. I can switch between the contexts, so I am not trying to disagree with you; I am trying to give you data to help you understand this other context and switch between them too. Additionally, this other context is of a higher "logical level", in the mathematical sense, than the one we already have in common, so when you do grok it I can confidently predict both that it will blow your mind and improve your ability to write software.)

-----


You would never have a ReQL query in a string, though. I don't think it's fair to ask about that case.

-----


Does a string really have the filter method?

-----


A ReQL query does. That's the point, it's a native object you can chain stuff off of.

-----


Then it's not a string, it's a ReQL query object - the same way a Django model instance isn't a string. Your comparison seems so artificially slanted towards Rethink that it discredits itself.

Imagine me saying "here's a SQL string, let's see which database can execute it more easily, Rethink or Postgres. Hint: Start with writing a parser to convert it to ReQL".

-----


Why would you ever do this?

-----


Hey, he's too busy working 300 hours a week to worry about things like "averages" and "mathematics".

-----


Not sure who "he" is that you are referring to here. I've never worked in finance. I was just relating what people in finance have told me.

-----


Uh, it's completely acceptable to "discriminate" against people based on factors that relate to their ability to succeed in the job. I don't think it's controversial that people's social skills or style of social interaction has a bearing on their job performance. You can argue to what extent and so on, but it's bizarre to think that it's not relevant.

-----
