
I don't think Riak does. I believe its architecture is masterless.

-----


The 55 mile range was calculated by Tesla's engineers, not Top Gear's. So, Tesla Motors should be pissed at its own engineers and no one else.

-----


This is Tesla's statement on the issue:

"On March 29 2011, Tesla filed a lawsuit to stop Top Gear’s continued rebroadcasts of an episode containing malicious falsehoods about the Tesla Roadster. Top Gear’s Executive Producer, Andy Wilman, has drafted a blog to present their side of the story. Like the episode itself, however, his proclamations do more to confound than enlighten.

Mr. Wilman admits that Top Gear wrote the script before filming the testing of the Roadsters. The script in question, concluding with the line "in the real world, it absolutely doesn’t work" was lying around on set while Top Gear was allegedly "testing" the Roadsters. It seems actual test results don’t matter when the verdict has already been given -- even if it means staging tests to meet those predetermined conclusions.

Now Mr. Wilman wants us to believe that when Top Gear concluded that the Roadster "doesn't work," it "had nothing to do with how the Tesla performed." Are we to take this seriously? According to Mr. Wilman, when Top Gear said the car "doesn't work," they "primarily" meant that it was too expensive. Surely they could have come to that conclusion without staging misleading scenes that made the car look like it didn’t work.

Mr. Wilman's other contentions are just as disingenuous. He states that they never said the Roadster "ran out of charge." If not, why were four men shown pushing it into the hangar?

Mr. Wilman states that "We never said that the Tesla was completely immobilized as a result of the motor overheating." If not, why is the Roadster depicted coming to a stop with the fabricated sound effect of a motor dying?

Mr. Wilman also objects to Tesla explaining our case, and the virtues of the Roadster. Top Gear has been re-broadcasting lies about the Roadster for years, yet are uncomfortable with Tesla helping journalists set the record straight about the Roadster’s revolutionary technology.

Mr. Wilman seems to want Top Gear to be judged neither by what it says, nor by what it does. Top Gear needs to provide its viewers, and Tesla, straightforward answers to these questions."

I hadn't heard that Tesla calculated that range. The direct quote from the show used the pronoun 'we', and I thought part of the suit was that the 55 mile range statement from Top Gear defamed Tesla and made them look as if they were lying about the range.

-----


The overarching point is that recharging stations aren't as ubiquitous as fuel pumps and that it takes anywhere between 30 minutes and 12 hours to recharge the batteries. Was the point proven clumsily? Yes, it's Top Gear; they're nothing if not clumsy, especially Jeremy Clarkson.

The point still stands. Also note that Nissan itself says that repeated fast charging of the Leaf's battery will cause it to be unable to hold a full charge. The fast charge option is the 30 minute, 80% charge, which is the fastest you can charge a currently-in-production electric vehicle.

These are the facts and there's no agreeing or disagreeing with them. Top Gear presented them in a clumsy way, but they're still facts.

-----


Actually, had you bothered to watch to the end of that episode, the point they made was about HOW a car is driven and not WHAT car you drive.

They drove the Prius as fast as it would go, because someone who buys a Prius will do that, and at that speed it would be less efficient than an M3, which is the "sports car" you mention.

And I agree with that point. If I flog my Civic, it'll return much worse mileage than if I drive carefully and efficiently. So, again, Top Gear's point is proven: people aren't willing to think critically and live in reality.

-----


Exactly the point I took away from that test: it matters much more how you drive (calmly rather than aggressively) than what car you drive. This was even confirmed on Mythbusters in their driving calm versus driving angry 'myth'.

-----


To be honest, Mongo's execs have done pretty much the same thing. As I said in another comment, the Changelog episode on Mongo was very illuminating with regard to the marketing tactics of 10gen.

I do like both, as well.

-----


Is this the episode?

http://thechangelog.com/post/3742814720/episode-0-5-1-mongod...

-----


Yep, that's the one.

-----


If they're doing the same thing, that's just as shitty. But I've been meaning to listen to that episode of the Changelog for awhile now, so thanks for the reminder!

-----


The 'safe' feature isn't on by default yet. Also, the benchmarks 10gen publishes are based on the default setup, so basically Mongo writes to RAM, which is why it's fast.
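
Concretely, the difference looks something like this (a rough sketch assuming the pymongo of that era; collection and field names are made up):

    # Sketch: default vs. 'safe' writes, 2011-era pymongo.
    from pymongo import Connection

    db = Connection("localhost", 27017).myapp

    # Default: fire-and-forget. The driver returns as soon as the message
    # hits the socket, so a server-side failure is silently lost.
    db.events.insert({"type": "click"})

    # 'Safe' mode: the driver calls getLastError and blocks until the
    # server acknowledges the write. This is what the fast defaults skip.
    db.events.insert({"type": "click"}, safe=True)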

I love Mongo and am using it in a few apps, but their marketing does blow, I admit.

Also, Eliot Horowitz came out and bashed on Riak's eventual consistency promise by basically misleading devs into thinking that writing to MongoDB will always result in 'full consistency'. Listen to the ChangeLog episode on Mongo to hear that.

-----


10gen doesn't publish any benchmarks. See http://www.mongodb.org/display/DOCS/Benchmarks for the official position.

I transcribed the MongoDB vs. Riak part of the Changelog webcast (available at http://thechangelog.com/post/3742814720/episode-0-5-1-mongod...):

------------------------

Riak and all the dynamo-style databases are really distributed key/value stores and I think, you know, I've never used Riak in production, but I have no reason to believe it's not a very good, highly scalable distributed key/value store.

The difference between something like Riak and Mongo is that Mongo tries to solve a more generic problem. A couple of key points: one is consistency. Mongo is fully consistent, and all dynamo implementations are eventually consistent and for a lot of developers and a lot of applications, eventual consistency just is not an option. So I think for the default data store for a web site, you need something that's fully consistent.

The other major difference is just data model and query-ability and being able to manipulate data. So for example with Mongo you can index on any fields you want, you can have compound indexes, you can sort, you know, all the same types of queries you do with a relational database work with Mongo. In addition, you can update individual fields, you can increment counters, you can do a lot of the same kinds of update operations you would do with a relational database. It maps much closer to a relational database than to a key/value store. Key/value stores are great if you've got billions of keys and you need to store them, they'll work very well, but if you need to replace a relational database with something that is pretty feature-comparable, they're not designed to do that.

-----------------------

It starts at minute 17.
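
For anyone who hasn't used Mongo, the relational-style operations he describes look roughly like this (a sketch assuming the pymongo of that era; collection and field names are made up):

    # Sketch: indexes, sorts, and in-place updates, 2011-era pymongo.
    from pymongo import Connection, ASCENDING, DESCENDING

    db = Connection().blog

    # Compound secondary index, like a relational database.
    db.posts.create_index([("author", ASCENDING), ("created", DESCENDING)])

    # Query and sort on arbitrary fields.
    recent = db.posts.find({"author": "alice"}).sort("created", DESCENDING)

    # Update a single field in place / increment a counter.
    db.posts.update({"slug": "hello-world"}, {"$inc": {"views": 1}})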

edited: formatting.

-----


>Mongo is fully consistent

Can you please explain this for a case where there are multiple replica sets, the database is sharded and nodes are across data centers? What's sacrificed? Something must be.

-----


When we talk about consistency, we're talking about taking the database from one consistent state to another.

With replica sets, we're still only dealing with one master. We can get inconsistent reads from the replicas, but we're always writing to a single master, which allows that master to determine the integrity of a write.

With sharding, we're still only dealing with one canonical home for a specific key (defined by the shard key). (Besides latency, I'm not sure how datacenters would affect this.)

What we're giving up in this case is availability. If an entire replica set goes down, we can't read or write any data for the key ranges contained on those machines. This is where Riak shines.

With Riak, any node can accept writes, and nodes contain copies of several other nodes' data. What that means is, as long as we have one node up, we can write to the database. Because of this, there is the possibility of nodes having different views of the data. This is handled in a number of ways (read repairs, vector clocks, etc.). Check out the Amazon Dynamo paper for more info, great read.
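
As a rough sketch of what "any node can accept writes" means in practice (assuming the riak-python-client of that era; the node address and bucket names are made up):

    # Sketch: writing to whichever Riak node is reachable.
    import riak

    # Connect to any node in the cluster; there is no master to find.
    client = riak.RiakClient(host="10.0.0.2", port=8098)

    bucket = client.bucket("users")

    # w=1: the write succeeds once a single replica has accepted it,
    # even if the rest of the cluster is unreachable.
    obj = bucket.new("alice", data={"email": "alice@example.com"})
    obj.store(w=1)

    # r=1: read from a single replica; read repair and vector clocks
    # reconcile divergent copies behind the scenes.
    fetched = bucket.get("alice", r=1)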

I'm sure I'm missing some stuff, but I think that covers the gist of it.

EDIT: One thing that I want to make clear, I don't think that one architecture is better than the other. They each have their own pros and cons, and are really suited to solve different problems.

-----


None of this is guaranteed by default. By default, writes are flushed every 60 seconds. By default, there's no journaling. How can one claim full consistency if the former two points are true?
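
To put it in code (a sketch assuming the pymongo of that era; names are made up), durability is something you have to ask for explicitly:

    # Sketch: opting into durability, 2011-era pymongo.
    from pymongo import Connection

    db = Connection().myapp

    # Default: acknowledged by RAM only. A crash inside the flush window
    # (up to 60 seconds) can lose this write.
    db.orders.insert({"total": 99})

    # Block until the write is forced to disk (getLastError with fsync;
    # with --journal enabled, this waits on the journal instead).
    db.orders.insert({"total": 99}, safe=True, fsync=True)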

Don't get me wrong, I love Mongo. I'm building a web app backed by it. But the marketing talk is grating, which is what this post nails.

-----


I think those two issues are orthogonal to consistency. In ACID, consistency and durability are two different letters and CAP doesn't even mention durability. Are you referring to another definition of consistency?

-----


How is flushing a write every 60 seconds orthogonal to consistency? If there's a server crash between the write to RAM and the subsequent flush, the data is lost, is it not? How do you guarantee the data is there in that case?

-----


That would mean the data set was not durable; it doesn't speak to consistency at all. DB consistency is about transaction ordering: transaction 1 always comes before transaction 2, but 2 may exist or not as it pleases. Transaction 1 must be present if 2 is present.

-----


MongoDB is partition tolerant and consistent.

You can never have multi-master with MongoDB, which is required for "always writable." However, it can be readable. Our CEO did a series of posts on distributed consistency, see http://blog.mongodb.org/post/475279604/on-distributed-consis....

-----


If a slave can continue serving reads whilst partitioned from a master that continues to accept writes then you cannot guarantee consistency. If a slave cannot serve reads when partitioned then you aren't available. If a master cannot accept writes when partitioned then you aren't available. See this excellent post from Coda Hale on why it is meaningless to claim a system is partition tolerant http://codahale.com/you-cant-sacrifice-partition-tolerance/.

One love.

- Lil' B

-----


I interpreted "what is sacrificed?" as asking which letter of CAP MongoDB was giving up. Coda's article actually explains exactly the tradeoffs MongoDB makes for CP:

-------------------

Choosing Consistency Over Availability

If a system chooses to provide Consistency over Availability in the presence of partitions (again, read: failures), it will preserve the guarantees of its atomic reads and writes by refusing to respond to some requests. It may decide to shut down entirely (like the clients of a single-node data store), refuse writes (like Two-Phase Commit), or only respond to reads and writes for pieces of data whose "master" node is inside the partition component (like Membase).

This is perfectly reasonable. There are plenty of things (atomic counters, for one) which are made much easier (or even possible) by strongly consistent systems. They are a perfectly valid type of tool for satisfying a particular set of business requirements.

-------------------

-----


In a replica set configuration, all reads and writes are routed to the master by default. In this scenario, consistency is guaranteed. (You can optionally mark reads as "slaveOk", but then you admit inconsistency.)

This does sacrifice availability (in the CAP sense), but I haven't heard anyone claim otherwise.
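
In driver terms (a sketch assuming the pymongo of that era; host names are made up):

    # Sketch: master reads vs. opted-in slave reads, 2011-era pymongo.
    from pymongo import Connection

    # Default: everything goes to the master, so reads see every
    # acknowledged write.
    master = Connection("db-master.example.com").myapp

    # slave_okay: reads may be served by a replica, which buys
    # availability at the price of possibly stale data.
    replica = Connection("db-slave1.example.com", slave_okay=True).myapp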

-----


"In a replica set configuration, all reads and writes are routed to the master by default. In this scenario, consistency is guaranteed."

One would hope that reading and writing a single node database was consistent. This is table stakes for something calling itself a persistent store. Claiming partition tolerance in the above is the same as claiming availability. The former claim has been made. Rest left as exercise for the reader.

Namasté.

- Lil' B

-----


If a slave is partitioned from its master, it won't be able to serve requests. (Unless the request is a read query marked as "slaveOk", in which case you admit inconsistency.) I highly doubt anyone would claim otherwise.

-----


Which C is lying? The CEO or CAP? I leave it to the pure of heart and the late-night sysadmin to decide.

-----


Lil' B, stop trying to outsmart us all, MongoDB works, supports JSON, and autoshards.

-----


Thank God. At least it's not Cassandra. :)

-----


The implication is that the people for whom eventual consistency is not an option will never reach a data set size or availability requirement that'll require them to use replication and experience the lag (and eventual consistency) involved.

-----


That's not completely true. Take a look at Google's Megastore paper: http://www.cidrdb.org/cidr2011/Papers/CIDR11_Paper32.pdf

James Hamilton has a good summary of the ideas in the paper: http://perspectives.mvdirona.com/2011/01/09/GoogleMegastoreT...

-----


I think you're viewing my statement out of the necessary context.

-----


Among the major features touted are auto-sharding and replica sets. I don't know if the implication is that it's only for web apps/websites that won't need those.

-----


In the sharded case, at any given moment each object will still live on exactly one replica set, which will have at most one master. You can do operations (such as findAndModify http://bit.ly/ilomQo) that require a "current" version of an object because all writes are always sent to the master for that object. You can also choose to accept a weaker form of consistency for some reads by directing them to slaves for performance. This decision can be made per-operation from most languages.

As for trade-offs: Relative to a relational db, there is no way to guarantee a consistent view of multiple objects because they could live on different servers which disagree about when "now" is. Relative to an eventually consistent system, you are unable to do writes if you can't contact the master or a majority of nodes are down.
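
A sketch of that per-object guarantee (assuming the pymongo of that era; collection and field names are made up):

    # Sketch: findAndModify routed to the object's master, 2011-era pymongo.
    from pymongo import Connection

    db = Connection().myapp

    # Atomically claim the next unprocessed job. Because every write for
    # a given object goes through its shard's single master, two workers
    # can never claim the same document.
    job = db.jobs.find_and_modify(
        query={"state": "new"},
        update={"$set": {"state": "taken"}},
        new=True,  # return the updated document
    )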

-----


Sorry, I respectfully disagree. If the market will bear it, you can charge for a service before "mass adoption".

-----


Not when the service suffers from a chicken-and-egg problem.

Why would someone pay money to list their job on a site with no traffic?

Why would someone go to a site like that if you only have a dozen jobs?

This could be something good... but they are shooting it in the foot by being greedy.

-----


To be clear, greed certainly isn't our intent. As Dave (djbrowning) wrote, we're concerned about the quality of the content on the site.

We're also giving away coupon codes right now (see my post below) to drive job posters to the site.

-----


If quality was your only concern, you'd just charge a one-time $5 fee. So I'd say greed is at least somewhat involved.

And yes, giving away coupon codes is all well and good... but then you run into the quality problem that you use as an excuse to charge from the start.

Don't get me wrong, there is nothing wrong with greed...provided it doesn't cause you to kill your business before it gets the chance to get off the ground.

-----


Businesses exist to earn money, and the best way to convince others of the value of your service is to charge for it. You can argue that it isn't good business sense, but calling it greed is a judgment call that you have no evidence for.

-----


I think a good strategy is to get your minimum viable product out as soon as possible and call it a beta. Don't charge for the beta and provide plenty of warning to existing clients by signalling early on your intention to charge for some/all features when you are out of beta. After a private beta period, this is how I'm planning to handle Mighty CV, a resumé building app with hacker leanings that I've been working on. I'm looking for private beta users to kick the tyres a bit, so if you feel inclined then you can sign up for the private beta at http://www.mightycv.com.

I always remember being impressed with the way Heroku did things in the early days. After beta feedback it must have become clear to them that it made sense to rewrite from the ground up. This left them with a beta platform which they gracefully continued to support, renamed herokugarden, whilst also rolling out the paid-for service. They then provided plenty of info on how herokugarden users could migrate to the new Heroku platform for free too. I'm sure they learnt a lot early on about what direction they needed to take the Heroku platform. Anyone remember the web-based code editor? Without the early feedback from beta users perhaps they would have pushed more in that direction instead of changing course towards the Heroku we all know and love today.

-----


I think the $75 per month fee is actually a smart idea. It gives the founders a lot more leeway in finding and sustaining quality traffic for the postings.

-----


What about manual approval until the site has momentum?

-----


We (Dave & I) both like this. Thinking more about it. Thank you.

-----


Nothing is wrong with being greedy/wanting money for your effort. Just make sure to find an answer for the above-mentioned chicken-and-egg problem.

-----


It's not greedy, it's just not a good idea to try and capitalize on a service that isn't running on all cylinders yet. Having said that, there's definitely a chicken-and-egg problem. I would worry about getting users before starting to charge for the service. Once a larger number of members has been obtained, thinking of ways to monetize it should be fairly straightforward.

-----


The typical employer expects to pay to post a job listing. Yes, more traffic is better, but I bet this site will start getting major traffic within a month. Being a specific niche, I also predict it will get some major Google SEO juice before too long. So that egg better start running or the chicken will catch up!

-----


I don't think the chicken/egg problem applies here. It applies to e.g. dating sites because there are other options (e.g. other dating sites, bars, etc.). In this case, a ton of people want to work remotely and there is no central place to find that. Now I know of one place, so I'll definitely be checking it often.

-----


Well said on both posts, sir.

I doubt running this site costs so many resources that they NEED to be charging right now.

Another option would be to leave the charge-now link and go out to Craigslist, Dice, etc. and be like, hey! Want to have your job listed on our site for free?

-----


Anecdotal evidence: http://functionaljobs.com.

Similar great idea, but no new posts in almost a month.

-----


Um.... and they cost a metric ton more than we do...

Someone said something about greed?

-----


They might seem a lot more expensive, but when you're looking to hire a new employee (an expense on the order of tens or hundreds of thousands of dollars), the difference between $75 and $500 to advertise the position isn't really that significant.

-----


In addition to what csomar said, for a small business or someone looking to hire an independent contractor, I think many advertisers would consider $425 to be a huge difference. But I think a lot of potential advertisers would consider $75 significant in the first place.

But for me, charging advertisers a significant fee is valuable to establish that they're serious about hiring. That's the rationale behind the fee structure on my site http://WheresTheRemote.com/ . To me, an advertiser paying a fee of something like 1x or 2x the hourly rate they're advertising (for an independent contractor or employee, respectively) is a token of their sincerity about wanting to hire and pay the rate they advertise, which the site requires them to include in the ad. Conversely, unwillingness to pay such a fee makes me concerned that they would just waste the time of the job seekers visiting my site and I don't want to publish the ad, since quality is an important goal for my project. Of course, having a decent amount of traffic would help establish the value proposition for advertisers to pay such a fee.

-----


You're assuming here that this source will bring enough traffic to be your only source. Otherwise, you'll need to advertise on many sites, and at that point the price makes a difference.

-----


I think his point is that the market won't bear it.

-----


This is certainly the reason I went to see it. However, not only was I underwhelmed by the regurgitation of the Pocahontas story, I was extremely underwhelmed by the 3D. The only effect that made me think the film was "realistic" was the flies that kept flying around when they were in the forest.

Avatar did exactly the opposite of what it intended to do: it turned me off of 3D movies, not on to them. Sure, it made the money it did, but at least for me, it didn't do anything to advance the technology or promote it.

-----


Yeah, that was the exact same effect Avatar had on me. I'm really glad I saw it in 3D and now feel no desire to see another movie in 3D for at least a couple of years (when hopefully some new neat tech will have come along which I'll feel compelled to check out).

-----


That's throwing out the baby with the bathwater.

Go to a real IMAX theater and watch a real IMAX movie. They tend to be science documentaries, like Hubble 3D. Compared to that, Avatar just seemed fuzzy and out of focus to me.

-----


On an unrelated note, that whiteboard graph disturbed me a bit. Does every YC startup hope to get bought out? Why would that even be a goal or something to aspire to?

-----


It says "liquidity", which can imply IPO.

-----


It seems to be part of the definition of a startup that the end goal is either acquisition or IPO. PG has said (in an interview, might have been on Mixergy) that YC can't make a profit on a startup unless it has an exit.

-----


Well, 0 is slightly off. He extended the Bush tax cuts, so he quite literally cut taxes for 100% of America and not the talking-point number of 95%. Also, failing to raise taxes on the top 2% puts a huge tarnish on the rest of his achievements. Add this to that tarnish. Sorry, but he failed on his most important mandates. No public option, successful blackmail by the Republicans, and now, most likely, no net neutrality.

-----


The Republicans were not going to allow Obama's preferred tax cut version to get through Congress, and because the mid-term elections swelled Republican ranks, better prospects in the new year with the next Congress were even more unlikely. But don't take my word for it. That's what Bill Clinton said, along with endorsing the tax cut deal by saying he didn't believe there was a better deal out there. See here: http://www.youtube.com/watch?v=DYHDPxohkrc

He failed on his most important mandates? In the middle of a historic financial catastrophe which nearly triggered a full-blown depression (the freefall was stopped successfully under his watch, and the stock market has now largely recovered), he got a historic healthcare law passed, something that had been attempted, and failed at, for more than 50 years. And because it's only 90% instead of 100% of the desired outcome, it's a failure? Why not give credit for what it is, and the opportunity for improvement it provides? Social Security, one of the most important social safety nets we have, was not what it is today when it first started either. This doesn't even get into passing the biggest financial reform laws (against powerful lobbies) since the Great Depression. No, I see it another way. This president has already had a very busy two years.

-----


Better deal, maybe not. He could've and should've forced their hand by calling out those Republicans who were pulling for the rich and against the common man. The same must be said for the assholes who are voting down the 9/11 responders bill. Why should that fall on the shoulders of Jon Stewart? The young people want Obama to do this. His failure to do so has been the greatest disappointment to me.

-----


Something tells me it cut fairly deeply for Obama to extend all the tax cuts, but you have to look at the larger picture. The U.S. economy has suffered a severe and historic financial fall. To recover from that, it's theoretically more favorable to provide citizens with tax cuts than not. Add to that the hand Obama was dealt at the mid-term elections, which tilted the scales of Congress in the Republicans' favor, and you see much more pressure to compromise in order to govern for progress of the country rather than draw lines in the sand for gridlock.

I'm not saying Obama is Jesus in the flesh, and that he never gets anything wrong. At the same time, I don't think it's helpful for people to sit back and criticize with broad strokes. Have you called your representatives about the 9/11 responders bill? Or net neutrality? Have we, the tech experts that hang out on sites like HN, taken concrete actions to make our voice heard on things like net neutrality, or do we just wait until the verdicts are nearly in from politicians who don't understand the ramifications as well as we do, and then complain when it's not what we want?

-----


Wrong tax cut: http://www.nytimes.com/2010/10/19/us/politics/19taxes.html.

-----


You're right, but it's not like he didn't try. Everything you cite is something that he and the Dems pushed for WEEKS before "caving" because it wasn't going to pass unless they compromised with the Republicans.

Again, I agree with the sentiment, but not with the blame assigned.

-----


Compromising doesn't mean taking their version of the compromise. Compromising also means doing the math. In what world does 13 equal 24?

That's what I blame the democrats for. They're shit at compromising.

-----


