The end of NoSQL? FathomDB launches scalable relational DB (fathomdb.com)
49 points by amirnathoo on March 30, 2010 | 66 comments



Tacking "The end of NoSQL?" onto the title is karma-whoring via exploiting the silly database holy-warring we've seen on HN lately.


Accusations of karma-whoring are karma-whoring ;)


It feels a little bit like the whole Erlang joke all over again.


No, it's a demand for people to stop douching up the site with trolling titles.


Recursion, I love it.


HN has fixed that, can only vote up once. Damnit, genius!


Here we go again. The AltDB movement is not just about scaling. While one of the goals is to make things scalable, it's not THE goal.

BTW AltDB is my term for the NoSQL movement. It implies much less.


Care to define the goals? Apparently it's not about scaling. It's not about SQL, because there's no reason you can't support a SQL-style query language.

So I do seriously want to know what's left... We're always on the lookout for the next feature to add ;-)


To me, it's about reducing the impedance mismatch between your data model and the implementation of that model in the database schema. Some data work better in a relational model and others work better in a document model. The idea of one ending the other seems preposterous to me.

Congrats on the launch though!


> To me, it's about reducing the impedance mismatch between your data model and the implementation of that model in the database schema.

The whole point of that "impedance" in the relational model is for people to think about their data and describe it in a program-independent way: years of experience in the field is what led to its development and what contributes to its continued relevance. Saying that's a problem with the relational model is like saying that design patterns are flawed because they describe commonly used OO designs.

> To me, it's about reducing the impedance mismatch between your data model and the implementation of that model in the database schema.

I'm not sure what sort of impedance mismatch could exist between a program and a relational model but not between a program and a document model. The latter is trivially implementable in terms of the former.
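To make that point concrete: a document store can be modeled, under simple assumptions, as a single relational table of (doc_id, key, value) rows. A minimal sketch using sqlite3 as a stand-in engine (table and function names are hypothetical):

```python
import sqlite3

# A document store modeled as a relational "entity-attribute-value" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (doc_id TEXT, key TEXT, value TEXT)")

def put_doc(doc_id, doc):
    # Flatten the document into rows: one row per field.
    conn.executemany(
        "INSERT INTO docs VALUES (?, ?, ?)",
        [(doc_id, k, str(v)) for k, v in doc.items()],
    )

def get_doc(doc_id):
    # Reassemble the document from its rows.
    rows = conn.execute(
        "SELECT key, value FROM docs WHERE doc_id = ?", (doc_id,)
    ).fetchall()
    return dict(rows)

put_doc("user:1", {"name": "Ada", "lang": "en"})
print(get_doc("user:1"))  # → {'name': 'Ada', 'lang': 'en'}
```

This ignores nesting and typing, but it shows the direction of the reduction: document-on-relational is trivial, which is the commenter's point.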


Take a look at my comment again, I think you think I'm saying something I didn't.

The whole point of that "impedance" in the relational model is for people to think about their data and describe it in a program-independent way ....

The relational model is hardly the only way to reason about data in a program-independent way.

... years of experience in the field is what led to its development and what contributes to its continued relevance.

I agree, never said otherwise.

Saying that's a problem with the relational model is like saying that design patterns are flawed because they describe commonly used OO designs.

I didn't say there's a problem with the relational model.

I'm not sure what sort of impedance mismatch could exist between a program and a relational model ...

I never said the word "program," you did. I said "data model." You're conflating the data model with the program code (or maybe you think I am, can't really tell). They are not the same thing.

http://en.wikipedia.org/wiki/Abstract_data_type

All I said is that there are some domains for which relational models make sense and there are others for which document-oriented (or other "NoSQL") models make sense.


Cool answer - thanks. I guess some of this is about better ORMs, some of this is the schemaless idea, and probably the real answer is a combination of both.

I worry that object databases were the last attempt to solve the impedance mismatch...


Of course, this is a very old idea (predating the invention of relational databases, even) -- it just gets reinvented every decade or so, with a fresh helping of media hype.


Of course, this decade, we actually have OOP, so OO databases make a lot more sense.


We had OOP last decade, and the one before that too.


It really gets old, doesn't it?

I've seen so many market changes in my career and, yet, I still write code in a text editor with a terminal window on its side. All of them under a nice windowing environment.


In my opinion, one of the goals of the AltDB movement is building software that fits in different parts of the CAP Theorem triangle. While relational databases are REALLY good (usually) at providing consistent data, things start to fall apart when you set up geographical replication. In the past, consistency seemed to be the most important thing for all applications. While it still is desirable and important, some users are beginning to prefer availability and partition tolerance over consistency.
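The tradeoff can be made concrete with quorum replication: with N replicas, requiring W write acks and R read responses such that R + W > N means every read quorum overlaps every write quorum, so reads see the latest write — at the cost of becoming unavailable when too few replicas are reachable. A toy illustration (not any particular product's implementation):

```python
# Toy quorum replication: N replicas, write quorum W, read quorum R.
# R + W > N ensures read and write quorums overlap, giving consistent reads.
N, W, R = 3, 2, 2
replicas = [{} for _ in range(N)]  # each replica maps key -> (version, value)

def write(key, value, version, up):
    # 'up' lists reachable replica indices; fail if we can't reach W of them.
    if len(up) < W:
        raise RuntimeError("not enough replicas up: unavailable")
    for i in up[:W]:
        replicas[i][key] = (version, value)

def read(key, up):
    if len(up) < R:
        raise RuntimeError("not enough replicas up: unavailable")
    # Return the highest-versioned value among R replicas; the quorum
    # overlap guarantees at least one of them saw the latest write.
    candidates = [replicas[i].get(key, (0, None)) for i in up[:R]]
    return max(candidates)[1]

write("x", "hello", 1, up=[0, 1, 2])
print(read("x", up=[1, 2]))  # → hello
```

Shrinking W toward 1 buys availability for writes but lets reads go stale: that slider is exactly the consistency-versus-availability choice described above.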


I am reminded of the Judean People's Front from Life of Brian. Henceforth we shall call you AltDB, and we shall add geographical replication to your list of demands :-)

More seriously, we'll see what we can do to meet that list with our tech!


I don't know if this is a goal of the NoSQL movement in general, but I use Redis rather than something like MySQL because I need speed. We're using Redis to route web requests, and we can't add 50+ ms onto every request on top of the >300 ms request time we have already from the application itself.


If any DB is taking 50ms to reply (SQL, NoSQL, even MS Access!), something is seriously wrong. If you're using Rails, I'd fire up NewRelic or something like that and check out what on earth is going on.

Of course, 300ms is also way too long. Do you mean under stress-testing, with request queuing? 300ms is a mind-boggling quantity of CPU cycles...


If we were to use MySQL, we'd be using multiple complex queries to get the job done. From what I've seen in New Relic on Heroku, queries have generally taken about 40-50 ms total for one request on PostgreSQL.

We're building a general purpose method of running Ruby applications, so 300 ms is certainly a possible request time (including database, processing things, and rendering a view) for some applications that we may be running.


I think your performance here stems more from configuration than from MySQL vs Redis. If you're using Redis in-memory, if you dedicate the same amount of memory to your database, you should get roughly comparable performance between MySQL/PostgreSQL and Redis. The under-the-hood differences if you're in memory just aren't that great, particularly because modern machines have such ridiculously powerful CPUs. Yes, it sounds like you'll have to do table join(s); yes, the joined tables need indexes; yes, you have the SQL parsing overhead; yes, it's a B-tree rather than hash-based indexing. But I'd be very surprised if you ended up with the order-of-magnitude differences you're suggesting.
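For a rough sense of scale, even an embedded SQL engine does an indexed two-table join in well under a millisecond per lookup once the data is in memory. A sketch using sqlite3 as a stand-in for MySQL/PostgreSQL (the routing-table schema is invented for illustration):

```python
import sqlite3
import time

# Hypothetical request-routing schema: hostname -> app -> backend address.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routes (host TEXT PRIMARY KEY, app_id INTEGER)")
conn.execute("CREATE TABLE apps (app_id INTEGER PRIMARY KEY, backend TEXT)")
conn.executemany("INSERT INTO routes VALUES (?, ?)",
                 [(f"app{i}.example.com", i) for i in range(10000)])
conn.executemany("INSERT INTO apps VALUES (?, ?)",
                 [(i, f"10.0.0.{i % 250}:8080") for i in range(10000)])

start = time.perf_counter()
for _ in range(1000):
    backend, = conn.execute(
        "SELECT a.backend FROM routes r JOIN apps a ON a.app_id = r.app_id "
        "WHERE r.host = ?", ("app42.example.com",)
    ).fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000 / 1000
print(f"{elapsed_ms:.3f} ms per indexed join lookup")
```

This says nothing about network round-trips or a mis-tuned server, which is where 50 ms queries usually come from; it only illustrates that the join itself is cheap when the indexes exist.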

One of the things we're working on with FathomDB is ways to identify slow queries and help figure out what to do about them. If you're game, you can contact me with the queries and the table structure you were using in SQL and I'll try to figure out what's going on!


I am sorry - but the subject here misses the point. Just having someone else who can host a cloud MySQL database for you in what appears to be a single location doesn't solve the problem that people have running high-availability global internet services.

The challenges with MySQL are that scaling it effectively to a multiple-master, multi-datacenter solution is almost impossible. Especially if that's over a wide area network.


The new technology isn't based on MySQL. It's a fully distributed, no single-point-of-failure, scalable relational database. You can choose fully managed hosted traditional MySQL, or we'll be hand-picking early customers for the scalable technology. Contact me if you're interested.

We're starting with single-location, but we can certainly support cross-datacenter configurations. CAP seems to dictate that this will either require acknowledgement from the remote datacenter (choose consistency) or a window of loss (choose availability / partition tolerance).

Until we see customers regularly hosting their webservers in multiple datacenters, we're going to stay focused on optimizing for a single location. But maybe my viewpoint is out-of-date... How many datacenters are you currently distributing your webservers across?


Your website FAQ question 1 says it's standard MySQL. In another comment here you say: "a new database which we've built ourselves, which we've announced but isn't publicly available yet."

If it's standard MySQL, then you hit the same issues as if I were to host MySQL myself. If it's your own database you've built yourself, but is not available yet, then I am not going to trust it with my data.

In what I do I often need to have a high-availability website, whereby a loss of connectivity, flood, power issue etc in that data centre can cause outages. Most of the time on our critical projects we have one or two physical data centres with EC2 for testing and disaster recovery.

Although being based in Europe - we do a lot of work in SE Asia. The connectivity between Asia and Europe (often you end up with 400ms+ RTT and 10% packet loss) means you have to host local websites in the region, and even in country (you might be able to drive between Malaysia and Singapore, but the internet inter-connectivity is terrible).

If you have any kind of application that ever needs high levels of read-write traffic you quickly end up needing to distribute the database in some way, and for this type of application, it's an awful lot easier to build upon Cassandra and do a bit more work in the application than to try and get MySQL or PostgreSQL to act as a multi-site, multi-master database over sub-optimal links.


There's a lot we need to do... we just put up the page with the DEMO video today, everything else isn't yet updated to reflect the announcement.

This is why we're pursuing both products: if you have a hair-on-fire problem, and are willing to spend the time to investigate and become comfortable with a new DB, then you should use the new database. If you're not there yet and MySQL works for you, we'll run it for you so it's less painful. You've chosen Cassandra because you're obviously in the first group; willing to learn about new databases because of your Asia/Europe pain. You're invested in that, so I'm not going to try to persuade you to change your mind; we're looking to help those that have the same problem, but haven't already locked themselves in to non-SQL databases like Cassandra.


But maybe my viewpoint is out-of-date... How many datacenters are you currently distributing your webservers across?

Heh, heh, heh. I work for Google. Need I say more?

That said, we like to insist on N+2 globally or N+1 per region. Meaning that if it takes N data centers to host a specific service world wide, we need at least N+2 data centers. And if it is a global application we insist that every region have at least one more data center than needed. (If 2 go down then we can spill across regions, but we'd like to avoid that for latency reasons.)

Failover from one data center to another usually is fairly automated. Which means that we are keeping data mirrored on a constant basis.

Obviously we're not in your target market. But my suspicion is that as cloud technology becomes more broadly used you should expect variations on our approach to become more common. And therefore you should at least keep the needs of multiple data centers in your head.


Very interesting - obviously your employer is considerably ahead of the curve compared to where the rest of the market is right now!

Care to introduce us to the right person on the AppEngine team? Imagine AppEngine with full SQL support...


Talk to the appscale folks. (Search "appscale".)

Google almost certainly isn't going to swap out their appengine stuff for your stuff. However, other folks (especially potential competitors) are potential users.


Consistent, Available, and Partitionable on the fly. Choose any two.

Each choice is justifiable for different applications. But you have to choose. Anyone who thinks they can make the choice for me and then tries to tell me that they have met all my possible data needs is selling snake oil.


I think you mean Partition Tolerance. I would ask you whether you've actually read the proof, but that question seems redundant.


So, ad hominem attacks aside, which tradeoff did you make for your product, or did you find a way to sidestep the problem altogether?

I mean, you're announcing this thing like the 8th world wonder ("The end of NoSQL") and seem to have no problem comparing yourself to Oracle, no less.

Do you have anything to show for it? Or at least a date when you will show us something?


The reason why I mentioned Oracle in the demo is precisely because of the whole CAP debacle. So many people talk about CAP as if the SQL database was itself an impossibility, much less Oracle's scalable offering - but you can pick up the phone and an Oracle salesperson will sell you the physical device that you're implying can't exist.

So, if somebody misquotes the CAP theorem, it's an indication to me that perhaps the thread isn't going to go anywhere. I'll point them to the place where they can read about it (the proof was where the CAP theorem was formalized, not Eric Brewer's original presentation), and move on. In this case, that wasn't fair to btilly, who just made a typo.

So, as I said in the presentation and elsewhere here, if you have a hair-on-fire problem with scaling your database, and are willing to be an early customer of our new database, and would make a good reference customer in a few months, we'd love to work with you today. Otherwise, you'll have to wait till we open it up more widely.


> you can pick up the phone and an Oracle salesperson will sell you the physical device that you're implying can't exist

Bullshit, and I will elaborate.

Oracle has two "scalable offerings:"

RAC relies on a single large SAN to "scale."

Exadata relies on super-fast interconnects to get multiple machines to look more like a single huge one, which has the obvious speed-of-light limitations as well as the price one.

Neither approach is the kind of scale out with commodity hardware that something like Cassandra gives you.

Invoking the mythical "pay Oracle enough money and they will make it scale" mantra is frequently done by people who either don't know better or are deliberately muddying the water, but that doesn't make it right.


Just because Oracle isn't using the exact same techniques as Cassandra, that doesn't make Oracle's ability to scale fictitious.


I see you aced Straw Man 101.

Did you not understand what I wrote, or do you think that "large SAN" or "fast interconnects limited to machines in the same couple of racks" count as "scaling?"


Ouch - aren't you getting a little bit personal here? I'll try to remain professional, and I'll accept the fact that you're quoting things you didn't say, as the re-write was clearer as to your intention.

When it comes to scaling, there's no rule which says you have to scale in any particular way. Many database customers consider scaling _up_ a form of scaling, and probably 99% of the world's database users will never go beyond what can be achieved on a single machine. They don't care about your pet project, about how the way it works is cooler and technologically purer. They consider databases a tool, and they don't really care how it works. I don't think too deeply about how a can opener works. I'm able to open more cans faster with a more powerful electric can opener. Even scaling up is still scaling.

There are customers with bigger needs. For some of them, SAN based scaling is just what they need; more IOPS let their database keep running and they can get on with their lives. If they need more IOPS, they add more drives to their SAN. Scaling with a SAN is still scaling.

Some customers have more complex demands, and look to Exadata or Netezza. They might start with just a few nodes, and add more nodes as their load increases. It's still scaling.

Now, we in CompSci circles get excited about scaling using clusters. That's _sexy_ scaling. Cassandra and FathomDB can both change the rules of the game, and I'm excited about what FathomDB can do here, just as I can see you're passionate about Cassandra. But let's not pretend that most customers really care about how it scales; they care about what we can do. To customers, if it looks like a duck, and it quacks like a duck, it is a duck. But when they're choosing a database, scaling is not the only requirement, and certainly scaling in a certain way is unlikely to be on their list of requirements. If you've ever read an RFP, there's a bewildering number of questions that have nothing to do with technology at all, and often the technology section is a depressingly short list. Although part of the RFP game is to try to get the purchaser to write in requirements that only you can deliver, purchasers are considerably more savvy than that.

You can jump up and down like Rumpelstiltskin arguing that yours is the only database that's _really_ scaling and the Oracle solution didn't meet the requirements, and that they should have chosen you. At the end of the day all you're left with is one person jumping up and down screaming about the rules of the game, and a customer that got what they needed and an Oracle salesperson that earned their commission.


I really don't like how you keep sidestepping the hard questions that were raised.

Instead of curling up in semantics discussions, how about simply answering a few of those? I think you could do that without revealing anything about the magic sauce of your product.

To reiterate:

* Does your system support the full SQL vocabulary, including all join types?

* Is it ACID?

* Do I really not have to arrange my data in any special way (schema-, or partition-wise)?

* Does it really scale near-linear, regardless of the workload that I apply?

* Why do you bother with MySQL-Hosting as a secondary product if your new db scales down just fine?


It might seem to you that we can answer these questions without revealing secrets, but consider that (1) I've not been answering them and (2) you're asking whether we can do what was thought to be impossible, so real answers will necessarily provide direction. Short answers will just annoy you, full answers will reveal too much, and frankly, there's little upside in replying. If you'd make a good early-adopter customer for us, contact us and we can have the discussion. But equally, I see that you haven't replied to my call for early-adopters!

One I can definitely answer: yes, full SQL vocabulary support (though we haven't implemented everything yet!)


Short answers will just annoy you, full answers will reveal too much

Oh well, a simple yes or no would be fine, really.

However, I guess it's safe to assume then, that you're not ACID and that data will have to be rearranged to accommodate your system. That's my take-home because a simple "yes" to either question would not have revealed anything.


Well, if you promise it won't annoy you: Yes, Yes, Correct, Yes (for non-pathological workloads), Choice is good


but you can pick up the phone and an Oracle salesperson will sell you the physical device that you're implying can't exist.

Well, thing is, part of your claim (if I understood it correctly) is that I just throw my data and my queries at your wonder-device and it will magically be fast and scale. You never mention any special care I must take, you even explicitly say it's not partitioning.

This is decidedly not the case with what said Oracle salesperson will sell to me. The sale will normally include an expensive support-contract and probably a dedicated DB Admin. Because operating a RAC- or Exadata Cluster is, for all I know, far from trivial. You still have to make a bunch of critical decisions and tradeoffs upfront or it will probably suck.

Moreover you're effectively saying that you solved what Amazon and Google either couldn't or didn't bother to solve with their SimpleDB and Appengine Datastore offerings.

Still sounds like quite a bold claim to me. But as said elsewhere, I appreciate that you're standing by it and will keep looking forward to what you will unveil.

Just one more thing that I really don't understand: If you have this doomsday device at your hands, then why on earth do you bother with MySQL hosting as a secondary product?


You're misreading his argument or are being deliberately obtuse. The fact that you can scale the relational model using Oracle's software, with or without a support contract or appropriate tuning measures, is sufficient evidence to invalidate the claim that many make that the relational model does not scale.

The relational model != ACID. The relational model != SQL. The relational model has nothing in it implicitly that ruins its ability to obey CAP. You are right that you will have to make some consistency tradeoffs to get partition tolerance. Again, to repeat, this has nothing to do with the relational model itself, just with the CAP theorem.

Just because it is hard to build an RDBMS that loosens ACID but scales, and much easier to build an ad-hoc non-formally-defined model like that seen in Cassandra or BigTable, does not mean it is impossible. You simply have to pay for it, in software, hardware, and consulting fees.

Nothing is stopping the open source world from making PostgreSQL or MySQL scale out other than time, energy, and perhaps a seminal paper or two out of academia that provide the foundational models necessary to implement it correctly. Odds are, they've done this work at Oracle, but it's not available for obvious reasons.


You're misreading his argument or are being deliberately obtuse.

Well, it seems you're misreading mine now.

does not mean it is impossible.

I never said that.

If you re-read my post you will see that I was merely wondering about their claim of not just meeting the goalposts set by Oracle et al but of exceeding them by a large margin. As I understood it their new system is supposedly not only maintenance-free, but also efficient regardless of the workload or schema that I, as a user, might throw at it. And all that supposedly hosted on commodity VMs in a cloud (probably EC2).

That's about as bold a claim as it gets in database-land, so if I was reading too much into it then feel free to correct me on that, but please don't suggest I said something that I never said.

To put it into perspective (again): I would think both Google and Amazon have probably also thought long and hard about this problem. A full fledged SQL Store would be a much more sexy offering than their crippled SimpleDB/Datastore solutions after all. There's also a good amount of academic research in the area, with Cassandra (Dynamo/BigTable) seemingly being what many people consider the best we can currently do in terms of linear scalability on commodity hardware - at the cost of not having SQL and no ACID.

So here comes this startup out of the blue, claiming to have solved all these things.

Yes, leaps like that happen sometimes. Google happened, too, after all. But I think my skepticism is not completely unjustified either?

Hence my question which tradeoff FathomDB chose to make, to get at least a basic idea about what to expect.


SimpleDB and App Engine are geo-distributed, but it looks like FathomDB isn't.


While that may be true I don't see how it relates. From what we know FathomDB claims linear, cost-efficient scalability over a fully featured relational database. That's quite a feat, even if you do it "only" in one location.


You're right that I did mean partition tolerance. And yes, I've read the proof, but that was about a year and a half ago.


Whatever, it was obvious what you meant -- if someone has the choice of either figuring what you mean and addressing your question, or changing the subject with defensive pedantry, and chooses the latter, it may say something about their argument.


Actually, in the distributed database field, 'partitionable on the fly' makes a lot of sense. I used this to infer the poster wasn't really familiar with the CAP theorem (which is only formally expressed in the proof), but it seems this wasn't fair in this case.

The big problem with the CAP theorem is trying to decide what on earth C, A and P _really_ mean. Sounds like I need to do a blog post...


How does it scale? After some reading it seems to be just a hosted database service with several vertical levels of scaling. What happens if Tera instance can't handle your load? Can FathomDB scale horizontally without sharding?


Sorry - it's difficult to be totally clear in 6 minutes! We have two offerings ... a fully-managed MySQL database-as-a-service which lets you grow up to the biggest server on the cloud, and a new database which we've built ourselves, which we've announced but isn't publicly available yet.

The scalable technology does horizontal scaling, and no - you don't have to shard. Of course, in some sense your data is sharded, because it is distributed across multiple machines, but we're not doing anything that you'd consider sharding in anything other than the most pedantic sense!
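One well-known way a database layer can spread rows across machines without the application ever doing manual sharding is consistent hashing. This is purely an illustration of the general idea — not a claim about FathomDB's actual technique, which isn't public:

```python
import bisect
import hashlib

class HashRing:
    """Map row keys to nodes; adding a node moves only ~1/N of the keys."""
    def __init__(self, nodes, vnodes=100):
        # Place each node at many pseudo-random points on a hash ring.
        self.ring = sorted(
            (self._hash(f"{n}#{v}"), n) for n in nodes for v in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # A key belongs to the first node clockwise from its hash position.
        i = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[i][1]

ring = HashRing(["db1", "db2", "db3"])
print(ring.node_for("user:1234"))  # deterministic: same node every call
```

From the application's point of view the placement is invisible — which matches the comment's sense in which the data "is sharded" across machines without the user doing anything you'd normally call sharding.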


This sounds like a great, very valuable service. However, as far as I can tell, this service is not fundamentally different than running a MySQL server with replication. Nothing indicates it's "scalable" in the way that say, Cassandra, is. Replication is a perfectly adequate system for many people but it has its limits, especially for a write-heavy workload.

If FathomDB is more sophisticated than a highly managed MySQL-as-a-service I would be very curious to know.

That said, many of the complaints about scaling out MySQL stem from the painful management overhead of running a replicated setup, managing backups, etc. If FathomDB can vastly reduce the cost and difficulty of doing these things it may move MySQL a long ways towards addressing many of the reasons people are moving away from it.

"The end of NoSQL" is just another piece of claptrap that I've seen recently on this and related topics. I would agree that "AltDB" might be a better term simply because it's less inflammatory and better expresses the goals of many of the diverse projects now lumped under "NoSQL".


We need to work on our messaging! We're offering two different technologies: a fully managed MySQL as-a-service, and the new scalable database technology. But they're both relational databases in the service model.

With the MySQL-as-a-service 'traditional' tech, we take care of the backups, monitoring etc. We'll offer fully managed replication in future. As you say, these basic steps will make running MySQL much more attractive.

We've learned though, that there are still problems even here. You still have to think about how big your server should be. You still have to think about what happens when you outgrow the biggest server your cloud offers.

The new technology does scaling across machines in the same way that Cassandra promises, or that Oracle's Exadata does for relational DBs. It lets you start on a shared server and grow seamlessly to the point where you're running across multiple servers. But it's still early days for that tech (after all, lots of people still think it's impossible, despite the fact that Oracle is happily selling it!), and so we're not opening it up publicly yet, whereas the standard hosted MySQL is publicly available.


The new technology does scaling across machines in the same way that Cassandra promises, or that Oracle's Exadata does for relational DBs.

Care to elaborate a little bit more on your approach?

The claim you make (linear horizontal scalability) implies you have either created your own RDBMS or patched an existing one in a really exciting way.

I'm curious about how you managed in such a short time-frame what so far only Oracle can offer (with 20 years of experience under their belt)?

I'm also curious about when your new database offering will be available for public testing? Because currently the exorbitant claims make this smell a lot like vaporware...


We're looking for early-adopter customers with hair-on-fire problems with their database that would also make good reference customers in a few months. If you fit the bill, please contact me!

Otherwise, sorry, but you'll just have to wait :-)

To put the effort into perspective: it's not a new DB from scratch - that would be a Herculean effort - just the lowest levels. It's not based on MySQL/Drizzle; it's based on a different open-source DB with a friendlier license and more hackable code. It's a new technique for scaling across machines, which is refreshingly simple, which is why you're not seeing too many details :-) It's not just Oracle that knows how to scale databases, BTW; Greenplum, Netezza, Vertica etc. have all built distributed databases (admittedly for OLAP workloads) in relatively short timeframes.


I'm quite interested in how you're handling joins in a manner that is scalable. What tradeoffs did you have to make? Do you support full join semantics, or are some types of joins not supported? Its not that much of a stretch to scale out an RDBMS not using any joins, but I can't bring myself to really take your claims seriously until you elaborate on how you handle joins (if at all).


I think that FathomDB's product is scalable to the degree that the existing database vendors are able to scale using multi-node clusters. Oracle's RAC tops out at 100 nodes. All of the other vendors (Teradata, Greenplum, etc) have some niche or secret sauce that allows them to scale vertical markets like OLAP. I don't see them scaling to Google or Facebook scale with an ACID-compliant relational database. They aren't really bringing anything new to the table other than a cheaper price point.

You bring up a very good point with the join issue. In addition, I'm wondering how they're going to scale writes. This is the bottleneck that eventually chokes RAC out, as it has to hold true to ACID principles. Even without ACID guarantees, eventually the system would spiral down into a chaotic quagmire of inconsistency as it scales. You've essentially got to have a locking and/or arbitration system that can quickly return an absolutely-positively we-wrote-this-in-a-consistent-way after a write query is executed. Never mind trying to execute MVCC transactions over a cluster of nodes.

SQL can be scaled, ACID can't. Without ACID, the warm fuzzy feeling SQL gives you isn't quite as warm and fuzzy.


(admittedly for OLAP workloads)

Well, you say that as if the difference between OLAP and OLTP was a minor thing. As far as I know OLAP is usually done on column stores with quite different properties to your regular MVCC RDBMS.

But well, I am not a database engineer. So, at least that was a response, thanks. I'll be looking forward to what you can deliver.


a different open-source DB with a friendlier license

That would be postgresql?


A tad off topic but... Does anyone actually know how to get a FathomDB account? I thought, as a long-time Rackspace customer using the promo code "RACK" they promote on their site, it would be relatively simple. I've tried to register three times over the past few months, but I never hear a peep from FathomDB...


Not sure whether it's the end of NoSQL, but we're certainly working to deliver the promises of NoSQL while retaining the power of SQL.

To borrow from a much better man than I... This is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.


You know... Most of these nosql databases feel oddly nostalgic for me. It's the kind of stuff mainframe folks were using in the 70's with some clustering thrown in.


What? IMS?


What? IMS?

That would be XML. I'm guessing: http://en.wikipedia.org/wiki/CODASYL


Is this drizzle?


No - the scalable database is our own technology. We looked at writing a MySQL/Drizzle storage engine, but there are a lot of pieces that are different that stretch up through the stack, and I was a bit concerned over the MySQL licensing now that it's part of Oracle.

In future we could build a MySQL/Drizzle storage engine, and we'll probably run Drizzle-as-a-service once we start to see customer demand.



