Hacker News | bradleyjg's comments

Code is human readable text. Authors should craft it with care.

The linter people are hypercorrectionalists of the type that would change “to boldly go where no man has gone before!” to “to go boldly” because it’s a split infinitive.


Inflation is down and the economy is doing better, but what's the point when poverty rates just skyrocketed to over 50% of the population [1]? Will it "trickle down", like it never did anywhere?

This seems not true. The poor in countries with high GDP per capita are much better off than in countries with low GDP per capita. They are also better off than the poor were in earlier times, when those countries had lower GDP per capita. So some kind of trickling down must have happened.

It might be more convenient for your sense of moral indignation if it were otherwise but facts are facts.


This is some kind of mental gymnastics right there. I showed you poverty is rising, and we're supposed to take that as meaning people are richer for it?

Show me a single example of a country drastically slashing public spending and its citizens living better afterwards.


I found that being on the US east coast on teams spread across Europe and the US had the advantage of being able to touch base with anyone fairly easily, but the disadvantage of never getting that natural quiet period.

I don’t see how US east coast <-> APAC is really feasible on any kind of regular basis.


It depends if you're a morning person or not. I used to manage teams in India & China from the east coast and it was usually a-ok having 2-3 hours of overlap each morning. It's FAR harder from the west coast and there's really no way around having evening (US) meetings since the alternative is asking the remote team to be in the office quite late.

That said, this is also why many Asian teams get accustomed to working US hours -- essentially "2nd shift" -- to accommodate west coast overlap.
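The overlap arithmetic in this subthread is easy to check. A minimal sketch using Python's zoneinfo (the date and cities are my illustrative assumptions; a January date is used so the US is on standard time):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# What a 9:00 New York winter morning looks like in a few other offices.
ny_9am = datetime(2024, 1, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))

conversions = {}
for tz in ("Asia/Kolkata", "Asia/Shanghai", "America/Los_Angeles"):
    conversions[tz] = ny_9am.astimezone(ZoneInfo(tz)).strftime("%H:%M")
    print(tz, conversions[tz])
```

An 8-10am east-coast window lands at 6:30-8:30pm in India and 9-11pm in China, which is consistent with the "2-3 hours of overlap each morning" above, at the cost of a late evening for the remote team; from the west coast those same hours push three hours deeper into the night.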


I think daily/weekly you'd shift work hours (that's what I do when I have a meeting like that). Global meetings are always a pain, though I'd say the worst meetings are the ones that US-based folks plan in their local timezone, as opposed to UTC, because those always end up at 3am in Australia (so I bow out of those).


We have US/UK/Shanghai teams that for years we tried to combine into one big distributed team, which never worked well. Usually big meetings happened twice, once for US/UK and once for UK/Shanghai, and on the rare occasions they wanted one big meeting it was centered on UK time, so US people had to get up really early and Shanghai people were on really late.

Then there was one time when both of us in the US happened to have taken the next day off, and we're both naturally night owls, so I suggested just for that one time moving the meeting 6-7 hours earlier on a different day - we'd both still be awake, it would be just before the start of the UK workday, and just before 5pm for Shanghai, so for once they didn't have to get online late.

We didn't do it because the UK group didn't want to get up an hour earlier.


We had occasional interlock meetings, briefings, and webinars, but it pretty much meant that from the east coast you were doing them at 10 or 11pm. I'm sure senior execs did them more frequently. We didn't really have people on the west coast, especially toward the end of my time there.

Up to 5 or 6 hours is a reasonable range for routine interactions.


I did US East Coast / Shanghai for one project. We got really good at asynchronous messaging, and that went a long way.

Still didn't fix the 1-workday latency for discussions, though.


A lot of companies have definitely gotten better at asynchronous communications over the past few years. Not perfect for everything but if you can keep the weird hour calls to a dull roar it's probably more manageable than even 10 years ago.


There are a lot of different ways to be miserable in this world, but one in particular that seems to hit a lot of people in our industry is not knowing what you want.

So many people followed a path laid out by others for so long: study hard, do extracurriculars, go to a good college, study hard again, do internships, get a job with a prestigious company, work towards promo.

At some point, generally in late 20s but can be before or after, many such people realize that they have if not everything they ever dreamed of then at least a lot of what they worked towards. But they still aren’t happy because in all those years they never figured out what kind of people they actually were—what it is that would make them happy.

We all need money, but do you need that much money? If yes, then ok. Try to think about what you need the money for in the next stupid meeting about story points. But if no, then you have options.


I think it's not as simple as thinking there is this one kind of person you are, that it's a constant, and that you need to figure out who that is and how to be that person to be happy.

People change a lot. There's no golden idol person in my future that would make me happy if only I could break free and become that person. That person changes constantly.

I think we are unhappy because we've been fed a lie that that person is just over the horizon and you're not living your full life properly unless you become that person. The problem is that that person changes constantly. It's always out of reach and who's to say you will be happy once you get to that point?

A change in mindset is the cure. But a change in mindset does not cause economies to scale, shareholders to get value, or the wheels of industry to move. We are always reaching for something we think will save us.


What gets me more is the constant narrative of the pursuit of happiness as the end goal, which drives people into painting the perfect future in their imagination where their relationships have zero friction, they enjoy every day in the office, etc.

Because you just can't think "what would make me happy" without involving people who aren't even around in the present moment. In fact, if somehow someone manages to come up with a way to be happy by ignoring their environment, then that person deserves my sympathy, because the goal is not to be "happy no matter what goes on around you": that requires detachment.


How many of those are getting through the lottery today, given the flood of IT staffing agency applications?


The more fundamental issue is that it’s legally a temporary visa program even though no one uses it that way. The law even acknowledges this with the dual intent doctrine.

If we are going to reform things we might as well scrap the visa altogether and roll whatever changes are needed into the employment based resident visas, including if necessary adjusting the numbers.

Of course that won’t happen because Congress doesn’t pass major overhauls of anything anymore. But if we are going to dream, might as well dream big.


This is true, but it’s also how basically every country runs their visa programs. They initially admit foreigners in a temporary status and after a few years grant a permanent status. This is how Mexico residency works in general, and even how marriage-based residency works in Colombia. I don’t really see it as much of a problem.


The problem is there is no official path.

Contrast a new marriage. The government issues a conditional permanent residency and then two years later an application can be filed to remove the condition. It’s a similar story with the investor’s visa.

The H1B doesn’t lead to anything. It’s designed as if the alien is just going to leave after 3 or 6 years. Any kind of accommodations between it and the EB process are afterthoughts.


Yeah, personally I think they should keep the "temporary" aspect, the condition on employment, and the lottery, but remove the employer lock-in. So an employer has to initially sponsor an H-1B, but they damn well better be paying competitive wages and benefits, otherwise they're not going to be able to keep the H-1B holder. Once the H-1B immigrant enters the country and works one day, they personally can renew, transfer their employment, and make all the paperwork decisions.

There would probably need to be some tweaks (e.g. an employer loses sponsorship privileges if they can't keep the people they sponsor), but I think that's the right path.


You don’t want to be hard on the outside, soft on the inside. Especially because you probably aren’t that hard on the outside!

Defense in depth.


Still too small for that. It’s barely bigger than an amuse bouche.

I think this must be a tasting portion, maybe a cooking school thing or similar.


Not amuse-bouche. To me it looks like a standard-sized primo on a prix fixe menu.


Or just use less water to cook the pasta? What’s the downside?


First, temperature dynamics and being able to reliably determine doneness by time. Boiling water is nature's thermal measuring stick, and if your water is dropping to 150°F it's a Problem. This is especially the case if you like to let the water 'coast' heat-off so that your pasta doesn't burn to the bottom of the pan during an unattended rolling boil.

Second, clumping. You want the ability to freely stir and get water between pieces of pasta, or if the heat is on for the boiling bubbles to stir, in order to avoid the pasta sticking together.

Third, use 80% less water and you only get a 5x higher concentration of starch. I don't have measurements in front of me but I suspect this simply isn't starchy enough to take a tiny portion of that water and use it as an effective emulsifier. The article pins 1% starch as a threshold of effect, and I doubt I'm losing 0.2% of pasta weight when cooking to al dente.

Note: This is all for dried durum wheat-flour pasta, the generic industrial 'macaroni' of American agribusiness. Egg pasta is a very different product, with different cooking characteristics, that happens to share the name. Durum semolina pasta, whole-wheat pasta, gluten-free "pasta", rice pasta... no guarantees that this is applicable.


Correction: To attain 1% of 5kg of water, I need 50g of starch. In 500g of pasta, there's no way in hell I'm losing fully 10% of the weight of the pasta. If I cut the water to pasta ratio by 80%, I would still need to lose 2% of the weight of the pasta, and I don't think that's happening.
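The corrected arithmetic above can be checked directly. A sketch using the commenter's assumed figures (5 kg of water, 500 g of dried pasta) and the article's ~1% starch threshold:

```python
water_g = 5000          # ~5 L of cooking water (commenter's assumption)
pasta_g = 500           # one box of dried pasta (commenter's assumption)
threshold = 0.01        # the article's ~1% starch-by-weight threshold

# Starch needed to hit 1%, as a fraction of the pasta's weight.
starch_needed_g = water_g * threshold            # 50 g
loss_fraction = starch_needed_g / pasta_g        # 0.10, i.e. 10% of the pasta

# Cut the water by 80% and redo the same calculation.
reduced_water_g = water_g * 0.2                        # 1000 g
starch_needed_reduced_g = reduced_water_g * threshold  # 10 g
loss_fraction_reduced = starch_needed_reduced_g / pasta_g  # 0.02, i.e. 2%

print(loss_fraction, loss_fraction_reduced)
```

Even with 80% less water, the pasta would still have to shed 2% of its dry weight as starch to reach the 1% threshold, which is the commenter's doubt in a nutshell.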


You certainly can, but:

- It's still not going to be enough starch

- You can't rely on box cooking times even as a starting point. Your pasta will take significantly longer to cook, because adding it brings the temperature of the water down much more when there's so little of it


1. The starch comes from the pasta, not the water. Decreasing the water increases the concentration of starch in said water. That's why every good recipe for cacio e pepe I've seen recommends using as little water as possible.

2. This has been thoroughly debunked. Kenji did a full write-up of this, but suffice to say that starches absorb water starting at around 180°F. As long as you keep the water above that temperature, the pasta will cook in the same amount of time.

https://www.seriouseats.com/how-to-cook-pasta-salt-water-boi...


Have not seen that article but I agree. I’ve been cooking pasta with less water and using box times for years. Has never ever failed for me.


In my experience box cooking times are never quite right, and irrelevant if you're going to be finishing your pasta in the sauce anyway.

Unless you're extremely familiar with the exact brand of pasta, temperature of your stovetop, etc., you should be tasting your pasta toward the end of cooking to decide when to stop cooking it.

> - It's still not going to be enough starch

I'm inclined to disagree, but only have anecdata on this, so I can't really get into an extended debate over it. So I guess now I get to look forward to experimenting with starch additions the next few times I cook pasta.


simply lift pasta and observe (and do this enough that you learn what to look for). that's enough to avoid tasting it until a final confirmation.


For a longer pasta, sure. But something like Fusilli can be more difficult to judge, I've found.


Here is a great video on cooking pasta with less water:

https://www.youtube.com/watch?v=259MXuK62gU&t=219s


Clumping


I don't get clumping. I use an adequate-quality pasta (De Cecco mostly), stir it when I put it in the water, and a few times after that, cooking to al dente. If I'm making a cacio e pepe or carbonara with spaghetti or (my preference) bucatini, I'm aiming for the minimum amount of liquid left, ideally just enough to put in the sauce. I use a frying pan so I can lay the noods out flat to minimise the water.

As I said, I don't get clumping. It is absolutely possible to cook noods in minimal water without clumping, because I do it, so try switching something up if it's happening to you.


How do you stir long pasta in minimal water before it has softened?

While small pasta shapes are relatively easy to stir such that they break contact with anything nearby right from the beginning, long pasta tends to move together when stirred until it has softened - at which point it has already started sticking together.

You can try to stir it so that the pasta isn't all running parallel before it softens, but then you get ends sticking out of the water until it softens more, leading to uneven cooking.

For long pastas, I’ve found using more water and just adding a little flour while cooking to be a lot easier.


I use kitchen tongs to pick up and jostle the noods, as another commenter mentions. It starts out parallel, as you say. Flour would add a flavour I don't want, and I don't have an issue with uneven cooking or clumping, so I don't need it.


It’s more of a “jostle” than a stir when cooking spaghetti in a frying pan.


Boil long pasta in a skillet, not a pot.


Ooh, I've never thought to use a pan.


De Cecco? Nah, that's pretty bad. You want to try Garofalo or Molisana.


I live in NW England; De Cecco is "middle" quality where I live and affordable, and the brands you mention aren't available.


shrug then go with De Cecco, it's still better than Barilla. But if you find a Molisana or especially a Garofalo, do grab a pack and taste the difference.


I'll pick up a pack if I see it. The top quality the supermarket we go to has is Rummo, it's the next step up from De Cecco (in the supermarket at least) and I buy it sometimes, but to me there's not a hell of a lot of difference between the two for the price difference.


Nonsense


Please articulate more, I'm listening (mind you, I'm Italian and opinionated about my pasta).


Stir.


Sometimes I forget to stir and have to reboil the pasta. Long noodles like spaghetti will stick like crazy and have inconsistent cooking. If I need to cook quickly I use less water. Otherwise more water is hands off.


Reduce your pasta water. You can even save it like stock. Adding some is also a savior when reheating sauces that break easily.


Or use the normal amount of water and reduce the liquid after straining the pasta out?


By the time you reduce the liquid the pasta is going to be pretty cold. Just using less water takes less time.


If you do that, you gotta strain into another pot, and then reduce that. No need. Just use a lot less water, and barely cover the pasta.


This seems reasonable?

Suppose the full result is worth 7 impact points, which is broken up into 5 points for the partial result and 2 points for the fix. The journal has a threshold of 6 points for publication.

Had the authors held the paper until they had the full result, the journal would have published it, but neither part was significant enough.

Scholarship is better off for them not having done so, because someone else might have gotten the fix, but the journal seems to have acted reasonably.
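The toy scoring model above can be sketched in a few lines (the point values and threshold are this comment's hypothetical numbers, not anything a real journal uses):

```python
THRESHOLD = 6  # hypothetical impact points needed for publication

def publishable(impact_points: int) -> bool:
    """Accept a submission only if it clears the journal's bar."""
    return impact_points >= THRESHOLD

partial_result, fix = 5, 2

results = (
    publishable(partial_result),        # partial result alone: rejected
    publishable(fix),                   # epsilon fix alone: rejected
    publishable(partial_result + fix),  # bundled full result: accepted
)
print(results)
```

The asymmetry is the whole point: the sum clears the bar while neither part does, so a threshold rule rewards bundling.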


If people thought this way - internalizing this publishing point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret if and until you can prove the whole bigger result by yourself. However long that might take.

If a series of incremental results were as prestigious as holding off to bundle them, people would have reason to collaborate and complete each other's work more eagerly. Delaying an almost-complete result for a year so that a journal will think it has enough impact points seems straightforwardly net bad; it slows down both progress and collaboration.


> If people thought this way - internalizing this publishing point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret if and until you can prove the whole bigger result by yourself. However long that might take.

This is exactly what people think, and exactly what happens, especially in winner-takes-all situations. You end up with an interesting tension between how long you can wait to build your story, and how long until someone else publishes the same findings and takes all the credit.

A classic example in physics involves the discovery of the J/ψ particle [0]. Samuel Ting's group at MIT discovered it first (chronologically) but Ting decided he needed time to flesh out the findings, and so sat on the discovery and kept it quiet. Meanwhile, Burton Richter's group at Stanford also happened upon the discovery, but they were less inclined to be quiet. Ting found out, and (in a spirit of collaboration) both groups submitted their papers for publication at the same time, and were published in the same issue of Physical Review Letters.

They both won the Nobel 2 years later.

0: https://en.wikipedia.org/wiki/J/psi_meson


Wait, how did they both know that they both discovered it, but only after they had both discovered it?


People talk. The field isn't that big.


They got an optimal result in that case, isn't that nice.


The reasonable thing to do here is to discourage all of your collaborators from ever submitting anything to that journal again. Work with your team, submit incremental results to journals who will accept them, and let the picky journal suffer a loss of reputation from not featuring some of the top researchers in the field.


To supply a counter viewpoint here... The opposite is the "least publishable unit" which leads to loads and loads of almost-nothing results flooding the journals and other publication outlets. It would be hard to keep up with all that if there wasn't a reasonable threshold. If anything then I find that threshold too low currently, rather than too high. The "publish or perish" principle also pushes people that way.


That's much less of a problem than the fact that papers are such poor media for sharing knowledge. They are published too slowly to be immediately useful versus just a quick chat, and simultaneously written in too rushed a way to comprehensively educate people on progress in the field.


> versus just a quick chat,

Everybody is free to keep a blog for this kind of informal chat/brainstorming kind of communication. Paper publications should be well-written, structured, thought-through results that make it worthwhile for the reader to spend their time. Anything else belongs to a blog post.


The educational and editorial quality of papers from before 1980 or so beats just about anything published today. That is what publish or perish - impact factor - smallest publishable unit culture did.


Don't know much about publishing in maths, but in some disciplines it is clearly incentivised to create the biggest possible number of papers out of a single research project, leading automatically to incremental publishing of results. I call it atomic publishing (from Greek atomos - indivisible), since such a paper contains only one result that cannot be split up any further.


Andrew Wiles spent 6 years working on 1 paper, and then another year working on a minor follow-up.

https://en.m.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27...


Or cheese slicer publishing, as you are selling your cheese one slice at a time. The practice is usually frowned upon.


I thought this was called salami slicing in publication.


Science is almost all incremental results. There's far more incentive to get published now than there is to "sit on" an incremental result hoping to add to it to make a bigger splash.


Academic science discovers continuous integration.

In the software world, it's often desired to have a steady stream of small, individually reviewable commits that each deliver an incremental set of value.

Dropping a 20000-files-changed bomb like "Complete rewrite of Linux kernel audio subsystem" is not seen as prestigious. Repeated, gradual contributions and involvement in the community are.


The big question here is whether journal space is a limited resource. Obviously it was at one point.

Supposing it is, you have to trade off publishing these incremental results against publishing someone else’s complete result.

What if it had taken ten papers to get there instead of two? For a sufficiently important problem, sure, but the interesting case is a problem that's barely interesting enough to publish complete.


The limiting factor isn't journal space, but attention among the audience. (In theory) the journals' publishing restrictions help to filter and condense information so the audience is maximally informed, given that they will only read a fixed amount.


Journal space is not a limited resource. Premium journal space is.

That's because every researcher has a hierarchy of journals that they monitor. Prestigious journals are read by many researchers. So you're essentially competing for access to the limited attention of many researchers.

Conversely, publishing in a premium journal has more value than a regular journal. And the big scientific publishers are therefore in competition to make sure that they own the premium journals. Which they have multiple tricks to ensure.

Interestingly, their tricks only really work in science. That's because in the humanities, it is harder to establish objective opinions about quality. By contrast, everyone in science can agree that Nature generally has the best papers. So attempting to raise the price on a prestigious science journal works. Attempting to raise the price on a prestigious humanities journal results in its circulation going down. Which makes it less prestigious.


Space isn't a limited resource, but prestige points are deliberately limited, as a proxy for the publications' competition for attention. We can appreciate the irony while considering the outcome reasonable - after all, the results weren't kept out of the literature. They just got published with a label that more or less puts them lower in the search ranking for the next mathematician who looks up the topic.


Hyper-focusing on a single journal publication is going to lead to absurdities like this. A researcher is judged by the total delta of their improvements, at least by their peers and future humanity (the sum of all points, not the max).


It is easy to defend any side of the argument by inflating the "pitfalls of other approach" ad absurdum. This is silly. Obviously, balance is the key, as always.

Instead, we should look at which side the, uh, industry currently tends to err on. And this is definitely not the "sitting on your incremental results" side. The current motto of academia is to publish more. It doesn't matter if your papers are crap; it doesn't matter if you already have significant results and are working on something big; you have to publish to keep your position. How many crappy papers you release is a KPI of academia.

I mean, I can imagine a world where it would have been a good idea. I think it's a better world, where science journals don't exist. Instead, anybody can put any crap on ~arxiv.org~ Sci-Hub and anybody can leave comments and upvote/downvote stuff; papers have actual links and all the other modern social network mechanics, up to the point where you can have a feed of the most interesting new papers tailored specially for you. This is open-source and non-profit; 1/1000 of what universities used to pay for journal subscriptions is used to maintain the servers. Most importantly, because of some nice search screens or whatever, the paper's metadata becomes more important than the paper itself, and in the end we are able to assign a 10-word summary of the current community consensus on the paper: whether it proves anything, "almost proves" anything, has been disproved 10 times, 20 research teams failed to reproduce the results, or 100 people (see names in the popup) tried to read it and failed to understand this gibberish. Nothing gets retracted, ever.

Then it would be great. But as things are and all these "highly reputable journals" keep being a plague of society, it is actually kinda nice that somebody encourages you to finish your stuff before publishing.

Now, should this paper of Tao's have been rejected? I don't know; I think not. Especially the second one. But it's somewhat refreshing.


Two submissions in a medium-reputation journal do not have significantly lower prestige than one in a high-reputation journal.


Gauss did something along these lines and held back mathematical progress by decades.


Gauss had plenty of room for slack, giving people time to catch up on his work.

Every night Gauss went to sleep, mathematics was held back a week.


During his college/grad school days, he was going half nuts, ideas would come to him faster than he could write them down.

Finally one professor saw what was happening and insisted that Gauss take some time off - being German, that involved walking in the woods.


These patterns are ultimately detrimental to team/community building, however.

You see it in software as well: as a manager in calibration meetings, I have repeatedly seen how much harder it is to convince a committee to promote or give a high rating to someone who delivered a large pile of crucial but individually small projects than to someone with a single large project.

This is discouraging to people whose efforts seem to be unrewarded and creates bad incentives for people to hoard work and avoid sharing until one large impact, and it's disastrous when (as in most software teams) those people don't have significant autonomy over which projects they're assigned.


Hello, fellow Metamate ;)


The idea that a small number of reviewers can accurately quantify the importance of a paper as some number of "impact points," and the idea that a journal should rely on this number and an arbitrary cut off point to decide publication, are both unreasonable ideas.

The journal may have acted systematically, but the system is arbitrary and capricious. Thus, the journal did not act reasonably.


> This seems reasonable?

In some sense, but it does feel like the journal is missing the bigger picture somewhat. Say the two papers are A and B, and we have A + B = C. The journal is saying they'll publish C, but not A and B!


How many step papers before a keystone paper seems reasonable to you?

I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.

On the other hand, publishing a proof of a Millennium Problem as several installments, is probably a fantastic idea. Time to absorb each contributing result. And the suspense!

Then republish the collected papers as a signed special leather limited series edition. Easton, get on this!


Publishing partial results is always an invitation to compete in the completion, unless the completion is dependent on special lab capabilities which need time and money to acquire. There is no need to literally invite anyone.


I meant if the editors found the paper’s problem and progress especially worthy of a competition.


> I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.

Yeah I agree, a partial result is never going to be as exciting as a full solution to a major problem. Thinking on it a little more, it seems more of a shame the journal wasn't willing to publish the first part as that sounds like it was the bulk of the work towards the end result.

I quite like that he went to publish a less-than-perfect result, rather than sitting on it in the hopes of making the final improvement. That seems in the spirit of collaboration and advancing science, whereas the journal rejecting the paper because it's 98% of the problem rather than the full thing seems a shame.

Having said that, I guess as a journal editor you have to make these calls all the time, and I'm sure every author pitches their work in the best light ("There's a breakthrough just around the corner...") and I'm sure there are plenty of ideas that turn out to be dead ends.


... A and B separately.


I agree this is reasonable from the individual publisher standpoint. I once received feedback from a reviewer that I was "searching for the minimum publishable unit", and in some sense the reviewer was right -- as soon as I thought the result could be published I started working towards the publication. A publisher can reasonably resist these kinds of papers, as you're pointing out.

I think the impact to scholarship in general is less clear. Do you immediately publish once you get a "big enough" result, so that others can build off of it? Or does this needlessly clutter the field with publications? There's probably some optimal balance, but I don't think the right balance is immediately clear.


Why would publishing anything new needlessly clutter the field?

Discovering something is hard, proving it correct is hard, and writing a paper about it is hard. Why delay all this?


Playing devil's advocate: there isn't a consensus on what is incremental vs. what is derivative. In theory, the latter may not warrant publication, because anyone familiar with the state of the art could connect the dots without reading about it in a publication.


Ouch. That would hurt to hear. It's like they're effectively saying, "yeah, obviously you came up with something more significant than this, which you're holding back. No one would be so incapable that this was as far as they could take the result!"


Thankfully the reviewer feedback was of such low quality in general that it had little impact on my feelings, haha. I think that’s unfortunately common. My advisor told me “leave some obvious but unimportant mistakes, so they have something to criticize, they can feel good, and move on”. I honestly think that was good advice.


If this was actually how stuff was measured, it might be defensible. I'm having trouble believing that things are actually done this objectively rather than the rejections being somewhat arbitrary. Do you think that results can really be analyzed and compared in this way? How do you know that it's 5 and 2 and not 6 and 1 or 4 and 3, and how do you determine how many points a full result is worth in total?


But proportionally, wouldn't a solution without an epsilon loss be much better than a solution with epsilon?

I am not sure what the exact conjecture the author solved is, but if the epsilon difference is between an approximate solution and an exact solution, and the journal rejected the exact solution because it was "only an epsilon improvement", I might question how reputable that journal really is.


It's demonstrably (there is one demonstration right there) self-defeating and counter-productive, and so by definition not reasonable.

Each individual step along the way merely has some rationale, but rationales come in the full spectrum of quality.


Given the current incentive scheme in place it's locally reasonable, but the current incentives suck. Is the goal to score the most impact points or to advance our understanding of the field?


In my experience, it depends on the scientist. But it's hard to know what an advance is. Like, people long searched for evidence of the æther before giving up and accepting that light doesn't need a medium to travel in. Perhaps 100 years from now people will laugh at the "Attention Is All You Need" paper that led to the LLM craze. Who knows. That's why it's important to give space to science. From my understanding, Lorenz worked for 5 years without publishing as a research scientist before writing his atmospheric circulation paper. That paper essentially created the field of chaos theory. Would he be able to do the same today? Maybe? Or maybe counting papers and impact factors and all these other metrics turned science into a game instead of an intellectual pursuit. Shame we cannot ask Lorenz or Maxwell about their times as scientists. They are dead.


I don’t think that’s a useful way to think about this, especially when there’s so little information provided. Reviewing is a capricious process.

