Understanding Integer Overflow in C/C++ (2012) [pdf] (utah.edu)
65 points by 0xmohit on July 9, 2016 | 63 comments

In fact, in the course of our work, we have found that even experts writing safe integer libraries or tools to detect integer errors sometimes make mistakes due to the subtleties of C/C++ semantics for numerical operations.

Another nail in the coffin for the meme that "good programmers don't write code with undefined behavior".

Is that really a thing? That sounds almost as illogical as saying 'good programmers don't write bugs'. It should be more like 'good programmers are aware of undefined behaviour and try not to invoke it'.

Yes, it totally is a thing, although it's usually described more as "C(++)'s undefined behavior isn't a problem because any good programmer should be able to easily avoid it"--a sentiment that I have seen (and challenged) on Hacker News a fair amount. The contention that I (and the authors) have is that integer overflows are so pervasive that it's not practicable to expect that even good developers are capable of avoiding them.

Some things to really emphasize here:

1. This study found 47% of packages tested exhibited undefined integer overflow on their test suite. This isn't using fuzz testing or anything to try to find corner cases to make overflow far more likely, this is more or less "opening the application triggered undefined behavior"-level pervasiveness.

2. The study also looks at unsigned integer overflow, which is well-defined but often no less problematic. The classic example I'll give is the calloc bug: calloc takes two arguments and multiplies them to find the size to allocate, and virtually every time I've seen this function implemented without guarding that multiplication, the resulting overflow has yielded an exploitable CVE.

3. The prelude to this work (that produced the IOC, and, ultimately, Clang's UBSAN) is what really woke the community up to the problem. It actually did prompt the C and C++ specifications to make changes (INT_MIN % -1 is now considered UB, and C++, but not C, makes 1 << 31 == INT_MIN defined behavior).

Now that memory errors are slowly disappearing, it's integer problems that are becoming the pain-in-the-neck for exploits. My personal opinion is that the "treat integers as ℤ_{2³²}" model is no longer a good default, and we need to start looking at bringing back models like big integers (à la Python), saturating arithmetic, or explicit trapping (à la Ada); I was quite disappointed when Rust opted to keep the wraparound model.
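For concreteness, the three fixed-width models can each be sketched in C. These helpers are illustrative only (a language adopting one of these would build it in), and note that converting an out-of-range unsigned result back to int is implementation-defined in C, though every mainstream compiler wraps:

```c
#include <limits.h>
#include <stdlib.h>

/* Hypothetical helpers sketching three integer models for signed
 * 32-bit addition. */

int add_wrap(int a, int b) {          /* Z_{2^32} wraparound model */
    /* unsigned addition is well-defined; the conversion back to int
     * is implementation-defined, and wraps on mainstream compilers */
    return (int)((unsigned)a + (unsigned)b);
}

int add_sat(int a, int b) {           /* saturating model */
    long long s = (long long)a + b;   /* exact: cannot overflow 64 bits */
    if (s > INT_MAX) return INT_MAX;
    if (s < INT_MIN) return INT_MIN;
    return (int)s;
}

int add_trap(int a, int b) {          /* trapping model, a la Ada */
    long long s = (long long)a + b;
    if (s > INT_MAX || s < INT_MIN)
        abort();                      /* analogue of Constraint_Error */
    return (int)s;
}
```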

> Now that memory errors are slowly disappearing, it's integer problems that are becoming the pain-in-the-neck for exploits

What fraction of integer overflow exploits crucially have to be combined with an unsafe memory model in order to be weaponizable? I suspect the number is close to 99%.

(I know there are a handful of overflow-related security problems that are independent of memory safety--someone mentioned money overflow issues in online games when this topic came up last--but they are few and far between.)

> I was quite disappointed when Rust opted to keep the wraparound model.

That's oversimplified. In debug builds, Rust panics on overflow; in release builds, it may or may not. It doesn't right now, out of performance considerations, but that decision is not set in stone if the performance cost becomes acceptable in the future.

This means that Rust has experience with wraparound and with panics. So this question should be answerable by someone on the Rust team: how much does the check cost? A few percent? A factor of 2? Surely not more than that?

Depends. The problem is not so much the overhead of the check itself as that it introduces a lot of control flow edges which interfere with optimizations. For example, on x86 you can kiss most uses of LEA goodbye...

Sure, but when all is said and done, what's the cost? There are plenty of Rust benchmarks by now, possibly some of them are part of the test suite. How much do they slow down?

Note that it is possible to enable overflow checking in release builds as well.

AFAIK the case for undefined integer overflow is optimisations regarding loop termination, carry-bit handling, and usage of 64-bit vs 32-bit registers. In light of that, can't we say that "The compiler may perform signed integer arithmetic using registers larger than the type's bits and truncate the result when writing to memory"? So e.g.

    int a = INT_MAX;
    int b = INT_MAX;
    int c = a + b;
    c > a; // true on 64 bit iff c was NOT written to memory (so -O1 and above); false on 32 bit
    (c&0x7fffffff) > a; // false always
    // i.e. you can always test if two positive values overflow with
    ((a + b) & 0x7fffffff) > a
Another idea I haven't seen used enough is to put the values in 64-bit doubles, do your arithmetic that way, and check whether the final value fits in your integer type. As long as you restrict yourself to adding, subtracting and multiplying you'll get an exact result back (unless you run on a buggy Pentium from over 25 years ago, but I think all those were recalled). A problem you have to watch out for is entering the range where your ulp is larger than 1 and your math gets inexact.
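The addition case is easy to make robust, since any int + int sum fits in 33 bits, well under a double's 53-bit mantissa; it's products of two 32-bit values that can exceed it, which is the ulp hazard mentioned above. A sketch (`add_via_double` is a made-up name):

```c
#include <limits.h>
#include <stdbool.h>

/* add_via_double is a hypothetical helper: compute the sum exactly in
 * a double (exact because the sum of two ints fits in 33 bits), then
 * range-check the result before converting back to int. */
bool add_via_double(int a, int b, int *out) {
    double sum = (double)a + (double)b;          /* always exact */
    if (sum < (double)INT_MIN || sum > (double)INT_MAX)
        return false;                            /* would overflow int */
    *out = (int)sum;
    return true;
}
```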

> Asooka wrote:

  // i.e. you can always test if two positive values overflow with
  ((a + b) & 0x7F..F) > a
This is bad advice. The act of computing (a + b) is already undefined behavior when it overflows. Masking it afterward will not save you from a strict compiler that exploits (or traps on) all undefined behavior.
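Concretely (hypothetical function names): with signed operands an optimizer is entitled to assume the addition never overflows and fold the post-hoc test away, so the check has to happen before the add:

```c
#include <limits.h>

/* The tempting post-hoc test. If a + b overflows, the addition itself
 * is UB, and at -O2 a compiler may legally fold this whole function
 * down to (b < 0), erasing the "check". */
int overflow_unsafe(int a, int b) {
    return a + b < a;
}

/* Well-defined: compare against the limits before adding. */
int overflow_safe(int a, int b) {
    return (b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b);
}
```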

I think 'Asooka' is aware of this, and was not offering advice. Rather, he was proposing that the world would be simpler if the specs were changed so that signed addition never resulted in undefined behavior. In this simpler counterfactual world, his test would work. In our current, easier-to-optimize world, it does not.

I use float/double values a lot these days for arithmetic. That reduces integer use to values read over a comms line or otherwise from hardware.

Of course, floats have their own quantum of grief.

> Now that memory errors are slowly disappearing, it's integer problems that are becoming the pain-in-the-neck for exploits.

Indeed. Progress! :)

I think you may be right about Z_{2^32}, but 'C' is hopelessly wedded to it. As a practice, it might even be worth having separate types for things on that ring and things expected not to be on it. Whether or not tools exist to check it automatically, this at least makes it easier to inspect for.

Rust should have forced you to specify behavior whenever the compiler can't prove that the difference doesn't matter, like so:

1. default - fail the compile if an overflow could happen

2. wrap - like gcc's -fwrapv option, and C's unsigned types

3. trap - like gcc's -ftrapv option

4. saturate - common DSP behavior

5. unsafe - like gcc's C signed integer cruelty (fastest code)

Eh, is it too late to add this? For compatibility with old code, there could be a compiler option to override the default behavior for a whole source file.

You're right, there does not appear to be a way to achieve 1.

Rust arithmetic does 3. in debug builds, or something vaguely similar if requested by using the checked_* functions, and 2. in release builds, or if requested by using the wrapping_* functions, and 4. by using the saturating_* functions.

Recommended reading:


> big integer

If that's your use case, use it. Libraries like GMP exist. It doesn't make a difference for off-by-one errors, for example.

Yes, it is. I've made a small HN career of countering this meme whenever it comes up :)

It's a classic form of no-true-Scotsman, but very popular.

If it's a fallacy, why wouldn't a programmer do good to avoid UB?

Avoiding undefined behavior is good, and a good programmer tries to avoid undefined behavior. The fallacy is that there exists a population of C and C++ programmers who are skilled enough to always recognize and avoid undefined behavior. While you could define "a good programmer" as one who always recognizes and avoids undefined behavior, the unhelpful result would be concluding that very few good C or C++ programmers exist, possibly none.

By modus tollens, C can't be a good language, then. Gosh!

Almost all the integers I use in 'C' are highly constrained. I try very hard ( and nearly always succeed ) in writing code that manages those constraints properly. This may include oversubscribing integers - using a long long instead of a long, perhaps.

This involves at least doing a depth-first examination of all invocations of that operation, and possibly writing tests for all of them. That encourages keeping things local.

I think the safety-think here is slightly inappropriate. Of course you can do it wrong. But it is in no way an unreasonable expectation that you not do that.

And again- for the general population out there - perhaps 'C' is not for you. I am a 'C' programmer only because of path dependencies in what's happened to me. I would not recommend it to anyone. That's not elitist - it's just how things played out.

Have you tried using tis-interpreter [0] (an interpreter of C for detecting undefined behavior written in OCaml) written by Pascal Cuoq [1].

[0] https://github.com/TrustInSoft/tis-interpreter

[1] http://stackoverflow.com/users/139746/pascal-cuoq

I cannot say that I have. Looks very interesting. Thanks!

I think it's apparent that we write in C because of network effects and not because it's the best tool for the job. It's great that people like you can handle it, but I'd much rather write in a language that keeps my stupid mistakes from becoming expensive security holes. It allows me to write code faster and to work with other non-perfect programmers.

> network effects

C, like unix, is a small payload spreading like a virus, depending on the coder to keep it alive. The simplicity and the emergent behavior are as much an evolutionary advantage as a burden because of the UB. At 600 pages, the standard is not really simple anymore, as the RNA accrues more inputs from different vectors.

I started to read the The UNIX-Haters Handbook [1] after I installed a Windows VM and saw the link here the other day.

1: http://web.mit.edu/~simsong/www/ugh.pdf

Yeah, but whither faster? When I was an imperfect programmer ( perfect to some epsilon TBD ) I wanted to be a more perfect programmer. This was not without cost.

I suppose if you're not interested in that ( and I understand that impulse, believe me ) then ...

I have to, by training, attribute all defects to process and human behavior. But as things emerge, maybe the tools will really fill the gap. But again, at a cost.

I am, say, pretty good with Python but I am by no means a master. It would take having a full time exposure for months, perhaps years to make that claim. And this fragments us. It may well be that I'd actually be better fit for a job using Python but because I can't make this claim now, we get a false negative on fitness.

I'm just using "me" and "Python" to construct an example.

Wild and whacky idea: has anyone ever built a language with separate types for ordinal/cardinal numbers and nominal numbers? The idea would be that with nominal numbers you don't care about the precise result of individual calculations, just statistical properties like determinism, distribution, etc. So arithmetic on nominal numbers would be permitted to overflow, as operations are just various ways to jump from one nominal to another. However, most numbers would be ordinals/cardinals, and any overflow when generating an ordinal/cardinal would be considered an error.


What can you usefully do with a nominal number, besides input/output and comparing two of them for equality? I don't think you need language support here, library support should suffice, unless I'm missing something.

I was imagining that you could do everything with nominals that you normally do with integers. Arithmetic saving results in nominal numbers would just disable overflow checking. This would allow all operations involving regular numbers to double down and perform extremely strict overflow checking.

The question is (and I think was): what arithmetic do you do with numbers whose values don't matter, whose concrete value is purely accidental, arbitrary, and unimportant?

From the OP, examples of intentional overflow are "..hashing, cryptography, random number generation, and finding the largest representable value for a type."

None of those are categorical variables. Those are all examples using actual numbers, meaning the values matter.

> Nominal numbers or categorical numbers are numeric codes, meaning numerals used for identification only. The numerical value is in principle irrelevant, and they do not indicate quantity, rank, or any other measurement.

You talked about nominal numbers up to (but excluding) this post.

"nominal" means "in name only", so it's a "number", not a number. It just happens to have the same name as a number with a value.

Yes, I'm still calling them nominal numbers, but you can give whatever name you want for "I don't care what the precise results of arithmetic are." I'll stop here because there's nothing I hate more than pedantic arguments about semantics.

It isn't pedantic. You asked about arithmetic of nominal numbers, which makes no sense at all.

    >  "I don't care what the precise results of arithmetic are." 
This is wrong! That is NOT what "nominal numbers" are!

I already quoted it, "nominal numbers" have no value!

You are absolutely right though: this conversation has led nowhere. I'm not sure your identification of who is to blame is correct, though.

You asked about nominal numbers, and when asked for details your examples all are about actual numbers. When I point that out your reaction is to be offended. How mature.

You can't talk about "nominal numbers" and then be miffed that people don't understand that you are not actually talking about nominal numbers at all! I only come to understand that now. Tip: Don't talk about water when you mean sand, and then act confused when someone brings you water instead of sand.

If I choose to add my phone number and yours, who's going to stop me? Would the universe explode? Would my phone number not be nominal anymore? It's a sort of "if a tree falls in the forest" question. Regardless of whether you're right or wrong (I just read the wikipedia page and didn't see anything about arithmetic, but maybe you have a phd in math), the essence of pedantry is to continue to insist I used the wrong word even after you understand what I meant.

Hm, I'm still missing something here, but I think your comment about phone numbers might be the key to the mystery, so let's go with that for now.

If you choose to add two phone numbers, I think we both agree that the phone numbers are still nominal. But I think we also agree that you have computed nonsense. There's nothing you can do with that sum. Which naturally raises the question: why should a programming language support this operation?

Come on guys, you're getting hung up on the wrong aspect here. I'm starting with precisely the operations the OP mentions as requiring wraparound overflow support: computing digests, hashes, random numbers, things like that. Currently we use unsigned numbers when we require overflow, but things are still extremely error-prone as the OP shows. So my thought was to come up with a new dichotomy: instead of signed vs unsigned, just ask for whether you want wrap-around overflow or not. Pretty please can we just forget I ever used the word 'nominal'?

If you think adding two phone numbers can have no possible useful application, what do you say to taking strings of text, converting them to numbers, and repeatedly folding them over each other to compute a cryptographic digest? You're right that it is meaningless in the context of the original domain, but it clearly has application. The two are distinct ideas.

Okay, finally I get it! As far as I know, Rust has a few types in the stdlib that are guaranteed to wrap on overflow, which you can use if that's what you want. C# has something even more interesting:

    checked(a+b)   // throws OverflowException on overflow
    unchecked(a+b) // wraps on overflow
    a+b            // uses the default of the surrounding context
The global default is provided by a compiler switch. Here's the thing though: the default setting of that compiler switch is to wrap on overflow. Even in debug mode! And you yourself identified the cause: most of the time, you're not doing crypto and such; most of the time, you do not want wrapping. But if you make overflow checking "opt-out" instead of "opt-in", the program slows down.

Most interesting. Yeah, I'm inclined to just eat that one-time slowdown. We've all anchored (https://en.wikipedia.org/wiki/Anchoring) on running our programs without these runtime checks, so that our apps seem slower with them. But if we take on that one-time cost the programs will eventually get faster with faster hardware, better compilers, more cores, etc. And we'll be safe from these issues for ever more.

Thanks for letting me know my idea finally got across. I was massively under-estimating its subtlety.

> Yeah, I'm inclined to just eat that one-time slowdown.

Maybe eventually the world will agree with you. We already eat the slowdown that comes with bounds-checked arrays. And indeed, compilers are getting better at removing that bit of overhead, compared to, say, the 1970s.

Since I have you here: I teach programming with this portable, safe assembly language (as opposed to the portable, expressive assembly language that was C). It has bounds-checked arrays. It always initializes variables. The type system is strong, so you can't ever create an address out of a number. It compiles with -ftrapv so any signed overflow immediately triggers an unrecoverable error. It has pointers and requires manual memory management, but it has refcounts so you can't ever have a use-after-free error. As a consequence any copy of an address incurs some overhead to update refcounts, and any copy of a struct incurs even more overhead to update refcounts of all addresses contained within. Between initialization, bounds checking, overflow checking and refcounting, I think I've pretty much eliminated all sources of undefined behavior.




I've superficially glanced at mu. It seems to have even more awesome stuff than I realized. But still: you're basically programming in glorified assembly. You can't add three numbers without splitting the thing into two statements. Is this really a pleasant way to program? I think this may be a good way to teach programming, but only because no one writes big programs when starting out. On the other hand, you claim to have written the UI in mu itself, so maybe I'm just plain wrong here. (Or did I misread that?)

You're right that I wouldn't want the whole world to program this way.

That said, it's been surprisingly ergonomic. If it had a compiler and could generate binaries, and if it had parity in libraries with C, I think I would strongly prefer using it to C. The massively increased safety is worth giving up a lot of expressivity for, IMO. No comparison with HLL languages for prototyping, of course. Eventually I hope to get the usual HLLs reimplemented atop it. One subtle point is that since it's refcounted at the Assembly level, any language built atop it requires minimal GC support.

The fact that it's Assembly has caused some confusion with others, so I want to point out that:

a) you can add any number of numbers in a single instruction; that gets handled behind the scenes. Many instructions are variadic where it makes sense.

b) you don't need push/pop/call to call a function, function calls look just like primitive operations.

c) Mu knows about types, so it knows how to allocate them, how to print them, and (eventually) how to serialize/unserialize them. These things can be overridden, but they work out of the box for any new types you create.

d) Mu supports generic types. Here's a generic linked-list, for example: http://akkartik.github.io/mu/html/064list.mu.html. Types starting with underscores are type parameters.

e) You get structured programming by default using the break and loop instructions which translate to jumps just after the enclosing '}' and '{' label, respectively. Factorial in Mu looks surprisingly clean: http://akkartik.github.io/mu/html/factorial.mu.html

What's common to these features is that I could imagine implementing them in a compiler using just machine language and Mu. What I'm trying to avoid is having to write a complex optimizer. These features (I think) don't massively increase the impedance mismatch between Mu and machine code; they can be implemented with basically glorified search-and-replace.

The Mu editor is built in Mu, yes. That's just because I don't have a HLL yet. It's not to say that I think everyone should build apps only in the Assembly layer. It's a little like the early days of C and Unix: until the language got stable people built the original versions of compiler and OS in Assembly. But they were eventually able to move to C.

Okay, factorial and linked list changed my mind. This is way more ergonomic than I gave it credit for. It doesn't seem to quite agree with /post/mu; in particular, I see no usage of next-ingredient and instead I see the typical function prototypes you seem to be critiquing. Does this mean next-ingredient is gone, or is it optional or...?

Ah, thanks for the feedback! I'd written that article before Mu got function headers, and never gone back to re-read it.

a) next-ingredient is still around, and I still teach it first before graduating students to headers.

b) Headers get translated to next-ingredient calls behind the scenes, but without the need to type-check ingredients at runtime.

c) You might still choose to use next-ingredient if you want to implement optional arguments, or variadic functions. A weird one is interpolate (in http://akkartik.github.io/mu/html/061text.mu.html), which scans through its arguments twice, first to compute the size of the array to allocate to hold them, and then a second time to copy them over. Since calls to next-ingredient can get arbitrarily complex, Mu doesn't bother type-checking them. It just moves the type-checking for explicit calls to next-ingredient to run-time.

I'll update that post. Thanks again.

Cool. Two more silly questions. (1) How fast is mu these days? Is it roughly competitive with CPython? (2) You mentioned it would be nice to have a compiler someday. It occurred to me that maybe something like the JVM would be closer to the existing semantics than x86. It's also strongly typed and has bounds-checked arrays and stuff. Hm. I guess that's not a question. Never mind.

Not silly at all!

(1) It's a good question. I think small benchmarks will likely be much slower in Mu since Python probably has optimized primitives for the common things whereas Mu is still (naïvely) interpreted. On larger apps I think the gap might come down, since Python will start relying more on unoptimized primitives. But I should probably measure at some point..

(2) Lol. I actually modeled Mu's semantics on the JVM to some extent, so it's intended to be a better replacement for that layer of the stack: something that compiles down to native code rather than relying on a bytecode interpreter, and also something that is designed with higher-order functional programming in mind, as well as generics out of the box (so no type erasure and so on).

But overflow is a hardware issue - we run out of bits. The number range of a modulo operation on the other hand can be anything. That's why I said there are two issues that you seem to mix. Don't forget that overflow (running out of bits) is signaled by a flag for the CPU register where it's happening (accessible to a programmer only when using assembler).

So I'm not sure what your plan really is here. To support modulo operations in hardware by using the limited bits of the registers limits your modulo operations to the size of the registers. If you are talking about purely software operations and not the hardware - I'm not sure - well, that exists. We already have modulo operations in all programming languages, it's not missing. And if you are talking about the hardware support, you are free to ignore the flags (which you only get to access in assembler anyway) and use the registers as a kind of "implied modulo", that's the point of the discussion that when overflow happens your code won't notice.

Again: Nominal numbers have no values. So what "arithmetic" do you want to do? There is no overflow. Because there is no arithmetic with nominal numbers.

The examples you quoted above are NOT using nominal numbers. Those are ACTUAL numbers. Those are modulo operations - that is NOT "nominal numbers".

Personal attacks won't help, they never do in any discussion. You should seriously consider changing your discussion style.

If your question was about modular arithmetic, it confuses two different issues. It would be just a coincidence if the highest number representable with the given amount of bits happens to be the same number you want for your modulo operations. Those are only accidentally related issues.

Author here. FWIW the linked paper is the extended edition as published in TOSEM 2015, not the original ICSE 2012 paper, which I was surprised by given the year in the subject.

That said, I'm glad this edition is getting a bit of attention, as we were able to be more thorough given the longer format (as well as more time), including an automated study of the top 10k Debian packages. The paper spells out what's new.

Let me know if you have any questions or comments.

May your integers be safe!

I'm curious to hear your thoughts on https://news.ycombinator.com/item?id=12063597! The long thread beneath it basically convinced me that calling them 'nominal' was a bad idea, but perhaps if we had a chance to do C over again we should replace the dichotomy between signed and unsigned with abort-on-overflow and wrap-on-overflow?

Also, Robbert Krebbers's PhD thesis titled "The C standard formalized in Coq" [0]. From the summary:

Our formal specification of C is faithful to the C11 standard, which means it describes all undefined behaviors of C11. As a consequence, when one proves something about a given program with respect to our semantics, it should behave that way with any ostensibly C11 compliant compiler such as GCC or Clang.

[0] http://robbertkrebbers.nl/thesis.html

> a truncation error on a cast of a floating point value to a 16-bit integer played a crucial role in the destruction of Ariane 5 flight 501 in 1996. These errors are also a source of serious vulnerabilities, such as integer overflow errors in OpenSSH [MITRE Corporation 2002] and Firefox [MITRE Corporation 2010], both of which allow attackers to execute arbitrary code. In their 2011 report MITRE places integer overflows in the “Top 25 Most Dangerous Software Errors” [Christey et al. 2011].

Wow, that's more than a little enlightening.

It's interesting that the article mentions the Ariane 5 bug, but doesn't mention that the Ariane 5 software (at least the component that caused the bug) was written in Ada, not C.

The real problem is that the Ariane 4 software was used without re-testing for the new rocket. The overflow could not occur on the Ariane 4. The higher acceleration of the Ariane 5 resulted in the overflow. The overflow was caught by the Ada runtime system, and generated an error message. The error message was then processed as data.

This is not targeted directly at you, but when I see people talk about Ariane 5, it is generally only to cover the one particular aspect of the bug that confirms their personal opinion or agenda. So for reference, here is the full report: https://www.ima.umn.edu/~arnold/disasters/ariane5rep.html


> The failure of the Ariane 501 was caused by the complete loss of guidance and attitude information 37 seconds after start of the main engine ignition sequence (30 seconds after lift- off). This loss of information was due to specification and design errors in the software of the inertial reference system.

> The extensive reviews and tests carried out during the Ariane 5 Development Programme did not include adequate analysis and testing of the inertial reference system or of the complete flight control system, which could have detected the potential failure.

Some interpretations:

Put it in the contract: The lessons of Ariane (https://www.irisa.fr/pampa/EPEE/Ariane5.html)

The Ariane 5 bug and a few lessons (http://www.leshatton.org/Documents/Ariane5_STQE499.pdf)

I heard tell that Ada was to blame for the Ariane V disaster. Is this true? (http://www.adapower.com/index.php?Command=Class&ClassID=FAQ&...)

This is absolutely true and greatly increases the cognitive load from using 'C'. IMO, all systems which use 'C' need a great deal of test furniture and possibly code analyzers to manage this. And of course, nobody wants to pay for this. But I mean - if you have not done a basic fuzz test of the thing then you have really no idea.

What would be interesting is a series of experiments to directly contrast the Therac, Ariane and OpenSSL errors using something like Rust. As I understand it, there still may well have been a crash of some sort.

The OpenBSD devs were really enthusiastic about reallocarray as a way to catch some integer overflow issues like the OpenSSH issue.
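The pattern replaces malloc(n * sizeof *p), which silently wraps when n is attacker-controlled, with a call that checks the multiplication and fails cleanly. A sketch assuming a platform that provides reallocarray (OpenBSD, glibc 2.26+, musl); `alloc_ints` is a hypothetical wrapper:

```c
#define _GNU_SOURCE   /* for reallocarray on older glibc */
#include <stdlib.h>

/* reallocarray(NULL, n, size) behaves like malloc(n * size) but
 * returns NULL (with errno set to ENOMEM) instead of wrapping when
 * the product overflows size_t. */
int *alloc_ints(size_t n) {
    return reallocarray(NULL, n, sizeof(int));
}
```

Calling alloc_ints((size_t)-1) fails cleanly instead of returning an undersized buffer.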

Do there exist any good tools for identifying undefined behavior in C or C++ code? Compiler warnings are getting better, and "sanitizers" can catch undefined behavior that actually happens at runtime, but I still haven't found anything that consistently catches unwanted optimizations based on compiler assumptions of "no undefined behavior".

Stack (https://css.csail.mit.edu/stack/) is the best tool that I've found, and while it's a great start, it's extremely difficult to install, clumsy to use, doesn't appear to have been touched in years, and only catches a limited subset of constructs. Is there anything else out there? If not, is it because it's perceived as too difficult to do, or is it just not perceived as a need?

Probably Coverity (http://www.synopsys.com/software/coverity/Pages/default.aspx) is the best / most complete implementation of this idea.

Although it has impressive capabilities, you'll need to deal with their sales people, get a license, and invest a non-trivial amount of time learning the system, modifying your software build, etc., to get a lot out of it. Unless your code is mission-critical, it's probably more trouble than it's worth.

Not all undefined behavior can be caught at compile time.

However, LLVM and GCC can detect a lot of undefined behavior at runtime with ubsan. I'm not sure how much performance penalty there is. Presumably checks are added on every signed integer arithmetic operation, pointer dereference, etc., but some checks can probably be removed by the optimizer (for example, a branch may be taken if a pointer is not null, which means the optimizer can remove all null checks in that branch as long as the pointer hasn't been modified).
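The per-operation check the sanitizer inserts is roughly what GCC and Clang also expose directly as a builtin (one add plus a flag test on x86), which is a cheap way to see the cost in isolation; `sum_or_flag` is a hypothetical wrapper:

```c
#include <limits.h>

/* __builtin_add_overflow (a real GCC/Clang builtin) performs the
 * addition, stores the wrapped result, and returns nonzero iff it
 * overflowed; UBSan instruments signed adds with essentially this
 * check, reporting instead of returning a flag. */
int sum_or_flag(int a, int b, int *out) {
    return __builtin_add_overflow(a, b, out);
}
```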

Edit: oops, I missed the part where you said you're aware of sanitizers.

GCC has -fsanitize which is pretty good.

Integer overflows don't kill people; people kill people with integer overflows.

I'll show myself out :)
