Short intro to C++ for Rust developers: Ownership and Borrowing (nercury.github.io)
195 points by ingve on Jan 22, 2017 | 97 comments

Oh boy there are some serious errors here.

Returning a stack allocated value does not invoke the move constructor. It doesn't copy either. This is a separate optimization that existed even prior to C++11 called the "named return value optimization" or NRVO for short. And by "sufficient new compiler" you really should just say every compiler. No compiler I know of in existence used by actual people doesn't implement this.

Here's a demo: http://ideone.com/QRqj5P

It's VERY important that this optimization isn't confused with move construction, as the latter would actually be significantly less performant than what actually happens.

Returning a stack allocated value is a move in C++11/14. It is also true that in many circumstances a compiler may optimize that move away using RVO/NRVO.

Consider this example: http://ideone.com/T872Oj

In F1 the compiler can NRVO away the move.

In F2 it cannot.

Both will fail to compile if you delete the move constructor.

Does that mean that side effects in a move constructor is undefined behavior since you can't know whether it's going to be called?

Edit: I see that StephanTLavavej clarified that below.

Also known as copy elision. This subject is surprisingly so obscure, that I literally had to read the C++ standard to have a good grasp of it. Combine it with general move semantics, and things start looking pretty complex.

See also: http://en.cppreference.com/w/cpp/language/copy_elision

> Returning a stack allocated value does not invoke the move constructor.

Unless the 'named return value' you're returning happens to be a named parameter.

Example: https://ideone.com/TddXn7

The move here actually comes from the return, not passing in the newly constructed object. The compiler is eliding Foo's move constructor in main(), but it can't elide both... even if the function is fully inlined.

Yes good point. NRVO doesn't apply here and named parameters are in fact moved or copied depending on if they are passed by value or by reference.

In your case a move would happen but if you changed the parameter to a reference I think it would copy.

Yes, it would copy, unless you used an explicit std::move(). This is why most of the time you're better off just passing things around by value, and making your classes cheap to move (by, for example, making use of pointers (smart or otherwise)).

This results in less code and more manageable semantics.

I think this is due to a confusion between the two kinds of "move" in the article.

Returning a value from a C++ function is a "move" in the Rust sense. It's not a move in the C++ sense, since C++ moves involve move constructors.

It's probably this confusion that led to it being written that way, probably should be fixed.

He has a line where he writes

    return val; // val is `rvalue` here
This is the part that isn't accurate. "val" is a named reference and is not an rvalue. There isn't any "moving" up the stack that happens. He's likely confusing this with an expiring value (a subtype of rvalue) that happens in code like

    foo(returns_something());

The returned value from returns_something() is un-named and is about to die after this line executes (lives until foo's invocation unwinds), and so it is an rvalue. You can't have a named rvalue unless you std::move explicitly, I don't think.

It's a little more subtle than that. "val" is definitely an lvalue. However, when returning a local variable like this, C++11 has a rule: "if the NRVO activates, there are no copies/moves. Otherwise, see if you can treat val as an rvalue - if so, you get a move. That almost always works, but if it doesn't, fall back to treating it as an lvalue so it compiles with a copy like it used to."

This is the one time in the language that lvalues are automatically moved from, and it's because the language/compiler knows that returning from a function will destroy all local variables, so it's safe to move from them.

I see that makes sense. I wonder if at some point we should go back to the drawing board on whether the names "lvalue" and "rvalue" make sense. I keep on going back to the mathematical LHS and RHS semantics to figure it out but I can see why other programmers and myself get confused on these sorts of specifics.

Giving the compiler the ability to move implicitly surprises me since moves can have side-effects as you've said.

I know. I'm not saying it isn't wrong, I'm saying that I can see why he wrote it that way :)

A lot of C++ resources explain moving solely in terms of rvalues so it's easy to associate the two in the wrong places.

To be fair, NRVO is now in the C++ standard. Prior to C++11 it was a non-mandatory compiler optimization; now it's part of the language. It's a good thing, because optimizations should have no side effects, yet this one does (if your copy ctor has a print statement, it won't be printed if this optimization is deployed).

Did you look at the generated assembly of your demo? RVO probably didn't happen there. "returns_a_foo" was most likely inlined into main and then there was no need of eliding a copy.

I'm not saying that you are wrong, only that your demo poorly demonstrates what you are arguing.

This is why I love HN.

> This is a separate optimization that existed even prior to C++11 called the "named return value optimization" or NRVO for short. And by "sufficient new compiler" you really should just say every compiler.

Aren't copy elision guarantees part of the new C++17 standard?

Yes but they've "practically" been happening already since the C++03 era

On a bunch of platforms in many cases. If in doubt, check produced assembly.

Thanks for your comment, looks like my knowledge is still lacking here. Although I hoped that it would be enough to give an overview, without going into much detail.

I have removed a bit where I say that rvalues are same as returned values, to avoid confusing other readers.

Another nitpick:

    However, if the name was a very big string, i.e. something like file contents, and it would be necessary to ensure no copies for performance reasons, the unique_ptr or shared_ptr would come to the rescue
This is also not true at all. You can choose to provide a non-templatized function that only accepts an rvalue reference. For example, http://ideone.com/055ViY does not compile.

The purpose of the smart pointers unique_ptr and shared_ptr is to annotate ownership for heap allocated objects (unique vs shared ownership). They do help alleviate the thing you're describing, but shouldn't be used so liberally in that way since they heap allocate!

You can implement explicit stack pools more easily now in C++17 thanks to PMR. Just remember that platforms have stack size limits, and that such a pool is essentially a global variable you may have to handle.

No problem :) It's still a good effort nonetheless and I'm sure your readers will appreciate the corrections. I personally also learn the most through writing

Bjarne Stroustrup rates his own C++ knowledge as 7/10, if that makes you feel better.

Even Borland Turbo C++ from 1992 did this: https://youtu.be/RWavTVo7D3M?t=496

Yes, but rust on LLVM will also perform NRVO, will it not?

NRVO is a compiler optimization, not a feature of the language itself.

Copy elision, including the RVO and NRVO, actually has to be blessed by the language (there is a paragraph in the C++ Standard dedicated to it). That's because copy elision skips calls to copy constructors and destructors, which are "observable" (they can have side effects, including printing stuff to the console, etc.). The general "As If Rule" permission to optimize "unobservable" things away doesn't apply here, so special permission to drop the calls is required. This is true going back to C++98 (C++17 strengthened things with guaranteed copy elision).

Ah and here's the man I originally learned a lot of these concepts from haha. And TIL. I remember programming back in the C++98 days and people were terrified of the copy on return. I had always assumed there was a reason for it but I suppose people might not have understood the behavior super well or I was misinformed at the time.

Part of it was surely programmer confusion, but old compilers in the C++98 days were also worse about implementing the RVO/NRVO. For example, MSVC didn't implement the NRVO until 2005.

Well, the C++11 standard permits the compiler to elide the copy (NRVO), and requires a move if it doesn't elide. The language used is very lawyerish; I had to read the section a couple of times to find it even though I knew it was there (it's mentioned in "Effective Modern C++").

So it's up to the compiler to elide the copy (making it a compiler optimisation as you say), but the standard says that it has to move if it doesn't elide. This means that returning a named local variable by value is always the best choice. Again, see "Effective Modern C++" for more.

It's a language feature if your language has copy and move constructors.

Rust doesn't, so it's just an optimization. In C++ the code will have different semantics (e.g. if your move ctors have side effects) based on this optimization, so the language needs to define if/how it works.

I'm not saying Rust doesn't do this. I'm not familiar with Rust but many languages do NRVO as you point out. The important distinction is whether we actually run additional instructions in this variant of the optimization. In C++ at least, no additional instructions are run.

Rust doesn't have user-defined copy constructors and a return is always a move (semantically a memcpy), so I'm not sure it's so relevant as an optimisation for Rust.

NRVO is an optimization above a move, at least in C++. Most compilers simply pass a pointer to the caller's return object on the call stack for direct construction (correct me if I'm mistaken, compiler peeps!), avoiding the extra potential internal surgery involved in a move operation.

You can't manually implement the `Copy` trait?

You can but it has no user-defined methods related to it, it's just a marker trait. Move and copy are both straight memcpy, the difference is whether the source remains valid afterwards, not what operation is performed.

Kinda but:

    trait Copy: Clone {}
Copy is nearly always done by memcpy/memmove. When Copy is invoked it won't use the `Clone::clone` method: https://is.gd/2hMyNP. i.e. Clone is always explicit.

Added: Essentially Copy is the same as a move, but the original value is still valid.

That makes a type copyable, but does not allow you to insert arbitrary code that must be executed during a copy.

The compiler I work with doesn't do this optimization in every case, so I can't rely on it. Compilers for embedded platforms are shit.

I don't come from an even vaguely C or C++ background, but I have been studying Rust for some time; most development work that I do is in very high-level languages. I generally know the concepts that exist in C and C++, but have absolutely no idea how and why you'd use them as opposed to other tools (OK... an overstatement, but you get the idea). Much of this is because I've not tried to write any C/C++, and only even read it when I need to generally know what some feature of an application does.

This article actually helped me understand C/C++ a little bit better than I did before. For example, where it explained the differences between Rust and C/C++ copy vs move behavior... I got a better sense of just how C/C++ works and why I might choose to write, say, a function parameter one way vs. another.

So in this basic sense, mission accomplished I think. I have no doubt there's lots of nuance missing, but still... not bad and I appreciate the effort.

The important thing to remember about `std::move` in C++ is that it doesn't move anything, it simply does an unconditional cast to an rvalue reference, which basically allows you to treat the object as a temporary variable (effectively transferring ownership).

  In most cases maintainability wins,
  and avoiding “premature optimization”
  is very much a necessity in C++.
I agree with this conclusion. Quite often, when you start coding, you don't know where the performance bottlenecks hide, and you don't want to waste time thinking about allocating memory for a routine which you could write equally well in any scripting language. Unfortunately, pluggable automatic garbage collection, for less verbose performance-uncritical scripting tasks within the language, is not available.

This really puts the ownership features of Rust in perspective: they are certainly no more involved than what's going on in C++. Harder than Python, yes; excessive compared to Haskell, sure; but here we see what Rust is really comparing itself to. (This is not to say that it's always what we should be comparing it to.)

This glosses over "rvalue references", which is roughly what Rust does when you pass a parameter without any further qualification (references, boxes).

Can you give an example?


  void foo(T&& value) {
similar to

  fn foo(value: T) {


Both are similar in the sense that `foo` owns `value`, yes.

Or at least that's the idea. In C++, it's up to the author of T to implement such semantics (through its move constructor) correctly for non-trivial types.

I wouldn't call this owning; a better description is that foo is a sink for the value.

Rust would call that owning, though.

Therein lies the perception problem: a lot of folks think that C++ is a static target, but it's not. It has improved by leaps and bounds over the past decade or so, and will improve a lot more this year when C++17 support becomes widespread. I find anything from C++0x onwards pretty pleasant to work in if you're on a UNIX-like OS. Windows is another story entirely.

Coming from a Linux/OSX world I had to port a library to Windows the other day. That used to be an utter nightmare with C or pre-C++11 codebases. I actually found it pretty painless. The only things I found annoying were the various syscalls and interacting with the crypto libraries (and you thought openssl was ugly...).

Modern C++ is pretty darn nice to work with. It's not a "safe" language by any means, but with good tooling, design, and a good static analyzer you can prevent the vast majority of memory bugs you'd run into with C. Plus the compilers are so mature you get great performance right out of the box.

>> you get great performance right out of the box

Yup. And if you don't you can drop all the way down to intrinsics and assembly. It's pretty nice.

> Windows is another story entirely.

Actually Windows has always enjoyed better C++ related tooling than any UNIX other than Mac OS X.

And standards compliance is actually the best one among commercial C++ vendors.

That's what Microsoft wants you to believe. In reality C++ tooling is vastly better on any Unix, and it's 100% free in both senses of the word.

Also, there's no qualitative difference between C++ toolchains on macOS and Linux, since the same compilers are used on both.

In reality C++ tooling is vastly better on any Unix, and it's 100% free in both senses of the word.

Depends on what you mean by tooling, I guess. Clang on Windows is fine these days, and so is msys. So, with possibly some quirks, that Unix tooling runs on Windows as well, so it cannot be vastly worse tooling if it's the same.

Still, talking IDEs and visual debugging which you might or might not consider a part of tooling, unfortunately VS still runs circles around the free competition. So much that I'm betting teams are willing to give up some standard compliance in favour of using it.

Well, they're not.

It's a major pita when MSVC cannot understand a perfectly standard C++ construction.

And Microsoft is well aware of that; they have made leaps and bounds recently to bring their C++ compiler support up to the actual standard level. Finally you can say MSVC actually understands standard C++.

Yet another UNIX user that never used Borland C++, C++ Builder, Zortech C++ debuggers, Visual C++...

> no qualitative difference between C++ toolchains on macOS and Linux,

Where is Instruments, XCode, Cocoa, IO Kit for GNU/Linux?

I use Visual C++ 2013 at work and it is quite bad. It is just a smarter text editor. There are only the most basic refactoring features; even "find all references" is not reliable (it is a text-based search). You have to buy another extension to make it usable (Visual Assist or ReSharper).

Project files are a mess. There is a vcxproj file to define the build and a vcxproj.filters file for the directory structure view in VS (I do not understand the reason for this). vcxproj.filters is often broken by VS; it sometimes inserts duplicated lines for no reason if you add or remove some other file from the project.

The debugger is quite good, but I still have some issues (you can't set hexadecimal format for just one variable; it is a global option). Overall I have a better experience with Eclipse CDT (more features) or Code::Blocks (almost the same features but faster) than with VS without extensions. ReSharper makes it a good IDE, but it costs additional money and it is also not the fastest extension.

There are reasons for bad code navigation and missing refactoring features: https://blogs.msdn.microsoft.com/vcblog/2015/09/25/rejuvenat...

There are only most basic refactoring features

Ha I always thought there were actually none.

even find all references is not reliable

Yup that sucks. It's better in VS2015

Project files are a mess.

Yeah it's xml but for the rest it just lists files and options, how is that much of a problem?

There is vcxproj file to define build and vcxproj.filters file for directory structure view in VS (I do not understand a reason for this).

Can be easily solved by ditching the filters file altogether and enabling the 'All Files' option in Solution Explorer. Unless you insist on having all files listed by extension, which is imo a useless, complete mess for anything but small projects.

you can't set hexadecimal format just for one variable

You can, use '<variablename>, h' in the watch window

Can be easily solved by ditching the filters file altogether and enabling the 'All Files' option in Solution Explorer. Unless you insist on having all files listed by extension, which is imo a useless, complete mess for anything but small projects.

Does not work for me, because files are located in subdirectories. I see only long list of files from all directories without filters. Maybe it is just bad project structure but I cannot change it. I found plugin for 2015 which is able to generate filters from directory structure, but I have to wait for upgrade.

I see only long list of files from all directories without filters

Not if you select 'Show All Files' in solution explorer, then it shows the entire subdirectory tree.

"Eclipse CDT or Code::Blocks" - you should have put /s at the end of the sentence. And be careful with "it's just a smarter text editor" when around vi/emacs users.

Not when using the enterprise or ultimate versions.

Of these I used all but Zortech. None of them compare to GCC6. I've also used Microsoft C++ compiler fairly recently. It's not even close. I don't use Xcode when coding C++ on OS X, vim + YouCompleteMe beats the heck of any IDE I have ever used. I prefer Linux "perf" to Instruments, too. It's not an option on the Mac, but if it was, that's what I'd be using.

How do you do these tasks with GCC 6?

- graphical debugging of parallel tasks and threads

- displaying data structures graphically with code navigation

- WYSIWYG UIs with tooling like Blend

- GPU debugging

- Code navigation across object files, shared libraries and source code

- Incremental compilation and linking with intermediate representations stored in a database.

- Out of the box representation for all STL data structures and user defined types

Tooling is much more than just a plain old compiler.

That's the first time I've seen a VS user praise MSBuild, which is utter garbage. On the Unix side you won't hear much praise for GDB or LLDB, but for everything you've listed, numerous tools exist. Eclipse in particular has superior code indexing/navigation capabilities. Both Qt and GTK have GUI builders. Nvidia provides a plugin to debug and profile GPU code.

What Microsoft compiler doesn't provide in my experience is decent codegen.

> Eclipse in particular has superior code indexing/navigation capabilities.

Not when compared with the enterprise and ultimate versions. Or with the changes done in VS 2015, improved in the upcoming VS 2017 for incremental building and database storage of symbols.

> Both QT and GTK have GUI builders.

Miles behind of what Blend + XAML allow for.

> NVidia provides a plugin to debug and profile GPU code.

How well does it work with Intel and AMD cards?

> What Microsoft compiler doesn't provide in my experience is decent codegen.

On Windows, among commercial compilers, only ICC generates better code.

Except the most important one, that is, Microsoft; only the newest versions are passable. EDG does not support C++14. Embarcadero (C++ Builder) is also behind. ICC I'm not sure about, but it should be modern. Any other important commercial Windows compilers I missed?



Also there is the world of commercial UNIXes, mainframes and embedded OSes, where using clang or gcc isn't always an option.

For example TI is still stuck on C++98, not even C++03.

This is probably the best explanation of Borrowing and Ownership I have seen.

One useful rule of thumb that works 99% of the time in C++11 is:

Never use "std::move"

It's really just there to allow fancy optimisations and you don't need it in most cases. You should rely on the sane defaults instead:

- Putting variables on the stack to manage object ownership/lifetime is a good default.

- Use std::unique_ptr or std::shared_ptr() for heap ownership (single ownership vs. shared)

- For a non-owning reference, use const & (or & if mutation is needed), or const * (or *) when the value may not exist.

- Always prefer passing by value (return types and parameters)

- If a parameter type is heavy, take a (const) reference.

- If the return type is heavy, the compiler's RVO will do its job.

No need to be more fancy than this

I disagree with your opinion on that. Just because you don't understand something doesn't mean it is just for fancy optimizations.

Essentially, you use std::move when you want to have pointer-like ownership semantics without using a pointer. Sometimes you don't want to use pointers, so you keep objects on the stack and use moves to track ownership.

So instead of:

    C* c1 = new C;
    C* c2 = c1;

you have:

    C c1;
    C c2 = std::move(c1); // the move constructor moves c1 into c2; c1 is now an empty "null object"

This is what Rust is able to do, and you don't need expensive atomics (shared_ptr) or ugly template syntax (unique_ptr), you can keep everything on the stack, and (!) the objects are automatically cleaned up correctly.

> If you don't understand something doesn't mean that this is just for fancy optimizations.

You should definitely understand it, but if you find yourself wanting to use it, you should start by considering alternatives first. It's a sharp (and often unsafe) tool that should be avoided in most cases.

For your example, unique_ptr is the correct (and safe) solution and will do the same thing.

> you don't need expensive atomics (shared_ptr)

Note that shared_ptr provides no "atomicity". It is in reality exceptionally cheap.

A very good resource for C++ developers is Nick Cameron's r4cppp tutorials https://github.com/nrc/r4cppp

I have a C++ question. In the article, the author writes the Person constructor this way:

  Person(std::string first_name, std::string last_name)
    : first_name(std::move(first_name))
    , last_name(std::move(last_name))
For string parameters, do you need to use std::move()? Won't the compiler do that anyway?

And does anybody really put the comma separating initializers at the start of the line? Yuck.

> And does anybody really put the comma separating initializers at the start of the line? Yuck.

I don't, but I can see why someone would. When you copy-paste initializer lines around (say, because you've re-ordered members in the declaration), a leftover trailing comma often leads to insanely painful compiler errors. Organizing the commas this way makes that less likely.

I do. Easier to comment things out when necessary.

Hangover from SQL as well.

I wish trailing commas were more widely supported in programming languages.

It's the same. You can comment out the last line easier, but not the first one. With trailing commas, it's the other way around.

> With trailing commas, it's the other way around.

If the language supports trailing commas[0] (rather than just interspersed ones) that's not an issue as you could write:

    Person(std::string first_name, std::string last_name):
        first_name(std::move(first_name)),
        last_name(std::move(last_name)),
[0] as Python, Ruby, Javascript or Rust do

No, you should almost never use std::move. Instead, the parameters should be "const std::string&". This way, you avoid all the unnecessary copies and moves.

"const string&" - will always make a copy of the string for first_name and last_name, if we write "Person(std::string first_name, std::string last_name) : first_name(first_name) , last_name(last_name) {}" in case passed arguments are lvalues they will be copied, otherwise moved.

If you don't want unnecessary copies, the first tool you should reach for in C++ would be a const ref. I can't think of a reason I'd want to copy a string implicitly - better to pass a const ref and explicitly copy it so that the next guy (me a week later) doesn't waste time figuring out if that was a mistake.

That guidance isn't correct in the C++11 New World Order. For maximum performance, you must think about move semantics. Simply saying const X& everywhere will inhibit move semantics (can't move from const, can't move from lvalues, especially can't move from lvalue-ref-to-const).

For example, suppose you're writing a FancyAppend() function, that will return "LhsString, RhsString". Following your guidance, the signature would be "string FancyAppend(const string&, const string&)". While that definitely avoids copying the inputs, it is not optimal. Providing additional overloads for string&& parameters can be more efficient (if at least one input is a modifiable rvalue, it can be appended to in-place, which can avoid additional memory allocation if it happens to have sufficient capacity; for repeated appends, this can avoid quadratic copying of elements). Indeed, this is what string's operator+() does.

Note that working with rvalue references does require more understanding. For example, the signature "string&& BadAppend(string&&, const string&)" is a severe error (one that the Standardization Committee made early on, and corrected before shipping).

And if you support both const arguments, maybe the whole function or part of it can be declared constexpr (or pure in C++17, or is it even 14?).

There's no such thing as "pure" in Standard C++ yet, even the C++17 Working Paper.

Isn't that specified as one of the attributes? Maybe it's just a common extension.

In theory you can achieve a lot in C++.

But C++ is like a large buffet where you can pick from a lot of different abstractions.

Rust on the other hand is not as flexible. But it rather tries to focus in safer abstractions, and provide convenience and coherence around them.

> Rust on the other hand is not as flexible.

How so? If you want the same flexibility as C++ memory-safety-wise, you just use the unsafe keyword when you need to. A lot of Rust programmers are scared of unsafe blocks for some reason, but writing unsafe code in Rust is no less safe than writing regular C/C++, except you can choose to re-enable safety whenever you want to. You have to look out for subtle bugs that come up when safe code makes assumptions about unsafe code that you don't properly uphold, but these are logic bugs, the same as you'd find in any other language when a module breaks a contract. Just don't use unsafe willy-nilly in a library crate that you expect other people to use.

As far as the rest of the language, I find Rust's traits to be far more flexible than C++ OOP, it just takes a little while to adapt to thinking in trait based composition. Once trait specialization and "impl Trait" hit stable and the Rust team figures out how to make the ambiguity rules less conservative, we'll have the best of both worlds: interfaces (both dynamic and static), trait based composition, and inheritance (you can already do regular OOP inheritance with a little bit of boilerplate like how Servo uses Upcast<T>/Downcast<T> traits for HTML object hierarchies). Both are currently in nightly (impl Trait might be beta already).

"unsafe" is expressing that you are opting-in for less safe code. Same in C#.

Many unsafe abstractions are not opt-in in C++. You can go ahead and write them. No warning or cost.

"mut" and "unsafe" are keywords that you need to write down each time. They're keywords you can spot easily during a code review and say "how do you justify this?".

Rust's language design choice of having everything being immutable and safe by default is a great idea.

> "unsafe" is expressing that you are opting-in for less safe code. Same in C#.

Yes, but that is scoped. You can write unsafe code when designing an abstraction, and seal off the unsafety with a safe-to-use API. You then verify the abstraction, and are free to use it however you want. This is what's usually done, and this is what unsafe is for.

"mut" isn't really a red-flag keyword. Sure, you don't want unnecessary mutation and everything is immutable by default, but nobody really has issues with making things mutable. Rust is not a language where purity is important.

"unsafe" is, but for designing abstractions that's what unsafe is for, and while you still should justify it, "I need an abstraction like this and it can't be implemented in safe Rust because <reasons>" should be enough. Of course, you should be able to explain why it is safe to use, preferably in the docs/comments.

No but "mut" comes with a maintenance cost. Where it is harder to make assumptions over its value.

Rust is about safe-to-use abstractions, not necessarily safe-to-implement abstractions. While we like for abstractions to be implemented safely, it is totally okay to write an abstraction using unsafe code provided you can give it a safe API.

So these abstractions may be implemented unsafely, but once implemented you don't need unsafe to use them.

The only abstractions that C++ has that Rust doesn't are those involving intrusive datastructures (and you can make them work in some cases).

Why should "get_first_name_mut" not be simplified to "first_name"?

Mostly because Rust does not support overloading and you may want both mut and non-mut versions.

get_foo and get_foo_mut are a convention used by the rust standard library.

Why should it be?

The proper way to avoid problems with use after move would be an instrumented compiler adding a check. You can partially do it on your own, but it might be more expensive.

(That is, by employing pointers or sentinels.)

