Former Nvidia Dev's Thoughts on Vulkan/Mantle (gamedev.net)
442 points by phoboslab on March 12, 2015 | 155 comments



Very interesting post.

So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.

This begs the question: what about DirectX 12 and Apple's Metal? If they didn't have similar engine developer involvement, that's clearly a point against DX12/Metal support in the long run.

Also, the end is worth quoting in its entirety to explain why the APIs are taking such a radical change from the traditional rendering model:

Personally, my take is that MS and ARB always had the wrong idea. Their idea was to produce a nice, pretty looking front end and deal with all the awful stuff quietly in the background. Yeah it's easy to code against, but it was always a bitch and a half to debug or tune. Nobody ever took that side of the equation into account. What has finally been made clear is that it's okay to have difficult to code APIs, if the end result just works. And that's been my experience so far in retooling: it's a pain in the ass, requires widespread revisions to engine code, forces you to revisit a lot of assumptions, and generally requires a lot of infrastructure before anything works. But once it's up and running, there's no surprises. It works smoothly, you're always on the fast path, anything that IS slow is in your OWN code which can be analyzed by common tools. It's worth it.

I wonder if/when web developers will reach a similar point of software layer implosion. "Easy to code against, but a bitch and a half to debug or tune" is an adequate description of pretty much everything in web development today, yet everyone keeps trying to patch it over with even more "easy" layers on top. (This applies to both server and client side technologies IMO.)


In its early days, OpenGL benefited in many ways from being a "forward-looking" API that exposed far more functionality than was implemented by graphics hardware. Software that used those features got faster throughout the 90s with no rewriting needed as that functionality became available on GPUs. (Of course, software like games that wanted to be fast now just avoided those features.) When graphics hardware started to include features that hadn't been anticipated in OpenGL, things got messy and stayed messy.

Now that the feature set of graphics hardware is very stable and uniform across different vendors' hardware, the APIs are needed to solve an entirely different problem from what they were invented to do.


The best TL;DR ever, thank you for that.


About your last paragraph: I would say the web is even more important to optimize and to build low-level layers for than graphical game engines (believe it or not :)).

The thing is while gaming is seen as a "performance critical" application, where it is natural to want the maximum performance to be squeezed out of your hardware, Web development is just a behemoth of inefficiency, and we think that that is the norm, an inevitable drawback of developing for the web, an inescapable trait of web applications. Web apps are dozens or even hundreds of times slower than native applications, and we think nothing of it, because it seems to be an inherent characteristic of the web about which we can't do anything.

Your comment made me reconsider that. Maybe it doesn't have to be so. What if there was a Vulkan for the web, instead of the layers upon layers of inefficiency which are all the rage these days? What if I could run a medium-complexity web page or app without it bogging down my computer several times harder than a comparable native app would?


Maybe you're optimizing for the wrong thing?

Let's imagine a world where a browser is just a secure place to execute code. You get Vulkan, IO, networking, input, and camera access in a cross-platform way.

So, how do search engines find any info? How do you share links to content? How do you support braille readers? Browser extensions?

I'd argue those things are pretty essential to the web as we know it (well, maybe not the extensions), and I might also argue that some of the inefficiencies of the web environment are a direct result of the decision that the most important part of the web is a common, semi-scannable and linkable format: HTML.


The web these days is very clearly diverging into two separate things: a portable application runtime/platform, and the document publishing system it started as. I don't see any way to accomplish a clean split between the two, but it looks increasingly necessary.


The split you speak of has already happened on mobile, that's why we have "apps."


The new medium has been born, but the old one is still being abused, so the split isn't complete and won't be until this no longer happens: http://xkcd.com/1174/


The worst is LinkedIn, which will send you an e-mail with article links in it, and if you click the links it takes you to your browser, which does the thing that xkcd comic describes if you don't have the app installed; but if you do have the app, it just launches the app and leaves you at the home screen, and the context of the link you clicked on is gone.


I'm wondering about the "can't zoom" bit. I can't zoom on many websites - because Mobile Safari has sold out control to the web developers, instead of acting as my user agent.


Disabled zoom is one of my pet peeves too. I use this bookmarklet on my iPhone, you may find it useful...

  javascript:document.querySelector('meta%5Bname=viewport%5D').setAttribute('content','width=device-width,initial-scale=1.0,maximum-scale=10.0,user-scalable=1');


Thanks!


> So, how do search engines find any info?

Desktop integrated search.

> How do you share links to content?

Extensions, contracts and intents.

> How do you support braille readers?

OS Accessibility APIs

> Browser extensions?

Plugins


Hi, I'm working on a library to allow something like this (rendering web UIs with a WebGL backend).

It's in very early stages. Feedback is very welcome:

https://github.com/davedx/lustro


Good luck. Efficient 2D using OpenGL is very difficult. It will never approach the speed of a well-designed 2D API. Microsoft gave a lot of insight into how hard it was to get Direct2D to be competitive with GDI/GDI+.

If you do see any performance benefits it likely will be because you support less features than what a modern day browser supports with HTML/CSS. But in my opinion you could probably achieve faster speeds with Canvas in a similar manner.


I agree it will be very difficult. I'm working with WebGL as my day gig and understand how hard it is, as we're working with embedded devices there. That's actually what motivated me to start this project, so web developers can build fast, rich and interactive UIs on the same level as those you see in current-generation video game consoles.

You have to be ambitious, right? :)


> Efficient 2D using OpenGL is very difficult.

It depends on what you are doing. Rendering thousands of sprites with independent rotation and scale is easy. Rendering text is a lot of work, but once you do it you can do all sorts of things with the text. Rendering simple things like bezier curves is very difficult. There are some things in the 2D world that HTML/CSS/GDI/GDI+ are very bad at, compared to OpenGL, and vice versa.


> Efficient 2D using OpenGL is very difficult. It will never approach the speed of a well-designed 2D API.

All web content on OS X and iOS is rendered using OpenGL. That's not to say it was easy to make efficient, but it definitely outperforms the default CoreGraphics implementation on most content.


(How) do you intend to handle accessibility?


I don't. The target segment is people building rich, interactive, video-game-like UIs, for things like mobile/embedded devices.

Though as an aside, I think voice control is very underutilized in modern GUI development...


Please don't. It sounds like you're recreating Flash, and we all know how annoying Flash GUIs were.


Sounds like the guy is doing entertainment systems where a gamey UI is fine. You don't want your car entertainment system looking like bootstrap.


I hope it works out well. A whole new class of apps and performance might become real.

I just wonder how good text performance will be. It would be awesome to get near-native speed with editors like Brackets and Atom.


Yes, text is a big challenge. Got to get maximum efficiency from your sprite batching and triangle strips!


Write your whole webapp in WebGL?

Edit: Seriously, how feasible is that? Creating a GUI framework on top of WebGL and creating an abstraction layer (which the original article advises against) would be a huge performance boost compared to current webpages, wouldn't it?


WebGL should never have been invented, at least on current architectures. It's a ticking time bomb, precisely for the reasons this article explored: drivers are complex and buggy black boxes that live in kernel space, so ring 0 access is never too far from reach for many exploits. WebGL has the potential to be an exploit vector much more severe than what we've seen with other auxiliary web technologies like JavaScript or Java applets. Why not confine the web to (mostly) nicely behaved x86 VMs?


Personally I think browsers will become service layers that expose the underlying OS in controlled and specific ways. Not a complete sandbox, but not a free-for-all either.

We've started creeping that way with a lot of the newer HTML5 stuff, but we have a long way to go.


Looking at the bug bounty records from Chrome and Mozilla, it seems that fuzzing-for-bounties vulnerability research has largely moved away from WebGL. That must be at least partly due to WebGL reaching a reasonable level of robustness. Bounties may be harder to claim since bugs can be hardware/OS/driver-version dependent, but still, there used to be a lot more WebGL bounties dealt out.


To be fair, I think the takeaway isn't don't make/use abstraction layers, but don't make a one-size-fits-all abstraction and then hide the lower level stuff. A targeted abstraction that correctly supports the domain you are in is still a good thing. That's what the engine developers end up being here, an abstraction on top of this low level API that gives many devs exactly what they want.

Edit: To be clear, I don't think writing a web rendering engine meant to do the same thing as the browser on top of WebGL will be faster in any respect, as really the HTML and CSS are just inputs fed into super optimized low level rendering engines. I don't see how JS could compete in that domain.


Reminds me of MIT's Exokernel- make the lower level interfaces as low-level as possible (while still providing security) and let user-level libraries provide various abstractions suited to different use cases.

No file systems, no network protocols, no processes; just a very thin layer that lets userspace implement those if/how it wants to.


Parts of CSS3 are already faster to implement "by hand" in javascript rather than using the browser's "native" implementation.

HTML/CSS was designed for requirements that don't apply to many modern websites. I would not be surprised if it were possible to implement a rendering engine that performed better for relevant use cases, e.g. perhaps assuming a grid as a first-class part of the layout.


Wouldn't that kill accessibility, selectable text, being able to call system services (i.e., on OS X hitting cmd+shift+D to get a dictionary definition of a word), any extensions which modify the page hierarchy, automatic translation/localisation, and so on?


Exactly; bypass this entire service layer maintained by thousands of engineers and try to do it in Javascript?! I think that's probably not a good idea.


Someone did this with their WebGL Terrain demo [0]. You can find a detailed post-mortem in the author's blog [1].

[0] http://zephyrosanemos.com/windstorm/current/live-demo.html

[1] http://zephyrosanemos.com/


Take a look at what Flipboard did recently, with their whole mobile site built in JS and canvas

http://engineering.flipboard.com/2015/02/mobile-web/


Except that it would all still be done via a layer of javascript (using asm.js may mitigate this)


Native JavaScript performance is not generally what makes web apps slow.


Just because it isn't the bottleneck today doesn't mean it isn't slow.


It's definitely feasible! A goal for my side project is to give developers a React-like declarative UI with a WebGL rendering backend that abstracts away the graphics programming.


Isn't Google's Native Client the Vulkan of the web? Or rather wasn't it, given adoption outside of Chrome is unlikely to ever happen.



Yup, I have started to see the same as well. Having a simpler web solution is hard to achieve; over time, it gets messier and messier. I'm now doing small and simple examples in a newer language such as Go to see if it is any easier there to keep software simple to 1) run, 2) maintain, 3) code.


> I wonder if/when web developers will reach a similar point of software layer implosion.

Will doing things the "good but hard" way force me to write tons of boilerplate? Because as a (currently iOS) programmer, as soon as I start having to write generic, structural code, I find it difficult to sustain my interest in the project. I want to go from idea to something that works as quickly as humanly possible; having to remake the universe from scratch just because somebody somewhere needed the performance is not what I got into the game for.

(I'm currently working with OpenGL for the first time, and, ugh, all the state management stuff and shader boilerplate is driving me crazy — especially after years of the relative ease of dealing with UIKit! Am I to understand that Vulkan will only make things worse in this regard? I guess it's good for engine devs, but I would not want to code like this in my day-to-day.)


Apparently, a Vulkan "Hello, world!" program takes about 600 lines of code!

https://youtu.be/EUNMrU8uU5M?t=1h33m54s


600 lines of code really isn't that much. At my day job a single file can be 2-6x that long, and there are a lot of files to deal with. Any project that gets beyond "Hey here's a neat little snippet of code that does something" will very quickly begin to dwarf those 600 lines. Also, from the original post talking about the new API and current graphics drivers, every developer is already implicitly using a couple million lines of code by using the OpenGL/DirectX APIs. They're just hidden from the developer in a closed-source binary.


"This begs the question: what about DirectX 12 and Apple's Metal? If they didn't have similar engine developer involvement, that's clearly a point against DX12/Metal support in the long run."

There are several parts to this. DirectX 12 is going to run on Windows 10, and its adoption by developers is probably going to be entirely driven by how much of their target audience can be persuaded to upgrade from Windows 7. If Windows ships with better DirectX 12 drivers than Vulkan drivers (and I would bet that it will), then you'll probably see more AAA games written to DirectX 12. Metal, likewise, is the preferred graphics API for reaching iPhone users, who are plentiful and (compared to Android users) lucrative.


> Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship).

I've been a full-time GPU engineer at a couple of major companies for about 6 years now. My colleagues at my current company have all been GPU engineers (and video, prior to that) in the 20-30 year range.

While a lot of the technical details are correct, there are a lot of red-flags in this guy's essay. For instance:

> Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc)

Intel owns, what? 60? 70 percent of the desktop market? More? PVR, Qualcomm, etc. own 90% of the mobile market? Intel has an enormous number of GPU engineers, far more than AMD, and easily comparable to (or more than) NV.

Now, to OP's statement about DX12 & Metal: obviously they were developed jointly with AAA game developers. The major title developers are in constant contact with every major IHV and OS vendor (less so for open-source) at all stages with respect to driver- and API-development.

Furthermore, it's not like the major architects and engineers of these driver teams are total tools; these guys live and breathe GPU architecture (and uarch), and are intimately familiar with what would make a good driver.

The impetus for these low level APIs is differentiation and performance. When those couldn't be had from GPU HW, the next place to look is "one up the stack": driver & API. It surprised no one in the industry that Mantle/Vulkan/Metal/DX12 all came out at about the same time: we've all been pushing for this for years.


Thanks for this. It makes sense that DX12 was developed alongside engine devs. As jsheard notes, Unity and Unreal Engine have already announced support for it, and we're already seeing tech demos of Unreal Engine on DX12:

https://www.youtube.com/watch?v=FIk8Q8luWsI


The Xbox One is being updated to support DX12 as well, so the major engines will adopt it regardless of whether Windows 10 gets any traction - in fact, Unity and UE4 already have.


We don't know yet whether Apple will adopt Vulkan or not. Vulkan is not just a direct competitor to Metal, but also an evolution of OpenGL ES, and so far Apple has adopted OpenGL ES.


Vulkan is purposely not an evolution of OpenGL ES. It's a clean break from that API and has nothing in common with it.


I wouldn't call it a software layer implosion -- most people are going to continue to use abstractions on top of these APIs for game development. Unity is analogous to Rails, etc ...


Regarding your last paragraph, the Extensible Web Manifesto was created to encourage new APIs to expose low-level capabilities that can be extended and smoothed-over by libraries, reducing the surface area of privileged browser code and giving web developers more control.

Whether the Extensible Web Manifesto will be successful remains to be seen.


Both are valuable. Plenty of people don't need the full power & debuggability of a difficult high-performance API, but plenty of people do, too.

Much like the simultaneous ubiquity of C/C++ and Python, in their respective spaces.


I wonder if we'll see old-style OpenGL implemented on top of Vulkan. Actually, one thing I would love to see is Vulkan on Linux accessible from userspace without any involvement of a window server, so that you can write the rest of the APIs on top of it. Maybe even have a graphical console running on it.


There's already work to do this sort of thing on top of GL (e.g. glamor, the X driver that uses GL for all its drawing and works on anything Mesa and the kernel support).

I don't think anything will stop someone from being able to run Vulkan directly on top of libdrm like you can right now with GL. At least for open source drivers like Intel's.


Totally unrelated, but this has been sort of my experience switching to golang as well.


That last quote really resonates with my experience of software dev in general. "Easy" APIs generally do it by making a ton of assumptions. Modifying those assumptions becomes a huge pain point, and that's if the API even allows you to modify that particular assumption.


Exactly. This is why I like Rich Hickey's Simple Made Easy [1] so much. Basically with easy constructs it becomes harder to build simple systems, even though the simple constructs are harder to learn.

[1]: http://www.infoq.com/presentations/Simple-Made-Easy


I love this talk, and Rich Hickey's talks in general, but I think this goes beyond that.

On the one hand you want full control of the HW, much like you did with game consoles.

On the other you want security: this model must work in a sandboxed (os, process, vm, threads, sharing etc.) environment, along with security checks (the oldest one I remember was making sure vertex index buffers given to the driver/API don't reference invalid memory - something you would verify for a console game through tests, but something that the driver/OS/etc. must enforce and stop in the non-console game world: PC/OSX/Linux/etc.)

From the little I've read on this API, it seems like security is in the hands of the developer, and there doesn't seem to be much OS protection, so most likely I'm missing something... but whatever protection is to be added definitely would not have been needed in the console world.

Just a rant; I'm not a graphics programmer, so it's easy to rant on topics you've only scratched the surface of...

----

(Not sure why I can't reply to jeremiep below, but thanks for the insight.) I was only familiar with the one I posted above (and that was back in 1999; back then, if my memory serves me well, drawing primitives on Windows NT was slower than on 95, because NT had to check whether index buffers were referencing out-of-bounds memory, while nothing like this happened on 98).


Security is actually much easier to implement on the GPU than on the CPU, for the simple reason that GPU code has to be pure in order to get this degree of parallelism. A shader is nothing more than a transform applied to inputs (attributes, uniforms and varyings) in order to give outputs (colors, depth, stencil).

Invalid data would simply cause a GPU task to fail while the other tasks happily continue to be executed. Since they are pure and don't interact with one another there is no need for process isolation or virtualization.

Basically, it's easy to sandbox a GPU when the only data it contains are values (no pointers) and pure functions (no shared memory). Even with the simplified model the driver still has everything it needs to enforce security.


You are describing a GPU from the 1990s. A modern GPU is essentially a general-purpose computer sitting on the PCIe bus, able to do anything the CPU can. It does not have to run pure functions (e.g. see how it can be used for normal graphics tasks in [1]) and can write to any location in the memory it can see. Securing it is as easy/hard as securing a CPU: if you screw up and expose some memory to the GPU, it can be owned just like memory exposed to a CPU task [2].

1. https://software.intel.com/en-us/blogs/2013/07/18/order-inde...

2. http://beta.ivc.no/wiki/index.php/Xbox_360_King_Kong_Shader_...


GPUs these days have MMUs and have address spaces allocated per context. It's implemented internally to the driver though so you don't see it. And it's normally mapped differently, but the point of AMD's HSA stuff is to make the CPU's and GPU's MMU match up.


(To answer the lack of a reply button:)

This is just HN adding a delay until the reply link appears, related to how deeply nested the comment is. The deeper, the longer the delay. It's a simple but effective way to prevent flame wars and the like.


Yep, I love that talk and I find myself pointing people towards it all the time :)


> "Easy" APIs generally do it by making a ton of assumptions.

Well, perhaps a better way of putting it is that "easy" APIs try to model a domain. Sometimes the API designer just nails it, and folks eventually take it for granted. The problem is now "solved". For example, the Unix userspace filesystem API (open, close, read, write, etc.) is pretty damn solid for what it is. I admit that I took it for granted until I worked at a place where someone who'd never seen a filesystem API (!!) was tasked with creating one from the ground up. You can guess how that went.
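
For what it's worth, here is a minimal sketch of how small that surface actually is (the filename is just illustrative):

  // The classic Unix file API in a nutshell: open/read/write/close.
  #include <fcntl.h>
  #include <unistd.h>
  #include <cstdio>

  int main() {
      int fd = open("example.txt", O_RDONLY);        // acquire a descriptor
      if (fd < 0) { perror("open"); return 1; }

      char buf[4096];
      ssize_t n;
      while ((n = read(fd, buf, sizeof buf)) > 0)    // read until EOF or error
          write(STDOUT_FILENO, buf, (size_t)n);      // copy to stdout

      close(fd);                                     // release the descriptor
      return 0;
  }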

The downside case is where the API's model either never cut it in the first place, or as @wtallis illuminates above with OpenGL, where the model just ages poorly[1].

[1] https://news.ycombinator.com/item?id=9194138


No, it's about the assumptions and it's a mistake to move away from that mindset.

The Unix filesystem API isn't complete; if it were, ioctl wouldn't exist. Assumptions all around, and ioctl is the way you adjust those assumptions.


Gah. At our company a bunch of early Rails apps metastasized into financial/billing modules that do DB access using ActiveRecord. With all the convenient pluralization and automatic field/method naming. And are now on the critical path for our revenue. It's a daily reminder to me of how the "easy API" thing can go wrong :-)


>I wonder if/when web developers will reach a similar point of software layer implosion. "Easy to code against, but a bitch and a half to debug or tune"

For me it was 3 years ago in web development, and 6 years ago in enterprise Java.


Personally, Angular broke me. We built an app that was just a table view basically, and it refreshed the data every minute or whatever. For users whose table got to be 100 or more rows, the performance was awful, and there was nothing we could do about it besides rewrite the whole thing a different way.

I still love JS, and coding for the web. But I don't use frameworks now.


No idea why you were downvoted. Maybe people perceived it as an implication of "I knew better 3 years ago", but it's not that.

It's hard to imagine someone working with Enterprise Java and thinking "This isn't helping. This isn't making things more reliable".


For the record, I meant "with Enterprise Java and not thinking..."


So, this suits big engine developers and AAA studios really well, right, because they can spare that time.

Indies are going to increasingly look at the web and other interfaces because of the easy layers.


This might hold true if you're only looking at the actual design/code side of the lifecycle, but it sounds like a key point of the new APIs is to make new projects more testable, debuggable and optimizable. Given most projects spend inordinate amounts of time on those three activities (and it sounds like games even more so), this should be a win for everyone.


And even if it isn't, the video game industry just got revolutionized by most of the major engines becoming free, so indie devs don't have to drop down to a lower level of abstraction in order to have a chance at competitive performance.


Can we please stop calling Unity/Unreal "free"? Unity still sells a Pro version, and Unreal takes a cut of revenue after a certain amount of $ earned. I'm not arguing against either model, and it's GREAT for hobbyists, but these engines still cost professional studios money.


The salient point is the barrier to entry. The fact that they still have some form of revenue related to the engines doesn't raise the up-front cost at all, and the royalties (where they exist) are structured in a way that cannot prevent them from being a viable option for indie studios. The engines will be making the benefits of new graphics APIs properly accessible to indie studios in all the ways that matter; it won't be a AAA-exclusive thing.

(And professional studios don't have to license the ancillary tools and services and source code access of the Pro editions in order to ship a game.)


It is already integrated into Source 2, Unity has started on it (they also have Metal support), and Epic is also going that way.

It will make it to indie engines that aren't company-funded as well, eventually.

This actually gives a ton of power to current engine developers in the market, so getting them involved was a big win.

I am a huge fan of OpenGL ES / WebGL (also based on OpenGL ES) and was happy that we finally had mobile win that battle, but with new graphics rendering layers like Metal, and with mobile needing more optimization, more needs to be done. The driver mess is a big problem and limits innovation as well.


With all the free engines available these days, the situation for indies has never really been better.


The idea that nVidia and AMD are detecting your games, and then replacing shaders, optimizing around bugs, etc should be absolutely terrifying. And vice versa, the idea of having to fix every major game's incredibly broken code is equally terrifying. And it's a huge hit to all of us indie devs, who don't get the five-star treatment to maximize performance of our games. So overall, I'd say this is a step in the right direction.

However! Having written OpenGL 3 code before, the idea that Vulkan is going to be a lot more complex, frankly scares the hell out of me. I'm by no means someone that programs massive 3D engines for major companies. I just wanted to be able to take a bitmap, optionally apply a user-defined shader to the image, stretch it to the size of the screen, and display it. (and even without the shaders, even in 2015, filling a 1600p monitor with software scaling is a very painful operation. Filling a 4K monitor in software is likely not even possible at 60fps, even without any game logic added in.)

That took me several weeks to develop just the core OpenGL code. Another few days for each platform interface (WGL/Windows, CGL/OSX, GLX/Xorg). Another few days for each video card's odd quirks. In total, my simple task ended up taking me 44KB of code to write. You may think that's nothing, but tiny code is kind of my forte. My ZIP decompressor is 8KB, PNG decompressor is another 8KB (shares the inflate algorithm), and my HTTP/1.1 web server+client+proxy with a bunch of added features (run as service, pass messages from command-line via shared memory, APIs to manipulate requests, etc) is 24KB of code.

Now you may say, "use a library!", but there really isn't a library that tries to do just 2D with some filtering+scaling. SDL (1.2 at least) just covers the GL context setup and window creation: you issue your own GL commands to it. And anything more powerful ends up being entire 3D engines like Unity that are like using a jack hammer to nail in drywall.

And, uh ... that's kind of the point of what I'm doing. I'm someone trying to make said library. But I don't think I'll be able to handle the complexity of all these new APIs. And I'm also not a big player, so few people will use my library anyway.

So the point of this wall of text ... I really, really hope they'll consider the use case of people who just want to do simple 2D operations and have something official like Vulkan2D that we can build off of.

Also, I haven't seen Vulkan yet, but I really hope the Vsync situation is better than OpenGL's "set an attribute, call an extension function, and cross your fingers that it works." It would be really nice to be able to poll the current rendering status of the video card, and drive all the fun new adaptive sync displays, in a portable manner.
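
For reference, the "set an attribute, call an extension function, and cross your fingers" dance currently looks roughly like this on Windows (a hedged sketch; the function pointer can simply come back null, which is the finger-crossing part):

  // Today's OpenGL vsync situation: query the WGL_EXT_swap_control entry
  // point at runtime and hope the driver honors it.
  #include <windows.h>
  #include <GL/gl.h>

  typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

  void enable_vsync_best_effort() {
      // Requires a current GL context; otherwise wglGetProcAddress returns null.
      PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
          (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
      if (wglSwapIntervalEXT)
          wglSwapIntervalEXT(1);   // 1 = sync to vblank; no way to poll what actually happened
      // else: extension missing -- render unsynced and hope for the best
  }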


> The idea that nVidia and AMD are detecting your games, and then replacing shaders, optimizing around bugs, etc should be absolutely terrifying.

Actually it's par for the course, if you care about backwards compatibility. Raymond Chen has many, many stories about the absolutely heroic lengths Microsoft has gone to ensure that popular, yet broken, programs continue to work smoothly between Windows upgrades [1][2][3][4][5].

On Depending on Undocumented Behavior:

[1] http://blogs.msdn.com/b/oldnewthing/archive/2003/12/23/45481...

[2] http://blogs.msdn.com/b/oldnewthing/archive/2003/10/15/55296...

Why Not Just Block Programs that rely on Undocumented Behavior?

[3] http://blogs.msdn.com/b/oldnewthing/archive/2003/12/24/45779...

Who cares about backwards compatibility? (A lot of people):

[4] http://blogs.msdn.com/b/oldnewthing/archive/2006/11/06/99999...

Hardware breaks between upgrades, too:

[5] http://blogs.msdn.com/b/oldnewthing/archive/2003/08/28/54719...

Joel Spolsky's (somewhat dated) "How Microsoft Lost the API War" gives a good overview of the above concerns:

[6] http://www.joelonsoftware.com/articles/APIWar.html


I'd argue there's also a bigger problem. There were no conformance tests for OpenGL until recently. They finally wrote some for ES and then started backporting them to OpenGL.

Worse, they rarely check the limits, only the basics. They also didn't check anything related to multiple contexts. The point being, the drivers are/were full of bugs in the edge cases.

Testing works. If there were tests that were relatively comprehensive and that rejected drivers that failed the edge cases, they'd have gone a long way toward mitigating these issues, because the devs' apps wouldn't have worked.

There's also just poor API design. Maybe poor is a strong word. Example: uniform locations are ints, so some apps assume they'll be assigned in order and then fail when they get to a machine where they're not. Another example: you're allowed to make up resource ids. OpenGL does not require you to call `glCreateXXX`; just call `glBindXXX` with any id you please. But of course if you do that, maybe some id you use is already being used for something else. So id=1 works on some driver but not on another.
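
To make those two examples concrete, here's a hedged sketch of the trap versus the spec-safe call in each case ("program" and "u_color" are placeholders, and a current GL 2.0+ context plus loader is assumed):

  void setup(GLuint program) {
      // Trap 1: assuming uniform locations are handed out in declaration order.
      // GLint color_loc = 0;                   // happens to work on some drivers
      GLint color_loc = glGetUniformLocation(program, "u_color");  // spec-safe
      (void)color_loc;

      // Trap 2: making up texture ids instead of asking GL for them.
      // GLuint tex = 1;                        // allowed by the spec, but may collide
      GLuint tex;
      glGenTextures(1, &tex);                   // let the driver assign the id
      glBindTexture(GL_TEXTURE_2D, tex);
  }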

I'm excited about Vulkan, but I'm a little worried it's actually going to make the driver bug issues worse. If using it in a spec-compliant way is even harder than OpenGL, and your app just happens to work on certain drivers, other driver vendors will again be forced to implement workarounds.


> but there really isn't a library that tries to do just 2D with some filtering+scaling

While not a 2D library, https://github.com/bkaradzic/bgfx is a very sweet abstraction layer over platform graphics APIs


Would you mind sharing the scaling/filtering code you spoke of?


Sure. Here is the main index:

https://gitorious.org/bsnes/bsnes/source/1a7bc6bb8767d6464e3...

The platform abstraction layers are wgl.cpp, cgl.cpp and glx.cpp

Go inside the opengl/ subfolder to find all the platform-agnostic OpenGL code. opengl/surface.hpp gets particularly fun using matrix multiplication to compute model view / projection / texture coordinates by hand (which you need to do for the new GL3 / no-fixed-function-pipeline stuff.) Also has lots of required internal allocations.

In my own case, I allow user-defined additional shader passes, and that adds to the code a bit. But note that another thing about GL3 is that you do actually have to create and execute at least one vertex + one fragment shader.

GL2's FFP, while still difficult, was a whole lot easier. The driver did a lot of the work you have to do manually if you want to follow GL3.2-core/GL ES/etc.
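
For a taste of what that manual work looks like, here is a minimal sketch of the mandatory shader pair for nothing more than a stretched, textured quad (the attribute/uniform names are made up; the MVP matrix mentioned above still has to be computed by hand on the CPU side):

  // Under GL3.2-core even a plain bitmap blit needs both shader stages;
  // the fixed-function pipeline that used to do this for you is gone.
  static const char* kVertexSrc =
      "#version 150\n"
      "uniform mat4 u_mvp;\n"          // model/view/projection, built by hand
      "in vec2 a_position;\n"
      "in vec2 a_texcoord;\n"
      "out vec2 v_texcoord;\n"
      "void main() {\n"
      "  v_texcoord = a_texcoord;\n"
      "  gl_Position = u_mvp * vec4(a_position, 0.0, 1.0);\n"
      "}\n";

  static const char* kFragmentSrc =
      "#version 150\n"
      "uniform sampler2D u_texture;\n"
      "in vec2 v_texcoord;\n"
      "out vec4 frag_color;\n"
      "void main() {\n"
      "  frag_color = texture(u_texture, v_texcoord);\n"
      "}\n";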


  These are the vanishingly few people who have actually 
  seen the source to a game, the driver it's running on, 
  and the Windows kernel it's running on, and the full 
  specs for the hardware. Nobody else has that kind of 
  access or engineering ability.
One option is to release the source of the drivers, making it possible for motivated engine developers to do this without explicit access to AMD developers.

If they did this on Linux as well, then developers would have access to the full stack, in order to be able to learn about how the lower levels work and more easily track the problems down without simply a whole lot of trial and error on a black box.


It seems to me, based on this article, that providing support to major game studios and releasing driver updates to accommodate their games is a major revenue source for GPU manufacturers. I guess that's why only Intel releases open source drivers: their GPUs aren't really expected to provide high performance for games.

I guess with Vulkan/Mantle, the drivers are just trivial hardware abstraction layers, so the GPU manufacturers no longer have to make up for the cost of developing complex drivers by doing part of the work on every AAA title.

We might see Nvidia and AMD releasing open source Vulkan drivers, and if not, it would at least be easier for reverse-engineering projects like Nouveau to produce competitive drivers. This at least means you don't have to have non-free code running in ring 0 to use a discrete graphics card on Linux.


Yeah, one of the things this doesn't mention is that, to a certain extent, these issues are self-imposed and the GPU vendors make money from them. In particular, one of the big reasons OpenGL games are so standards-violating is that NVidia consistently refuse to enforce the specs. So software is developed against the NVidia driver's behaviour and breaks on AMD, Intel and open source drivers that enforce the specs, then people blame this on the drivers being crap and buy NVidia hardware. This was a big reason why Wine worked badly on non-Nvidia hardware for a long time.


I haven't seen it said in this article that studios are paying for that. It seems to me that it is in the interest of AMD/Nvidia to have the best performance for major games; I would not be surprised if they do it for free, and even ask to get the games in advance so they can prepare specific drivers.


It is coming slowly, but there is a tradition of secrecy. Imagination allows you to see the assembly for a shader and has published its latest PowerVR ISA. AMD has an open-source backend in LLVM for its GPUs. Someone told me that Sony ships nice advanced analysis tools with the PlayStation; I'm not sure if you have access to the assembly, though.


To me, this neatly explains why just about every performance comparison of video drivers on Linux shows the proprietary drivers having an edge, even if only a slight one.

I've never actually dived into the source code for the open source video drivers, but I'm now curious how much time the devs of the open-source drivers have to spend on anticipating and routing around the brain damage of the programs calling them. Do they similarly try to find a way to do the right thing even when the app crashes because it didn't get the wrong behavior it asked for? Or is the attitude more akin to 'keep your brain damage out of our drivers and go fix your own damned bugs'?


I've spent a fair bit of time in the open source radeon and nouveau drivers and I never noticed any workarounds in place solely to fix a broken client.

Open source driver developers don't have the time or resources to pull off a stunt like that!


Especially since an open source client can be fixed on the client side. No reason to include nasty hacks for buggy clients.


Or, more importantly, motivation.


There are workarounds for broken behaviour though: http://cgit.freedesktop.org/mesa/mesa/tree/src/mesa/drivers/...

I think that in some cases the developers (at least Intel) have reached out to game developers first, to get them to fix their own bugs/out of spec behavior.


Performance gains are worth any trouble IMO, here is why:

While at GDC I saw a DirectX 12 API demo (DX12 is more/less equivalent to Vulkan from an end-goal perspective). On a single GTX 980:

DX11 was doing ~1.5 million drawcalls per second. DX12 was doing ~15 million drawcalls per second.

This API demo will ship to customers, so I am pretty sure we can easily verify if these are bunk figures. But a potential 10x speedup, even if under ideal conditions, is notable.


That's great, and for a project like yours (Voxel Quest), it'll definitely help.

I'm wondering, though, if the demo also uses the CPU for other things - physics, audio, collision, path-finding or some other form of AI, state machines, game script, game logic. My point is that 10x might be possible (on a 10-core CPU) if the CPUs are only used for graphics, but there are other things that come into play... But even then, even if only half the CPUs are used for graphics, it's still better.

The bigger question to me is how game developers on the PC market (OSX/Linux included) would scale their games. You would need different assets (level of detail? mip-mapped texture levels? meshes?) - but tuning this to work flawlessly on many different configurations is hard...

Especially if there are applications still running behind your back.

E.g. you've allocated all CPUs for your job, only to have them taken by some background application - often a browser, chat client, your bitcoin miner or who knows what else.


  The bigger question to me is how game developers on the
  PC market (OSX/Linux included) would scale their games.
  You would need different assets (level of detail?
  mip-mapped texture levels? meshes?) - but tuning this to
  work flawlessly on many different configurations is
  hard...
This isn't really any different from what it has been until now. All AAA games have different levels of detail for meshes/textures/post-processing etc. Even when not exposed to the user as options in a menu, these different levels of detail exist to speed up rendering of, for example, distant objects or shadows where less detail is needed. DX12/Vulkan is not going to change anything in that regard.

Doing a good PC port is not as easy as it may seem at first glance. Different hardware setups and little control over the system cause lots of different concerns that simply don't exist on consoles, which means nobody bothered taking that into account when the game was originally built. These new APIs will help though; the slow draw calls on PC are a pain compared to the lightning-fast APIs on consoles (even Xbox 360/PS3!).


The demo they were showing was as basic as it gets - drawing a whole lot of textured boxes (probably in individual draw calls) to the screen. From what I gathered you could not send large jobs to the GPU without blocking off smaller jobs (i.e. no thread priority) - at least according to one engineer from NVIDIA, which is something I was hoping they might implement as it would benefit applications like VQ which are attempting to generate things while running the game.

It's possible that these perf gains are actually from a single thread, and come from eliminating the default driver overhead that would be in these drawcalls. To some extent this could be mitigated by batching, but it is still ideal to have the option to do far more drawcalls in a frame.
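
To illustrate the batching point, the usual workaround under current APIs is to fold many identical submissions into one instanced call; a hedged sketch (assumes a GL 3.1+ context, a loader, and a VAO whose instanced attributes already carry the per-box transforms):

  // Naive path: one draw call per box -- exactly where per-call driver overhead bites.
  void draw_boxes_naive(GLuint vao, int box_count) {
      glBindVertexArray(vao);
      for (int i = 0; i < box_count; ++i) {
          // ...upload this box's transform as a uniform here...
          glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, nullptr);
      }
  }

  // Batched path: one instanced call; per-box transforms come from an instanced
  // vertex attribute, so the CPU submits once instead of thousands of times.
  void draw_boxes_instanced(GLuint vao, int box_count) {
      glBindVertexArray(vao);
      glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, nullptr, box_count);
  }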


> This API demo will ship to customers, so I am pretty sure we can easily verify if these are bunk figures. But a potential 10x speedup, even if under ideal conditions, is notable.

Valve was showing DOTA 2 running on Vulkan with Source 2 at great framerates too (with many peons on screen), on... Intel HD graphics.

So it does seem that we will get a large performance boost with Vulkan / DirectX 12.


> What has finally been made clear is that it's okay to have difficult to code APIs, if the end result just works.

So true, and yet we have all those crazy JavaScript frameworks trying to abstract everything away from developers. There's a lesson in there.


You cannot abstract away complexity. This is a lesson people relearn every 2-3 years or so.

From the JavaScript frameworks I use only Backbone and jQuery - the first as a simple router, the second for easy DOM traversal.


Hearing web developers discuss the dangers of abstraction is just about the craziest thing in computer science. JavaScript and the DOM sit atop layer upon layer upon layer of abstraction that hides an absolute mountain of complexity around networking, memory management, hardware capabilities, parsing, rendering, etc.

Believing you're somehow avoiding abstraction because you only use a couple of additional libraries on top of JS and the DOM is like insisting on a 99th storey apartment instead of a 100th storey one, because you prefer being close to the ground.

You can by all means argue that all the big JS frameworks are poor abstractions. But a sweeping statement like "you cannot abstract away complexity" completely ignores the fact that web development as a field is only possible because of the successful abstraction of huge quantities of complexity.


Well, you have to take something as a reasonable base, otherwise the argument becomes absurd quickly. We think of bare-metal (CPU-level) as our default base and take that for granted. But the CPU itself has many abstractions over physics, multiple levels of caching, branch prediction, some built-ins and so on. A modern CPU sits much deeper in the skyscraper that you describe, but it's still nowhere near ground level.

Similarly, since with web browsers we get what we get, we might as well consider that our reasonable base for web development; it's not like anyone is going to do that part differently any time soon.


I don't think anyone is saying the browser et al. aren't abstractions. Think of it like this: no amount of abstraction will save you from having a network connection go away. That sort of use case adds complexity, and you can't get around it.

The browser severely limits what you're able to do with the network connection, but at the end of the day dealing with the network is a complex beast.

If you doubt it, look no further than the HTML5 caching APIs. The tooling around them is absolutely terrible, but even if it weren't, there's an inherent complexity with that sort of thing.


> You cannot abstract away complexity. This is a lesson people relearn every 2-3 years or so.

This is so true. You can, however, successfully abstract away a bunch of boilerplate for common actions, and enforce patterns. The key to a framework (and the skill of the developer) is knowing when and how to avoid the abstractions.


> You cannot abstract away complexity.

No, you have a selection bias: you don't notice the abstractions that have worked.


Tell that to the OS guys.


I think you can... but it's really hard to design an architecture that's both powerful and easy to use. It's important to have different layers of abstraction and allow developers to develop at the level of abstraction needed to obtain the control they need.

Often it's also a question of how things are composed, not just how much they're abstracted away. If your use case dictates that you need to separately control something that's been composed into one thing, then you're usually out of luck.


JavaScript is, in and of itself, a massive abstraction.


> You cannot abstract away complexity.

Oh sure you can. It just costs something. The most common tradeoff is that you abstract away complexity at the cost of performance.


> You cannot abstract away complexity

That makes no sense.

Perhaps what you meant is that there is a limit to how much a problem can be simplified.


I've been discussing this a fair amount with a colleague, as I'm curious to see further uptake of Linux as a gaming platform. I think the biggest barrier to this is that when games are written for D3D, all of a sudden your game cannot talk to any graphics API outside of Windows.

More companies are starting to support OpenGL, but I'm just curious as to why uptake is so slow. It seems like poor API design may be a part of it. I'd like to see more games written with OpenGL support, and I think it's happening slowly but surely. We even see weird hacks to add Linux support at this point... Valve open-sourced a D3D -> OGL translation layer [0], though they haven't supported it since dumping it from their source.

[0] https://github.com/ValveSoftware/ToGL


Having shipped multiple PC games with separate D3D/GL backends, and one game with GL only, a contributing factor here is that OpenGL (at least on Windows) still totally sucks.

To be fair, driver vendors have gotten a lot better. And the spec itself has matured tremendously, with some great features that don't even have direct equivalents in Direct3D.

Sadly as a whole the API still sucks, especially if you care about supporting the vast majority of the audience who want to play video games. Issues I've hit in particular:

Every vendor's shader compiler is broken in different ways. You HAVE to test all your shaders on each vendor, if not on each OS/vendor combination or even each OS/vendor/common driver combination. Many people are running outdated drivers or have some sort of crazy GPU-switching solution.

The API is still full of performance landmines, with a half-dozen ways to accomplish various goals and no consistent fast-path across all the vendors.

At least on Windows, OpenGL debugging is a horror show, especially if you compare it with the robust DirectX debugging tools (PIX, etc) or the debugging tools on consoles.

Threading is basically a non-starter. You can get it to work on certain configurations but doing threading with OpenGL across a wide variety of machines is REALLY HARD, to the point that sometimes driver vendors will tell you themselves that you shouldn't bother. In most cases the extent of threading I see in shipped games is using a thread to load textures in the background (this is relatively well-supported, though I've still seen it cause issues on user machines.)

OpenGL's documentation is still spotty in places and some of the default behaviors are bizarre. The most egregious example I can think of is texture completeness; texture completeness is a baffling design decision in the spec that results in your textures mysteriously sampling as pure black. Texture completeness is not something you will find mentioned or described anywhere in the documentation; the only way to find out about it is to read over the entire OpenGL spec, because they shoved it in an area you wouldn't be looking at to debug texturing/shading issues. I personally tend to lose a day to this every project or two, and I know other experienced developers who still get caught by it.
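
(For anyone bitten by this: the usual culprit is the default minification filter expecting a mipmap chain that was never uploaded. A hedged sketch of the two standard fixes, with tex/width/height/pixels assumed to already exist:)

  // Texture "completeness" gotcha: GL_TEXTURE_MIN_FILTER defaults to
  // GL_NEAREST_MIPMAP_LINEAR, so a texture with only level 0 uploaded is
  // incomplete and samples as black. Either fix below makes it complete.
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, pixels);

  // Fix A: stop asking for mipmaps.
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

  // Fix B: actually provide the mipmap chain (GL3+).
  // glGenerateMipmap(GL_TEXTURE_2D);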

I should follow with the caveat that Valve claims GL is faster than Direct3D, and I don't doubt that for their use cases it is. In practice I've never had my OpenGL backend outperform my Direct3D backend on user machines, in part because I can exploit threads on D3D and I can't on GL.


If Vulkan solves some of these problems, does it seem likely to you that the tooling and community could grow up around it and shift the momentum?


Absolutely. Tooling & community around GL have improved over the past few years, and Valve has been a part of that. If there's a concerted effort from multiple parties to improve things, Direct3D could probably actually be dethroned on Windows.


"Former Nvidia dev" - s/he did an internship there. Maybe too weighty a HN title.


I'd agree, but they certainly know what they're talking about so I don't really mind. They obviously did some form of software development at Nvidia if only as an intern.


Right, he's also worked as a game developer, has been a moderator on GameDev.Net for many, many years, and led the SlimDX project to implement one of the only remaining DirectX libraries for .Net. He knows what he is talking about.


For reference, here's a tutorial on drawing a triangle in Apple's Metal, which follows some of the same design principles as Vulkan, Mantle and DX12: http://www.raywenderlich.com/77488/ios-8-metal-tutorial-swif...

So, essentially, no validation, you have to manage your own buffers (with some help in DX12 I think), you can shoot yourself in the foot all day long. But if you manage to avoid that, you are able to reduce overhead and use multithreading.


Former intern that admittedly sucked at his job gets promoted to NVIDIA Dev by HN moderators because clickbait. Srsly?


I'm guessing the title came from the tweet that went around about it as well (which Promit did correct, but that isn't going to matter when John Carmack retweets it): https://twitter.com/josefajardo/status/574719821469777921


What a horror show.

I always was interested in game programming, but never was able to get interested enough in graphics programming. I guess having a messy API is not an excuse, but you can really sense that CPUs and GPUs have very different compatibility stories, and that's maybe why it's not attracting enough programmers.

I still hope that one day there might be some unified compute architecture and CPUs will become obsolete. Maybe a system can be made usable while running on many smaller cores? Computers are being used almost exclusively for graphical applications nowadays; I wonder if having a fast single core with fat caches really matters anymore.


> maybe why it's not attracting enough programmers

Game companies have, in general, never had difficulty attracting programmers, especially young programmers. That's why they have such a reputation for low pay and crappy work conditions: because they can.


> maybe why it's not attracting enough programmers

Enough for what? It's not as if the world is short of games, or game engines.


From the looks of it, it seems Khronos's API may actually be significantly better/easier to use than DirectX?

I haven't heard of DX12 getting overhauled for efficient multi-threading or great multi-GPU support. DX12 probably brings many of the same improvements Mantle brought, but Vulkan seems to go quite a bit beyond that. Also, I assume DX12 will be stuck with some less than pleasant DX10/DX11 legacy code as well.


It sounds like Vulkan is not going to be easy to use. If anything, it is going to be harder to use.

For example, in OpenGL, you can upload a texture with glTexImage2D(), then draw with glDrawElements(), then delete it with glDeleteTextures(). The draw command won't be complete yet, but the driver will free the memory once it's no longer being used.

It sounds like with Vulkan, you'll need to allocate GPU memory for your texture, load the memory and convert your data into the right format, submit your draw commands, and then you'll need to WAIT until the draw commands complete before you can deallocate the texture. At every step you're doing the things that used to be automatic. So it's harder to use, but you're dealing with more of the real complexity from the nature of programming a GPU and less artificial complexity created by the API.
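
A rough side-by-side of what that means in code: the GL half below uses the real calls, while the Vulkan half is only a conceptual outline of the extra steps (the real API spreads them over many more structs and calls):

  // Today's OpenGL: the driver tracks the texture's lifetime for you.
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, pixels);
  glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, nullptr);
  glDeleteTextures(1, &tex);   // safe: the driver defers the actual free

  // Vulkan-style explicit model (conceptual outline, not the literal API):
  //   1. allocate device memory, create the image, bind them together
  //   2. convert/copy the pixel data into the layout the GPU expects
  //   3. record the draw into a command buffer and submit it with a fence
  //   4. wait on that fence until the GPU has finished the submission
  //   5. only then destroy the image and free its memory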


This is already how we do it on most consoles. The CPU has to sync with the GPU to know when the resources associated with draw commands are safe to release. Having something like glTexImage2D is way too high level for these graphics APIs and would be a luxury. Instead we get a plain memory buffer and convert manually to the internal pixel format.

There is no waiting to free resources however, unless either the CPU or GPU is starving for work. We have a triple-buffering setup and on consoles you also get to create your own front/back/middle surfaces as well as implement your own buffer swap routine. This provides a sync point where we can mark the resources as safe to release.
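
A common way to implement that sync point is a small deferred-release queue keyed to frame numbers; a hedged, API-agnostic sketch of the pattern (the "frame the GPU finished" value would come from a fence or the swap routine mentioned above):

  #include <cstdint>
  #include <deque>
  #include <functional>

  // Resources retired during frame N are only actually freed once the GPU
  // is known to have completed frame N's commands.
  struct DeferredReleases {
      struct Entry { uint64_t frame; std::function<void()> release; };
      std::deque<Entry> pending;

      void retire(uint64_t current_frame, std::function<void()> release) {
          pending.push_back({current_frame, std::move(release)});
      }

      void collect(uint64_t gpu_completed_frame) {
          while (!pending.empty() && pending.front().frame <= gpu_completed_frame) {
              pending.front().release();   // safe: the GPU no longer references it
              pending.pop_front();
          }
      }
  };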

It's definitely more complexity on the engine part, but as mentioned in the forum post it makes everything much, much easier when you get to debug and tune things. Also having to implement (or maintain) all of that engine infrastructure gives us a better perspective into how the hardware works and how to optimize for it.

However, even with Vulkan or DirectX 12 I doubt NVidia or AMD will publicly expose their hardware internals, which is critical for optimizing shader code. On consoles we get profilers able to show metrics from all of the GPU's internal pipeline. It makes it easy to spot why your shader is running slow without having to send your source code to the driver vendor.


Do you think -- since the Xbox One and PS4 both use an AMD GPU -- that the developer tools will improve on desktop PCs?


Hard to say, they are very different tools aimed at different audiences. Console SDKs are behind huge paywalls and all their tools and documentation are confidential. The developer networks even go as far as checking if the requesting IP address is whitelisted.

I haven't had to profile an AAA title for the desktop so far and therefore don't know much about the state of tools there. However, I heard only good things about Intel's GPA.


I wonder if we'll start to see lightweight Vulkan (or DX12) wrapper libraries that attempt to restore some of the convenience of old fashioned OpenGL, sans legacy baggage, without the complexity and opinionatedness of full-fledged game engines.


There is some interest from LunarG in reimplementing OpenGL in Mesa on top of Vulkan eventually. So yes. It's the same way Gallium3D can implement Direct3D and OpenGL against one intermediate representation (TGSI) and backend hardware interface (winsys).


Coming from a very CUDA-heavy background, having much more low-level access to the GPU and being able to (having to) manage memory manually are very familiar/welcome features. :)


Honestly for well designed modern-ish renderers this sort of thing tends not to be as big of a deal as it might sound.

In general most of the additional steps are things people already do, e.g. you shouldn't be deleting textures that are in use anyway because not all drivers have always handled that well; managing the equivalent of a command buffer is common even when the actual command buffer isn't something you have access to...


DX12 is actually a complete zero-legacy overhaul, just like Vulkan. It's designed with the same philosophy of tight, explicit control over how the GPU spends its time, and thin drivers - and from the information we have so far, it seems to have a quite similar design.


DX11 already had a way better multithreading story than current-gen OpenGL (at present you can't even reliably do your swapbuffers on another thread - I was told this by a driver developer from one of the vendors about a year ago.)

It's possible Khronos will leapfrog DX12 with Vulkan in regards to threading but I find it highly unlikely. We'll know when one of the two actually publishes documentation (likely not soon, based on how long it took for Mantle to become available to the public)


It's telling about the whole graphics programming scene that this has been such a well-kept secret from the public. Games you see are not in reality running on open APIs, but are based on back-alley arrangements between insiders, camouflaged as open API apps. I bet this post is very demoralizing, e.g. to hopeful indie game devs or people holding out hope for the driver situation improving on Linux.


Actually I think this points to a potential area of success on Linux. One of the points the post brings up is that there is only a very small group of people who have access to the Microsoft kernel code, the driver code, and the game code. On Linux, this pool is vast. With open source kernel and open source drivers, game developers can dig all the way through the stack to understand the entire system. I would hope this leads to better APIs that expose the nature of the system directly, and not having to impose workarounds for specific games inside the driver itself.


This has never been a secret. On Windows there's even a tool for nVidia cards that will let you see all the builtin workarounds for various applications and let you define some of your own.


Whether the game is running on an open API can be moralizing or demoralizing to an open source/culture advocate, not to an indie game developer - the latter would care more about speed of development, robustness, access to sales, etc.


This is interesting, it's good to hear the opinion of somebody who's actually programmed against the APIs. I think the increase in exposed complexity is probably a good thing. The AAA studios have proved that they're able and willing to throw engineering resources at tough problems, so actually allowing them to directly interact with a lower level of code is probably a good thing.

I'm sure there's a counterargument that it raises the bar for indie game devs, but when's the last time an indie game directly programmed against DirectX or OpenGL (barring webGL)? This should let engine developers better use their development time.


> when's the last time an indie game directly programmed against DirectX or OpenGL (barring webGL)?

I don't work in indie, but my understanding is that this is still pretty common, especially for games with any budget at all.


I'm sure it still happens, but the popularity of Unreal, Unity, etc. says to me that they likely wouldn't be too negatively affected.


Even still, as an indie developer, I'd much rather develop against Vulkan than OpenGL, from everything I can tell so far.


Minecraft?

Most indie games can happily use Unity or whatever, but there are cases where they are not suitable.


That's evolution. The new graphics APIs are intended for engine developers, not for application programming. Applications should use higher-level engines, not low-level APIs.


Not necessarily. If an application cares about performance, it can go to the lower-level APIs to craft the parallelism model it needs.


> Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow.

This is really sad. Imagine if someone were pushing a new file API with the justification that storage-device drivers were full of bugs.

I get a similar feeling whenever I read one of those "CSS trick that works on all browsers!" articles. Yes, it's nice that you can build a thing of beauty on top of broken abstractions, but ....


How come the open source drivers did not show much better performance?


Because the documentation for the hardware is not openly available. You need to sign very strict papers to be able to look at the hardware documentation, which means only a few selected people have access to it, which means not much support, etc. Can't remember why I know this, but it was something related to my Linux open source work back in the day.


I think things have shifted a bit away from that. IIRC, AMD's pretty free with the documentation for everything that isn't part of the DRM-enforcement chain, so their open source drivers are mostly lacking in manpower. NVidia just won't release the docs at all, so the open source drivers for them are reverse-engineered and plagued by problems like an inability to get most GPUs out of power-saving mode. I'm not sure how open Intel's documentation is, but their only Linux driver is the open-source one that they do significant in-house work on and it has often been competitive with the Windows driver.


Good to know it's moving towards a more open environment. Also good to know that my memory wasn't playing tricks on me. Thank you for the update.


AMD and Intel publish ISA documentation and design docs for their GPU parts. Intel usually does it the same day as release; AMD usually does it a few months post-release. Nvidia, Qualcomm, etc. all release nothing, and therefore everything is proprietary and must be reverse-engineered from scratch.


A long time without access to hardware documentation, and even now that that is sometimes more forthcoming, simply a lot fewer resources than the proprietary drivers have. The proprietary drivers have many years worth of several full-time developers, testers, access to hardware engineers to solve problems together, etc.

The open source drivers have frequently been written by volunteers, or professionals who do it as only a small part of their job, have had to spend more time and effort reverse-engineering rather than having documentation and the ability to talk to hardware engineers, and are perpetually at least a few months to a few years behind the proprietary developers who had access to all of this before the hardware even came out.

It's impressive what the open source drivers have managed to accomplish, and they are improving, but they are severely handicapped compared to the proprietary drivers.

This excludes, of course, Intel, but Intel focuses on lower end integrated graphics rather than high-end discrete graphics, so it's not quite comparable.


Reminds me a bit of what I recall as a Unix principle: the API should be designed so that the implementation is simple, even if that makes the use complex.



