Mozilla's Connected Devices Innovation Process: Four Projects Move Forward (blog.mozilla.org)
83 points by cpeterso on March 2, 2016 | hide | past | favorite | 52 comments



I participated in Gnome and Linux kernel development from 2002 to 2008; that was an amazing time. In those days, open source systems were successfully competing with commercial software: Firefox was on the cutting edge, the Linux desktop was really promising, and the Linux kernel was widely used everywhere. Drivers for various devices were created by hackers. I left once I saw that corporations were taking over. Red Hat and Novell decisions started to set the direction, and I felt I could not contribute anything.

The Mozilla Foundation's attitude has changed since then; it is simply a corporation now, not an open source community, and I really doubt it will come up with anything significant.

Just a few days ago I spent half a day cleaning bloatware from an Acer desktop, a Sony tablet, and a Samsung phone. It is really crazy how every small company wants to catch you with a push-button daemon that will, in parallel, store all information about you and sell it later. We desperately need some alternative: at least an open source phone where you can control all the components. However, I see that things like voice input or good navigation would be very hard for the open source community to create.


> I left once I saw that corporations were taking over. Red Hat and Novell decisions started to set the direction, and I felt I could not contribute anything.

I sought refuge in BSD land as a consequence of this. I got sick of the whole drama around Gnome, systemd and everything else precluding me from properly using my computer. Everything is perpetually half-broken, bug reports and user demands are ignored if they don't fit their world domination agenda and so on.

I still develop for Linux because they pay me to do it at $work, but it's increasingly unpleasant. Things are still fairly sane in the kernel (barring the occasional rushed device driver or the whole ARM clusterfuck, but there are limits to how much that can be helped). Userland, on the other hand, is full of 0pointer-branded narcissism and nearly unworkable for anything that has to run for more than three months without constant attention. Great for devops, brain-aneurysm-inducing for anything else.


> 0pointer-branded narcissism

It is possible to avoid that crap. Older - and much saner - software still works. Just ignore the "desktops" and run the specific tools you need in a sane window manager of your choice.

# e16 - from my cold, dead hands


I think the only desktop I didn't stay away from was KDE, back in its 2.x and 3.x days. Then things like Nepomuk happened and sent me back to WindowMaker, which will be pried from my equally cold, dead hands.

But it's getting pretty hard to avoid the octopuses. I managed to avoid e.g. systemd (not because of unix-philosophy something-something, but because they keep doing shit like [1]) by staying with Gentoo for a while. But the USE-flag gymnastics required to keep half-baked things out of my system became too time-consuming to be worth it.

Some of the problems do remain (e.g. my desktop looks like it's 1996 and everyone has their own favourite toolkit again because GTK3) but at least I'm not debugging non-booting systems and strange things happening with USB sticks on a daily basis.

1: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815586


We seem to have much in common :-) I use WindowMaker, I moved half of my machines to BSD, and the rest run Gentoo, except for one Debian server I do not dare touch much (but systemd is pinned out).

I am very disappointed (and that is an understatement) at the turn Linux and open source took in, say, the 2010s: a mix of corporations and brats messing up the existing Linux ecosystem for fun or profit; open source treated as a portfolio to get a job (hi GitHub, I am talking to you) and generally as a means for individuals or companies to make money one way or another; free software bricks used by companies whose single goal is to lock you in as much as possible in order to extract every penny they can from you (free software used against freedom, that takes the biscuit); Linux and free software being just tools/commodities, with no more spirit (yeah, I know there have always been different spirits, but the most common and general attitude was far from business-minded).

I would not have minded if they had forked Linux/other projects and done whatever they liked on a new separate platform, even if this had meant a decline of Linux, but their insistence on affecting the existing ecosystem is... dreadful.

I will probably someday move everything I can to BSD and keep one Linux system to run the specific programs I still need, just as in the past we often kept one Windows machine to run software only available on that platform. For me, that will mean the windowisation of Linux is complete.

--

Mozilla is a joke. I have somewhere a plot I made that shows Firefox's market share side by side with Mozilla's revenue over the years. The curves cross in opposite directions, like an X: market share drops as more money flows in. They managed to build a wonderful and successful complex piece of software (and a mini-ecosystem around it) with very little money; but when hundreds of millions of dollars a year started to flow, they lost their footing.


> WindowMaker

That's my 2nd choice. It's simple, lightning-fast, with almost no memory footprint.

> bug=815586

That's a perfect example of the problem with the 0pointer monoculture. It works great, until you do something slightly more complicated. Of course, the suggested fix is to update to a newer version of systemd. I guess you're out of luck if that newer version has compatibility issues.

> their own favourite toolkit again

I guess I never cared much about looks. Look and feel are important, but "working" and "not crashing" are far more important.

One of the problems we're seeing is that the concept of the "distro" is largely dead. The damage done by systemd/etc wasn't just the bugs like your [1]. The forced monoculture killed the idea of different distros.


> Of course, the suggested fix is to update to a newer version of systemd.

That's actually a boilerplate Debian bug closure message, mail-merged with the bug number, the maintainer's contact information, and the package's changelog. One can find it at the bottom of many Debian bug logs. The suggestion really comes from Debian.

Moreover, the bug was in the "unstable" version of Debian, i.e. a version of the package (229-1) that had not even progressed to the "testing" version of Debian (which is currently at systemd version 228). Debian's policy on bugs in "unstable" can be read on Debian's WWW site at https://wiki.debian.org/DebianUnstable and https://www.debian.org/releases/sid/ .

The Debian message is, as mentioned, simply robotic Debian bug-tracking-system boilerplate. The underlying systemd bug reports have stuff written by actual human beings:

* https://github.com/systemd/systemd/issues/2572

* https://github.com/systemd/systemd/issues/1866


The interesting, or sad, thing is that many of us who already had commercial experience saw this coming and always said that, in the eventuality of GNU/Linux becoming successful, it would be bent by corporations like any other OS that went big.

But, as usual, youngsters stated we were just naysayers.

And now GNU/Linux distributions are basically a second coming of the UNIX wars.


> It is really crazy how every small company wants to catch you with a push-button daemon that will, in parallel, store all information about you and sell it later

Shameless plug: I created this service ( https://anonimho.com/privacy.html ) as a statement against exactly this practice.


Facepalm. Another doomed attempt from Mozilla, when they should be focused on making their core competitive advantage, their browser, much better and faster...

Edit: (One could argue that those are different teams. But even if so, those failed attempts, and there have been enough already, dilute their brand and send the wrong message. Besides, why spend money on such a project when you can give it to the browser team instead? Now, Rust et co I understand, because it might provide the future of the browser engine with Servo, being secure, parallel, etc. Asm.js and other browser related initiatives are fine too).


Competitive advantage? I think you mean core competency.

Being a browser maker isn't really an advantage unless you leverage that in some other business like a website, web development solutions or (with enough market share) locking people into your ecosystem.

The real competitive advantage goes to Google. They control the most popular search engine and mobile platform. They can make or break just about any app or website. Their browser just fortifies their dominance.


>Competitive advantage? I think you mean core competency.

Yeah, the latter.

That said, they should work on their competitive advantage too -- the fact that they are open source and pro-web, which matters to a lot of users.


Mozilla is hardly unique there. Webkit and Chromium are also open source and Google is pro-Web as well.

The only real advantage Mozilla had was the technical one of having one of the few popular and mature browser engines available, which only their programmers knew well enough to work on. Since WebKit, though, that advantage has disappeared.

You could say their image as a non-profit out to do good is an advantage. Maybe that is something they should work on.


> A smart home powered by Mozilla would be open and accessible to everyone - financially, technically, and creatively. No one else can do all of these.

I think it is a mistake for Mozilla to try this. Firstly, someone else can already do that: IoTivity. AllJoyn has recently abandoned its own efforts and joined IoTivity too, so it is really the only open IoT protocol in town. That is a good thing; adding another competitor is just going to make people wait (again) until it is clear who is going to win.

Secondly, consumers don't really care that much about openness, and Google and Apple have a huge advantage in this space due to being able to build support into their mobile OSes. It's not just a matter of bundling a default app, either; they can do things like access your wifi passwords for provisioning. I don't think Apple will do that well with HomeKit as long as it is iOS-only, but Weave is cross-platform, sort-of open, fairly complete, and seems pretty well designed.

Finally...

> A few small players are beginning to enter this gap, though their proposals are still not complete enough to solve the problems we've identified

Yes that's putting it mildly. AllJoyn never supported out-of-the-home control, and IoTivity is very much a work in progress. But I think Mozilla are hugely underestimating the amount of work involved. And given how much work is involved, why not join forces with IoTivity?


I just looked at the IoTivity website, but it looks awfully "enterprise-y"/"design-by-committee". Are you involved with it and can report otherwise? Any "real-world" examples?


It's hard not to feel angry about this. Not only do I not personally want the things described in this post, I want them specifically not to make things like this. Until we have more fundamentally secure systems and laws that actually punish negligent companies for gross violations of security and privacy, IoT is not a domain to promote. There are so many other important issues with the web and the Internet right now that Mozilla could help with. This is such a waste of time and resources. Is there really some target audience that wants this stuff?


For the moment, it looks like IoT is happening whether you want it or not. I live far, far away from Silicon Valley, and last week, when visiting the supermarket, I realized that they are already selling IoT-enabled devices.

So the question is now: how do you want your IoT? A number of companies have already answered: silos, low quality, little-to-no hackability, little-to-no privacy. Mozilla wants the IoT to go in the right direction: open standards, open source, high quality, and the user in control.

Caveat: I'm part of Mozilla's Project Link. And I'm having fun coding it :)


Huh? They just dropped Firefox OS and Persona, their authentication system, and now they want to get into the "Internet of Things"? It would have made more sense if they'd used their authentication technology to allow devices to link up in a mutually mistrustful way.


This is mistaken. Firefox OS wasn't dropped, it was pivoted from phones to connected devices: https://blog.mozilla.org/blog/2015/12/09/firefox-os-pivot-to...

I've also seen rumors that Mozilla is working on a successor to Persona, but absolutely no substantive evidence as of yet.


No. The team was repurposed, not really Firefox OS itself. I have some insider knowledge on this one.

I mean, come on: Firefox OS on embedded devices, when the thing could barely run on cheap smartphones.


Having used one of their special SPARC dev phones, I can say the OS is actually very, very good. Like every device it had limits, but it just cruised along like a champion for 90% of tasks and had freakishly good call quality.


"The team was repurposed". That's one of the better euphemisms I've seen in a while. Could be worse; they could have been recycled.


Sounds strange to hear about Mozilla with language generally used for talking about Yahoo...


Sounds interesting. Do you have any references for approaches and possible benefits that we could look at?


A basic problem with the "internet of things" is who gets to talk to whom. How do devices get introduced to each other?

Living units need an identity. Devices introduced into the living units need to be introduced to that identity to pair with it. Phone-based programs may also need to be paired with the living unit identity. Each pair needs a set of security restrictions.

You want to set this up so that the homeowner can look at the house webcams, but nobody else, including the webcam manufacturer, can. A unified identity and permission system, perhaps built on Persona, lets you set up such connections without every phone having to be paired with every device. Also, with a unified permission system, you can revoke permissions. You might permit a guest access to the house systems but remove that access when they leave, for example.
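A toy sketch of what such a per-home permission table could look like; all type names and capability strings here are invented for illustration, not anything from Persona or Project Link:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch: a living-unit identity holding per-principal
// permission sets, so access can be granted to a guest and revoked later
// without re-pairing every phone with every device.
#[derive(Default)]
struct LivingUnit {
    // principal (person or app) -> set of device capabilities they may use
    grants: HashMap<String, HashSet<String>>,
}

impl LivingUnit {
    fn grant(&mut self, principal: &str, capability: &str) {
        self.grants
            .entry(principal.to_string())
            .or_default()
            .insert(capability.to_string());
    }

    // e.g. when a guest leaves the house
    fn revoke_all(&mut self, principal: &str) {
        self.grants.remove(principal);
    }

    fn allowed(&self, principal: &str, capability: &str) -> bool {
        self.grants
            .get(principal)
            .map_or(false, |caps| caps.contains(capability))
    }
}

fn main() {
    let mut home = LivingUnit::default();
    home.grant("owner", "webcam:view");
    home.grant("guest", "lights:toggle");
    assert!(home.allowed("owner", "webcam:view"));
    // the webcam manufacturer is just another principal with no grants
    assert!(!home.allowed("vendor", "webcam:view"));
    home.revoke_all("guest");
    assert!(!home.allowed("guest", "lights:toggle"));
    println!("permission checks passed");
}
```

The point of centralizing the table is exactly the one made above: revocation is a single operation against the living-unit identity, not a walk over every paired device.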


I think security will be a much-discussed topic once IoT is more widespread. I also think Mozilla could be the first to solve it, and Persona could help with that. I can imagine using it for authentication of connected devices in my home.


What does Persona offer that brings value from an IoT perspective?

The bootstrap primary (which is on life support, with one hand ready to pull the plug) requires a functional JavaScript engine to authenticate a user, which is a steep cost of admission. In its absence you have to implement the entire protocol, and Persona is less feature-rich and far less supported than OAuth, and I say this as an ardent defender and promoter of Persona :)


I like the sound of that. Do you think you could come up with a blog post detailing what you have in mind? If so, this would clearly be something to discuss in Project Link (I'm one of the devs).


Project Link: What's that? And what problem does it solve? No one will understand it, no one will use it.

Project Sensor Web: Yeah, let's add another solution to a problem that has been solved ten times over.

Project Smart Home: Also crowdsourced? After Mozilla's previous handling of hardware (like ffos dev-phones that were deprecated in record speed), I can only say: Don't.

Project Vasomething: ANOTHER IoT framework that is used and needed by no one?

We are about 2 years past peak Mozilla.


I'm excited to see Mozilla's experiments in the IoT space, but I'm a bit unclear on the exact nature of the projects being undertaken here. For example, is the software here intended to act as a proxy between connected devices and your access point, possibly via flashing your router?

I'm also curious if Mozilla is building in automatic updates in all of this software from the ground-up, seeing as how that appears to be the fundamental weakness of everything related to the IoT.


Good questions. At this point, we're trying to identify user value with prototypes. We're a long way away from defining things like the software update model on as-yet-unspecified devices.


Project Link appears to be designed to run on a Raspberry Pi and uses Rust.


Note that we are still at a very early stage of exploration, so anything I write might change a dozen times before there is a first official prototype.

The current prototypes of Project Link are developed in Rust and tested on an RPi. However, we have not decided on a specific hardware platform, and there is even a chance that this will remain cross-platform – I'm currently running prototypes on my laptop.

In the current state of things, yes, the software is designed to act as a proxy between connected devices and your access point.

I can't speak for other projects, but we definitely intend to have an update mechanism. This is very much not finalized yet, so any suggestion you have (currently, as a detailed blog post – once we are better organized, as an RFC) would be useful.


Why are they working on mostly useless ventures?

How about some really captivating features like:

    Tor integration/awareness
    I2P integration/awareness
    ZeroNet functionality
    IPFS functionality
Start working on hard features that will make the internet better, and also more distributed. It's one thing to have throw-away garbage that nobody will use. It's another thing entirely to support new protocols that make Firefox a go-to tool.


There is one MoCo employee helping with getting changes from the Tor browser upstreamed. It's not happening overnight, but progress is being made.

Things like IPFS, while very promising, are not mature enough yet to justify the effort of adding them to the core of Gecko.

Also, I dispute the fact that supporting IPFS would "make Firefox a go-to tool" in the grand scheme of internet users, unfortunately.


I sure hope so, regarding Tor and Mozilla. Aside from my scripts that make .onion resolution work on a Linux machine, it would be nice to have a simple and clean interface there.

I certainly understand the hesitancy with IPFS. It is still too new. And they're still ironing out features and implementations. However, what gets me is that it's live right now, and I'm using it for quite a few things.

Right now, I'm getting some VM stuff up and running. My idea is that jor1k (Linux in JS) can be run from IPFS. Still playing around with it, but it seems very stable and fast.

Also, regarding your third point: I've attributed many network effects to the 85/15 principle. The 15% is your tech userbase. They're the ones that drag everyone else onto a platform or technology. The rest (85%) follow because the early adopters knew it was the place to go. Gmail was like this, as was Facebook, as was Firefox, as was Napster, etc. Adding in something like IPFS as base support adds "cloud storage" where nothing like it really existed before (well, aside from "cloud" meaning other people's servers).

Maybe I'm completely wrong. Time will certainly tell, but one thing I know, is that I am indeed impressed with what I'm seeing already.


Out of curiosity, how would you integrate this in a web browser?

Also, in my experience, Servo is a very good platform for experimental features such as these, so if you feel that they will make the Internet better, you should consider contributing them to Servo.


Some of those are honestly pie in the sky, I'll admit. But still, seeing a statement that that's where resources would be spent would be ideal for some of those.

With regard to Tor/I2P, it would go a long way to be able to recognize that a .onion or .i2p link was clicked and to use an alternate resolver. That would also call for a mode in the browser that sanitizes all user fields and makes all browsers look exactly the same (to maintain anonymity).

For IPFS, I'd like to see client-side JavaScript that handles the peer processing of a node, as well as the ability to recognize that a /ip[n/f]s/hash path is an IPFS link. So far, we have resolution via localhost:8080 or ipfs.io/ipfs/hash, depending on whether you're running the peer program or not.
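A rough sketch of that kind of special-case link classification; the `.onion`/`.i2p` suffix and `/ipfs/`, `/ipns/` prefix checks are the only assumptions here, and real URL parsing is elided:

```rust
// Sketch: classify a URL's host/path so a browser could hand it to an
// alternate resolver (.onion/.i2p) or a local IPFS gateway (/ipfs/, /ipns/).
#[derive(Debug, PartialEq)]
enum LinkKind {
    Tor,
    I2p,
    Ipfs,
    Plain,
}

fn classify(host: &str, path: &str) -> LinkKind {
    if host.ends_with(".onion") {
        LinkKind::Tor
    } else if host.ends_with(".i2p") {
        LinkKind::I2p
    } else if path.starts_with("/ipfs/") || path.starts_with("/ipns/") {
        LinkKind::Ipfs
    } else {
        LinkKind::Plain
    }
}

fn main() {
    assert_eq!(classify("example.onion", "/"), LinkKind::Tor);
    assert_eq!(classify("forum.i2p", "/index"), LinkKind::I2p);
    assert_eq!(classify("ipfs.io", "/ipfs/QmSomeHash"), LinkKind::Ipfs);
    assert_eq!(classify("news.ycombinator.com", "/"), LinkKind::Plain);
    println!("link classification ok");
}
```

The dispatch itself is trivial; the hard part the comment alludes to is everything behind it (the resolver and the anonymity mode).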

I know that IPFS is actively working on a websockets/client-side JS version of their system, so that any browser can play along, with no bothersome downloads or binaries.

And honestly, I didn't know about Servo. I've already enough on my plate, that I didn't need to know about yet another awesome project :)


I really wonder what the difference would be between those Link and Smart Home projects and something like OpenHAB, besides the fact that it is newly started by Mozilla. I think the current adoption of DIY or build-something-with-existing-frameworks approaches shows that most people don't really care about such solutions; they want something that works out of the box, even if all parts come from a single manufacturer.

From an engineering perspective, implementing lots of proprietary protocols in a single gateway device is a lot of work and will never work 100% fine. Most people who have worked on such solutions will confirm that, and so can I (in a somewhat different domain). You can sink lots of money into these projects, and they still might not work for some customers at all, just like universal remotes, only more complex.

One way forward could be pushing the creation of devices (gateways, sensors, actors) that are based solely on standardized and accessible protocols. I'm not the biggest fan of CoAP (because I think it's hardly possible to implement it correctly), but I would still prefer it to reimplementing Z-Wave or Hue protocols. HTTP/2 could also be a good fit. But even if you have at least the protocol layer standardized, the possibilities for API design are endless. Standardizing IoT APIs throughout the whole stack in a backwards-compatible fashion could really be something worthwhile, but the effort will be big, and it's questionable whether other parties are actually interested in standardization.

And just a side note: I don't think Rust is the best choice for building a prototype of a mostly event-based system. Event loop semantics are hard to implement because of ownership/borrowing issues, the necessary dependencies (websocket libraries, etc.) are all in their infancy, and it might also not let you move forward as quickly as you want for a prototype (where, as a tradeoff, you can live with lower performance). Yes - I'm sure the Rust community will disagree with me on the last point ;)


I would disagree with you on the first point more; Rust is actually perfect for event loop based semantics (we use this in Servo, it's very clean). Ownership fits perfectly with message passing.

You should avoid sharing state when you're using a message-passing-based system. The cross-thread communication is handled by the messaging; sharing the messages themselves just muddles things further. So the borrow checker disallowing that isn't a bad thing.
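As a minimal illustration of that pattern, ownership of each message moves across an mpsc channel, so nothing ends up shared between threads (this is a generic sketch, not Servo's actual code):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..3 {
            // `format!` allocates a String; sending it transfers ownership
            // to the receiver, so no state is shared across the threads.
            tx.send(format!("event {}", i)).unwrap();
        }
        // tx is dropped here, which closes the channel.
    });

    // The receiving side is a tiny event loop: it exclusively owns each
    // message once it arrives, which is exactly what the borrow checker wants.
    let events: Vec<String> = rx.iter().collect();
    worker.join().unwrap();

    assert_eq!(events, vec!["event 0", "event 1", "event 2"]);
    println!("received {} events", events.len());
}
```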

> the necessary dependencies (websocket libraries, etc.)

I agree; though the websocket library Servo uses is pretty good (still, not battle-tested, so it's a very valid point).

> it might also not let you move forward as quickly as you want for a prototype

Again, with the prototyping thing, sure, it's harder to prototype in Rust, but that's not the whole story -- you spend less time writing tests and debugging.

(Also, I'm not sure how much harder it is; I've never had trouble with doing it and before Rust I was mostly programming in Python/JS)

> Yes - I'm sure the Rust community will disagree with me on the last point

Which sort of indicates that it's probably inaccurate? :)

A lot of the perceived problems with Rust (e.g. fighting the borrow checker) go away after a month or two of actively using the language.


Just for reference: I build similar systems professionally (and have done so with most mainstream programming languages on the market), and I was probably the first to explore async IO and event loops in Rust (https://github.com/Matthias247/revbio - but I'm not really proud of it), so I think I'm at least halfway qualified to talk about this.

And unfortunately, by experimenting with these things I really got the feeling that these problems don't expose the best side of Rust. Wrapping lots of types in stuff like Rc<RefCell<T>> wasn't the greatest experience, and the problem that this didn't work with trait objects back then did not increase that either (might have changed).

Message passing is a low-level primitive, and it's probably a decent solution for communication between threads. However, there is also a need for asynchronous communication inside a thread (the event loop thing), for which they are not really useful. Futures/Promises/Observables are great for both use cases (and imho even greater in a single-threaded environment). These are definitely harder to implement in Rust - and if you don't believe me, just check out how many good implementations are currently available in Rust vs. other languages.

What does the event loop design in Servo look like? Wasn't that some integration with Spidermonkey?

I agree that in Rust you will spend less time writing tests and debugging - but this also applies to a lot of other statically typed languages, which bring a better ecosystem for that task.


  > this didn't work with trait objects back then did not 
  > increase that either (might have changed)
I believe the feature that you're referring to is called "DST coercions", which has been available in stable Rust since 1.2. See an example here: https://play.rust-lang.org/?gist=ff8bed62730d54c329f2&versio...
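Concretely, the coercion in question looks something like this; the `Device`/`Lamp` types are invented for illustration:

```rust
use std::cell::RefCell;
use std::rc::Rc;

trait Device {
    fn name(&self) -> &str;
}

struct Lamp;

impl Device for Lamp {
    fn name(&self) -> &str {
        "lamp"
    }
}

fn main() {
    // Since Rust 1.2, an Rc<RefCell<Lamp>> coerces directly to
    // Rc<RefCell<dyn Device>> (a "DST coercion"); earlier versions
    // rejected this and forced awkward workarounds.
    let concrete: Rc<RefCell<Lamp>> = Rc::new(RefCell::new(Lamp));
    let as_trait: Rc<RefCell<dyn Device>> = concrete;
    assert_eq!(as_trait.borrow().name(), "lamp");
    println!("coercion ok");
}
```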


Yes, that was missing back then. But what about the ability to do checked upcasts/downcasts on the results, to change between Rc<RefCell<Base>> and Rc<RefCell<Derived>>? Is this now possible?


Ah, async I/O with event loops is a different thing. I thought you were talking about the event loop based model of concurrent programming.

I haven't used mio/mioco (the current async I/O solution for Rust: https://github.com/carllerche/mio/, https://github.com/dpc/mioco), but I've heard good things about it. They seem to interact with safety pretty cleanly.

Rust in 2014 was a very different language. It looked the same at the surface, but a lot of the innards (including the fact that it shipped its own async I/O solution!) have changed since then.

> might have changed

It has. Though the explicitness of Rc<RefCell<T>> is still there, since Rust prefers ownership and mutability implications to be explicit. Not much extra typing, and you can always typedef it.
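For example (the alias name here is hypothetical, not a std type):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical alias: keeps the shared-ownership/interior-mutability
// implications explicit at the definition site while shortening use sites.
type Shared<T> = Rc<RefCell<T>>;

fn shared<T>(value: T) -> Shared<T> {
    Rc::new(RefCell::new(value))
}

fn main() {
    let counter: Shared<u32> = shared(0);
    *counter.borrow_mut() += 1;
    assert_eq!(*counter.borrow(), 1);
    println!("counter = {}", counter.borrow());
}
```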

> a need for asynchronous communication inside a thread

Agreed. There are some coroutine libraries in Rust which are pretty nice. There is active effort towards having coroutines inside the language itself (see https://github.com/erickt/stateful for a POC plugin, which should get RFC'd at some point when it's complete).

See also: https://dwrensha.github.io/capnproto-rust/2015/05/25/asynchr... (uses mio)

> What does the event loop design in Servo look like? Wasn't that some integration with Spidermonkey?

Like I mentioned, I meant the event loop model of communication. We have a lot of message passing between threads, many of which are event loops.

The script thread does feed events into its own event loop (after all, JS is event-looped, and we need to mirror that), and it works pretty well. We do some gymnastics for thread safety (especially because the JavaScript GC is involved), but the interface is safe and clean.

> but this also applies to a lot of other statically typed languages, which bring a better ecosystem for that task.

Sure, but these languages don't have quick prototyping either. Except perhaps Go.

Rust doesn't really end up with an additional burden on prototyping over any other similar statically typed language; the borrow checker is something you rarely tussle with once you get used to it.


> I really wonder what the difference would be between those Link and Smart Home projects and something like OpenHAB, besides the fact that it is newly started by Mozilla. I think the current adoption of DIY or build-something-with-existing-frameworks approaches shows that most people don't really care about such solutions; they want something that works out of the box, even if all parts come from a single manufacturer.

Very good question. There is definitely a large intersection between OpenHAB and Project Link. Both projects share one of their objectives: letting DIY users rig together and script their devices. However, beyond this point, our objectives with Project Link diverge. We aim to explore ways to put the user in control of their data, including anonymity, authentication, storage, cloud access, web access, and certainly many others that we have yet to discover. It is our impression that OpenHAB's current architecture doesn't match our experimentation objectives.

A second difference is, of course, the architecture. We use Rust, betting that it will let us execute Project Link on devices with little memory, with no supervision, and with very long uptime. In particular, in the case of memory usage, while there are no decisions on this, there are projects that could help us run Rust and Project Link on almost bare metal, should we decide to head in this direction. We felt that OpenHAB's architecture was not adapted to such explorations.

> I think the current adoption of DIY or build-something-with-existing-frameworks approaches shows that most people don't really care about such solutions; they want something that works out of the box, even if all parts come from a single manufacturer.

We are also exploring out-of-the-box, with other projects. I'm not involved, though, so there isn't much I could say on that topic.

[...]

> One way forward could be pushing the creation of devices (gateways, sensors, actors) that are based solely on standardized and accessible protocols.

Yes, we would very much like to do that. But before we can do that, we first need to be actors on the field, with well-used projects and a large community.

[...]

> Yes - I'm sure the Rust community will disagree with me on the last point ;)

Well, it works nicely so far :)


> [...]I'm not the biggest fan of CoAP (because I think it's hardly possible to implement it correctly)[...]

Out of curiosity, do you have any more details on what you think is impossible to implement correctly?

I've been doing a lot with CoAP and I'd agree that the RFC isn't as clear as it could be about some behavior, but I haven't come across anything that I'd say is impossible to implement or implement correctly.


I thought the base protocol was quite OK, but the complexity arrives as soon as you try to integrate blockwise access and notifications. From my look at it, there are quite a lot of possibilities for race conditions there (e.g. bigger data chunks changing while transfers to clients are in progress), for which you might need to implement additional functionality (e.g. making copies of that data). But then you also need to track exactly how many clients are subscribed to that data and how long you need to keep it (you want timers, and probably more timers than in any other protocol).

I don't recall the exact thing, but there was also something that I didn't like about the message IDs, which you somehow need to keep ordered, checking which ID might be in use at the current point in time by which peer. A correct implementation may not send anything while no ID is available, but to track all of that you need lots of memory (much more than for TCP buffers), and the open source implementations I looked at all elided that detail.


I recently uninstalled Firefox browser on Windows because it loads at least 10x slower than Chrome, even with no plugins. Mozilla should fix that first. It'd give users (like me) more confidence in other projects they do, like connected devices.


What happened to Shumway by the way? A few bugs I filed for it are now marked as referencing "Mozilla graveyard" product. It doesn't sound encouraging...



Thanks, that's what I suspected.


Killed by patents.



