Hacker News | past | comments | ask | show | jobs | submit | CGamesPlay's comments

No, that link goes to the second “download our app” screen shown in the OP.

That has a link to actually applying online.

They are definitely using a dark pattern to push people towards the app, but it is possible to apply online.


This is untrue. Subpoenas, wiretapping, and other extrajudicial means can be stopped by legislation that bans them. You can't claim in one breath that the legislation enabling them (the Patriot Act) cannot be undone by more legislation. There are many hurdles to producing the required legislation, which may not even be broadly supported by the public, but it isn't correct to say "no amount of legislation can stop existing legislation".

That would require repealing the FISA and Patriot acts. That won't happen.

More fundamentally, however, the US constitution only protects Americans and American companies. Europeans would be foolish to trust the US with their data given this lack of basic protection and oversight.


> That won't happen.

Never say never.


If they could be stopped by legislation that bans them, they would have been stopped by the legislation that banned them prior to the legislation that authorised them, but we know this is not the case. They were being done on a wide scale long before they were legal.

Bad legislation is comparatively more difficult to revert than good legislation

To me, the most interesting thing about Pi and the "claw" phenomenon is what it means for open source. It's becoming passé to ask for feature requests and even to submit PRs to open source repos. Instead of extensions you install, you download a skill file that tells a coding agent how to add a feature. The software stops being an artifact and starts being a living tool that isn't the same as anyone else's copy. I'm curious to see what tooling will emerge for collaborating with this new paradigm.

I see this happening, too.

We know that a lack of control over their environment makes animals, including humans, depressed.

The software we use has so much of this lack of control. It's their way, their branding, their ads, their app. You're the guest on your own device.

It's no wonder everyone hates technology.

A world with software that is malleable, personal, and cheap - this could do a lot of good. Real ownership.

The nerds could always make a home with their Linux desktop. Now everyone can. It'll change the equation.

I'm quite optimistic for this future.


I'm presently in the process of building (read: directing claude/codex to build) my own AI agent from the ground up, and it's been an absolute blast.

Building it exactly to my design specs, giving it only the tool calls I need, owning all the data it stores about me for RAG, integrating it to the exact services/pipelines I care about... It's nothing short of invigorating to have this degree of control over something so powerful.

In a couple of days work, I have a discord bot that's about as useful as chatgpt, using open models, running on a VPS I manage, for less than $20/mo (including inference). And I have full control over what capabilities I add to it in the future. Truly wild.


> It's nothing short of invigorating to have this degree of control over something so powerful

Is this really that different to programming? (Maybe you haven't programmed before?)


Fair point.

> It's nothing short of invigorating to have this degree of control over something so powerful

I'm a SWE w/ >10 years, and you're right, this part has always been invigorating.

I suppose what's "new" here is the drastically reduced amount of cognitive energy I need to build complex projects in my spare time. As someone who was originally drawn to software because of how much it lowered the barrier to entry for birthing an idea into existence (compared to hardware), I am genuinely thrilled to see that barrier lowered so much further.

Sharing my own anecdotal experience:

My current day job is leading development of a React Native mobile app in Typescript with a backend PaaS, and the bulk of my working memory is filled up by information in that domain. Given this is currently what pays the bills, it's hard to justify devoting all that much of my brain deep-diving into other technologies or stacks merely for fun or to satisfy my curiosity.

But today, despite those limitations, I find myself having built a bespoke AI agent written from scratch in Go, using a janky beta AI Inference API with weird bugs and sub-par documentation, on a VPS sandbox with a custom Tmux & Neovim config I can "mosh" into from anywhere using finely-tuned Tailscale access rules.

I have enough experience and high-level knowledge that it's pretty easy for me to develop a clear idea of what exactly I want to build from a tooling/architecture standpoint, but prior to Claude, Codex, etc., the "how" of building it tended to be a big stumbling block. I'd excitedly start building, only to run into the random barriers of "my laptop has an ancient version of Go from the last project I abandoned" or "neovim is having trouble starting the lsp/linter/formatter" and eventually go "ugh, not worth it" and give up.

Frankly, as my career progressed and the increasingly complex problems at work left me with vanishingly little brain-space for passion projects, I was beginning to feel a crushing sense of apathy and borderline despair. I felt I'd never be able to make good on my younger self's desire to bring these exciting ideas of mine into existence. I even got to the point where I convinced myself it was "my fault" because I lacked the mettle to stomach the challenges of day-to-day software development.

Now I can just decide "Hmm, I want a lightweight agent in a portable binary. Makes sense to use Go." or "this beta API offers super cheap inference, so it's worth dealing with some jank" and then let an LLM work out all the details and do all the troubleshooting for me. Feels like a complete 180 from where I was even just a year or two ago.

At the risk of sounding hyperbolic, I don't think it's overstating things to say that the advent of "agentic engineering" has saved my career.


What models and inference provider?

I'm using kimi-k2-instruct as the primary model and building out tool calls that use gpt-oss-120b to allow it to opt-in to reasoning capabilities.

Using Vultr for the VPS hosting, as well as their inference product which AFAIK is by far the cheapest option for hosting models of these class ($10/mo for 50M tokens, and $0.20/M tokens after that). They also offer Vector Storage as part of their inference subscription which makes it very convenient to get inference + durable memory & RAG w/ a single API key.

Their inference product is currently in beta, so not sure whether the price will stay this low for the long haul.
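As a rough sanity check on that plan, the quoted beta pricing works out like this (the rates are the ones mentioned above and may change):

```python
def vultr_monthly_cost(tokens_millions: float) -> float:
    """Estimated monthly cost under the quoted beta plan:
    $10/mo includes 50M tokens, overage billed at $0.20 per million."""
    base, included, overage_rate = 10.0, 50.0, 0.20
    return base + max(0.0, tokens_millions - included) * overage_rate
```

So a month of 100M tokens would come to about $20 under these assumptions.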


You can definitely get gpt-oss-120b for much less than $0.20/M on OpenRouter (cheapest is currently 3.9¢/M in, 14¢/M out). Kimi K2 is an order of magnitude larger and more expensive, though.

What other models do they offer? The web page is very light on details


Oh dang I had no idea that gpt-oss-120b was that cheap these days.

And yeah, given Vultr inference is in beta, their docs ain't great. In addition to kimi-k2-instruct and gpt-oss-120b, they currently offer:

deepseek-r1-distill-llama-70b, deepseek-r1-distill-qwen-32b, qwen2.5-coder-32b-instruct

Best way to get accurate up-to-date info on supported models is via their api: https://api.vultrinference.com/#tag/Models/operation/list-mo...

K2 is the only one of the five that supports tool calling. In my testing, all five seem to support RAG, but K2 loses knowledge of its registered tools when you access it through the RAG endpoint, forcing you to pick one capability or the other (I have a ticket open for this).

Also, the R1-distill models are annoying to use because reasoning tokens are included in the output wrapped in <think> tags instead of being parsed into the "reasoning_content" field on responses. Also also, gpt-oss-120b has a "reasoning" field instead of "reasoning_content" like the R1 models.
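For anyone hitting the same inconsistency, a small normalizer smooths it over. This is a sketch based on the three response shapes described above (inline `<think>` tags, a `reasoning` field, a `reasoning_content` field); treat the field names as assumptions about each API:

```python
import re

def extract_reasoning(message: dict) -> tuple[str, str]:
    """Return (reasoning, visible_content) from a chat completion message,
    normalizing across the three provider conventions described above."""
    content = message.get("content") or ""
    # gpt-oss-style "reasoning" or DeepSeek-style "reasoning_content"
    reasoning = message.get("reasoning_content") or message.get("reasoning") or ""
    if not reasoning:
        # R1-distill behavior: reasoning inlined in <think>...</think> tags
        m = re.search(r"<think>(.*?)</think>", content, flags=re.DOTALL)
        if m:
            reasoning = m.group(1).strip()
            content = re.sub(r"<think>.*?</think>", "", content,
                             flags=re.DOTALL).strip()
    return reasoning, content
```

With this in place the rest of the bot can stay agnostic about which model produced the response.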


> The nerds could always make a home with their linux desktop. Now everyone can. It'll change the equation.

Problem is, to be able to do what you're describing, you still need the source code and permission to modify it. So you will need to switch to the FOSS tools the nerds are using.


That's a feature, not a bug.

It means normies will finally see value in open source beyond just being free. They'll choose it over closed source alternatives.

This, too, makes a brighter future.


Obligatory post: open source != free software.

There is OSS you are not allowed to modify etc.


There is source-available software one is not permitted to distribute after modification. But what source-available software prevents the user from modifying the source for their own use?

We’re off to a great start then with Anthropic banning users who use alternative clients with their Claude subscription.

I'm actually relieved they're doing it now because it's going to be a forcing function for the local LLM ecosystem. Same thing with their "distillation attack" smear piece -- the more of a spotlight people get on true alternatives + competition to the 900 lb gorillas, the better for all users of LLMs.

I really hope so. I moved to Codex, only to get my account flagged and my requests downgraded to 5.2 because of some "safety" thing. Now OpenAI demands I hand my ID over to Persona, the incredibly dodgy US surveillance company Discord just parted ways with, to get back what I paid for.

This timeline sucks, I don't want to live in a future where Anthropic and OpenAI are the arbiters of what we can and cannot do.


It definitely does suck. I had the same feelings about a year ago and the unpleasantness has definitely increased. But glass half full, we didn't have Kimi K2.5, GLM5, Qwen3.5, MiniMax 2.5, Step Flash 3.5, etc available and the cambrian explosion is only continuing (DeepSeek V4 should be out pretty soon too).

The real moment of relief for me was the first time I used DeepSeek R1, about 12 months ago, to do a large task that I would've otherwise needed Claude/OpenAI for, and it just did it -- not just decently, but with less slop than Claude/OpenAI. Ever since that point, I've been continuing to eye local models and parallel testing them for workloads I'd otherwise use commercial frontier models for. It's never a perfect 1:1 replacement, but I've found that I've gotten close enough that I no longer feel that paranoia of my AI workloads not being something I can own and control. True, I do have to sacrifice some capability, but the tradeoff is I get something that lives on my metal, never leaks data or IP, doesn't change behavior or get worse under my feet, doesn't rate limit me, and can be fine tuned and customized. It's all led me to believe that the market competition is very much functioning and the cat is out of the bag, for the benefit of all of us as users.


That's just because corporations got greedy and made their apps suck.

Strip away the ads and the data harvesting, add back the power features, and we'll be happy again. I'm more willing than ever to pay a one-time fee for good software. I've started donating to all the free apps I use on a regular basis.

I don't want to own my own slop. That doesn't help me. Use your AI tools to build out the software if you want, but make sure it does a good job. Don't make me fiddle with nondeterministic flavor-of-the-month AI agents.


> That's just because corporations got greedy and made their apps suck.

It is true for me with Linux. I code for a living and I can't change anything because I can't even build most software -- the usual configure/make/make install runs into tons of compiler errors most of the time.

Loss of control is an issue. I'm curious if AI tools will change that though.


I think there's room for both visions. Big Tech is generating more toxic sludge than ever, and yeah sure this is because they're greedy, but more precisely the root cause is how they lobbied Washington and our elected officials agreed to all kinds of pro-corporate, anti-human legislation. Like destroying our right to repair, like criminalizing "circumvention" measures in devices we own, like insane life-destroying penalties for copyright infringement, like looking the other way when Big Tech broke anti-trust laws, etc.

The Big Tech slop can only be fixed in one way, and actually it's really predictable and will work - we need to fix the laws so that they put the rights and flourishing of human beings first, not the rights and flourishing of Big Tech. We need to fix enforcement because there are so many times that these companies just break the law and they get convicted but they get off with a slap on the wrist. We need to legislate a dismantling of barriers to new entrants in the sectors they dominate. Competition for the consumer dollar is the only thing that can force them to be more honest. They need to see that their customers are leaving for something better, otherwise they'll never improve.

But our elected officials have crafted laws and an enforcement system which make 'something better' impossible (or at least highly uneconomical).

Parallel to this if open source projects can develop software which is easier for the user to change via a PR, they totally should. We can and should have the best of both worlds. We should have the big companies producing better "boxed" software. Plus we should have more flexibility to build, tweak and run whatever we want.


And then they will take away your right to boot whatever you want. For national security reasons and the children, of course.

Very good points. I agree and would add: "interoperability" is the key to bringing back competition and opening the ecosystem again.

And being able to fire employees for profit gain when they already make a profit; that's illegal in other countries.

What you're describing is the expected and correct outcome inside a profit-oriented, capitalist system. So the only way I see out of this situation would be changing policy to a more socialist one, which doesn't seem to be so popular among the tech elite, who often think they deserve their financial status because of the 'value' they provide, without specifying what that value is (or its second-order consequences). Whether that's abusing a monopolistic market position they lucked into, making apps as addictive as possible, or building drones that throw bombs on newborns in hospitals.

I think we're after the same goal but have a different view of mechanism.

Regulation enforcement against the anti-market behaviors would bring a lot of good.

Putting too much power in any centralized authority - company or government - seems to lead to oppression and unhealthy culture.

Fair markets are the neatest trick we have. They put the freedom of choice in the hands of the individual and allow organic collaboration.

The framing should not be government vs company. But distributed vs centralized power. For both governance and commerce.

The entire world right now suffers from too much centralized power. That comes in the form of both corporate and government. Power tends to consolidate until the bureaucracy of the approach becomes too inefficient and collapses under its own weight. That process is painful, and it's not something I enjoy living through.

If you see through that lens, it has explaining power for the problems of both the EU countries and the US.


I've been thinking about this lately too. I think we're going to see the rise of Extremely Personal Software, software that barely makes any sense outside of someone's personal context. I think there is going to be _so_ much software written for an audience of 1-10 people in the next year. I've had Claude create so much tooling for me and a small number of others in the last few months. A DnD schedule app; a spoiler-free formula e news checker; a single-use voting site for a climbing co-op; tools to access other tools that I don't like using by hand; just absolutely tons of stuff that would never have made any sense to spend time on before. It's a new world. https://redfloatplane.lol/blog/14-releasing-software-now/

I think people overestimate the general population's ability and interest in vibe coding. Open source tools are still a small niche. Vibe code customized apps are an even bigger niche.

Maybe so. I guess I feel that in a couple of years it may not be called vibe coding, or even coding, I think it might be called 'using a computer'. I suppose it's very hard to correctly estimate or reason about such a big change.

My entire career has been building niche software for small business and personal use. The current crop of AI tools help get that software into my clients' hands quicker and cheaper.

And those reduced timelines mean that the client has less opportunity to change scope and features - that is the real value for me as a developer.


even smaller?

> a living tool that isn't the same as anyone else's copy

Yes, which is why this model of development is basically dead-in-the-water in terms of institutional adoption. No large firm or government is going to allow that.


Large institutions and governments had been pushing back against open source too until it became obviously inevitable.

It wasn't "inevitable", it took Red Hat and some other key players addressing the concerns the businesses and governments had, which took the better part of a decade. If LLMs as an ecosystem don't implode in the next year or so I imagine you'll start to see some big consultancies starting that same process for them.

> it took Red Hat and some other key players addressing the concerns the businesses and governments had

Red Hat? I don't think they are involved in the moves to FOSS for government agencies, mostly because they're American, and the ones who are currently moving quickly (in the government world at least) are the ones who aren't American and want to get rid of their reliance on American infrastructure and software.


Visit Washington DC some time and ride the metro. Red Hat puts out ads about all their public sector offerings.

> Visit Washington DC some time and ride the metro. Red Hat puts out ads about all their public sector offerings.

I haven't had a single need to visit the US, and I still have zero needs for it. If I need to read subway ads to understand how a company is connected to FOSS, I think I'll skip that and continue using and working with companies who make that clear up front :) Thanks for the offer though!


An unnecessarily snarky response to someone offering you clear information.

RHEL is quite ubiquitous in the States, not everything is Microsoft Windows Server

Right, but is "the States" currently trying to migrate away from US infrastructure and choosing FOSS to do so? That was the context I was entering this thread with, since most of the organizations moving to FOSS right now are doing so to move away from US infrastructure.

The whole context was how Red Hat was historically involved in addressing the concerns that were hindering government adoption. Are you just being intentionally obtuse to denigrate the US for some reason?

> It's becoming passé to ask for feature requests and even to submit PRs to open source repos.

Yet, the first impact on FOSS seems to be quite the opposite: maintainers complaining about PRs and vulnerability disclosures that turn out to be AI hallucinations, wasting their time. It seems to be so bad that now GitHub is offering the possibility of turning off pull requests for repositories. What you present here is an optimistic view, and I would be happy for it to be correct, but what we've seen so far unfortunately seems to point in a different direction.


We might be witnessing some survivor bias here based on our own human conditioning. Successful PRs aren't going to make the news like the bad ones do.

With that said, we are all dealing with AI still convincingly writing code that doesn't work despite passing tests or introducing hard to find bugs. It will be some time until we iron that out fully for more reliable output I suspect.

Unfortunately, now that the abstraction language is human language, we won't be able to stop humans from thinking they are software engineers when they are not, so guarding against spam will be more important than ever.


Why would this new paradigm create interesting tooling? From your description I expect worse, not better, tools.

Worse is better for you when it meets your needs better.

I use a lot of my own software. Most of it is strictly worse both in terms of features and bugs than more intentional, planned projects. The reason I do it is because each of those tools solve my specific pain points in ways that makes my life better.

A concrete example: I have a personal dashboard. It was written by Claude in its entirety. I've skimmed the code, but no more than that. I don't review individual changes. It works for me. It pulls in my calendar, my fitbit data, my TODO list, various custom reminders to work around my tendency to procrastinate, it surfaces data from my coding agents, it provides a nice interface for me to browse various documentation I keep to hand, and a lot more.

I could write a "proper" dashboard system with cleanly pluggable modules. If I were to write it manually I probably would because I'd want something I could easily dip in and out of working on. But when I've started doing stuff like that in the past I quickly put it aside because it cost more effort than I got out of it. The benefit it provides is low enough that even a team effort would be difficult to make pay off.

Now that equation has fundamentally changed. If there's something I don't like, I tell Claude, and a few minutes - or more - later, I reload the dashboard and 90% of the time it's improved.

I have no illusions that code is generic enough to be usable for others, and that's fine, because the cost of maintaining it in my time is so low that I have no need to share that burden with others.

I think this will change how a lot of software is written. A "dashboard toolkit" for example would still have value to my "project". But for my agent to pull in and use to put together my dashboard faster.

A lot of "finished products" will be a lot less valuable because it'll become easier to get exactly what you want by having your agent assemble what is out there, and write what isn't out there from scratch.


To be clear I never said custom vibe coded personal software is bad. But clearly that's not the point from OP. Quoting directly:

> you download a skill file that tells a coding agent how to add a feature

This is suggesting a my_feature.md would be a way of sharing and improving software in the future, which I think is mostly a bad thing.


It is a way of sharing and improving software already today. Not a major way, yet, but I don't agree with you it would be a bad thing for that to become more common, in as much as - to go back to my dashboard example - sharing a skill that contains some of the lessons learned, and packages small parts would seem far more flexible and viable as a path for me to help make it easier for others to do the same, than packaging up something in a way that'd give the expectation that it was something finished.

But also, note that skills can carry scripts with them, so they are definitely also more than a my_feature.md.


> Instead of extensions you install, you download a skill file that tells a coding agent how to add a feature. The software stops being an artifact and starts being a living tool that isn't the same as anyone else's copy. I'm curious to see what tooling will emerge for collaborating with this new paradigm.

I built my own, inspired by Beads. Not quite as you're describing, but I store todos in a SQLite database (Beads used SQLite AND git hooks; I didn't want to be married to git), and I let them sync to and from GitHub Issues, so in theory I can fork a GitHub repo and have my tool pull down issues from the original repo (haven't tried it when it's a fork, so that's a new task for the task pile).
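The core of a store like that is tiny. A minimal sketch of the idea (the table and column names here are my own guesses for illustration, not GuardRails' actual schema):

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Local todo store that can be reconciled with GitHub Issues:
    rows with a NULL github_issue_number haven't been pushed upstream yet."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS todos (
            id INTEGER PRIMARY KEY,
            title TEXT NOT NULL,
            state TEXT NOT NULL DEFAULT 'open',
            github_issue_number INTEGER,
            updated_at TEXT DEFAULT (datetime('now'))
        )
    """)
    return conn
```

A sync pass would then just be two queries: push rows where `github_issue_number IS NULL`, and pull remote issues newer than the local `updated_at`.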

https://github.com/Giancarlos/guardrails/issues

You can see me dogfooding my tool on its own codebase and keeping my issues on GitHub for anyone to see, including the closed ones. I do think we will see an increase in local dev tooling that is tried and tested by its own creators, which will yield better purpose-driven tooling that is generic enough to be useful to others.

I used to use Beads for all my Claude Code projects, now I just use GuardRails because it has safety nets and works without git which is what I wanted.

I could have forked Beads, but the other thing is Beads is a behemoth of code, it was much easier to start from nothing but a very detailed spec and Claude Code ;)


Funny, I just released my dev setup as “Open prompt”

https://github.com/rbren/personal-ai-devbox


I actually look at this another way. I think we’re going to see a lot more open source. Before you had to get your pr merged into main. Now people will just ask ai to build the tool they need and then open source it.

Maintainers won’t have to deal with an endless stream of PRs. Now people will just clone your library the second it has traction and make it perfect for their specific use case.

Cherry pick the best features and build something perfect for them. They’ll be able to do things your product can’t, and individual users will probably find a better fit in these spinoffs than in the original app.


Patrick Collison said this yesterday on TBPN, "Software is becoming like pizza […] It should be cooked right then and there at the moment of use"

  Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?

  --Alan Perlis

I totally feel this. Previously I never had time for this, but now I just do it without even thinking about contributing.

And how great it will be to troubleshoot any issues because everyone is basically running a distinct piece of software

It's like the dude who monkey-patches their car and goes to the dealer to complain why the suspension is stiff.

It's because you put 2by4's in place of the shocks, you absolute muppet. And then they either give them a massive bill to fix it properly or politely show them out.

Same will happen in self-modifying software. Some people are self-aware enough to know that "I made this, it's my problem to fix", some will complain to the maker of the harness they used and will be summarily shown the door.


I don’t want to be the one who has to upgrade this software + vibe coded patches.

It's very likely that once something is patched, it has to be considered diverged and very hard to upgrade


... made minutes ago.

So everybody will be using (sometimes slightly, sometimes entirely) different software. Like mutations, these adapt to the specific problems in the situation they were prompted to be programmed.

The skill for feature thing is just horrible, it's wasteful to everyone but the maintainer. It feels like a YOLO people are getting away with because people drank some kool-aid.

Think of skills more like Excel macros (or any other software with robust macro support). It doesn't make sense for Microsoft to provide the specific workflow you need, but your own sheet needs it.

Except "skills" executed by a nondeterministic model will produce less consistent results than a deterministic VB macro written for Excel

> if I start the agent in ./folder then anything outside of ./folder should be off limits unless I explicitly allow it, and the same goes for bash where everything not on an allowlist should be blocked by default.

Here's the problem with Claude Code: it acts like it's got security, but it's the equivalent of a "do not walk on grass" sign. There's no technical restrictions at play, and the agent can (maliciously or accidentally) bypass the "restrictions".

That's why Pi doesn't have restrictions by default. The logic is: no matter what agent you are using, you should be using it in a real sandbox (container, VM, whatever).


But the agent has to interact with the world: fetch docs, push code, fetch comments, etc. You can't sandbox everything. So you push that configuration to your sandbox, which is a worse UX than the harness just asking you at the right time what you'd like to do.

I too would like to know what a good UX looks like here but I have doubts that the permission prompts of Claude are the way to go right now.

Within days people become used to just hitting accept and allowlisting pretty much everything. The agents write lengthy commands into shell scripts or test runners that can themselves be destructive, but those are immediately allowlisted.


Well, you are imagining a worse UX, but it doesn't have to be. Pi doesn't include a sandboxing story at all (Claude provides an advisory but not mandatory one), but the sandbox doesn't have to be a simple static list of allowed domains/files. It's totally valid to make the "push code" tool in the sandbox send a trigger to code running outside of the sandbox, which then surfaces an interactive prompt to you as a user. That would give you the interactivity you want and be secure against accidentally or deliberately bypassing the sandbox.
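To make that concrete, here is one minimal way the in-sandbox side of such a tool could work. Everything here is an assumption for illustration: the shared approval directory (e.g. a bind mount), the file naming, and the JSON shape are hypothetical, not part of Pi or Claude Code:

```python
import json
import time
import uuid
from pathlib import Path

def request_approval(approval_dir: Path, action: str, detail: str,
                     timeout: float = 300.0) -> bool:
    """Called by an in-sandbox tool (e.g. 'push code'): drop a request file
    into a directory shared with the host, then block until a human outside
    the sandbox writes an answer file. Fails closed on timeout."""
    req = approval_dir / f"{uuid.uuid4().hex}.json"
    req.write_text(json.dumps({"action": action, "detail": detail}))
    answer = req.with_suffix(".answer")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if answer.exists():
            return json.loads(answer.read_text()).get("approved", False)
        time.sleep(0.2)
    return False
```

The host-side process watches the same directory, surfaces each request as an interactive prompt, and writes `{"approved": true}` or `{"approved": false}` to the matching `.answer` file. Because the agent only ever sees the shared directory, it can't bypass the prompt the way it can bypass advisory permission rules.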

So you have to set up that integration instead of letting the agent do it. I suppose the sandbox is more configurable, but do you need that? I thought the draw of pi was that you didn't do all that and let it fly, wheeee!

edit: You're not making it sound easy at all. I don't have to build anything with the other agents.


Certainly not. Pi is "minimalist", so the draw is that it's "easy" to set it up yourself. You can not do that and run it in yolo mode, and you can do that with Claude Code too. Heck you can even use this hypothetical real-sandbox-with-interactive-prompts with Claude Code instead, once you build it.

Back to my original point: Claude Code gives you a false feeling of security, Pi gives you the accurate feeling of not having security.


4 times in one thread, please stop spamming this link.

Thanks for putting this together, it's very helpful.

Readers may also be interested in <https://github.com/eugene1g/agent-safehouse> which was open sourced after a recent HN conversation <https://news.ycombinator.com/item?id=46923436>.


You can rename `.gitkeep` to `.gitignore` and both be happy in that case.
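Spelled out, the trick is that the placeholder file doubles as the ignore rule, so git tracks the directory while ignoring everything else inside it:

```shell
# Set up a throwaway repo with a logs/ directory we want to keep tracked
mkdir -p demo-repo/logs && cd demo-repo && git init -q .
# The .gitignore ignores everything in logs/ except itself
printf '*\n!.gitignore\n' > logs/.gitignore
git add logs/.gitignore
# Any future file in logs/ is ignored, but the directory stays in the repo
git check-ignore -q logs/build.log && echo "logs/build.log is ignored"
```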

Wow, the original was actually the real diagram at a thumbnail-quality resolution, so this was an AI upscaling or something like that.

https://github.com/MicrosoftDocs/learn/blob/c266367ec0eb1f7f...


From a brief look, this appears to be using LLM-as-optimizer techniques to generate and measure the impact of skills. That's a lot more involved and likely more effective than the typical "ask Claude to write a skill for the task it just did".

> Using gskill, we learn repository-specific skills for jinja and bleve with a simple agent (Mini-SWE-Agent, gpt-5-mini), boosting its resolve rate from 55% to 82% on Jinja and from 24% to 93% on Bleve. These skills also transfer directly to Claude Code: on Bleve, Claude Haiku 4.5 jumps from 79.3% to 100% pass rate while running faster; on Jinja, Claude Haiku 4.5 improves from 93.9% to 98.5%.
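The optimize-and-measure loop being described reduces to something like the toy sketch below. The helpers `propose_skill` and `resolve_rate` are stand-ins for the real components (an LLM drafting/refining a skill file, and a benchmark run of the agent with that skill); this is not gskill's actual API:

```python
def optimize_skill(propose_skill, resolve_rate, rounds: int = 5):
    """LLM-as-optimizer loop: propose a candidate skill, score it on a
    task suite, and keep the best candidate seen so far."""
    best_skill, best_score = None, 0.0
    for _ in range(rounds):
        # The proposer sees the current best so it can refine rather than restart
        candidate = propose_skill(best_skill, best_score)
        score = resolve_rate(candidate)
        if score > best_score:
            best_skill, best_score = candidate, score
    return best_skill, best_score
```

The expensive part is `resolve_rate` (each call is a full benchmark run), which is why this is much more involved than simply asking Claude to write a skill after a task.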


This looks like what I want. A few questions: is it possible to have a "mayor" type role that has the ability to start other agents, but at the same time is unable to access those secrets or exfiltrate prompt data? The key piece I don't see is that the agent needs a tool for klaw itself, and then I have to be able to configure that appropriately.

Is there a unified human approval flow, or any kind of UI bundled with this? Maybe I missed this part.


Right now the controller can see secrets across namespaces, so that level of isolation isn’t there yet. It’s on the roadmap though. Namespace-scoped secrets where a controller agent can spawn agents but can’t read their secrets is the right model. No human approval flow yet either, agents create directly. Would you want something like klaw dispatch --approve that queues until a human confirms?

My instinct is that it should be built in to tool calls. So if there was a "klaw dispatch tool", it would have the same approval flow as the "publish draft" tool, which is to say some easy way for a human to review and approve (or provide feedback).

got your feedback, it can run cli commands but for tool calling / skill should be built in, will add that!
