Hacker News
T1: A Modern Programming Language for Constrained Environments (t1lang.github.io)
107 points by kick 77 days ago | 65 comments



Ok, I know it's maybe too early in this project's lifecycle, but when I read the page this was my initial reaction.

As an embedded engineer I found the GitHub page incredibly frustrating, and it's tempting to just write it off. For the original author, or anyone else putting together a language page, here are the questions I shouldn't have to dig for:

1. Where's an example of this being used professionally (or outside an academic setting)? I'm pretty risk averse, since FW/embedded systems are harder to OTA, and changes once we've approved the HW design blow up projects. If it hasn't been used, be upfront about it.

2. What's the plan for the project? Is the language stable? Is a standard library next? I use a vendor's language/BSP to avoid writing every little thing from scratch, like printf().

3. Give me a comparison to other languages: why should I use T1 over any other embedded solution? Include code size, and maybe the performance hit.

4. Show me some code. If I'm picking this for a project I need to think of everyone else I'll need to teach the language to, and how they'll accept it. A quick glimpse at the syntax really helps to see how foreign it is. Even if it's just Hello World.

5. How do I access peripherals/HW? For example, if I need flash/NVRAM persistent storage or access to the serial port, am I back to C to write a driver or interface to the BSP? If so, I now need good C skills, T1 skills, and good binding skills.

Thanks to everyone for the slides; reading the comments here pretty much answered 1-5:

https://t1lang.github.io/NorthSec-20190516.pdf

https://bearssl.org/gitweb/?p=BearSSL;a=blob;f=src/x509/asn1...


Is this actively being developed? I don't see an update for the past 8 months.

The language analysis in the https://github.com/t1lang/t1lang.github.io/blob/master/North... slides is telling for why they think it's necessary to develop this:

Go:

  - Only with TinyGo*
  - Limited language/runtime support*
    - "support for goroutines and channels is weak"
  - Maps can only have up to 8 (eight!) entries
  - GC: only for ARM, other platforms "will just allocate memory without ever freeing it" (but GC is required for proper string management)
Rust:

  - Inherits all memory-safety features of Rust
  - Heap is optional
    - But without the heap, everything is allocated on the stack
  - Supports ARM Cortex-M and Cortex-R, RISCV, and MSP430 (experimental)
    - Not other architectures that LLVM does not support
  - Typically more stack-hungry than C
  - Lots of automatic magic
Forth:

  - Many incompatible implementations
    - It's more a concept than a language (there is an ANSI standard)
  - You are supposed to "write your own Forth"
  - Very compact, with low RAM usage
  - Even less safe than C and not portable
Given all this, is it really cheaper to write a new language than to add architecture support to LLVM? I also don't understand the "lots of automatic magic"; is that referring to the generics and/or type inference in Rust? Additionally, if not using a heap (the criticism about the heap being optional), where else would you allocate memory besides the stack? Can someone who does embedded explain that?

btw, on the magic bit the T1 design does go on to talk about Generics support and a rich type system.

Edit: this comment sounds too negative, and that wasn't really my intent. I just wanted to better understand some of the considerations that were used in deciding to make this new language.


I'd be interested to see the comparison against Zig[1], which I am eyeing up as a quite interesting language for embedded development. I think the comparable features would be:

- No automatic magic: anything that looks like a function call or memory allocation is one, and anything that doesn't, isn't.

- Good compile time code generation capabilities including reflection, much nicer than C macros.

- Runtime safety features, such as bounds checking, but not as intrusive as Rust's borrow checker (which makes less sense for single threaded embedded applications). These can be turned off in release builds.

- Very good C interop for use with manufacturer HAL libraries etc, includes a C compiler.

- Better error handling than C, without needing full exceptions; instead, the "try" and "defer" keywords nicely replace C's "goto" for error handling[2].

- No garbage collection

- Embedded is a first class target, unlike Go/TinyGo

- Working async support on all platforms.

- Sadly not as popular as others; one full-time developer currently, I think.

[1] https://ziglang.org/ [2] https://eli.thegreenplace.net/2009/04/27/using-goto-for-erro...


AFAIK Zig doesn't have a way to calculate maximum stack usage at build time. But perhaps this could be done in a language-independent way, in LLVM. That would probably be better anyway, since to really do it well, you'd have to do whole-program analysis, including calls between Zig and C.


it's been planned since the beginning:

https://github.com/ziglang/zig/issues/157

https://github.com/ziglang/zig/issues/1639

also I figured out how to have safe recursion, taking advantage of the stackless coroutines: https://github.com/ziglang/zig/issues/1006

this is relatively straightforward to solve in a non-llvm backend. The LLVM abstraction gets leaky fast when you try to tackle this problem.


Interesting, GCC has '-fstack-usage' to output the stack usage of each function into .su files, I didn't realise LLVM was lacking this feature.

There is a tool for Rust[1] I just found that makes LLVM emit this information for Rust code, but it stores it within[2] the .stack_sizes section in the emitted .elf instead. Possibly a similar trick could be added to Zig.

[1] https://github.com/japaric/stack-sizes [2] https://llvm.org/docs/CodeGenerator.html#emitting-function-s...


I was referring to T1's ability to guarantee a particular stack size for the whole program (if I understand correctly), not individual functions.


I’m familiar with Zig. All of those features you list are also Rust’s capabilities. Is there a reason you don’t see Rust as an option in this space?


There are a few. Rust is a very complex language; it's essentially a modern take on C++, and it fully adopts C++'s design philosophy. This has several implications for embedded development. For one, it makes it hard to write custom compilers and direct backends (AFAIK Rust currently only targets LLVM and WASM). For another, it makes certain formal and informal analyses and guarantees hard (this is perhaps ironic given Rust's emphasis on sound safety, but sound static analysis tools are easier to write for C). It appeals to those who already like C++, but embedded developers aren't crazy about C++; C still rules that space.

Zig is a very simple language. Its strength is not in the features it has, but in the features it lacks. Also, Zig encourages stricter control over allocation than Rust, and its C interop is virtually seamless.

Zig is very young and may well end up disappointing, but its careful selection of features provides what seems to me a great combination of power and simplicity (a single feature, represented by a single keyword, does the work of generics, value templates, constexprs and macros, all without the complications macros introduce). Anyway, I find it to be a very interesting take on what modern low-level programming can be.


> Rust is a very complex language; it's essentially a modern take on C++, and it fully adopts C++'s design philosophy.

This is a false and inflammatory statement, and unhelpful. Rust is more complex than C, but it has a very different set of features and type capabilities from C++, which makes it far simpler. Even relative to C, I would say it's really a different kind of complexity: Rust brings that complexity to the forefront, whereas C and C++ both have a significant amount of hidden complexity.

> It appeals to those who already like C++

Perhaps, but I absolutely abhor C++ (after years of working with it) and completely love Rust, so :shrug:

> Zig is a very simple language

It is. It also doesn't make many of the same safety guarantees as Rust. So there are trade-offs.

> I find it to be a very interesting take on what modern low-level programming can be.

As do I! Though, I do think people keep looking for greener fields unnecessarily.


> This is a false and inflammatory statement, and unhelpful.

It's complex enough that we don't have a formal model of the full language, which is definitely an issue. Traits are perhaps the most complex feature (ignoring things like macros, etc.) and they've caused quite a few unsoundness problems in the past.


> This is a false and inflammatory statement

I think it's neither false nor inflammatory, but a neutral, objective observation. I think that if you asked people what languages in more-or-less regular use are about as complex as Rust (or more), they'll give you a very short list (I can think of no more than three: C++, Ada and Scala). That does not make it bad or unsuitable -- in fact, it might well be great for some -- but it does make it less attractive to those who value language simplicity.

> It also doesn't make many of the same safety guarantees as Rust. So there are trade-offs.

Sure, and if a low-level programmer's primary goal is to write software that's free of certain technical bugs that correspond to C's undefined behavior, then a language that can more-or-less guarantee they don't exist (with some caveats) may be the best way to achieve that. But I don't think that is most people's primary goal. Even for those who value correctness in high-assurance domains, the elimination of undefined behavior, which always comes at a cost, is not necessarily the best way of achieving correctness. Zig, too, puts an emphasis on writing safe code; it just goes about it in a different way than ensuring certain guarantees, soundly, in the language itself.

Now, let me take this opportunity to also explain why it's uncertain that Rust's approach to correctness is the better one. I saw some report from Microsoft that 70% of their security bugs are due to undefined behavior in C++. Let's assume that all UBs are equally to blame, that reducing security bugs is the only thing we care about, and that Rust completely eliminates all UB (which isn't necessarily true because of unsafe and because there could be bugs in Rust and/or LLVM, but I'm willing to accept that Rust's approach is the best at eliminating UB). By choosing Rust and developing the software at cost X (it doesn't matter for now whether X is smaller or greater than the cost of developing in C++), we eliminate 70% of security bugs. Now, there can certainly be a different approach that doesn't eliminate all UB and still reduces security bugs by more than 80%. I'm not saying we know what that approach is, but what I am saying is that it is not at all certain -- it has neither been justified theoretically nor confirmed empirically -- that soundly eliminating all UB, which comes at some cost, is the best approach to reducing those bugs.

I, for one, believe that Zig's approach -- of language simplicity plus some strong safety features -- is a better approach to correctness than Rust's. I don't know that, I just believe that, but those who believe that Rust's approach to safety is better don't have stronger grounds to stand on.

So Zig does not trade off correctness; it trades off the sound guarantee of no UB, which isn't a goal in and of itself. It's a means to achieving some goal, and some may believe it's the best means, but there is so far little evidence to suggest that it is. Yeah, Rust is better than Zig at eliminating UB, but that doesn't mean it's better at correctness. It could well be worse.

> Perhaps, but I absolutely abhor C++ (after years of working with it) and completely love Rust, so :shrug:

Rust is an improvement over C++, but C++ isn't a hit in embedded development, either.

> I do think people keep looking for greener fields unnecessarily.

Why unnecessarily? You don't like C++ but found what you want in Rust; I'm not crazy about C++, either, but Rust doesn't even begin to solve my main problems with C++ (a "better C++" is not what I need), so why is my hope for greener fields less necessary than yours?

I would be happy if people gradually shifted from C++ to Rust -- at least we'll have fewer UBs -- but I don't think it will ultimately make a big change in low-level programming; maybe in some sub-niches of it.

I think Rust's emphasis on sound safety guarantees came at a cost elsewhere, and that cost is too heavy for some. Some like it, some prefer a different approach. I have no problem with people who like Rust, and they shouldn't have a problem with those who don't. Anyway, you asked why some might not see Rust as the answer to their wishes, and I answered.


> think it's neither false nor inflammatory, but a neutral, objective observation.

I think we're going to disagree on that take, but you're not wrong that in terms of language syntax, it's often compared to C++.

> Why unnecessarily?

You're correct, much of this is personal taste and preference, "unnecessarily" is probably the wrong word I was reaching for. But there are many who quickly move on from languages at even the slightest discomfort.

> you asked why some might not see Rust as the answer to their wishes, and I answered.

Yes, I did. I'm trying to understand the difference between language features and capabilities vs. language preferences and personal taste.

It is of course valid to state that one language isn't to your taste, but I often see that interlaced with statements that imply that the language isn't capable because of those personal tastes.

It's the features and capabilities I'm more interested in, less the personal preference.


Well, a simple language has some capabilities that are important in the embedded space that a complex language lacks, like the ability to quickly write compilers and to do both formal and informal analyses. It can also be learned more quickly, which is very important for adoption. I don't think that any of those is an immediate deal-breaker (Ada has had some small success in the embedded space despite being complex), but some preferences tend to be more common in some industry niches.

BTW, I'm not disappointed with Rust for some "slight discomfort," but because it doesn't even begin to address my main discomfort with C++. Rust isn't my cup-of-tea for the same reason C++ isn't (and I program C++ all day, every day).


(We’re four and a half years after 1.0, not six)


You're absolutely right. I removed the error.


I don't know Rust very well, but it strikes me as quite complex, with a lot of language-level features. I think a lot of that complexity isn't needed in embedded; e.g. the borrow checker and "fearless concurrency" make less sense for single-threaded applications, which are by far the majority of embedded development.

The original post lists Rust as having a lot of "hidden magic", which Zig explicitly states it does not have. I am unsure how much of this Rust actually has, tbh.

Also, are you sure Rust has "a C compiler built in?[1]" If I look at the official guide, it recommends compiling your C code to a static library[2], though there are crates to add this to the build process. Rust requires bindings to be generated too, which can be automated [3], but in Zig it just requires including the header file[4] and optionally converting the error codes into proper errors, no extra packages.

Finally, I don't rule out Rust as an option, especially due to its much higher popularity, mindshare and hence ecosystem. But I think a comparison is interesting when discussing other languages for embedded development.

[1] https://ziglang.org/#Zig-is-also-a-C-compiler [2] https://rust-embedded.github.io/book/interoperability/c-with... [3] https://rust-embedded.github.io/book/interoperability/c-with... [4] https://ziglang.org/#Integration-with-C-libraries-without-FF...


> the borrow checker and "fearless concurrency" makes less sense for single-threaded applications

Memory safety, safety from reference invalidation, async etc. etc. There are lots of reasons to care about the Rust featureset even in single-threaded code.


> Also, are you sure Rust has "a C compiler built in?[1]"

It depends on how you look at it I guess. Rust can represent C FFI directly in the language and has many great tools for incorporating C into the language elegantly, like bindgen, which you allude to. I'm not sure I would call this a gap having worked with incorporating Rust into C projects and also C into Rust projects.


> Everything here is provided under the MIT license. This means that you can use as you wish; you don’t have to credit me

Doesn't the MIT license require that it be reproduced with all derivative works, including the copyright line (which includes the author)?


Yes, it does require that, even for binaries. MIT and 2/3-clause BSD are probably the most often violated FOSS licenses because most people aren't aware of that.

BTW, that requirement is the reason why Boost, 1-clause BSD and 0BSD licenses are a thing.


You can also use CC0 for software to dedicate something into the public domain.


CC0 has a mildly problematic patents clause, so if you want to avoid questions about that you might prefer 0BSD. There is a longer explanation on the OSI FAQ https://opensource.org/faq#cc-zero


BSD is unclear about patents. How is that better than explicitly stating that the license is just about copyright?



In many countries, you can't release your works into the public domain. You can only license them. Thus you should generally avoid CC0.


CC0 is explicitly designed with such legal issues in mind and includes a fallback license for jurisdictions that do not have a direct way of putting works into the public domain:

> If the waiver isn’t effective for any reason, then CC0 acts as a license from the affirmer granting the public an unconditional, irrevocable, non exclusive, royalty free license to use the work for any purpose.

https://wiki.creativecommons.org/wiki/CC0_FAQ#How_does_it_wo...

https://creativecommons.org/share-your-work/public-domain/cc...


Alright thanks for pointing that out, I misremembered it then.


The point of CC0 is that it puts things in the public domain if possible, and provides the most liberal license if not. See the "public license fallback" - https://creativecommons.org/publicdomain/zero/1.0/legalcode


Yes, 0BSD is the generally recommended no obligations version https://spdx.org/licenses/0BSD.html


I wonder if this could end up taking more space than the code itself…


For anyone looking for examples, there are some here https://github.com/t1lang/t1lang.github.io/blob/master/North... .


The T0 mentioned there is already in use and available here: https://bearssl.org/gitweb/?p=BearSSL;a=tree;f=T0;hb=HEAD

Example code: https://bearssl.org/gitweb/?p=BearSSL;a=blob;f=src/x509/asn1...

The slides are also much more educative: https://t1lang.github.io/NorthSec-20190516.pdf


I would argue that a language targeting bare metal type applications should at least be minimally aware of that underlying hardware. A single stack "virtual machine" type language is generally a terrible fit for the 2 and 3 operand register-based processors which dominate the landscape.

Every interview of Chuck Moore that I've read has contained zero push-back for his rather wild claims. It's entirely possible for an industry to go down the wrong path for a while, but at some point, if Forth and stack processing were the giant killers they were cracked up to be, you would see them enter and dominate at least some portion of the mainstream. You can't say they haven't been given enough time.

It seems there are many "Forth curious" programmers out there, but they aren't being given the full picture with the various puff pieces and vanity projects floating around that never really go anywhere. It's almost a culture of victimhood.


I’d say calling a stack-based VM terrible is overstating the case a bit. In [1] the numbers seem to bear out that converting from stack-based to register-based yields an average, adjusted performance increase of approx. 25%, while the transition results in a code size increase of approx. 45%. That speedup is a non-trivial amount, but so is the code size increase, which is a valid area of concern for embedded/resource-constrained engineering. Also, while I tend to STRONGLY agree about awareness of hardware at the language level (although I don’t think this should be limited to just embedded environments), the methods in [2] are automated and simple ways the compiler can take stack-based code and create optimized register-based instructions.

As an aside, there is probably a different argument to be had about whether a stack-based VM as the mental model of a language is beneficial, but as I said, that is a very different argument from discussing the technical ability to translate a stack-based VM to a register-based one.

[1] ‘Virtual Machine Showdown: Stack Versus Registers’. Ertl, Gregg, et al. 2005. https://www.usenix.org/legacy/events%2Fvee05%2Ffull_papers/p...

[2] ‘Implementation of Stack-Based Languages on Register Machines‘. Ertl, Dissertation. 1996.


I briefly skimmed [1] and their more sophisticated translation to a register VM yielded a 25% increase in code size, and not the 45% you stated? Regardless, VM to VM really isn't my point.

I still don't get the positives of why the programmer should be presented with a stack machine SW model when there is no stack-machine-type stack to be found in the HW. Programmers with no stack machine / language experience probably think (as I did at one point): "the stack on my HP calculator works great, why not base a language on that?" But it scales poorly, it gets really hairy if the stack is the only place to store things, and it's a major headache keeping the stack from going out of sync (hence Forth's clear I/O comments regarding subroutine stack use).


You are correct about the 25%. That was a mistake (for anyone else looking, the 45% was the decrease in the number of executed instructions going from stack to register).

Correction aside, I, at some level, agree with you. The question of the appropriateness/benefits of an SM model is different from considerations of the effectiveness of the code. In terms of the Forth SM model and its explicitly imperative style, I am familiar with the various complaints/arguments/critiques, for varying reasons. I certainly acknowledge there is at least some burden placed on the programmer due to the stack mechanics. I happen to be particularly attracted to the approach and enjoy programming that way, but that is just a preference of mine. However, I also tend to look at Forth as an imperative language with which to program in what Backus describes as ‘function-level programming.’ I find the benefits listed by Jon Purdy, author of the Kitten language, in his talk about concatenative code to be compelling enough that exploration and refinement of this idea space is valuable.

All that said, I am perfectly willing to consider that, as presented in Forth and presumably in the proposed language, SM mechanics and models are a hurdle for users of the language.


In almost every meaningful interview of Moore, he explains the concept of virtual stacks. In his most recent interview, he literally explained how he designs all of his chips to have 8 physical stacks, while in the past he used (I believe) 16 stacks. You're misrepresenting Moore's claims in a way that can't be taken as anything but uninformed.


I believe there have been a wide variety of successful Forth projects, including NASA probes.

Forth makes more sense when you can be like Chuck and design your own hardware. At that point Forth is more of a minimalist stack-based philosophy that consists of the minimal hardware + software design to accomplish something.

Is his way better? It likely has somewhere near the minimum possible amount of code and is very efficient system-wise. If you wanted a minimal solution and could devote years to the project, then this is nice.

Does it make the best usage of developer time? Obviously not in many cases. In today's business world, you just slap some components together, do minimal acceptable testing, push it out and go to the next thing. Cost is important to the customer, so as long as somebody is willing to do that kind of work, everyone has to. Of course the downside is that we've accreted all this tower of abstractions and complexity going from OS to JVM to libraries and source. We already have tons of COBOL that can barely be maintained. Next will be the large Java and Python codebases.


The Forth success stories tend to be really, really ancient, and therefore almost irrelevant. Much like Chuck's arguments for the merits of stack languages / machines. Processor pipelines have to be at least deep enough to do a wide multiplication or you're basically looking at a toy.

I actually have designed my own FPGA soft-core barrel processor; it's a special blend of register and stack machine. The blend occurs by placing stacks under the registers themselves. I believe this allows a low-register-count, 2-operand architecture to be more efficient than it otherwise would be, which minimizes opcode size and sidesteps most of the craziness you get when trying to shoehorn most processes into a single-stack environment.

But it has clear downsides as well, the main one being that the stacks can become easily corrupted by any process using them - this is true of any stack machine, but you strangely never hear it come up in conversations with Forth types. Stack processors can eliminate much traditional processor state, but the stacks themselves contain state, which is often overlooked.


Have you seen his Green Arrays? Not sure what the best use case is, but primitive it ain't.


Multi-core F18A technology, each core of which is a simple 18-bit processor. IMO, anything less than 32 bits (with internal 33 x 33 = 66-bit multiply) falls into the primitive category abyss.

Moore is an incredible salesman, I'll give him that.


Moore is an awful salesman but a skilled technologist, you have it reversed. He only recently got buyers for his recent processors, and he doesn't handle sales in any of his business endeavors.

Also, the chips he made before GA were 32-bit. He deemed it unnecessary, and the GA chips run miles around them.


I meant that he's a genius at the whole stack machine / language shaman thing.

32 bits are unnecessary?!? I suppose an 18-bit machine would run a lot "faster" than a 32-bit machine given certain data sets and loads, but I wouldn't want to do any audio DSP with it.


Actually, audio DSP might be a bit better on them, too. They have the best power/instruction ratio on the entire planet, and given each chip has 144 entire computers on it, audio DSP should be no problem for them.


"...and given each chip has 144 entire computers on it..."

Entire computers? What decade is it again?


Not just a processor, but peripherals as well.


This!

A core and a computer aren't the same, and while Intel processors have multiple cores, sometimes many, they don't have multiple computers within.

http://www.greenarraychips.com/home/documents/greg/PB002-100...

COMPLETE SYSTEMS: We refer to our chips as Multi-Computer Systems because they are, in fact, complete systems. Supply one of our chips with power and a reset signal, and it is up and running. All of our chips can load their software at high speed using a single wire that can be daisy chained for multiple chips; if desired, most can be bootstrapped by a simple SPI flash memory. Application software can be manufactured into a custom chip for a modest cost to further simplify overall system design. External memory is not required to run application software, but our larger chips have sufficient I/O to directly control external memory devices if desired.

Contrast this with a Multi-Core CPU, which is not a computing system until other devices such as crystals, memory controllers, memories, and bus controllers have been added. All of these things consume energy, occupy space, cost money, add complexity, and create bottlenecks. Most multi-core CPUs are designed to speed up conventional operating systems, which typically have hundreds or thousands of concurrent processes, by letting a handful of process execute in parallel as opposed to only one. They are not, typically, designed for significantly parallel processing, and they are even less well suited for simple applications than are their less expensive single-core progenitors.

It's meant for hyper-parallel applications, unlike Pentium cores.


Sounds promising! I like the idea of open-source projects being an ‘extraction’ from something mission-critical like BearSSL, similar to Rails’ birth. I also totally agree re the comparisons of other languages; none are really ideal for embedded. Although I suspect that embedded CPUs will simply get more powerful for a given price point, to the stage where we can use basically any language eventually. I suspect the syntax might be too alien for most, though.


A code sample on the landing page would have been nice.



From t1spec.pdf:

"Postfix source code can be readily serialized into an executable format running on threaded code, a well-known code generation method that can allow for a very small compiled code footprint"

What's that method?


You take each symbol in sequence and append a reference to it to the compilation output. It's super easy, and it results in a very small code footprint (often smaller than compiled C for equivalent code).


For me the most interesting part of T1 is the ability to strongly guarantee maximum stack usage, and the accompanying ban on general recursion. Are there any other languages or compilers that do this?


Resource Aware ML is a research language which analyzes resource usage, including memory bounds

http://raml.co/



I think Ada, in its SPARK dialect, should be able to do this.


It would be interesting to try this or Forth out in some serious embedded system project. I wonder if I would get the "stack based" style to work in my head.


I'd love to see a statistic on the number of new languages announced on HN.


I don’t quite get the target of this language. The landing page says it should be used for embedded systems, yet the compiler roadmap for v1 contains “functions for reading and writing files”.

What files? This will run on top of an operating system? Why bother with a new language then.


A lot of embedded systems use file storage even while not having an OS.

A very cheap way to add storage to low-volume production runs is to add a microSD card. At that point it’s easier to go for something like FAT and work with that.


and yet that sounds like a library and not a core language construct.


Some people like larger languages that have more of the basics rolled into core. Not sure if that fits here though.


Tons of embedded devices use files. For example digital cameras, network routers, and handheld GPS all use files as an explicit part of their setup, output, or operation.


Huh, yes, sure, but do you think it makes sense that your _language_ of choice implements a FAT file system and a file abstraction on top of it in its standard library?!

I sure have never seen that



