I Quit My Job to Live on Donations to Zig (andrewkelley.me)
399 points by AndyKelley 8 months ago | 236 comments



There’s a huge positive-feedback-loop problem in technology right now that needs to be solved to make projects viable.

Never received even one donation/rating/comment/etc.? Be prepared to wait approximately forever.

Got a few, especially publicly-visible? Suddenly things trickle in faster.

Well-known project with 5000 stars? Here, have more stars. Here, have more donations. Success breeds success.

Meanwhile, wasn't it shown that crucial infrastructure like SSL had only one or two largely unknown developers? How many other projects are like that? How many of their developers can you name? How many different projects have you donated to? How many "not well known" apps have you balked at paying even $0.99 for?

It's a problem. There need to be better ways to vault critical "boring" projects into the well-known category. And there has to be a way to keep positive feedback loops from ballooning the ratings and income of projects simply because they sorted to the top and stayed there.


> There’s a huge positive-feedback-loop problem in technology right now that needs to be solved to make projects viable.

> Never received even one donation/rating/comment/etc.? Be prepared to wait approximately forever.

> Got a few, especially publicly-visible? Suddenly things trickle in faster.

That's not tech specific; there's a reason that places with tip jars preseed them with an initial stash of visible tips, and that advertising (for consumer products, charity, and even B2B products) features endorsers known to the target audience. Human social behavior is strongly based in imitation.


The tech-specific differences are things like:

- Compared to setting up a coffee stand, it can take years to build really good technical solutions. (Longer, if you actually care and put in “invisible” effort like making it not crash or delete data regularly.) Yet, that app can still be lumped in with all the rest in the begging-for-scraps models of many monetization schemes.

- Unlike a coffee shop, where you can probably assume some amount of traffic and tips on a regular basis, software is often firmly set at ZERO. Or negative, if you consider any other cost. There isn’t a “software neighborhood” where people might walk by and see your project alongside only a couple others in the area.

- Unlike any “natural” recognition in the real world, tech projects are recognized in entirely unnatural ways. No one may know about an app for months; yet if/when knowledge finally spreads, the message is generally LOUD and everywhere. Say, being featured in the App Store: you go from “never heard of it” to “almost everyone in the world with an iPhone who checked top-10 lists this week”. That is far, far beyond the reach of normal ads, and also insane: it pretty much guarantees over-rewarding and under-rewarding developers across the board.


Thus the crazy amount of money that is spent on advertising and its silent twin sister, public relations. And botnets. Don't forget the botnets clicking on things.


> Human social behavior is strongly based in imitation.

It's called social proof. It is a handy metric if you aren't that knowledgeable and can't judge quality directly for yourself. You look for evidence that other people judge it to be valuable.

Yes, it's manipulable. It isn't perfect by any stretch of the imagination. But in the absence of better ways to make such decisions, it beats having nothing to base it on.

That's not to be argumentative per se. Just elaborating on why it gets done that way.


> there's a reason that places with tip jars preseed them with an initial stash of visible tips

With software donations there's also the initial hump to get over. When you're at a coffee shop and you've just paid $5, it's not much of a stretch to toss another dollar into the jar. You already have your wallet out, and you've already spent money.

When you come across a donation link for some free software you've started using, you don't have any of that.

I even balk at $0.99 for a mobile app, even though I know that a dollar is inconsequential to me (and I believe I can even "return" it within 24 hours if I don't like it!), even though paying for it is only a single confirmation tap beyond what installing a free alternative would be.

It's astounding how much psychology and emotion governs what we do in this space. Very little has to do with logic or rational evaluation.


Ah, the cruelty of power-law distributions. The problem is that they come up everywhere, all the time -- as often as, if not more often than, Gaussian distributions.

https://en.wikipedia.org/wiki/Power_law


As a little aside here, Gaussian distributions aren't really that common. Even if we ignore splitting hairs about "true" Gaussian versus approximate Gaussian (which would just be pedantic), Gaussian distributions seem to show up frequently for two reasons. The first is that they're attractive and useful as a model, so many measurements are specifically designed to approximate a Gaussian. These are usually artificial, not naturally occurring. The second is that it's easy to abuse the term to refer to anything that looks decently uniform visually, if not rigorously, and to hand-wave away the details as long as it gets the rhetorical point across.

The first phenomenon leads to many natural things being interpreted as Gaussian when that's just an artifact of measurement (as an example, see the BMI scale). The second is very similar to the way you can reliably hear people use the term "exponential" to describe a large absolute increase, even if that increase isn't even quadratic.


Well, the Gaussian shows up a lot because of the Central Limit Theorem. A lot of additive processes will converge to the Gaussian.
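
You can see the theorem in action with a few lines of code. A minimal sketch (hypothetical demo code; the inline LCG is only there so the snippet needs no external crates): sum a few dozen uniform draws per trial, standardize, and the histogram of the sums comes out bell-shaped even though each individual draw is flat.

    // Sum many Uniform(0,1) draws; by the CLT the standardized sums
    // are approximately Gaussian. The tiny LCG is demo-quality only.
    fn main() {
        let mut seed: u64 = 42;
        let mut uniform = || {
            seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            (seed >> 11) as f64 / (1u64 << 53) as f64 // value in [0, 1)
        };
        let (trials, terms) = (100_000, 48);
        let mut hist = [0u32; 11]; // buckets over z in roughly [-2.75, 2.75]
        for _ in 0..trials {
            let sum: f64 = (0..terms).map(|_| uniform()).sum();
            // Uniform(0,1) has mean 0.5 and variance 1/12; standardize the sum.
            let z = (sum - 0.5 * terms as f64) / (terms as f64 / 12.0).sqrt();
            let bucket = ((z + 2.75) / 0.5).floor().clamp(0.0, 10.0) as usize;
            hist[bucket] += 1;
        }
        for (i, count) in hist.iter().enumerate() {
            println!("bucket {:>2}: {}", i, "#".repeat((*count / 1000) as usize));
        }
    }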


The distribution of distributions that you find in textbooks is power-law distributed (or pretty close, IIRC). At the top of that power-law distribution is, ironically, the Gaussian.


This is soul crushing when starting B2B companies. The valley of death between "first customer who breaks everything" and "ten customers happily using your product without customization" takes a (physically) painful amount of time. Much better to do pre-investment and get it out of the way.


[Disclaimer: this happened in France; I'm not sure it applies in all countries.] A friend of mine is a freelance worker, getting paid through public grants to do 100% OSS dev for medical software. He has a skill that I don't: he spends 50% of his time filling out forms, writing proposals, and hunting grants that often fall through. The ones that succeed, however, pay for several months of his time.

There are public funds out there, in France and in the EU at least. Getting them requires a bit of networking (you usually need to be backed by a partner: a lab, a company, or an institution) and a lot of paperwork, but it allows you to get financed in a way that charity or donations will have a hard time achieving.


It's a business opportunity that someone should tackle soon (like Patreon). A company to be the umbrella for these important projects that no one cares about but everyone uses. Companies and people using these projects donate to the main company, and it distributes the money to its "incubated" projects.

Does something like this already exist?


I know of and contribute through https://opencollective.com, but I wouldn't be surprised if there are more platforms like this.


Sounds kind of like Apache. Or less similarly, but more successfully, Mozilla.



ISC comes to mind - https://www.isc.org/


>There’s a huge positive-feedback-loop problem in technology right now that needs to be solved to make projects viable.

As others have said in this thread, it is not specific to technology.

Other words to describe it: Hipsters, teenager-(like)-wannabe-with-the-"in"-crowd-mentality (I see that a lot in adults), winner-take-all, network effects, (going back) "no one ever got fired for buying IBM" (or insert other popular bigco name here), ... , all the way back to the fundamental point - human nature.

Not very many people are discerning and try to make independent judgments/decisions. Much easier to follow the crowd or herd.


This isn't only a problem with technology: https://ofdollarsanddata.com/why-winners-keep-winning-4e7f22...

> Both King and Rowling’s foray into undercover writing reveals a harsh truth about success and social status — winners keep winning. This idea is formally known as cumulative advantage, or the Matthew effect, and explains how those who start with an advantage relative to others can retain that advantage over long periods of time. This effect has also been shown to describe how music gets popular, but applies to any domain that can result in fame or social status. I discovered this concept by reading Michael Mauboussin’s The Success Equation where he writes:

>> The Matthew effect explains how two people can start in nearly the same place and end up worlds apart. In these kinds of systems, initial conditions matter. And as time goes on, they matter more and more.
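
The mechanism is easy to simulate. A toy sketch (all numbers made up; plain preferential attachment): 100 identical projects, and each new star goes to a project with probability proportional to its current stars plus a small baseline. Same starting conditions, wildly unequal outcomes.

    // Cumulative advantage in ~20 lines: stars attract stars.
    fn main() {
        let mut seed: u64 = 7;
        let mut rand = || {
            seed = seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            (seed >> 11) as f64 / (1u64 << 53) as f64 // value in [0, 1)
        };
        let mut stars = vec![0u64; 100]; // 100 projects, all starting from zero
        for _ in 0..100_000 {
            let total: u64 = stars.iter().sum();
            // Weight each project by (stars + 1); the +1 is the baseline chance.
            let mut pick = rand() * (total + stars.len() as u64) as f64;
            for s in stars.iter_mut() {
                pick -= (*s + 1) as f64;
                if pick <= 0.0 {
                    *s += 1;
                    break;
                }
            }
        }
        stars.sort_unstable_by(|a, b| b.cmp(a));
        println!("top 5: {:?}", &stars[..5]);
        println!("bottom 5: {:?}", &stars[stars.len() - 5..]);
    }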


I feel like HN is an imperfect but not terrible partial solution. Yes, it is still based on likes and involves no small element of luck, but you can have a project that is not very well known, post it on HN, get 50-100 views minimum, and if your project resonates and gets a bit lucky, it can be a good kickstart.

It's still based on the same social feedback loops and marketing that cause the problem you describe, but it makes it easier for unknown projects to get seen and evaluated by a knowledgeable, generally objective audience.


>Well-known project with 5000 stars? Here, have more stars. Here, have more donations

Agree with getting more stars, but donations? Not yet :-/ :( https://github.com/learnbyexample/Command-line-text-processi...


It shows the importance of self-promotion. If you don't self-promote then no one will know that you exist no matter how good or even how significant your project is.

It's easy to forget to self-promote when you're busy doing actual work though.

That was my problem as well: I created a popular open source project and focused on driving all the attention towards the project and none towards myself. I created a Twitter account for the project, the GitHub repo was under an organization instead of my own personal profile, and the blog I made was also named after the project.

The lesson I learned is that people don't care that much about projects and concepts, they need to see faces and names. Even among highly technical developer communities... So imagine how bad it must be in mainstream communities.


You could always hold the industry ransom. Proprietary everything, strict paid licensing, unions/guilds, etc. See how much people are willing to pay once the things they take for granted are gone.


You live in a competitive society, and that is the logical result. We (especially the creative ones among us) should fight for a society with wealth redistribution, so we wouldn't have to spend the whole day working just to survive, and more people would be able to work on projects that are meaningful.


That problem has been described in sociology and is called the Matthew effect: https://en.wikipedia.org/wiki/Matthew_effect


I agree with everything you say, and would like to note this:

You have just written a simple and alternative specification for "government" :)


Here's a story about Andy.

Two years ago, I was writing a modest Lisp interpreter in C. The evaluator was more or less working, but I was totally stuck on parsing user input (in other words, I had the E in REPL, but not the R). After spending an entire night banging my head against the parsing logic without making any progress, I began to despair that it would never work and I would have to throw the whole thing in the trash.

Well, that morning I met Andy. Groggy and frustrated, I told him about my problem. He said "Why don't you try tokenizing the input before parsing it?" This was a revelation to me. Yes, it turns out that reading a string character by character and trying to recursively convert it into a Lisp object all at once isn't the best way to do it. Who knew (aside from anyone who had ever learned anything about parsing)?

The resulting parser code: https://github.com/nickdrozd/lispinc/blob/master/parse.c

Anyway, I don't know anything about Zig, but Andy sure helped me out of that jam.


Andrew, if you're listening: add some higher tiers on Patreon; you'd be surprised. Someone who's willing to give you money (any amount, even $1) is probably also willing to give you $5 or $10 (impulse spending, basically).


By default when I choose to support a creator I like I start at $20.


Yeah, I did this for my Patreon as well (some people donate less, but they're not the majority): https://www.patreon.com/henryzhu (I also quit a few months ago to do open source on Babel). Maybe it also depends on the audience and the type of Patreon (open source vs. video/content creation)? I also tried to get a little creative with my tiers based on what I liked: it's hard to justify making rewards for people when you already provide a service by working on open source, without it turning into more of a second job. Lots to talk about with that.


Some amount of people always go above the number. I have someone on my own $5 level paying $10. And it's perfectly common for people to put $3 in a $1 level. I wouldn't be surprised if the first person to take me up on my $200 level doubles it.

That said, because it's normal for people to pay more than you ask, it's not a bad idea to have a higher level or two if you have lofty goals for it.

I don't think it's really impulse spending though. Most of the people on my Patreon are people I talked to for a year or more on social media before they signed up.


I second this. I don't have a publicly advertised way to donate to my project, but I've had multiple $100 donations, with an average of around $20.

Note, the total number of donations I've received is ~20.


He has 92 patrons giving him $596/month.

Methinks you don't understand how Patreon works.


If he made the default $5, it's quite possible that the average and total would go up further.

When you set your default, you are implicitly saying "this is what I think my software is worth to you". That's an anchoring point. Most people who decide to donate less than the default will probably feel slightly bad for doing so. Some of those people who would just stick with the default at $1 will also stick with the default at $5. Some people will generally always give more than the default, and that "more" might be even more if the default is $5 than if it's $1.

It's probably incredibly rare that someone would click through, see a $5 default (including the possibility to donate less), and say "wow, this person is full of themselves thinking $5 is reasonable; I'm not going to donate at all in protest". So there's little to no downside risk in raising your default by a non-crazy amount. Obviously some defaults are absurd; $1000 would be crazy and probably off-putting. But $1 is certainly a low-ball.


I don't know how likely it is, but there's a risk that this could backfire:

> Most people who decide to donate less than the default will probably feel slightly bad for doing so.

If the default amount is too much for me, then my choices are to donate less and feel slightly bad, or not donate at all. It will be pretty tempting to keep my money, rather than give some up and feel guilty anyway. Paradoxically I might even feel less guilty for not donating, because that's a passive act and I can tell myself I'm only postponing the decision, rather than definitively making an ungenerous choice.


That's a completely different suggestion from the one I replied to.


The link on his site says $1. I’d make the default $5.


I think the point is that the donors are already averaging at an amount much higher than $1 each.


Do you understand how Patreon works? He can always raise his pledge.


Some, enough to know that if it says "$1 pledge" that in no way limits the patron to only pledging a single dollar.

If people are willing to give him more than that without him offering additional incentives, there is a downside to providing other levels: administrative overhead on the back end to make sure people get the incentives they paid for, and to distinguish those who gave $5 without wanting other incentives from those who gave $5 because they did want them.

Etc

If the point is to free up his time to focus on Zig, more pledge levels potentially distract from that without necessarily increasing his take, given that the developer community is already willing to give him more than a dollar just to support his work on Zig, without any more incentive than that.


What's amazing to me is that he's sustaining living in NYC on currently $423 a month. He must be pulling from a pretty large savings fund. Even his "fully sustain working on zig" goal is only $3000.

https://www.patreon.com/andrewrk/memberships


It was ~$350 when I looked at it this morning, $423 when you posted, and $569 now.

If current trends continue... :P


> What's amazing to me is that he's sustaining living in NYC on currently $423 a month.

I don't think he is?


According to his Patreon I linked he is.


That is... quite impossible. He's either living off savings, mooching off someone, or homeless.


He answered that in another comment https://news.ycombinator.com/item?id=17257056


Well, the article makes it seem that he has only just begun doing this, and he likely has savings from the job he quit.


He mentions having a girlfriend, maybe they live together?


OKWS author here. I have to take some issue: we made a lot of improvements to the infrastructure over the years. The Tame system, for instance, was a big one. [1]

[1] https://www.usenix.org/legacy/events/usenix07/tech/full_pape...


Hi Max! Amazing, I thought that tame was there since the beginning. BTW my "hack week" project was going to be replacing tame with cpp-coro [1] but I quit before hack week started.

[1]: https://github.com/lewissbaker/cppcoro/


Tame had only just been applied in earnest when I started in 2007. Pre-tame code was pretty unwieldy!


Having the courage to take the plunge is admirable, and that's enough to get $1/month from me. Congratulations on making such a big decision!

One thing I'm curious about: the current ~$400/month from Patreon (minus fees) isn't close to enough to cover a similarly comfortable standard of living in New York, and I imagine that this would affect your "runway", if you will. Are you able to live and work off of savings, or will you be relocating for a lower cost of living?

Apologies if this comes across as prying into your financial details. I'm more just curious about the process. Again, congrats!


Thanks for the support :)

As much as I love NYC, it may be practical to relocate somewhere with a lower cost of living. Alee & I will reevaluate when the lease is up. She's in grad school, so who knows where that will take her? I'll probably just live where she needs to for her career.


Hey Andy, dev/ops nerd turned marketer here. First of all, AWESOME that you quit your job and are attempting to make a go at this! Huge congrats!

I would recommend you put in a couple higher levels on the Patreon and think about what people would want from you. For instance, I ran a popular blog and asked my readers what they would pay monthly for. Turns out one of the biggest requests was just a private webinar (I hate that word, so you could call it something else, like "knowledge brain dump") once a week. I ran them for 60 or 90 minutes and took questions at the end. In fact, I may do this again in the future, since I continue to get requests for it long after the fact.

People here have expressed how articulate you are when describing the language. Setting up a group at say $19/mo or even $49/mo where you deep dive into software dev, practical applications of Zig, and/or programming languages in general "live" once a week might be something people are willing to pay for, and would get you "out of the red" more quickly.


Totally agree. I'd happily have my company pay good money if the software proves to be really useful. And I'd prefer to give that money to small devs instead of Atlassian, Oracle, or others. Yearly plans are also easier for me to pay; dealing with monthly billing is a bit of a pain.


Andy, congratulations! I decided to support you a while ago on Patreon because I believe in the philosophy behind Zig and all of the real work you have put into it. Your other projects bolster this belief. I want you to succeed, and I think success will also come to me indirectly by committing to Zig in my future projects. I want to write an array language, like J [0] which I use, in Zig, and a Zig library for geometric algebra along the lines of the C++ library Versor [1] and the JS library ganja.js [2]. Also, rewriting your DAW in Zig may make me try to write a modest imitation of Extempore [3] in Zig! Now I have to devote more time to learning Zig. Maybe I need a baseball cap with Zig on it ;)

  [0]  jsoftware.com
  [1]  http://versor.mat.ucsb.edu/
  [2]  https://github.com/enkimute/ganja.js
  [3]  https://extemporelang.github.io/


That's what I did. I used to work remotely and followed my now fiancee to Yakima, Washington. I saved so much money there...


NYC can be cheap if you share an apartment with someone and live in any borough but Manhattan.


He writes in Patreon that reaching $1000 will allow him to "work on Zig full-time for 3 times longer", so I'm guessing he's burning some savings and every extra dollar will just extend that for some time.


Follow up question: What do you do about health insurance? The cheapest (read: catastrophic coverage) on Oscar is several hundred per month last time I checked.


For anyone like me who's hearing of Zig for the first time, I totally encourage y'all to watch the video that's attached on his Patreon page.

It's rare to see a technical presentation done in a way where the concepts are so clearly understandable. Makes me want to take a peek into Zig and see what it's like in there and consider supporting the author just because. Hope it works out well enough for him to live his dream :).


I'm tempted to spend a good while studying Zig just because it says "Zig competes with C instead of depending on it" and folks here are not trashing it ("Why compete with C at all?" would be my guess at the general reaction).


Based on what other posters are saying about the language design and the interface design, it's built in such a way that you don't have to do a ton of boilerplate to integrate Zig code with C/C++ projects. That, in turn, means that you can start new sections of your code base in Zig and gradually start eating your legacy C/C++ code until you have a project written in Zig instead of C. And it seems like it does this by not being too opinionated about your use of memory, unlike Rust.

I would be really interested in using it if there were more Clang support for bare-metal systems. I think the consensus among embedded developers is that we would all really, really like a modern language with modern type safety and type checking, but we also still want to do crazy crap with memory at times.

It doesn't matter so much that there's a way to do something, but it matters a great deal how you are expected to do it. Keeping it simple matters, and Zig looks pretty simple compared to other compiled systems languages that I've seen, so that's wonderful.


> you can start new sections of your code base in Zig and gradually start eating your legacy C/C++ code until you have a project written in Zig instead of C.

http://tiehuis.github.io/iterative-replacement-of-c-with-zig


> not being too opinionated about your use of memory

That's a bit hard to tell, isn't it? When I checked out the docs after last week's Zig thread, the entire section on memory allocation was a big fat TODO.


I think this may be exactly what I've been looking for. I've been wanting a "Better C" type of language, but all the existing candidates fall down on some point: too complicated (Rust/D/C++), no manual memory management (Go), etc. I've even made some half-baked attempts at writing my own language.

It seems that the design goals of this language are exactly what I'm after. Going to have to give it a try later.


What part(s) of Rust do you find too complicated, and why?


I'm not the person who you asked, and I'm an absolute rust lover, but--

Rust is for people with C-level problems who don't like C's ergonomics. Zig appears to be for people with C-level problems who do like C's ergonomics.

Some of that comes from essential complexity that we've been lying to ourselves about (memory safety, mutability, safe parallelism) and that Rust makes explicit.

C has a simplicity to it, for better _and_ worse.


In general I prefer languages to be small and simple - like Go and C. This doesn't seem to be a design goal for Rust, which puts me off. Some specifics:

Trying to do anything non-trivial with ownership/lifetimes/borrowing always ends up in a tangle, fighting with the compiler.

Everything in the stdlib seems to have a giant api with pages and pages of functions. Example: https://doc.rust-lang.org/std/string/struct.String.html Scrolling down through those methods is overwhelming.

The overhead imposed by the goal of maximising safety is big, and not that important to me (as I'm not writing Very Important Crypto code or whatever, I don't find it hard to write good enough code for my purposes in C).


Click the [-] at the top right of that doc page. You won't find it so overwhelming.


Why not Ctrl-F instead of scrolling?


One place they're thinking of might be integration with C. Zig's C integration is crazy good from what I can tell toying around with it. It was as easy as

    @cImport({@cInclude("header.h")})

The language is also evolving rapidly, this may have changed, but if it did, it probably changed for the better.


Zig might be a good fit for you. Nim also seems to answer all your requirements.


Nim doesn't meet all his requirements, it's garbage-collected. Complexity is hard to measure and rather subjective, but Nim in my opinion seems to be more complex than Zig.


Nim is optionally garbage collected. Most of the standard library opts to use the garbage collector, true, but here's an example of the beginnings of a tiny OS kernel in Nim [0] that does not.

I do not know enough of Zig to compare the complexity, and Nim is indeed not simple. To borrow from the Zen of Python, it is complex but not complicated, and most of its complexity is of a kind you can almost entirely ignore until you need it; when you do need it, you're in trouble if your programming language doesn't provide it, e.g. operational transforms, macros, compile-time execution, pragmas.

[0] https://github.com/dom96/nimkernel


> Nim in my opinion seems to be more complex than Zig

That may be so, but Zig is still at a very early stage of development. A language may look simple and fresh in the beginning, but the real test comes when 1.0 is right around the corner. Nim is at that stage now. I am one of the Nim core devs, and I will even admit that it is by no means simple; there are features that I would like to see removed to reduce the complexity, but how do you do that when a vocal percentage of your user base loves the feature?

Do keep this in mind when trying to compare a language like Zig (which is just starting out) to a language like Nim (which has been around for a while). Perhaps Zig will do a better job than Nim in this regard, but only time can tell.


You can also point out that people can use subsets with minimal features. They can do it for extra safety, consistency, or even low-level control. The safety-critical field has done that with both C++ and Java for some time now.

Regarding that, does Nim allow one to turn off GC, mess with pointers, and twiddle bits like in C? As in, is there an optional set of C's features in there that one can use? Or improve with macros, type-safe wrappers, and so on?


> Safety-critical field has done that with both C++ and Java for some time now.

Every company/project uses a subset of C++ for a variety of reasons. The problem is, all those subsets are different, and as a result, if you want to rely on the ecosystem, you eventually have to deal with the entire language. Maybe you avoid exceptions/pointers/new&delete/template-metaprogramming/the-preprocessor, but the libraries you need don't.

This is visible already in Nim, where the GC is optional, but most libraries (and most of the standard library) do depend on it.

It works well without GC, but you lose out on most of the existing libraries and many parts of the ecosystem.

> Regarding that, does Nim allow one to turn off GC, mess with pointers, and twiddle bits like in C?

It lets you twiddle bits without having to turn off the GC: it has GC-tracked pointers ("ref"s) and non-GC ones ("ptr"s). The GC is per-thread, with time limits, but it can also be turned off.

> Or improve with macros, type-safe wrappers, and so on?

Nim macros are not quite at the Lisp level, but they are extremely powerful.

There are various features for type safety that DON'T require wrappers; e.g., you could define "meters" and "yards" as two distinct 64-bit double types, which means you can't add them together or assign one to a variable of the other kind, even though each is still essentially a C-style typedef. This is a much more elegant solution than the one available to the C++/Python/Smalltalk/Java/C# crowd, where you have to define a class with a lot of methods, which is still clunky, and hope that the compiler will be able to optimize it back down to a plain old double.
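
For comparison, here's the closest Rust equivalent via the newtype pattern (a sketch, and note it's a mild version of exactly the wrapper approach described above -- Nim's distinct types give you the same compile error without defining wrapper types and their operators):

    // Newtype wrappers: Meters and Yards are distinct types at compile time,
    // even though both are just an f64 underneath.
    #[derive(Clone, Copy, Debug)]
    struct Meters(f64);
    #[derive(Clone, Copy, Debug)]
    struct Yards(f64);

    impl std::ops::Add for Meters {
        type Output = Meters;
        fn add(self, rhs: Meters) -> Meters {
            Meters(self.0 + rhs.0)
        }
    }

    fn main() {
        let a = Meters(3.0) + Meters(4.0); // fine: Meters(7.0)
        let y = Yards(2.0);
        println!("{:?} {:?}", a, y);
        // let bad = a + y; // compile error: expected `Meters`, found `Yards`
    }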


> Nim macros are not quite at the Lisp level, but they are extremely powerful.

I'm not fully familiar with Lisp macros so I'm curious, what is Nim missing that Lisp has in terms of metaprogramming?


Two things in common use:

Lisp has reader macros that can alter lexical analysis and parsing; correct me if I am wrong, but I think that’s not possible in Nim. E.g. things like JSX are trivial to implement in Lisp.

Also, Lisp macros let you, e.g., write new control structures with multiple "body" parts; IIRC, in Nim only the last untyped macro arg can be a code body (you can put a block in parentheses, but that's not as elegant).

I'm sure there's other stuff that fexprs and other [a-z]exprs can do that Nim can't, but I've never seen them in use (or used them myself).

Also, personally I think Nim's model is more practical; Lisp's power essentially requires Lispy or REBOLy syntax to be usable. Nim is pascalythonesque, and though complex, it is not complicated; much like Python, and unlike C++, you can make use of the ecosystem without being afraid of obscure details biting you, but it has all the capabilities when you need them.


> Regarding that, does Nim allow one to turn off GC, mess with pointers, and twiddle bits like in C? As in, is there an optional set of C's features in there that one can use? Or improve with macros, type-safe wrappers, and so on?

Yep, and in fact I would say that Nim is a perfect "better C" language. You get things like modules, a better type system and metaprogramming. Of course you'll have to forgo a lot of the stdlib, but if you're already using C that won't be a tough pill to swallow.


People here are commenting that NYC is quite an expensive place to live off donations.

I'm not sure what advantages that the location brings when working on a programming language (networking?), but I can suggest moving to South Africa where cost of living is low and quality of life is high. Yeah there's "crime" -- but it's not too bad as long as you're a little smart about the way you conduct yourself.

Cape Town is an obvious first choice and is really cosmopolitan with a good dev community, but I can suggest Durban as a good runner up; it was recently voted "Most Livable City in South Africa" [0]

For New York level "hole-in-a-wall" rentals, you can live like a king in South Africa :) If you're willing to share a spot you can really get a palace for yourself, and enjoy some of the best beaches in the world (in Durban).

[0] https://www.iol.co.za/mercury/news/durban-is-most-liveable-c...


You can also live in Anywhere, USA if you're not willing to be that adventurous. It's cheap, too. I worked remotely in Yakima, Washington, which means I could still fly home and visit Seattle from time to time.

Then again, some people love the adventure of moving to another country, and I encourage them to pursue that if that's what they want.

That's all moot though because the Zig guy said elsewhere he'll probably be following his girlfriend who's about to graduate.


Well, then you have to live in Yakima! ;)

There are excellent inexpensive places to live in Washington that offer a lot more than Yakima does. Spokane, for example, is still very inexpensive compared to Seattle and offers a lot more in terms of both city life and accessible outdoor activities. It's too bad the dev scene there is pretty limited (Itron and Seven2 are really the only companies I can think of off the top of my head employing devs, and the salaries are nothing to write home about). Of course, if you're doing remote work anyway, this isn't really an issue.


Yep, I was following my now fiancee.

There are a lot of great mid-size cities out there--a hundred of them to choose from--and they're all cheap.

There's this prevailing attitude among city dwellers that living outside of NY, SF, Seattle, or <insert your tech hub here> is death, but that is an attitude born of a lack of life experience.

Life is sometimes great in midsize cities, and in many ways they offer more for less.


As someone who has only ever lived in dense urban metropolitan cities, can you tell me about some of the things midsize cities offer more of?


More apartment for your money, plentiful parking, more elbow room at restaurants, more say in civic life (if that is important to you). You can just "breathe" easier and aren't constantly "wound up" as tightly as a defense mechanism against being jostled, hustled, dealing with long waits. It's a little hard to appreciate until you're in it. Not the best if you're looking for a wider dating pool, need top flight arts and entertainment every weekend, or if you can't/don't keep a car.


As someone who is leaving Austin soon, this hits... home. And it just took enough life experience to arrive at this conclusion.


Austin is a midsize city.


From an outsider's perspective, with all the legislation and politics governing it, S.A. wouldn't be my first choice. The Midwest U.S. is still very comfortable, and you don't have to move countries; as a last resort, maybe I'd go somewhere in Latin America.

If he is confident that he can manage to quit his job while making less than $1k a month for now, then even a $10/hr part-time job of less than 25 hours a week would be a great boost.


I don't know about that. As a freelancing expat, you could probably avoid truly being impacted by South African politics. The politics of much of the Midwest aren't exactly stellar. Especially in the most affordable parts. Lastly, as cheap as the Midwest is, bang for the buck in South Africa is far more exciting. I'm not an expert on S.A., but I have been there. And I spent the first 21 years of my life in the Midwest.


Durban appears to have a per-capita homicide rate equivalent to, or a bit higher than, NYC's in 1990, during the height of the crack cocaine epidemic.


Most people who live in cities don't realize how inexpensive it can be to live in the US.

For example, I lived a spartan but comfortable life in Rochester, NY for well under $20k/year. That was renting an expensive-for-the-area apt in a nice neighborhood. When I had roommates my rent was <$250/m. You can buy a house in Rochester for under $20k.


It can be inexpensive in terms of rent but you will need to pay for transportation (i.e. car and gas) to live in the US.

I lived in Budapest and Chiang Mai, Thailand before and the advantage of these cities is that you can pretty much walk anywhere or pay little for private/public transportation. Food is everywhere and you're not stuck in the middle of nowhere.

The cost of owning a car is part of the living expense... so it's not just rent.

And FYI, I had a nice studio in Chiang Mai for less than $250 a month. It was professionally cleaned every week and the place was just steps away from cafes and grocery stores. No cars required.


Given the weather in Rochester, what were your utility costs?!


It depends on the home and the method you use to heat. Many houses in Rochester pre-date modern insulation and cost a lot to heat. That said, there are plenty of reasonably priced newer homes with good insulation available. The last place I lived was well insulated and cost <$150/m to heat in the worst conditions.


Well, if we're gonna give Andrew suggestions on places to emigrate to, S.A. -- like N.Z. -- has the advantage of English as a first language. BUT if Andrew and his S.O. have any Spanish, they shouldn't overlook the often-overlooked country of Uruguay. From [1]:

> Uruguay is ranked first in Latin America in democracy, peace, low perception of corruption, e-government, and is first in South America when it comes to press freedom, size of the middle class and prosperity... Nearly 95% of Uruguay's electricity comes from renewable energy, mostly hydroelectric facilities and wind parks...

[1] https://en.wikipedia.org/wiki/Uruguay


New Zealand is also making a big push to bring in tech workers, with some amiable visas. I see their immigration department at tech conferences all the time.

From personal experience, I can tell you that Taiwan is a fantastic place as well. Lots of engineers, absurdly low cost of living for the civic amenities (which blow away even the best that San Francisco or New York can offer), and gorgeous nature if you're into that. If you manage to learn Mandarin it'll greatly expand your access to the programming community there, but if not there's plenty of English speakers and just expats.


New Zealand’s cost of living isn’t nearly as great as it used to be (it’s actually slightly higher than the U.S.), primarily thanks to the influx of Chinese investors.


I was in Taiwan recently and it is a lovely country, but talking to some friends who work there (in non-tech jobs) they said there is a large brain drain to mainland China. Has this been your experience? I'd imagine salaries are substantially lower than what someone would make in the US, even after factoring in cost of living.


Speculation among my Taiwanese friends is that the "brain drain" rumor is raw propaganda from the PRC government.

Very few Taiwanese people would give up the freedoms they have in Taiwan for a move to China just because they might get paid more. There probably should be an (unbiased) study done, but it does not match my experience.

Taiwan does get shitloads of Chinese people coming in to work, though.

Taiwanese salaries are substantially lower, yes, especially for Taiwanese. Furthermore, they have an unhealthy, ineffective work culture reminiscent of Japan. These issues need to be worked out and it is a long term plan of mine to tackle this head on and very publicly when I start a business there.

For expats, there's no need to participate in this work culture if you don't work at a majority Taiwanese company, and if you do work at such a company, you can simply annoy your coworkers by leaving at 5, you most likely won't be fired over it but will need to withstand peer pressure. Visibly Asian-heritage expats will experience this even more.

The best case scenario is the one we're discussing in the main thread - expat working internationally, but remote.


Cape Town may have been a good choice in the recent past, but not so much this year.

https://news.nationalgeographic.com/2018/02/cape-town-runnin...


There are plenty of economical cities to move to in the States without packing up everything and moving to another continent. Even just heading upstate to somewhere like Rochester is a smart choice. Cheap cost of living, the Great Lakes, the Finger Lakes, and an amazing cocktail and food scene.


@Nsomaru are you living in Cape Town/South Africa?


I've lived there a few years. Move around more now.


I envy the kind of confidence that would lead someone to decide that a new programming language is their ticket to riches.


Sounds more as if he thinks a new programming language might possibly be able to let him eke out a living while doing work he enjoys. No riches in sight, so far as my reading of it is concerned.


That's a little mean. The author writes:

> 100% of donations I receive go towards paying rent, buying food, and generally attempting to live a modest, but healthy life.

He doesn't expect to get rich from this.


Even so, at some point if the language "takes off", a company out there may want to become a full-time sponsor. It's no riskier than a startup - and nicely bootstrapped with very low cost of maintaining it.

Kinda awesome, actually, that he's pursuing his passion.


Also, who was the last person to get rich from a programming language?

Programming languages are more of an open-source thing. You can't make much money from them.


Anders Hejlsberg got $1M for joining Microsoft: https://en.m.wikipedia.org/wiki/Anders_Hejlsberg


To get the principal engineer behind Turbo Pascal, Object Pascal (Delphi), and then C# and TypeScript, that seems like the deal of the century if you factor in the business value of all of the above.


Indeed, and at the time I was surprised that they got him for so little. There are so many design decisions in C# that resemble Object Pascal and Turbo Pascal that it's pretty obvious that Anders=C# (not to take anything away from what are surely some of the best compiler engineers of the last few decades; I just don't know their names offhand).


As someone who programmed heavily in Delphi back in the day, there absolutely are, and it's one of the reasons I like C# so much; it feels like home.

So does TypeScript.


I also used all the languages designed by Anders; his practical approach is clearly visible in all of them. Unfortunately there are just a few interviews with him, but they are all high quality: https://www.artima.com/intv/anders.html


Which is hardly "getting rich" if you are as skilled as Anders Hejlsberg.


I'm pretty sure Anders' salary at Microsoft qualifies him as rich.


I think the GP assumed that Anders was already rich before that so joining Microsoft didn't 'get him rich'. But if that was the case, based on his work history he may have still 'gotten rich' via programming languages, just from before Microsoft.


I think GP is saying that $1M isn't enough money to "get rich", and furthermore isn't particularly "for creating a language", for someone who could easily make $200K-$1M working on whatever Microsoft wants.

But GP forgot that the $1M was in 1997, before Google and Facebook and friends made non-founder engineers rich.


Netscape?


By that standard, I would guess that Chris Lattner did OK when joining Tesla.


I have a feeling some of the people involved with kx systems probably did well out of it. https://kx.com/about-kx/


Open source can open many avenues of profit. I wonder what the creator of Laravel makes with conferences, Forge, and paid support.


Not only does he probably have a good amount of savings, he can easily get a new job if this experiment fails. It’s a gutsy move I’d never be able to do but not really something he can’t recover from.


Yeah, as a former senior engineer at a well-known company who took time off to develop his own programming language, all he has to do is throw his resume into the recruiter feeding frenzy and immediately get a job.

While it's admirable, I can't really say it's risky. Risky is quitting in a field where people stay laid off for like a year.


It IS risky unless he has a cushion to fall back on (lots of savings or a trust fund). Life tends to pull expensive rabbits out of hats, more of them as one gets older.

At $1000/month, a broken or stolen laptop or bicycle is ~two months of income. In the US (where he is), any medical issue, or a baby, is somewhere between three months and bankruptcy.


He's taking a sabbatical, not retiring permanently. Sure, he needs to have some money saved up.


Well, I assume he's going to be smart about it and have at least six months' income.


If he’s spending $2K on a bike then he’s not living a “modest” life.


Baloney. If your bicycle is a serious mode of transportation that's not unreasonable at all.


...to riches.

There are riches besides money.


Yes, but they end up being difficult to chase if you can't make rent or buy groceries.


There's a big bit in the middle between "homeless and starving" and "actually rich".


Of course there is, but my point is I could not do this without an expectation of penury.


He's probably not going to get rich off patreon donations. The only developers that are doing well on patreon are being sponsored by real companies (or making adult games).

For example, vue.js has 100k GitHub stars and the creator is still only getting $15k a month on Patreon, and a good chunk of that comes from a small handful of $2,000-tier company sponsors: https://graphtreon.com/creator/evanyou. So that's probably a rough upper bound on how much the donation model can support.


_Only_ 15k/month? Sounds like a pretty epic deal to me, getting paid a very healthy salary to set your own hours and work on your baby.


As someone who writes programming languages, let me tell you that people who do so don't do it to get rich. We do it because we see great inefficiency in the world of systems that needs to be corrected. It's like people who are social justice activists. They're not getting rich either, but they're doing what needs to be done.


Reminds me of the fellow that took a year off to create an amazing new version control system. He made svk, which was pretty good except for being followed too closely by git.

However good Zig looks, there's also Nim, ATS, Rust, and a dozen more. Seems like a good time for programming languages -- which means a dangerous time to quit your day job for one.


Realistically, I doubt the author would have trouble finding another job if this doesn't pan out.


I doubt that the svk guy had that problem either.

His problem was, instead, "I took a year off work to produce a VCS and then saw the throne taken by git. My work has been forgotten and nobody uses it--not even me."


That's the experience of 99.999% of all startups.


Thanks for clarifying, I see what you're saying


Cool, good for you :). I like the world where the internet enables niche creators to live and engage with invested supporters/patrons.

Aside, have you seen the language Jonathan Blow's working on (jai)? Curious how you'd compare the two, goals or implementation-wise. (Given he hasn't released anything yet, no worries if this is unanswerable :))


There is at least this comment in the docs:

> Zig is taking cues from Jai's stance on allocators, since that language is being developed by a high-performance game designer for the usecase of high-performance games.

https://github.com/ziglang/zig/wiki/Why-Zig-When-There-is-Al...


He also has a Liberapay account: https://liberapay.com/andrewrk/


Best of luck. Is there any place where you wrote about why you believe Zig is better than all the other alternatives to C? Why should I look at Zig instead of looking at Rust?

Not trying to be dismissive, genuinely curious what your thinking is.



Do you see a niche you could conquer with Zig?

For Ruby it was Rails. Python profited from data science. Etc.


Systems programming, I think? Also, much less complexity in the type system than Rust, one of the main new contenders along the old gang of C and C++.


Looking for more on Zig I found a talk [0] (~1 hour) on the language by Andrew Kelley from just a few months ago.

Edit: I see there are already questions about Rust in this thread. There's an audience question on Rust in the talk [1], and the creator states that Rust is Zig's biggest competitor.

[0] https://www.youtube.com/watch?v=Z4oYSByyRak

[1] https://www.youtube.com/watch?v=Z4oYSByyRak&feature=youtu.be...


IMO we're about to see a robotics renaissance.

That embedded world needs a simple, performant, low-level language that isn't C. If Zig focuses on providing that for the robotics domain, I could see it becoming popular.


What makes you think we're about to see a robotics renaissance?


Michael Bay's documentary Transformers


I want this to work so much (because this is exactly the "business model" by which I'd like to live.)

I know of another example: the guy who draws and writes the web- (and now print too) comic "Kill Six Billion Demons". I believe he's currently living off of direct fan support (through Patreon too, as it happens) although he has published a physical book collecting the first arc of the story. (I guess that is also fan support..?)


In Asia, several dozen authors are earning millions (in USD) by releasing chapters of novels on a daily basis. And English translators get about $20-$100 per day from Patreon donations:

Check

  - https://www.wuxiaworld.com/ (note that this is a very large community)
  - Earnings of Tang Jia San Shao and I Eat Tomatoes
  - Article on New York Times: https://www.nytimes.com/2016/11/01/world/asia/china-online-literature-zhang-wei.html


Image published a couple of collections of K6BD. Which is a very different proposition from self-publishing - if you do it yourself you can expect to have to spend at least two or three months dealing with supply chain management, running a Kickstarter, and so on, instead of drawing more pages. And that’s assuming you don’t make any mistakes, there’s a lot of things to learn the hard way.

There’s a lot of artists living off of Patreon. I got close near the end of my last graphic novel, but the Kickstarter was a big learning experience that took a while, the new one takes longer to draw and rent always goes up, not down.

(And this is why there was so much screaming when Patreon tried to change their fee structure late last year.)


Props to you buddy! Glad people like you are always willing to take risks to move the software community forward. Whether it works out or not, I give you my thumbs up!


Has anyone tried Zig? From reading the project page I don't see anything that it has/plans over something like Rust that's much further along.


Rust seems like a spiritual successor to C++. Zig seems like a spiritual successor to C (except it's a little closer to C, as you can directly run things from C in Zig)


Came here to say the same thing. I think there is probably room for both. Seems like there are valid use cases where you might want something with completely manual management like C, but without footguns like undefined behaviour and C-style strings.


Error management is different, and quite uniquely done. Zig deals with failure modes in a much more fine-grained way than Rust, which favours the algebraic-wrapper approach (Result / Option types). If you're writing actual, low-level systems code, your code might need to react differently depending on the failure mode: you need to know if it was an I/O issue, or out-of-memory, or disk-full, etc. At the call point, Zig lets you enumerate all of the possible failure reasons comprehensively, and handle each one specifically if you choose to. This enumeration is produced and checked by the compiler, so you can't miss any cases. (It also lets you disregard the specifics if you don't care.)


That sounds quite similar to Rust, except less fine-grained than Rust. The Result type has an Error parameter which is usually an enumeration of the possible failure conditions, e.g. https://doc.rust-lang.org/std/io/struct.Error.html . That is, an IO function returns only IO errors, as compared to Zig's strategy which unifies all declared errors in the program into a single "error" enumeration.

(To be clear, there's a lot of cases when this precision isn't so useful, but the contention here is how fine grained they can be.)
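
For the io case, the branching looks something like this in Rust (a minimal sketch; std::io::ErrorKind is real, the file path is made up):

    use std::fs::File;
    use std::io::ErrorKind;

    fn main() {
        // Open a (made-up) path and branch on the kind of I/O failure.
        match File::open("/no/such/config.toml") {
            Ok(_) => println!("opened"),
            Err(e) => match e.kind() {
                ErrorKind::NotFound => println!("not found; using defaults"),
                ErrorKind::PermissionDenied => println!("permission denied"),
                // ErrorKind is non-exhaustive, so a catch-all arm is
                // required -- unlike an exhaustive Zig error-set switch.
                other => println!("unexpected I/O error: {:?}", other),
            },
        }
    }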


There's definitely a distinction here between the approach you've described (Rust can indicate that a function returns only IO errors -- but doesn't tell you which subset of "all IO errors" might actually occur at this call point) and Zig's (the function call can result in an arbitrary set of specific errors, some of which might be IO and some might not; and only the actually possible errors are included in the set).

Is one better than the other? I doubt that question has an objective, one-size-fits-all answer. I'm happy to see experimentation in this domain! My point really is just that Zig is different from Rust in how it handles errors, even if there are some superficial similarities.


I think you haven't understood my point or maybe I haven't understood yours, and maybe it is because I have misunderstood Zig's approach.

My understanding is that something like `fn f() -> %T` returns a T or an `error`, and the latter can be literally any error value that exists anywhere in the program. It isn't restricted to the actual possible errors that might occur inside f, and thus has the same failings of Rust that you point out (don't know which subset of errors might be returned), except the unknown set of possible errors is likely to be larger.

(And, you can still get all the niceties of Rust's error handling infrastructure like the ? operator if you manually define an error type that is restricted to the exact set of possible errors: there's not a privileged type for how to represent errors.)


If there's any misunderstanding, it's probably mine. :) I'm reasonably familiar with Rust, and I understand your argument; but I only read about Zig a couple weeks ago, so my knowledge there is based only on a little reading and experimentation.

My reading of the release notes:

https://ziglang.org/download/0.2.0/release-notes.html

...is that error sets can be inferred (at both call-site and in the function declaration); and that inferred error-sets on return types are exact, in the sense that an inferred set will only include the actually-possible errors encountered in the call-tree.

To be fair, I don't know whether a function can declare that it may fail with a given, explicitly-declared error-set, where that error-set includes errors that the function actually cannot produce. (E.g., I say I can throw a network-error code, but I perform no network calls.) If this is permissible (i.e, the compiler doesn't complain), then that undercuts my argument a bit. :) On the other hand (again from the release notes), most functions are encouraged to simply declare their return type as !T, where the ! represents "the inferred error-set for the function," and this seems to support my claim.


ErrorSets have been in the language for some time now. The syntax you are referencing is pretty outdated now too, when considering the pace of language changes here in Zig's pre-1.0 days.

https://ziglang.org/documentation/master/#Error-Set-Type


Ah that explains it: I was looking at the now-old https://andrewkelley.me/post/intro-to-zig.html#error-type


Zig's errors are actually different and the variants can be statically known. Error type returns are not usually specified since they can usually be inferred. They can be explicitly specified if needed [1].

For example, the following code

    fn errorOrAdd(a: u8, b: u8) !u8 {
        if (a == 2) return error.FirstArgumentIsTwo;
        if (b == 2) return error.SecondArgumentIsTwo;
        return a + b;
    }

    pub fn main() void {
        if (errorOrAdd(0, 0)) |ok| {

        } else |err| switch (err) {
        // will force a compile-error to see what errors we haven't handled
        }
    }
emits this error at compile-time:

    /tmp/t.zig:13:18: error: error.SecondArgumentIsTwo not handled in switch
    } else |err| switch (err) {
                 ^
    /tmp/t.zig:13:18: error: error.FirstArgumentIsTwo not handled in switch
    } else |err| switch (err) {
                 ^
You can use the global `error` type which encapsulates all other error types if needed, replacing `errorOrAdd` in the previous `main` with the following function

    fn anyError() error!u8 {}
now requires an else case, since it can be any possible error.

    /tmp/t.zig:13:18: error: else prong required when switching on type 'error'
        } else |err| switch (err) {
                     ^
This works pretty well and is very informative in most cases. The trade-offs are that it can make some instantiation of sub-types a bit clunky [2], and that you need to avoid the global error type everywhere: it infests all calling functions and makes their error returns global as a result. You can, however, catch this type and create a new, specific variant, so there is a way around this for the caller at least.
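For example, something like this (a sketch, reusing `anyError` from above) narrows the global error back down to a one-variant set for callers:

    // Catch whatever anyError() failed with and re-return a specific code,
    // so callers see a narrow, switchable error set instead of the global one.
    fn wrapped() error{Failed}!u8 {
        return anyError() catch return error.Failed;
    }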

Do remember that Rust allows passing information in the Err variant of a Result, while Zig's error codes are just that: codes with no accompanying state.

[1] https://github.com/tiehuis/zig-bn/blob/3d374cffb2536bce80453...

[2] https://github.com/tiehuis/zig-deflate/blob/bb10ee1baacae83d...
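To illustrate that stateless-codes point: if you need detail to accompany a failure, it has to travel separately, e.g. via an out-parameter (a sketch; names made up):

    fn parseDigit(c: u8, bad_char: *u8) error{InvalidChar}!u8 {
        if (c < '0' or c > '9') {
            bad_char.* = c; // the "payload" travels outside the error code
            return error.InvalidChar;
        }
        return c - '0';
    }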


> Zig's errors are actually different and the variants can be statically known. Error type returns are not usually specified since they can usually be inferred. They can be explicitly specified if needed [1].

Yep, as a sibling points out, I was working with a long-ago version of Zig's error handling.


Let me introduce you to the concept of ErrorKind in Rust (in the IO module of the std): https://doc.rust-lang.org/std/io/enum.ErrorKind.html

There is also the "failure" crate which has patterns for the kind of thing you mentioned:

https://boats.gitlab.io/failure/error-errorkind.html
https://boats.gitlab.io/failure/custom-fail.html


I don't believe that either ErrorKind or the 'failure' crate gives the same degree of automated enumeration and checking that Zig provides. I'm not a Zig apologist... I just think you should read through the Zig documentation and examine the differences for yourself.

https://ziglang.org/documentation/master/#Errors

https://ziglang.org/download/0.2.0/release-notes.html (see "Error Sets")


If I'm understanding correctly, you can use enums in Rust to recreate the same thing, though it would be slightly more verbose.

    use std::io::ErrorKind;

    enum MyErrorSet {
        NotFound(ErrorKind),
        SomethingElse(ErrorKind),
    }
If you don't care about propagating the original ErrorKind then that's just

    enum MyErrorSet {
        NotFound,
        SomethingElse
    }


Yes, I think you're right. But the burden for generating and maintaining the enums is on the developer; whereas in Zig, they can be automatically inferred by the compiler.

I don't think the distinction is super important in application work (for example). But for systems code I can see the benefit of having the compiler tell you, with certainty, why a given function can fail (and why it can't). E.g., a good device driver should handle all its failure modes with care, rather than with a shotgun approach.


Is there something about Rust that precludes it from reacting to failure modes as you describe?


I'm guessing, but I think the tricky part for Rust might be the automated construction of the error-union type. E.g., at a certain call point, the full union of all possible errors is ErrorSet<E1, E2, E3, ..., En>. The compiler determines this union, not the programmer. At the next call point, the union may be entirely different. To the best of my knowledge, Rust doesn't currently allow for variadic type constructors (i.e., where the number of parameters may be knowable at compile time, but the arbitrary-length parameter list cannot be expressed in the abstract syntax of the language).

https://ziglang.org/documentation/master/#Errors

edit: when I say "full union of all possible errors", I just want to be clear that this means "all the possible errors that could occur at this call point, and no others." It's not just a bag of "all possible errors that the language/library declares to exist."

edit #2: On second thought, I guess it might be possible for a Rust tool to build these types, although I think it would require changes to the language implementation. Variadic type constructors aren't necessarily needed. Another approach would be that every (type-specialized) function returns a Result<T, ES> type, where ES is some type that implements an "ErrorSet" kind, and where ES is very possibly unique to the function (that is, no other function might return Result<T, ES>). For example, function A() might fail in only two cases (say, out-of-memory or disk-full, but never network-error), and function B() might fail only on disk-full. Then the ErrorSet type for A has two constructors (OOM and DiskFull), where the ErrorSet type for B has only one (DiskFull). You could then use Rust's (excellent) pattern matching to exhaustively handle all cases (i.e., the specific constructors of that ErrorSet type) at the call site.

So the tricks would be:

1. Instrument the language to tag each specialized function with its error-set; this is computed recursively by examining the failure modes in the function body, as well as the error-sets of every function it might itself call, and taking the union of those sets. (Side note: I believe that, in Zig, a type-specialized function maps to a single error-set -- i.e., the error-set doesn't depend on runtime arguments in any way. But I could be wrong about this.)

2. Generate a potentially new ErrorSet-ish type for each function, or more specifically, for each error-set that is generated using the process in step #1. (I suppose you would have to give the type a name, but maybe it could be an anonymous internal type if there is language support for that.)

3. Then you could naturally "match" on the function's return value, and know that you've covered (exactly) the possible failure modes.

But this is just armchair-quarterbacking on my part. I don't know the internals of the Zig error system, nor what changes Rust would actually need.


I write a lot of Zig. In fact, all my hobby projects have been in Zig since I started using it, because I like it so much.

I haven't tried Rust, but I've seen enough of it to know I'd hate it. Where Zig has a clear advantage is that it is vastly smaller and simpler, and it does not enforce a memory-management paradigm on you. There isn't even a default allocator.
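For instance, anything that allocates takes an allocator parameter explicitly; a sketch of the flavor (against current std, whose API has shifted a lot across versions):

    const std = @import("std");

    // No hidden global allocator: the caller decides where the memory
    // comes from and is responsible for freeing the result.
    fn repeat(allocator: std.mem.Allocator, byte: u8, n: usize) ![]u8 {
        const buf = try allocator.alloc(u8, n);
        for (buf) |*b| b.* = byte;
        return buf;
    }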


Since Zig is a pretty unknown language, I would actually buy swag: a cap or t-shirt. It's not like a monthly donation, but if it can get you extra cash...


This is brilliant. Perhaps create a $10 tier that comes with swag.


I've looked at the guide [1] to writing a Brainfuck interpreter, and the Zig language indeed looks very practical. In particular, the test "..." {} block is neat, and the love for stack backtraces makes me feel at home.
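For anyone who hasn't seen it, a test block is about as small as it sounds (minimal sketch; run with `zig test file.zig`):

    const std = @import("std");

    test "addition" {
        std.debug.assert(1 + 1 == 2);
    }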

Definitely a language to keep an eye on.

[1] https://blog.jfo.click/how-zig-do/


I am curious whether it was inspired by Elixir's ExUnit test macro or whether the idea sprang up independently. Greatness is 99% perspiration nonetheless.


Zig, or making a better C, sounds like a genius idea. C is so enormously successful that a better C could be huge.


Hi Andy! Wanted to let you know that I appreciate the niche you're exploring. Pledged on Patreon. Keep up the good work. Please do consider the comments suggesting a periodic video/podcast/etc. that would allow you to share your process. Thanks!


What exactly are people getting for their $1/mo? We have a donation page on our website that we send to people who ask, but we haven't taken the plunge of exposing it publicly yet, out of fear of backlash.


Try "softening the blow" by selling a limited line of $SOME_PRODUCT_OR_SERVICE and releasing a donation page at the same time... And retain the donation page once the limited supply runs dry.


Supporting someone or an idea they believe in.


I would love to see Zig compile to C so I can use it on platforms that Clang/LLVM do not support.


I'd think it would make sense to do this as an LLVM backend so that everyone could use it. This used to exist [1], but I don't think it's maintained, which suggests there wasn't much demand for it.

[1] https://news.ycombinator.com/item?id=12279770


Hey OP, your andrewkelley.me/donate link is broken! Might wanna look into that.


Eh, it says he's getting $720 per month? Won't last long on that...


Absolute awesomeness. I can't wait to see it become the next big thing!


I run a small church. It's pretty much the same idea, except for YHVH.


Go kick some butt man. - Nifty


[deleted]


We detached this subthread from https://news.ycombinator.com/item?id=17257758 and marked it off-topic.


That he isn't a d*ck who would have made fun of him because that was "compiler 101", but instead helped a brother out!


I hate this elitist attitude people have, like everyone should have taken a 101 for every topic in the universe. If all bugs are shallow given enough eyes, all problems are "X 101" problems given enough people.

"So and so helped me fix a hard problem"

"Isn't that just [problem category] 101?"


In agreement with the others, it tells us that he isn't the kind of ass who would say, "isn't that like compilers 101 stuff".


It's still a nice story. If nothing else, we learn he's helpful. Some who are knowledgeable about PLT and compiler design would've sneered and said 'go read the Dragon Book'.


That he's good enough not to bash people when they need help, and instead helps solve problems while also inspiring.

zeth___ 8 months ago [flagged]

I've been shat on on HN before for saying this, but I stand by it: you're not a programmer/developer/"whatever it's called now" until you have written a parser for something at least as complicated as Lisp in whatever the lex/yacc equivalent is in your language of choice.

The difference between the people who get regular languages (pretty much everyone who codes) and the people who get context-free grammars is more than $100k a year.


We detached this subthread from https://news.ycombinator.com/item?id=17257758 and marked it off-topic.


>I've been shat on on HN before

Gatekeeping: when someone takes it upon themselves to decide who does or does not have access or rights to a community or identity.


[flagged]


Could you elaborate on what you believe this adds to the conversation?


[flagged]


There are obvious differences between groupings based on immutable characteristics, binary status, and those where the criteria are less clear. Gatekeeping as a pejorative applies only to the latter.

What you posted doesn't factor that in, and so doesn't appear to raise any substantive point on which to reflect.


I don't understand: are you saying that writing a language parser is a litmus test that only real programmers can pass?

What makes it so special compared to other tasks of similar difficulty?


Parsing is a fairly simple and common activity with a huge background of research. Anything for which that is true in your field is something you should understand.

Parsing, databases, concurrency, floating point numbers, optimization, common data structures, regular expressions, networking, operating systems, recursion, complexity theory, etc., are all things you should understand to be considered a competent programmer.

zeth___ 8 months ago [flagged]

The difference between parsing and everything else there is that every single program longer than 200 lines will need to take a string from somewhere and change it somehow. If you don't know how to parse a context-free language, you will write a thing that accepts a recursively enumerable one, and someone will make full use of every corner case you can't even imagine.

It's pretty funny that next to no one here will even understand what I'm saying.


Would you please stop doing this sort of flamewar on HN? It's off topic and tedious.


Probably because what you're saying isn't true.

I have written programs with thousands of lines that do nothing with strings at all, as have a great many other people. Parsing is simply not the universal requirement you claim it to be, and declaring that only real programmers can do it just makes you look arrogant and ignorant.


Did your programs take input? If you had studied parsing theory, you would know that "strings" do not mean ASCII/Unicode text strings; they mean sequences of symbols from a set. All communication protocols, and any kind of input, can be described in terms of grammars, and that is important:

https://www.youtube.com/watch?v=3kEfedtQVOY


I have a computer science degree, and have studied parsing, compilers, etc. I have written many parsers for text and binary formats. I know exactly what I'm saying.

Not all programs need to take external input to be useful, and many or most of those that do can just use an existing library to read it, with no need to write a new parser. Writing parsers is simply not the universal programming requirement you people are trying to claim it to be.


You still don't understand what "input" means. Do your programs literally have all the data that they need to produce their output hard-coded into the source code? If not, they are taking input from somewhere.


I understand exactly what "input" means. Please stop assuming that I'm an idiot and that you're infallible.

Yes, all "data" is right there in the source code for the programs I'm talking about. (I do also write a lot of code that deals with external data, for which I write parsers or use existing ones.)

For example, I did some consulting for a company where I made a mathematical model of their physical product and simulated the static and dynamic behaviour of it in various situations. The model and situations are completely defined by a few parameters in the source code. There's no need for external input and definitely no need for any kind of parsing. Thousands of lines of code.

As a simpler example, the other day I helped a high school student write a program to do basic numerical integration. No external input needed.

I have university students who do an entire course on numerical methods without external input (partly because we're stuck in Matlab where IO and string processing are horrible). The few times they need to process a small amount of data it's just pasted into the code.

For part of my PhD I wrote programs over a thousand lines long to derive mathematical functions. No external input, and nothing that can even be called data.

Generative programs in general, for art, sound, video, etc. often have no external input. I have written many of these.

For another part of my PhD I wrote thousands of lines of code to reconstruct surfaces from point clouds. For real purposes it uses external data, but for testing and development it can simulate its own (so that the true answer is known).

You are simply incorrect.


Those are excellent examples. I was debating putting in a disclaimer about hard-coded numerical programs in Fortran or similar, but decided not to go into the detail I will write below, nor to give you the benefit of the doubt.

Compound data types, and procedure argument lists, describe the grammar of the data that your programs operate on. That this data is in the source code, and that the parsing from text to intermediate representation is done by the compiler, only offloads the first steps of the parsing. Obviously, if you deal with simple data types like vectors and matrices, there will not be a lot of grammar there. If you think about typing your program, and more intricately structured data, into a REPL, this becomes apparent (the fact that your compiler accepts something as valid input may not give many guarantees about how your program will behave; this is the crux of type theory).

I am not sure how you can claim generative programming as a counter-example to needing to learn parsing theory. The entire field started out from formal language theory.


Perhaps "generative" was the wrong word. I just mean programs that generate things. For example, I make programs that generate illustrations and animations for teaching maths, programs that generate exam questions that have suitable properties (appropriate difficulty, answers that come out nicely, etc.), and so on (pretty pictures, music, fold patterns...), and few of them need any input data beyond the code itself.

I agree that the programs are doing work to assign meaning to the numbers and data structures etc. in the code and put them to use in context, though I wouldn't have called that part of the parsing myself. I think the grammar of my "input" data structures is usually pretty simple, as you suggest, and that more complicated source structures are likely to be kind of self-organised, e.g. involving dictionaries with meaningful keys or calls to class constructors. Complex data structures can then be built from the simpler source ones if needed.


>As a simpler example, the other day I helped a high school student write a program to do basic numerical integration. No external input needed.

So how are the functions fed into the integrator? The second you want your program to integrate more than the one function you hard-coded it with, you need a parser to understand what the function is and then translate it into the internal language representation, something that is very far from trivial.


Sure, but for most purposes, that kind of generality is just not needed. I have hundreds of useful small to large programs that don't need any external input - it's all in the source code.

Besides, even if you want to take in a general function in this case, there's almost certainly no need to write a bespoke parser by hand. In fact, this student was able to handle that requirement just fine (he had this in there first, and we made the program more useful by taking it out):

  exec('f = lambda x: ' + input('Enter function: f(x) = '))
Writing parsers is simply not the universal programming requirement you claim.

This will be my last reply in this thread.


It's hilarious when people are so wrong they don't even realize they are wrong. We have a whole generation of people who work as coders but don't understand the first thing about computers. And there are so many of them that they even manage to convince themselves they are right.


Parsing is the fundamental task of programming: taking information, performing a transformation on it, passing it along.

If you can't do it, you might use computers, but you're not programming.

Tuning DE solvers might take as much skill as, or more than, parsing a context-free string, but it's not a skill that is used outside of numerical computing. DBAs aren't called programmers for a reason, but they are no less valuable for it.

The same applies for every niche skill that takes years to develop.


>The same applies for every niche skill that takes years to develop.

So you just defeated your own argument. Instead of hyper-focusing on writing parsers, why not just say "years of experience," if that is what really matters?

No wonder you got shit on.


If you spend decades becoming a better juggler, you won't be a better programmer; the same is true for all the meta-programming jobs.


What's the fundamental task of a DBA if not "taking information, performing a transformation on it, passing it along"?


Back in the real world, complex problems are solved all over the place by highly paid professionals with software. Written in regular languages.


Really? What's a commonly used regular language ("regular" in the formal languages sense, not in the colloquial English sense)?

Regular Expressions are used to solve problems, but almost no one makes a career of programs written exclusively in Regular Expressions.


Regular expressions should only be used for either interactive search or a tokenizer. Everywhere else they will be abused and lead to spaghetti code.


The difference should be more than $100k a year. Sadly, that's not something people interview for.


We should change that.

If you can't parse reverse Polish notation into infix notation, you're going to be writing monstrosities like the MediaWiki markup parser.

It doesn't help that 80%+ of developers these days are script kiddies.

Even if it becomes a fad, getting more people to learn something as basic as that would make the code I have to read so much better.


You've been shat on because you assert a test for "programmership" that bears little to no relevance to the vast majority of challenges programmers will face, and you do so in a militant, absolutist manner that pretty much everyone finds off-putting and uncivil.

I didn't magically get a $100k raise after I wrote my first parser, either. Companies care about how you can make money for them, not whether you aced "CS405: Compilers and Interpreters" in college. And the latter isn't at all correlated with the former... unless you're applying to work at a company that makes compilers.


[deleted]


Why? For some people, traveling is their fulfillment in life. For other people it can be hacking, gaming, watching all the TV shows on the planet, etc.


Living the dream, then. Trying to get through every show this summer.
