* The Zig Software Foundation now exists
* My total income, including ZSF salary, GitHub Sponsors, and Patreon, is now about $4,000 USD per month. Legally, my ZSF salary is $160K per year, but I have been consistently donating it back to the org, since frankly we just don't have that much revenue at this point in time, and I've been prioritizing getting that money to other Zig contributors.
* The $150/month to Rich Felker now comes from ZSF instead of my personal account.
* Zig 0.5.0, 0.6.0, 0.7.0, 0.8.0 have all been released. 0.9.0 is coming soon.
* I no longer really have an opinion about V; it's been two years since I paid any attention to it.
Though I agree, it'd be silly to continue working or to declare higher income purely for that.
I guess it’s really three questions:
1) For each dollar paid in SS taxes, do I expect to get more than a dollar back from SS in the future?
2) Same question as 1, but inflation-adjusted future dollars.
3) If the expected return is positive, does it beat reasonable market returns?
Guessing the answer depends on age, income levels, etc. I just don’t know how to do the math myself.
You need at least 40 "credits" over your working life to be eligible for Social Security benefits. Each calendar year you can earn up to 4 credits; in 2021 each credit corresponds to $1,470 in taxed earnings, so about $5,880 ($1,470 × 4) maxes out the year. It definitely makes sense to earn the credits every year. SSDI looks at the last 10 years of earnings if you're older than 30, which also incentivizes hitting that minimum target.
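The credit math above is easy to sketch. A minimal illustration, using the 2021 figure of $1,470 of taxed earnings per credit (the SSA adjusts this amount annually, so treat the constants as placeholders):

```python
# Rough sketch of Social Security credit accrual using 2021 figures.
# The SSA adjusts the per-credit earnings amount every year.
EARNINGS_PER_CREDIT = 1470   # 2021: $1,470 of taxed earnings per credit
MAX_CREDITS_PER_YEAR = 4
CREDITS_FOR_ELIGIBILITY = 40

def credits_for_year(taxed_earnings):
    """Credits earned in one calendar year (capped at 4)."""
    return min(MAX_CREDITS_PER_YEAR, int(taxed_earnings // EARNINGS_PER_CREDIT))

def years_to_eligibility(annual_taxed_earnings):
    """Years at this income level needed to reach the 40-credit minimum."""
    per_year = credits_for_year(annual_taxed_earnings)
    if per_year == 0:
        return None
    return -(-CREDITS_FOR_ELIGIBILITY // per_year)  # ceiling division

# Earning 4 * $1,470 = $5,880 in a year maxes out that year's credits,
# so ten such years reach the 40-credit eligibility threshold:
print(credits_for_year(5880))       # 4
print(years_to_eligibility(5880))   # 10
```

So the "$6,000 a year" rule of thumb works because anything at or above $5,880 earns the full 4 credits in 2021.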
As for the actual income in retirement, that is a complex subject. For example, the optimal money move is to continue working to full retirement age, but you may want to take the early retirement option (even if the payout is smaller) to enjoy the extra healthy years.
Only your highest-earning-of-SS-income 35 years matter, so it's conceivable (and probably not too rare for many HN readers) to have had 35 max-income years before FRA.
It's not very hard to make a spreadsheet to play with numbers if you want. I'm considering retiring soon, decades before eligibility, and I've enjoyed playing with the numbers, as I have the opportunity to contribute more or less in the coming years fairly easily.
That being said, SS has some benefits that make it more attractive than other investments with that return (in that it's an inflation-indexed annuity backed by the US government), especially when mixed with my other investments.
Still not a fantastic deal for me, and it will only get worse in the likely event I get more SS income over the years, but like all welfare programs it isn't meant to help someone in my current situation. Part of getting into my current situation was benefiting from welfare when I was part of a household that wasn't in such an amazing situation financially.
Of course it does. Social Security is social, not individual. If you're a rich man - and a person who makes 160 KUSD / annum probably is - it makes a lot of sense to contribute more to social security, whether voluntarily or involuntarily.
That doesn't mean they don't think that people should pay taxes for welfare programs like social security or for other governmental functions, it just wasn't what the question was about.
Certainly if you're past the second bend point (very realistic for a lot of folks) there is minimal return on contributing more.
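The bend-point effect is easy to see in a sketch of the benefit formula. A minimal illustration using the 2021 monthly bend points of $996 and $6,002 (the SSA adjusts these each year, and this ignores wage indexing, the taxable maximum, and claiming-age adjustments):

```python
# Sketch of the Social Security PIA "bend point" formula (2021 values).
# The SSA adjusts the bend points annually; numbers here are illustrative.
BEND_1 = 996    # 2021 first bend point (monthly)
BEND_2 = 6002   # 2021 second bend point (monthly)

def monthly_benefit(aime):
    """Primary Insurance Amount from Average Indexed Monthly Earnings.

    90% of AIME up to the first bend point, 32% between the bend
    points, and only 15% of anything above the second bend point.
    """
    pia = 0.90 * min(aime, BEND_1)
    if aime > BEND_1:
        pia += 0.32 * (min(aime, BEND_2) - BEND_1)
    if aime > BEND_2:
        pia += 0.15 * (aime - BEND_2)
    return pia

# The marginal return drops sharply past each bend point:
for aime in (996, 6002, 10000):
    print(aime, round(monthly_benefit(aime), 2))
```

Past the second bend point, each extra dollar of average monthly earnings buys only 15 cents of monthly benefit, which is why the marginal return on further contributions is so small for high earners.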
1) Some situations like mortgage applications and child adoption will require some sort of income verification. It's a lot harder when you aren't showing as much top-line income.
2) If the project goes south and the author has to find another job, showing a "market rate" salary is better than the true rate (sadly many people and companies still talk in terms of percentage increases)
Employers should pay what they value the position at, not what someone else was paying for it.
Different markets will bear different negotiation tactics. California, for example, made it illegal for a potential employer to ask a candidate for this information.
Thinking about it - I do know/can infer a little about the US - everyone does (or has someone do) their own taxes there, so it's not needed. The purpose of this in the UK is that the second employer within a tax year can set up the new employee correctly so tax isn't underpaid as a result of ignoring the previous income.
(PAYE - Pay As You Earn - income tax, National Insurance contributions, and student loan repayments are handled by the employer. People only 'do taxes' (self-assessment) here if self-employed, have capital gains over the allowance or a sale price over the reportable amount, and similar extra-curricular stuff; especially because of the allowances before tax kicks in, most people don't need to. I would guess the most common reason is pension contributions made with post-tax money (an employer's pension scheme is of course paid into primarily through payroll, not that) for those earning enough to have a higher marginal rate of income tax - you get the basic rate back on contributions automatically, since everyone pays it, but any extra requires self-assessment.)
Paying yourself a high salary (as long as it's reasonable, so as not to arouse suspicion of hidden profit distributions) allows the business entity to make no profit on paper and therefore allows you to move to another country without having to pay this "exit tax".
Disclaimer: This is my understanding of the situation, I'm neither a lawyer nor a tax professional and this is not financial advice.
And just about nothing works as advertised.
Also, something I'd like to know (since it would otherwise be really dishonest on the V devs' part): is their memory management actually innovative? Oortmerssen, creator of the Lobster language, said that on their Discord they mentioned the memory model was taken from Lobster.
Good to see that you finally got rid of your jealousy
I will be honest, seeing everyone get "mad" at the V project's owner was a very "dishonest" move, to stay polite and within HN's rules
so far V is the only one that has delivered on its promises: a safe, self-hosted language that compiles super quickly
I'm yet to see a self-hosted Zig or a fast Zig compiler
so far Zig has been only blabla, while V delivered
On a separate note, I'm learning a lot about how different software foundations operate and I'm very surprised to see that very few prioritize the goal of paying their core developers. I encourage everyone to check out the mission of the Zig Software Foundation and to compare it with how we're spending the money we're getting. Everything is available here:
In particular, their top four recommended charities are managed by "Effective altruism funds", with the 3rd best-ranked charity spending money to prevent "potential risks from advanced artificial intelligence" which I strongly consider a waste of money.
Of course, everyone's values are different. I'd just encourage people to take 5 minutes and choose their own charities instead of blindly choosing their suggestions.
In particular, they also list where your money is going, so you can just pick and choose which of their causes you want to support (although I personally donate directly to them so they can decide which charity they think currently maximizes impact-per-dollar): https://www.givewell.org/charities/top-charities
> I'd just encourage people to take 5 minutes and choose their own charities instead of blindly choosing their suggestions.
The problem here is that some organizations have a very well proven track record of solving real problems; some organizations instead focus on very nebulous "social messaging and awareness"; and some organizations exist primarily to pay their executives well.
A quick 5 minute search won't really yield a lot of information on effectiveness, but an organization like GiveWell can afford to hire actual researchers to answer these questions.
(I'm sure there's other decent sources to help evaluate that question, of course. GiveWell is just the one I'm most familiar/fond of)
It's per purchase though so you have to update your bookmarks to point to smile.amazon.com. Uninstall the app to help you remember.
This is absolutely not what they are doing. Most of their charities are working on immediate problems and are evaluated on their short term impact. The obvious exceptions are the Long-Term Future category and the Climate Change category, which are looking at around 50-100 years from now. They are not multiplying some tiny utilitarian value by an extreme number of years.
Their top 4 charities by total donation amount are Against Malaria Foundation, GiveWell, Global Health and Development Fund, and GiveDirectly. All are fighting immediate problems.
There are different categories of charities because people will never agree on which categories are most important. Does animal suffering matter? Is preventing a life of extreme poverty better or worse than preventing an early death? Everyone has different answers to these questions, so the site gives donors a variety of suggestions. They suggest charities that are well evaluated for their category. E.g. Climate change charities are evaluated on their climate impact per dollar donated, and human health charities are evaluated by lives saved and disease prevented.
Peter Singer (who also runs The Life You Can Save) has lots of thoughts about why one should go with "effective" altruism instead of always donating to your local charity, and I would say they are worth listening to.
Do you consider all research on AI ethics and bias to be a waste of money? The EA folks are doing some of the best work in that field, and have been focusing on it for quite a while. People used to consider biosecurity and pandemic preparedness a waste of money too, but hopefully we know better.
Biased AI algorithms are a problem for the 10% of countries where AI is part of their daily decision process. For the other 90%, "I don't have enough nutrients for my brain to develop" and "I'm severely disabled due to a preventable but unprofitable illness" are much more pressing issues.
I support basic research, even in areas whose economic benefit may not be clear. But if someone asks "how can I help humanity the most?", I consider "AI research" to be a really bad answer, to the point of being almost a lie.
I'm a bit skeptical too (and I haven't looked into that much), but it's not obviously stupid. They'd point out that even if there's just a 1% chance of humans being wiped out in the not-so-distant future, that's a big deal (and a large expected value for number of deaths) and we should work on reducing those odds.
A lot of this rests on the assumption that super-intelligence is really powerful. Like, take-over-the-world type of powerful.
Having said that, I repeat what I said at some point: once they show that they can stop the dumbest DDoS, then and only then will I listen to what they have to say about a super-intelligent AI. If they can't do even that, then I don't know why I would listen to anything else they have to say.
Cloudflare prevents DDOS attacks. Why would effective altruists work on DDOS attacks?
So, my challenge to them is: if you think that you can stop a super-intelligent AI from taking over, show me that you can stop the dumbest possible malicious intelligence that we know of, which right now would be a DDOS. And the incentives to develop either are pretty much the same, too.
If they can't, and my guess is that they can't, then I don't see why I should believe that they can stop anything bigger.
It's weird to demand that AI safety researchers stop working on AI safety and prove they can mitigate DDoS attacks instead. The two are nothing alike. A DDoS attack isn't an intelligence. DDoS protection services work largely by having more resources and infrastructure than the attackers. Anyone can do that with enough money.
They also aren't researching how to fight malicious AIs. They'd be researching how to program safe AIs. Largely the stuff discussed here: https://en.wikipedia.org/wiki/AI_control_problem
If someone were to argue "our strategy for survival is not to fight our enemies, but rather to convince them to use non-lethal weaponry", you would read their research in the rubble of their country after an enemy laid waste to them. I feel the same way towards that line of research: why would the US Army program their AI to be less aggressive when they could... not? How about the Russian army? China? Iran? You point out that anyone with money can make a DDOS attack, and I feel exactly the same way here: malicious AI could come up from anywhere.
If those researchers truly believe that a malicious AI is possible (and again, the website that started my comment chain puts it as a top-3 priority), they have to assume that it will be developed by someone with no interest in playing nice, just like the factory that started spewing ridiculous amounts of CFCs in 2019. Why would anyone use anti-bias correction in their NN embeddings when the biased ones exploiting harmful stereotypes get them higher profits?
If those researchers cannot stop the most likely scenario, then I consider their research little more than wishful thinking. And we have seen how good "everyone will surely play nice" has worked - spam, pop-ups, phone scams, the list goes on. That's why I like DDOS as an application: it's the world's stupidest AI causing a lot of trouble. Can you outsmart that? Good, then now we can talk about outsmarting Skynet.
I'm not so sure about AI safety research myself, because it's very hard to evaluate how effective it is (I guess if an AI ever wipes us out, we'll have a data point), and because I don't know very much about AI safety research.
But let's suppose that super-intelligence is really powerful and dangerous (suppose there's a 10% chance it could kill all humans, despite our intentions). Now what? Is there anything more productive we could be doing to prevent that than just waiting?
Let's further suppose that the US and Chinese militaries want to make aggressive AIs that will subdue, or worse, exterminate their rivals.
As outlined in the AI Control Problem wikipedia article, there's a lot of concern that we'll make a super-intelligence that harms us completely by accident. For example, if you build one with the goal of protecting citizens, it might reason that to protect people, it must continue to exist. Therefore it undermines and outwits anyone who wants to shut it down. Worse, it might reason it could better protect people if it had more political power and more physical resources. Or even worse, if we really mess up how we programmed its goal, it might reason we're best protected if we're all put in a permanent coma and stored in a concrete bunker.
So even if the US and Chinese militaries were fully evil and wanted to exterminate all other countries, they might still want to use results from AI safety research, just to ensure they don't accidentally destroy themselves. And to some extent, for humanity's sake, making an AI with safety that exterminates every country except the one that created it is still slightly better than an unsafe AI that exterminates everyone.
I don't think the US or China or Russia would want to exterminate other countries. But whatever they choose to do with their AI, the AI safety research is there to ensure they don't lose control of their own AI. If anything, those safety features should be even more desirable if you're building an aggressive military AI.
Finally, how do you fight a hostile super-intelligence? I think I know what these AI researchers might say: you need to have an even smarter friendly super-intelligence. What's more, a friendly AI could look for and destroy other AIs while they're still in development. So maybe the key to avoiding hostile super-intelligences is just to be the first to make a super-intelligence and ensure it's safe and friendly. And if we ever get in an arms race to build the first super-intelligence, we better hope AI safety is well researched and understood by then. Because if not, someone may create an unsafe AI just to have it before their rivals.
There's a big difference between a threat that has existed for all of human history (pandemics), and a completely novel threat model that might exist in the future (AI)
Joining a community of people who also pledge to participate in charity and promote charities to one another could be a way to eliminate the distasteful part of signaling to others that you donate to charities. It can also work as a way to integrate charities and donations as a routine part of your expenses and budgeting.
Edit: To add to the above, the median annual donation amount for someone earning $60k-$80k in 2010 was $107, roughly 0.15% of their income. It feels weird to criticize people donating 10% for having not-pure-enough motives while letting the majority who donate almost nothing escape criticism.
"Our pledges are in no way legally binding. They are commitments made voluntarily and enforced solely by your own conscience."
"Our pledges do not restrict you to give to registered charities, specific organisations, or organisations working in a particular cause area. The only requirement is that you give to the organisations which you sincerely believe to be among the most effective at improving the lives of others."
So, if you think you genuinely believe working in FOSS results in a better world than pulling in a paycheck and donating to charity, I'd say: yes, absolutely?
It's certainly not the default expectation, mind you. I think you'd probably get less out of the community since they focus in a different direction. But if you still see value in being part of the pledge, I'd definitely encourage you to take the leap and count that FOSS work :)
> However, Zig Software Foundation will never have big tech companies on the board of directors. We are grateful to Pex, for example, for donating $5,000/month, even though they have no board seats. Our goal is to maintain independence by keeping the mix of donations balanced among many parties.
Seems relevant given the recent hubbub over Rust leadership/governance.
Another interesting model is LF and Linus Torvalds. (NS)BDFL who has the best interest of the project in mind, but doesn't seem improperly influenced by LF supporter companies.
Two very deserving projects got funding (musl and Zig). I have used musl, mainly through Alpine Linux. I have not used Zig, but it seems it is a very interesting answer to the question of what does a systems programming language that is not C or C++ look like. We have one recent answer in the form of Rust, and Zig is another answer with another set of priorities and tradeoffs.
It is a very exciting time in terms of real-world language development.
All the best to the open source contributors of all these languages and libraries!
(And no, Rust is not it. It's a great language, but its design philosophy is very different.)
Ironically, today, Zig is also one of the best C/C++ compilers around, chiefly because it "just works", even for advanced scenarios such as cross-compilers:
(Yes, of course, the actual compiler is Clang. But Zig wraps it in a way that makes it so much easier to install and use, especially on Windows.)
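For instance, cross-compiling a C program with `zig cc` typically works out of the box, with no separate toolchains or sysroots to install (the target triples below are illustrative, assuming a local `hello.c`):

```shell
# Build hello.c natively, then cross-compile it for two other targets.
# Zig bundles the headers and libc support for each target itself.
zig cc hello.c -o hello
zig cc -target x86_64-windows-gnu hello.c -o hello.exe
zig cc -target aarch64-linux-musl hello.c -o hello-arm64
```

The same `-target` flag works for `zig c++`, which is what makes Zig attractive as a drop-in cross-compiler even for projects that contain no Zig code.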
I owe you 1 beer, this is so true. If I can just get something that treats a string politely I'll be over the moon. If that's Zig, it has earned my contribution, simple as that.
I hope Clang and Rustc learn a few tricks from them.
So, there's that...
edit: we charge no fees since our ops are supported by our donors - the only fee that gets taken out is that charged by our brokerage for selling the crypto
Likely the latter?
Also, skimming the terms:
> If circumstances make giving to that particular Nonprofit impossible or inappropriate (such as if the selected Nonprofit ceases operations or loses its tax-exempt status), we will attempt to contact you first for additional preferences on where you would like your Donation to go. If we do not receive your preferences before we must disburse the funds, then we may in our sole discretion select an alternate Nonprofit to receive the Donation. We will do our best to direct your Donation to a Nonprofit in a similar space and/or with similar goals. Except in cases of fraud, we do not issue refunds.
If I can get Andy or ZSF representative to say they'll claim it from every.org I'd consider that. I can't find any obvious disclosure about margin/overhead from Every.org but I might still consider it despite that.
This not a surprising term. By accepting donations to be passed on to someone else, they become trustees, which comes with hefty legal constraints. By having terms saying "we will try our best, but if we fail, we will try to honour your intentions", they can avoid winding up with little piles of money that can't be repurposed and simply ... exist.
Did you just solicit donations without them even being signed up? If so, that feels weird, and I'd appreciate at least (as I can't speak for others) if you didn't do that without some heavy disclaimer text making clear that they've shown no interest yet. I love the concept of what your service does, but you can get users in other, more honest ways than this.
I'm honestly really impressed.
I have a text message from just today after lunch with a link to click to claim my funds. No name, no context, just a random text that I can only imagine isn't in fact free money waiting for me.
That being said, Network for Good, our partner for disbursing to nonprofits that haven't connected directly with us, also handles disbursements for a lot of Facebook Giving and other large donation platforms, so I don't think the checks coming from there are necessarily unexpected by many nonprofits - see their support article on this at https://networkforgood.zendesk.com/hc/en-us/articles/1150073...
But bottom line they'd have gotten the money delivered to ZSF anyways.
I also love that you will mail checks to organizations even if they do not sign up for electronic disbursement. However I will sign up ZSF now of course :)
I'll match other HNers donations of as much as $100 to a total of an additional $1000. Not sure how to substantiate every.org donations but maybe I'll take HNers at their word.
EDIT: scoreboard as of 22:20 UTC: $400 out of $1000 to be matched: easymuffin, slimsag, tav, _hl_.
Because we need some kind of time bound, I'll leave this matching window open for the next 48 hours, so ~21:20 UTC on Saturday (this is ~5pm US Eastern time IIRC).
slimsag up to 500: https://news.ycombinator.com/item?id=28792172
So I think that means if someone donates and then replies to the parent comment noting the amount, that amount will be quadrupled!
And just to make it interesting, I'll also match donations up to $100 each to a total of an additional $500 - starting now for the next 48 hours. :)
Toaster King: $50
Total match: $565! Great job, team!
Unfortunately this work has never been brought into main (a couple of attempts have been made subsequently).
I don't _think_ this is true.
This looks like an easier Rust to me.
The core features of Rust are the borrow checker, traits, and algebraic data types, and Zig is missing all of those. On the other hand, Zig has a lot of things that C doesn't have, and a few that Rust doesn't have as well - and some people gravitate towards the C-like simplicity of the language.
Is a place to get a feel for things.
There's also a generated doc set that is similar to rust's, but I just recommend looking at the source if you are curious about something, it's concise and readable.
If you have more questions you can pop into irc or the discord, they'll have better resources at the ready.
>One year ago, I quit my day job to work on Zig full time.
So is it the case that he only makes $1,500 a month? I mean, it's none of my business what he makes, but he did bring it up, and I am curious to know how much a prominent individual like him earns for working on his own open source passion project. I find it commendable that someone would work on their own project for such a low salary owing to a belief and desire to get their product out there.
That said if he makes $15000 a month that's cool too and great for him, but it's not clear to me what the truth here is.
> Now if I'm being honest about my motivations for this blog post, it's that I want to prove that open source funding is not a zero-sum game.