
> Rust invented the concept of ownership as a solution to memory management issues without resorting to something slower like Garbage Collection or Reference Counting.

This is plain wrong, and it undermines the credibility of the author and the rest of the piece. Rust did not invent ownership in the abstract; it relies on plain RAII, a model that predates Rust by decades and was popularized by C++. What Rust adds is a compile-time borrow checker that enforces ownership and lifetime rules statically, not a fundamentally new memory-management paradigm.


Thank you for saying that. It makes me crazy every time I read that Rust invented the concept of ownership…

> It actually took a lot longer to re-write the game in C++ than it took me to write the original machine code version 20 years earlier.

Is the most interesting quote IMO. I often feel like productivity has gone down significantly in recent years, despite tooling and computers being more numerous/sophisticated/fast.


It's possible that "it took several years and a small team of programmers to re-write the entire game in C++" because ⓐ those programmers were not as good as he was, and/or ⓑ they had to duplicate the behavior of an existing program exactly, rather than enjoying Bob Ross's happy little accidents, as long as the game was fun.

I'm pretty sure most programmers who are comfortable in both C++ and assembly language can add working functionality to a program faster in C++ than in assembly. Of course, certain C++ libraries will eliminate that advantage, but choosing to use those libraries isn't essentially different from many other bad decisions you might make about how to write a large program.


> it took several years and a small team of programmers to re-write the entire game in C++. It actually took a lot longer to re-write the game in C++ than it took me to write the original machine code version 20 years earlier.

Expanding the quote because the word "team" is probably relevant to why it took longer to rewrite. At a certain scale there just is a huge advantage in everything being inside one head...


Communication overhead is a big thing in teams. If you have a struggling team, halve the size. It's crazy how well that works. It's not the people but the number of them. Once your people are consumed by the day to day frustrations of having to communicate with everyone else and with all the infighting, posturing, etc. that comes with that, they'll get nothing done. Splitting teams is an easy to implement fix. Minimize the communication paths between the two (or more) teams and carve up what they work on and suddenly shit gets done.

In this case, they probably were trying to not just rewrite but improve the engine at the same time. That's a much more complicated thing to achieve. Especially when the original is a heavily optimized and probably somewhat hard to reason about blob of assembly. I'm guessing that even wrapping your head around that would be a significant job.

Amazingly enjoyable game btw. Killed quite a few hours with that one around 2000.


>Communication overhead is a big thing in teams. If you have a struggling team, halve the size. It's crazy how well that works.

I wish my managers would get this. Currently our product shit the fan due to us being understaffed and badly managed by clueless managers, and what they did was add two more managers to the team to create more meetings and micromanage everything.


I'm sorry you have to deal with that. "The Mythical Man Month" should have been required reading for your managers.


For all managers and all staff beyond entry level!


I would be so extremely out of there.


I think it’s just unique to Chris, who is obviously a genius who can think in assembly code.


The Adams bros / Dwarf Fortress anyone?


Chris Sawyer lived and breathed assembly, earlier in the article he states that he just felt more efficient writing it than higher level languages. Then you've got the modern team of devs who probably haven't worked with asm since university, and it becomes difficult for them to review the original source code. Also Chris probably wasn't doing a lot of the actual programming, so instead of one guy working on a passion project, you have a team of devs doing a job.


Found this part strange because in other interviews he seemed to imply (for RCT classic) that there was almost some kind of VM-like structure that was running the original code underneath as-is


Expectations have gone up accordingly.

I think the real constraint must be market timing - as much work as people can do to meet the market (eg. Have the thing done by Christmas), that much will end up being done.


I’m very excited for Zig personally, but calling it “ultra reliable” feels very premature.

The language isn’t even stable, which is pretty much the opposite of something you can rely on.

We’ll know in many years if it was something worth relying on.


It was nighttime in Singapore when the ruling was announced. My husband and I scrambled to find a flight back. The best we could find, at any price, lands 25mins after the deadline.

We are on our way there.


I feel for you. I just wonder, at this point, why someone would look to go back to the US "at any price", given how badly they are being treated. From what I can see, it seems most of us non-US people are "persona non grata" in the US.

I myself am from and live in a so-called "shithole country". But especially because of my technical skills, I've got plenty of opportunities over here. I would never think of living in the USA, even though I easily could via a TN visa. But it's clear US people don't want me living there.


Consider they have a life, a house/apartment, all the things they own,… there. Would you give all that up without at least trying to get back?


And even if you decide it's time to leave, you'd still want to come back and settle your affairs and plan a proper move. You wouldn't want to leave everything behind, especially if you only brought enough for a brief trip.


Over potentially my life? Yes, I would give up. For now. I can ask for my assets after the fascist regime is overthrown.


That's an increasing consideration for people thinking about moving to the US or those who aren't settled there yet. But, of course, people who already have family and belongings there will want to get back in to at least sort those things out before leaving for good.


All your things are there, your entire life. Maybe other family members, children's schools etc. Not easy to just never go home.


I'd do it so that I secure my "life" that I built there, and then plan my exit while it's still optional.


I did move away from the US because of these reasons, and it's been a good decision in retrospect. But no one likes uprooting their entire life and it takes years to build a new one somewhere else.

The calculus on immigrating to the US today is clearly negative, but many people immigrated 5/10/20+ years ago before all this shit and have lives there. They did not know any of this would happen.


>lands 25mins after the deadline

I'd rather just have waited until an injunction or something next week. The guidance from my company is either make it back before the deadline, or stay where you are until further notice.


"Further notice" can easily be a firing notice.

I understand your position, but it's a bit of a privileged one. Not everybody will have that option.


Rushing to the US and getting detained by border patrol in a foreign country isn't exactly a shinier alternative at this point. I'd take my risks with my job over my life in those shoes.


Universal injunctions were nerfed by the Supreme Court this year: https://en.wikipedia.org/wiki/Trump_v._CASA




the whole US visa morass is complicated and volatile enough that a lot of large companies have dedicated teams who help advise their employees on visa issues and how to best navigate them. this "guidance" is basically saying "this is our lawyers' best guess as to how to stay safe over the next few weeks"


I was confused, because it does not apply to current Visa holders: https://news.ycombinator.com/item?id=45316226

I thought maybe the company was doing something shady, for it to apply to them.


that was not at all certain yesterday, and even now there's the constant fact that a border agent can decide to be nasty and use this as a pretext to deny you entry, with no real recourse on your part.

as a parallel example, trump recently decided that you could no longer get your visa stamped in a third country (which a lot of indians did as a matter of course, because wait times for an appointment can be very high back home). there was an explicit carve out for people who had already made appointments at some third country embassy, but a lot of those people went to their stamping appointment and not only did not get the renewal but had their existing visa cancelled (which is apparently within the powers of the embassy official), so they could not even return to the US while waiting for an appointment date in their country of residence, and are basically on unpaid leave right now (best case scenario, would not be too shocked if some of them lose their jobs if they are away for too long).


I don't understand your comment. My reply was today, in response to a comment that was posted less than an hour before mine, both hours after this was announced. How is yesterday relevant?


microsoft sent their letter out when it was highly likely the new diktat applied to existing visa holders too. they had very little time to respond if they wanted to make sure people got home before the absurdly short deadline.


Companies have rights in the US, you can't just keep their employees out.


That's not generally true, of course. It requires they're legally employed, and have the proper work visas. I was confused. I thought the company was doing something shady, for it to matter, since it doesn't apply to current visa holders.


companies are contacting employees who are out of the US, for whatever reason


Check for charter jets, you may be able to jet pool.


> The best we could find, at any price, lands 25mins after the deadline.

Scheduled landing or historical landing time? Flightera.net will show you landing times for 2 years of flights


The guidance has changed already. Existing holders don't need to do that


Best of luck! Keep us posted.


i understand playing safe with this administration, but why?

h1b requires that one company sign on as the responsible sponsor on form i-129. they are the ones on the line for the payment.


Via what mechanism? Will they be ready to accept the payments a few hours from now? Ready to process the re-entry with procedures that aren’t even developed yet?

Getting into the country before the deadline is the only safe way to avoid the uncertainty and ensure you don’t get stranded out of the country or in an airport for days or weeks while the process is developed.

This hastily constructed and implemented executive order is a terrible way to run a country


Let me guess, you're not an immigrant?


lived 5yr on L1A. It's a week to leave the country if laid off. But at the same time, most of the penalties/costs fall on the sponsoring company for all cases.

ICE black shirts make it more uncertain on enforcement, but there's still laws.


I don't know about this. As a brown student on an F1 visa, I even used to walk into Walmart differently than I do now.


Because when it comes to immigration, the downside to getting it wrong are life-altering.


The expected value for immigrants is rapidly shifting toward it being more favorable to be an illegal immigrant, because ICE/CBP is mostly going after the low-hanging fruit of easy-to-catch people that they know about, with homes and salaried jobs / university and a visa. People who are off paper and 100 miles past the border are as good as gone. So basically what we get is the exact opposite of what we want.


False.

ICE is about to have a ginormous work force

They’ll snatch whoever


Your own premise destroys your argument. If they're grabbing 'whoever' it's at least as easy to grab immigrants with a paper record as those that don't.


> they are the ones on the line for the payment.

And they decide if you keep your job.


Much good luck. Hope you sail through without harm.


That's terrible. Best of luck to you both.


What is your nationality if you don’t mind me asking?


It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.

In some situations, the “logical” block size can differ. For example, buffered writes use the page cache, which operates in PAGE_SIZE blocks (usually 4K). Or your RAID stripe size might be misconfigured, stuff like that. Otherwise they should be equal for best outcomes.

In general, we want it to be as small as possible!
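For reference, a small Linux-only Python sketch of how to query the sizes mentioned above (values vary per machine; the root filesystem is just an arbitrary example mount point):

```python
import os

# Linux-only sketch: query the sizes the kernel reports.
# f_bsize is the filesystem's preferred I/O transfer size;
# SC_PAGE_SIZE is the granularity the page cache operates in.
st = os.statvfs("/")  # "/" is an arbitrary example mount point
print("filesystem preferred I/O size:", st.f_bsize)
print("page cache granularity:", os.sysconf("SC_PAGE_SIZE"))  # usually 4096
```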


> It’s really the hardware block size that matters in this case (direct I/O). That value is a property of the hardware and can’t be changed.

NVMe drives have at least three "hardware block sizes". There's the LBA size that determines what size IO transfers the OS must exchange with the drive, and that can be re-configured on some drives, usually 512B and 4kB are the options. There's the underlying page size of the NAND flash, which is more or less the granularity of individual read and write operations, and is usually something like 16kB or more. There's the underlying erase block size of the NAND flash that comes into play when overwriting data or doing wear leveling, and is usually several MB. There's the granularity of the SSD controller's Flash Translation Layer, which determines the smallest size write the SSD can handle without doing a read-modify-write cycle, usually 4kB regardless of the LBA format selected, but on some special-purpose drives can be 32kB or more.

And then there's an assortment of hints the drive can provide to the OS about preferred granularity and alignment for best performance, or requirements for atomic operations. These values will generally be a consequence of the above values, and possibly also influenced by the stripe and parity choices the SSD vendor made.


I've run into (specialized) flash hardware with 512 kB for that 3rd size.


Why would you want the block size to be as small as possible? You will only benefit from that for very small files, hence the sweet spot is somewhere between "as small as possible" and "small multiple of the hardware block size".

If you have bigger files, then having bigger blocks means less fixed overhead from syscalls and NVMe/SATA requests.

If your native device block size is 4KiB, and you fetch 512 byte blocks, you need storage side RAM to hold smaller blocks and you have to address each block independently. Meanwhile if you are bigger than the device block size you end up with fewer requests and syscalls. If it turns out that the requested block size is too large for the device, then the OS can split your large request into smaller device appropriate requests to the storage device, since the OS knows the hardware characteristics.

The most difficult to optimize case is the one where you issue many parallel requests to the storage device using asynchronous file IO for latency hiding. In that case, knowing the device's exact block size is important, because you are IOPs bottlenecked and a block size that is closer to what the device supports natively will mean fewer IOPs per request.
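To put rough numbers on the fixed-overhead point, a hypothetical Python micro-benchmark (plain buffered reads on an ordinary file, so it only illustrates syscall counts, not direct I/O):

```python
import os
import tempfile

# Count how many read() syscalls it takes to consume the same file with
# different buffer sizes (buffering=0 means each .read() is one syscall).
def count_reads(path: str, bufsize: int) -> int:
    n = 0
    with open(path, "rb", buffering=0) as f:
        while f.read(bufsize):
            n += 1
    return n

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (1 << 20))  # a 1 MiB test file
    path = f.name

small = count_reads(path, 512)    # 2048 syscalls
large = count_reads(path, 65536)  # 16 syscalls
os.unlink(path)
```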


Just to add my two cents—I’ve been writing Go professionally for about 10 years, and neither I nor any of my colleagues have had real issues with how Go handles errors.

Newcomers often push back on this aspect of the language (among other things), but in my experience, that usually fades as they get more familiar with Go’s philosophy and design choices.

As for the Go team’s decision process, I think it’s a good thing that the lack of consensus over a long period and many attempts can prompt them to formally define a position.


Yeah once you've been using it long enough for the Stockholm syndrome to set in, you come to terms with the hostile parts of the language.


I suspect a lot of us don’t have strong feelings either way and don’t find the verbosity “hostile”. No need for Stockholm syndrome if you don’t feel like a prisoner.

Of course you may have been joking, in which case “haha”. xD


If you say so. For me it's always been the opposite - I'm excited at the start about all the cool features, then slowly get disillusioned because of the warts.


This, it’s always the new people complaining about error handling.

I have many things to complain about for other languages that I’m sure are top-tier complaints too


I appreciate the argument that things can often be difficult for noobs but actually fine or better than alternatives once you get used to them.

But on the other hand, people who are "used to the way things are" are often the worst people to evaluate whether changes are beneficial. It seems like the new people are the ones that should be listened to most carefully.

I'm not saying the Go team was wrong in this decision, just that your heuristic isn't necessarily a good one.


This logic mostly only makes sense if your goal is primarily to grow the audience and widen the appeal, though. I think at this stage in the Go programming language's lifespan, that is no longer the goal. If anything, Go has probably started to saturate its own sweet spot in some ways and a lot of complaints reveal a difference in values more than they do opportunity for improvement.

To me, it makes sense for the Go team to focus on improving Go for the vast majority of its users over the opinions of people who don't like it that much in the first place. There's millions of lines of code written in Go and those are going to have to be maintained for many years. Of utmost priority in my mind is making Go code more correct (i.e. By adding tools that can make code more correct-by-construction or eliminate classes of errors. I didn't say concurrency safety, but... some form of correctness checking for code involving mutexes would be really nice, something like gVisor checklocks but better.)

And on that note, if I could pick something to prioritize to add to Go, it would probably be sum types with pattern matching. I don't think it is extremely likely that we will see those, since it's a massive language change that isn't exactly easy to reconcile with what's already here. (e.g. a `Result` type would naturally emerge from the existence of sum types. Maybe that's an improvement, but boy that is a non-trivial change.)
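As a rough sketch of the point (all names here are hypothetical, not a real proposal): a `Result` type is already expressible with Go generics today, but without pattern matching it stays awkward compared to true sum types:

```go
// Hypothetical sketch: a generic Result type in today's Go. Without
// pattern matching, callers still unpack it into the familiar (v, err) pair.
package main

import (
	"errors"
	"fmt"
)

type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T]      { return Result[T]{val: v} }
func Err[T any](e error) Result[T] { var zero T; return Result[T]{val: zero, err: e} }

// Unpack degrades the Result back to Go's conventional two-value form.
func (r Result[T]) Unpack() (T, error) { return r.val, r.err }

func parsePositive(s string) Result[int] {
	var n int
	if _, err := fmt.Sscanf(s, "%d", &n); err != nil || n <= 0 {
		return Err[int](errors.New("not a positive integer: " + s))
	}
	return Ok(n)
}

func main() {
	v, err := parsePositive("42").Unpack()
	fmt.Println(v, err) // prints "42 <nil>"
}
```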


It’s fun, because when a newcomer joins a team, people tend to remind them that their bison is fresh and they might be seeing pain we got accustomed to. That’s usually said in a positive manner.


I'm intrigued as to whether "bison" here is a metaphor (for what?) or a cupertino (an error introduced by auto-correct or predictive text)


bison -> vision/point of view.

Bad auto-correct, my bad


I have a similar level of experience with Go, and I would go so far as to say it is in fact one of the best features of the language.

I wouldn’t be surprised that when the pro-exception-handling crowd eventually wins, it will lead to hard forks and severe fragmentation of the entire ecosystem.


To be honest, I really don't believe that will happen in the future. All of the proposals pretty much just add syntactical sugar, and even those have failed to gain consensus.


That's just survivorship bias isn't it? The newcomers who find Go's design and/or leadership obnoxious get a job that doesn't involve doing something that they dislike.


That's okay. Not everyone needs to like Go. Pleasing every programmer on the planet is an unreasonable thing to ask for. It's also impossible because some preferences conflict.


After over a decade of people bringing up the issue in almost every single thread about Go, it's time to give the language what it deserves: no more constructive feedback, snarky dismissals only.


Not infrequently by people who are not even Go programmers. And/or the same people who hijack every other Go thread to rant about how much they hate Go.

In a quick search, you seem to be one of them: https://news.ycombinator.com/item?id=41136266 https://news.ycombinator.com/item?id=40855396

You don't see me going around $languages_I_dislike threads slagging off the language, much less demanding features. Not saying anything is an option you know.


I’m amazed by the ambition, technical brilliance, and relentless dedication behind some personal projects on display here.

All of this for a clock! I don’t get it, but I’m in awe.


Eminently pragmatic solution — I like it. In Rust, a crate is a compilation unit, and the compiler has limited parallelism opportunities, especially since rustc offloads much of the work to LLVM, which is largely single-threaded.

It’s not surprising they didn’t see a linear speedup from splitting into so many crates. The compiler now produces a large number of intermediate object files that must be read back and linked into the final binary. On top of that, rustc caches a significant amount of semantic information — lifetimes, trait resolutions, type inference — much of which now has to be recomputed for each crate, including dependencies. That introduces a lot of redundant work.

I also would expect this to hurt runtime performance as it likely reduces inlining opportunities (unless LTO is really good now?)
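On the inlining point: link-time optimization can recover much of what per-crate codegen loses. A sketch of the relevant Cargo profile knobs (a generic example, not their actual configuration):

```toml
# Hypothetical release profile: "fat" LTO lets LLVM inline across crate
# boundaries again, at the cost of much slower links; "thin" is a cheaper
# compromise that usually recovers most of the win.
[profile.release]
lto = "fat"        # or "thin"
codegen-units = 1  # fewer codegen units = more inlining within each crate
```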


They mention that compiling one crate at a time (-j1) doesn't give the 7x slowdown, which rules out the object file/caching-in-rustc theories... I think the only explanation is the rustc processes are sharing limited L3 cache.


The L3 cache angle is one of our hypotheses too. But it doesn't seem like we can do much about it.


It would be great to know a bit more about the protocol itself in the readme. I’m left wondering whether it’s reliable and connection-oriented, stream- or message-based, etc.


I am not sure I buy the underlying idea behind this piece, that somehow a lot of money/time has been invested into asynchronous IO at the expense of thread performance (creation time, context switch time, scheduler efficiency, etc.).

First, significant work has been done in the kernel in that area simply because any gains there massively impact application performance and energy efficiency, two things the big kernel sponsors deeply care about.

Second, asynchronous IO in the kernel has actually been underinvested in for years. Async disk IO did not exist at all for years until AIO came to be. And even that was a half-baked, awful API no one wanted to use except for some database people who needed it badly enough to be willing to put up with it. It's a somewhat recent development that really fast, genuinely async IO has taken center stage through io_uring and the likes of AF_XDP.


Making OS threads run more efficiently is like faking async IO (disk/network/whatever goes outside the computer's shell) into sync operations in a more efficient way. But why would you do that in the first place if the program can handle async operations in the first place? Just letting the userland program do its business would be a better decision.

