Hacker News | wahern's comments

> The distinction of stack vs heap objects is an old distinction that is deeply encoded in the semantics of C. It's not obvious that's the right choice.

Nothing about C requires a contiguous stack, and there are perfectly standard C environments where the stack isn't contiguous, where call frames are allocated dynamically and managed (individually or in groups) as a linked list, e.g. some mainframe environments, gcc's segmented stacks, etc. C's automatic ("stack") variables are defined in terms of their lifetime, which is basically lexical.
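A toy sketch of the point (hypothetical function, not from the thread): the standard only pins down *when* an automatic object exists, not *where* its frame lives.

```c
/* Sketch: automatic ("stack") variables are defined by lifetime, not
   placement. Each recursive activation below gets a distinct `x`,
   whether frames sit on a contiguous stack or in heap-allocated,
   linked frames as on some mainframe or segmented-stack targets. */
static int depth_sum(int n) {
    int x = n;              /* automatic: lives until this block exits */
    if (n == 0)
        return 0;
    return x + depth_sum(n - 1);
}
```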


Nobody should be using an API like time(2) or clock_gettime(2)+CLOCK_REALTIME to measure event durations with sub-second precision or accuracy, especially when controlling mechanical equipment. And I'd be absolutely surprised if anybody was when controlling equipment, at least in a regulated industry.

On unix this is what CLOCK_MONOTONIC is for; for one thing, the real-time clock can be reset at any time. Technically even CLOCK_MONOTONIC could jump forward. Real-time embedded systems either provide other timing APIs, or make additional guarantees about CLOCK_MONOTONIC's behavior.


I feel like most of us don't know what we're actually doing when we do t2-t1 to get a duration. This pattern is probably in a lot of places, and a negative number is going to cause havoc. Even worse if it's an unsigned int and you roll over to some massive duration.
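The unsigned rollover is easy to demonstrate (hypothetical helper, illustrative tick values):

```c
#include <stdint.h>

/* Sketch: the naive t2 - t1. With unsigned timestamps, a clock that
   steps backwards by 10 ticks doesn't yield -10; the subtraction
   wraps to 2^64 - 10, an absurdly large "duration". */
uint64_t naive_duration(uint64_t t1, uint64_t t2) {
    return t2 - t1;         /* wraps when t2 < t1 */
}
```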

Chile. From the second decade onward, at least, Chilean growth significantly outpaced South America's generally.

Taiwan. South Korea. Many others. Generally, right-wing governments almost by definition tend to be more free market oriented relative to leftist governments, while leftist governments tend to be more populist. You can get a lot of graft and corruption either way, but the path to growth and out of poverty, if you can get there at all, is generally more right-wing, certainly at least for developing economies.

In poor countries, left-wing and right-wing, the rich hoard wealth, and they generally see the competition for wealth as a zero sum game. Leftism tends toward always seeing a zero-sum game, i.e. class struggle over a fixed pie. It's only certain strains of right-leaning governments that figure out you can grow the pie so rich and poor alike become wealthier. (Second-order inequality, i.e. growing wealth gap despite everybody becoming wealthier, is a thornier problem, but relatively recent in historical terms, and I'm not sure the old left/right dichotomy of political economy schools is useful here.)

But relative to historical exemplars, I'm not sure any advanced economy can truly be called leftist, rhetoric notwithstanding. Full-throated leftist governments end up like Venezuela. New Zealand is hardly leftist by comparison.


> It's only certain strains of right-leaning governments that figure out you can grow the pie so rich and poor alike become wealthier.

Credit to a few. Roger Douglas in New Zealand. Contemporary Peter Walsh in Australia, the Hawke finance minister, also got it. Keating somewhat got it, and put his neck on the line for politically-difficult but structurally-easy growth-pie macro reforms as treasurer, but did not follow through for the politically-difficult and structurally-hard reforms, like wholesale sales tax, and then became a fixed-pie prime minister. Walsh was gone by then.


> leftist governments tend to be more populist.

This might have been true once; it's not true now.


This reads like word games. The article basically argues for optimizing the calling discipline on a per-function basis, using the function body to guide the optimization. That's not a calling convention and definitely not a standard ABI. What they're arguing for is a kind of static optimization mid-way between targeting a calling convention and inlining. That's not a bad idea on its face, but has nothing at all to do with the C ABI. As to whether it would actually improve anything, frankly, I'm half-surprised compilers don't already do this, i.e. for functions where it's deemed too costly to inline, but which aren't externally visible, and the fact that they don't suggests that maybe there's not much to gain here.

I've yet to read an article criticizing the so-called C ABI that doesn't end up effectively changing the problem statement (in this case, into something utterly incomparable), as opposed to providing a better solution to the same problem. Changing the problem statement is often how you arrive at better solutions overall, but don't try to sell it as something it isn't, insinuating that the pre-existing solution is stupid.


> for functions where it's deemed too costly to inline, but which aren't externally visible

In LTO mode, GCC does.

> I remain unclear what authority the federal government has over such a matter

It's actually an enumerated power under Article I, Section 8, Clause 5:

> [The Congress shall have Power...] To coin Money, regulate the Value thereof, and of foreign Coin, and fix the Standard of Weights and Measures; ...

https://constitution.congress.gov/browse/essay/artI-S8-C5-1/...


I'm surprised that would be interpreted to include time zones. Units of time, arguably (measures), but time zones? Time zones are not a measure of anything. Time zones do not follow on from definitions of units of time, any more than road speed limits follow on from the definition of a mile.

I would be less surprised if it were the commerce power used to uphold time zone coordination - for the promotion and regularity of interstate commerce etc etc. Tenuous, but consistent with a lot of the other nonsense that's been hung from the commerce power over the years.

Then there's the actual enforcement angle - time zones are just a social convention whereby people in a given area pretend that the time is slightly different than it 'really' is (local solar time). There's no reason local / state government and businesses can't post / operate on different hours, and leave federal bodies to operate on whatever 'federal time' they want. This already happens in parts of the world where the official time is locally inappropriate, such as Eucla in Australia or Xinjiang in China.

Obviously the optimal solution here is to coordinate a time change at all levels of government, but failing that there are other options.


I don't think there've been many court cases exploring the meaning of "[to] fix the Standard of Weights and Measures", and probably none as it regards time. And in the modern era the Commerce Power would probably be sufficient on its own. SCOTUS has suggested that under existing precedent Congress would have the power to grant copyrights and patents under the Commerce Clause, and so the Copyright and Patent clauses today act more like restrictions on Congressional power.

But I don't see a problem relating time zones to measurement. Part of the authority to standardize measurement is the ability to dictate the manner and means of determining a quantitative value. Under the Weights and Measures clause I think Congress can regulate things like scales, including their precision and accuracy, at least in so far as they claim to provide a measurement of a Federally standardized unit. You might intuitively think the only reasonable end to such power is using it to improve and mandate ever greater precision and accuracy. But sometimes too much precision and accuracy is a bad thing--it can create transactional friction. Case in point, when local noon varied from town to town, it became increasingly problematic as the speed of long-distance transportation improved, i.e. the railroads. So the solution was to mandate worse accuracy.

Relatedly, there's a whole separate question of what time means. Most HN readers understand time in the scientific sense, and think of time in the sense of the SI second. But civil time used for general daily life has a slightly more nuanced meaning. That said, UTC/TAI time is very much like time zones in the sense of fudging accuracy. Modern clocks and gravimeters, even the kind regular people can buy for a few hundred or thousand dollars, are precise enough to distinguish local time dilation. So the time passing in your living room is actually different from UTC/TAI. But think of how complex and, for the most part, useless it would be to try to "solve" that discrepancy by integrating that reality into the general definition of civil time.

Also, AFAIU the authority to standardize measurement, and time specifically, operates more as a prohibition on states imposing their own mandates. See, generally, the Legal Tender Cases for the push and pull between various powers allocated between the federal government and the states.


They measure the number of minutes since sunrise.

Which changes every day.

GCC supports specifying endianness of structs and unions: https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gcc/Common-Type-At...

I'm not sure how useful it is, though it was only added about 10 years ago with GCC 6.1 (recent'ish in the world of arcane features like this, and only just about now something you could reasonably rely upon existing in all enterprise environments), so it seems some people thought it would be useful.
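A small sketch of what the attribute buys you (GCC 6.1+ only; `wire_header` is a made-up format, not any real protocol, and this won't compile with clang or MSVC):

```c
#include <stdint.h>
#include <string.h>

/* GCC-specific: scalar fields of this struct are stored big-endian
   in memory regardless of host byte order, so a whole-struct memcpy
   already produces network byte order. */
struct __attribute__((scalar_storage_order("big-endian"))) wire_header {
    uint16_t type;
    uint16_t length;
    uint32_t seq;
};

/* Serialize by plain memcpy; no htons/htonl calls needed. */
static void wire_header_put(unsigned char buf[8], uint16_t type,
                            uint16_t length, uint32_t seq) {
    struct wire_header h = { .type = type, .length = length, .seq = seq };
    memcpy(buf, &h, sizeof h);
}
```

Note the usual caveat: GCC forbids taking the address of individual members of such a struct, which is part of why the feature sees limited use.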


> But it turns out that files commonly do fit in memory

The difference between slurping a file into malloc'd memory and just mmap'ing it is that the latter doesn't use up anonymous memory. Under memory pressure, the mmap'd file can just be evicted and transparently reloaded later, whereas if it was copied into anonymous memory it either needs to be copied out to swap or, if there's not enough swap (e.g. if swap is disabled), the OOM killer will be invoked to shoot down some (often innocent) process.

If you need an entire file loaded into your address space, and you don't have to worry about the file being modified (e.g. have to deal with SIGBUS if the file is truncated), then mmap'ing the file is being a good citizen in terms of wisely using system resources. On a system like Linux that aggressively buffers file data, there likely won't be a performance difference if your system memory usage assumptions are correct, though you can use madvise & friends to hint to the kernel. If your assumptions are wrong, then you get graceful performance degradation (back pressure, effectively) rather than breaking things.

Are you tired of bloated software slowing your systems to a crawl because most developers and application processes think they're special snowflakes that will have a machine all to themselves? Be part of the solution, not part of the problem.


I'm not sure about the most recent successful charge, but this is one of the charges used against James Comey: https://en.wikipedia.org/wiki/Prosecution_of_James_Comey

Here's a 2018 article listing some prosecutions: https://www.nbcnews.com/politics/congress/5-people-who-lied-...

Because it's the DoJ prosecuting, I think it's uncommon for the government to prosecute administration witnesses, even from previous administrations. Once upon a time Congress would prosecute and impose punishment itself: https://en.wikipedia.org/wiki/Contempt_of_Congress#Inherent_...


In the case where you're using the top of the stack as a, well, stack, I don't see the problem. It would only work if you're not interleaving processing of dynamically-sized objects and function codegen works out. It's similar to TCO in the sense of maintaining certain invariants across calls (e.g. no temporaries need be preserved), and actually in languages with TCO, like Lua, you can hack an application-level stack data structure using tail recursion (and coroutines/threads if you need more than one) that can sometimes be more performant or more convenient than using a native data structure.

There's been at least one experiment (posted a few years ago to HN) where someone benchmarked a stackful coroutine implementation with hundreds of thousands (millions?) of stacks that could grow contiguously on-demand up to, e.g., 2MB, but were initially minimally sized and didn't reserve the maximum stack size upfront. The bottleneck was the VMA bookkeeping--the syscalls, exploding the page table, TLB flushing, etc. In principle it could work well and be even more performant than existing solutions, and it might work better today since Linux 6.13 added lightweight guard pages (MADV_GUARD_INSTALL), but we probably still need more architectural support from the system (kernel, if not hardware) to make it performant and competitive with language-level solutions like goroutines, Rust async, etc.


Recent article on the younger Bundy, "Ammon Bundy Is All Alone. The anti-government militia leader can’t make sense of his allies’ support for ICE violence." https://www.theatlantic.com/ideas/2026/02/ammon-bundy-trump-...

Ammon Bundy has held relatively libertarian opinions on immigration for a long long time. Since at least the days of the standoffs. His political ideals are closer to the old time westy classical liberalism (something like founding era anti-federalists with a view of the law that essentially mirrors Bastiat) than they are to neo-conservatism.
