
My automatic watch contains an electronic circuit, so I'm not sure what your point is.

Interesting. What is the circuit for?

Are you sure it's not powered by a standard watch battery?


Yes, it is a quartz movement with a capacitor that is charged by a mechanism similar to a mechanical auto-winding watch.

https://en.wikipedia.org/wiki/Automatic_watch#Automatic_quar...


Just remember that in many of the early Sierra games you can end up in a no-win situation without a game-over screen.

Nice, reminds me of Ogre Battle (but without fixed perspective).

This is indeed true; read any interview with Carmack from right before HW acceleration became mandatory.

If I'm not mistaken, the obtuse fast inverse square root algorithm came from early lighting engines.

https://en.wikipedia.org/wiki/Fast_inverse_square_root
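
For reference, a minimal C sketch of that trick (the magic constant 0x5f3759df plus one Newton-Raphson step, as popularized by Quake III); the exact constant and number of refinement steps varied between engines:

    #include <stdint.h>
    #include <string.h>

    /* Classic fast inverse square root: a bit-level initial guess plus one
       Newton-Raphson step; approximates 1/sqrt(x) for positive floats. */
    float q_rsqrt(float x)
    {
        float half = 0.5f * x;
        uint32_t i;
        memcpy(&i, &x, sizeof i);       /* reinterpret the float's bits */
        i = 0x5f3759df - (i >> 1);      /* magic-constant initial guess */
        memcpy(&x, &i, sizeof x);
        x = x * (1.5f - half * x * x);  /* one Newton-Raphson refinement */
        return x;
    }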


For those who were around for it, I'd be interested in a comparison between the ffmpeg/libav split and the GNU Emacs/XEmacs split.

Like jbk, I'd like to hope it will happen, but the combination of significant tribalism and very different development philosophies may prevent it.

For a merge to happen, there would have to be both some agreement on the technical direction of the two projects and some sort of social resolution, which would likely involve some maintainers of one fork or the other leaving.

Regardless, from my point of view, it seems unsustainable to keep merging from libav into ffmpeg at the pace Michael has been doing, so the projects will diverge enough at some point that momentum ends up behind one or the other.


I've been a Linux user for 15 years and have only once gotten Ubuntu working within three tries (and I had to manually add a modeline to my xorg config to do so!). Judging by how many people I know who run Ubuntu, I must be unlucky.

For amd64 on Linux you only have 128TB of virtual address space; with green threads I regularly use a lot more than 64 threads.

I'm not from the JVM world, but indeed a generational collector can be a huge boon, particularly if it uses a Cheney-style collector for the nursery.

GC is essentially never an advantage for low latency, but it is not incompatible with it either. Things like Metronome can give you extremely well-defined latencies.

It's fairly moot for hard real-time programs though, as those typically eschew dynamic allocation entirely (malloc can have unpredictable timing too).
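
To illustrate the nursery point, here's a minimal Cheney-style copying sketch in C (my own simplification, not from any particular VM): allocation is a pointer bump, and a minor collection evacuates live objects by scanning with two pointers. Alignment, overflow checks, collection triggering, and write barriers/remembered sets for old-to-young pointers are all elided.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef struct Obj {
        struct Obj *forward;      /* forwarding pointer once evacuated */
        size_t      nfields;
        struct Obj *fields[];     /* child pointers only, for simplicity */
    } Obj;

    static uint8_t nursery[1 << 20], oldspace[1 << 20];
    static uint8_t *nursery_top = nursery, *old_top = oldspace;

    static size_t obj_size(Obj *o) {
        return sizeof(Obj) + o->nfields * sizeof(Obj *);
    }

    static int in_nursery(Obj *o) {
        return (uint8_t *)o >= nursery && (uint8_t *)o < nursery + sizeof nursery;
    }

    /* Nursery allocation is just a pointer bump (collection trigger elided). */
    Obj *gc_alloc(size_t nfields) {
        Obj *o = (Obj *)nursery_top;
        nursery_top += sizeof(Obj) + nfields * sizeof(Obj *);
        o->forward = NULL;
        o->nfields = nfields;
        return o;
    }

    /* Copy one nursery object to the old space, leaving a forwarding pointer. */
    static Obj *evacuate(Obj *o) {
        if (!o || !in_nursery(o)) return o;
        if (o->forward) return o->forward;
        Obj *copy = (Obj *)old_top;
        old_top += obj_size(o);
        memcpy(copy, o, obj_size(o));
        o->forward = copy;
        return copy;
    }

    /* Minor collection: evacuate the roots, then Cheney-scan what was copied. */
    void minor_gc(Obj **roots, size_t nroots) {
        uint8_t *scan = old_top;
        for (size_t i = 0; i < nroots; i++)
            roots[i] = evacuate(roots[i]);
        while (scan < old_top) {
            Obj *o = (Obj *)scan;
            for (size_t i = 0; i < o->nfields; i++)
                o->fields[i] = evacuate(o->fields[i]);
            scan += obj_size(o);
        }
        nursery_top = nursery;   /* the entire nursery is free again */
    }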


In a latency-sensitive system, you want to minimize how much time you spend allocating and deallocating memory during performance-critical moments. GC gives you a great way to keep those operations as trivial as possible during the critical window (increment a pointer to allocate, a no-op to deallocate) and clean up/organize the memory later, outside the time-critical window.

Similarly, it makes it easier to amortise costs across multiple allocations/deallocations.

GC does have a bad rep in the hard real-time world because, in the worst-case scenario, a poorly timed GC creates all kinds of trouble, which is why I mentioned that it helps if the allocator/deallocator is aware of hard real-time commitments.
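
Concretely, that's the arena/region pattern; a minimal C sketch (hypothetical, not tied to any particular runtime) where allocation is a pointer bump during the critical window and reclamation is a single reset afterwards:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base, *top, *end;
    } Arena;

    /* Bump allocation: one pointer increment, no per-object bookkeeping. */
    void *arena_alloc(Arena *a, size_t size) {
        size = (size + 7) & ~(size_t)7;           /* keep 8-byte alignment */
        if ((size_t)(a->end - a->top) < size)     /* out of space this window */
            return NULL;
        void *p = a->top;
        a->top += size;
        return p;
    }

    /* "Deallocation" during the window is a no-op; outside the window the
       whole region is reclaimed at once. */
    void arena_reset(Arena *a) { a->top = a->base; }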


It might make it easier, no? I'm working on a perf-sensitive program now. It's written in C (mainly for performance). It's spending about 25% of CPU time in free/malloc. Yikes.

This happened because it has an event dispatcher where each event has a bunch of associated name/value pairs. Even though most of the names are fixed ("SourceIP", "SourceProfile", "SessionUuid", etc.), the event system ends up strdup'ing all of them, every time. With GC we could simply ignore this: all the constant string names would just end up in a high generation, and the dynamic stuff would get cleaned up in gen0, with no additional code. (As-is, I'm looking at a fairly heavy rewrite affecting thousands of call sites.)


So what's the reason for strdup'ing rather than having const names that never get freed? Also, it sounds like you could use an int/enum to represent the key and provide string-conversion utility functions. Anyway, spending 25% in malloc/free is just poor code, but you already know that. This really isn't about GC :).
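
Something like this (a hypothetical sketch, reusing the key names from the parent comment): the key becomes an enum and the name lookup is a table of static strings, so nothing is strdup'd per event:

    #include <stdio.h>

    typedef enum {
        KEY_SOURCE_IP,
        KEY_SOURCE_PROFILE,
        KEY_SESSION_UUID,
        KEY_COUNT
    } EventKey;

    static const char *const key_names[KEY_COUNT] = {
        [KEY_SOURCE_IP]      = "SourceIP",
        [KEY_SOURCE_PROFILE] = "SourceProfile",
        [KEY_SESSION_UUID]   = "SessionUuid",
    };

    /* No per-event allocation: the key is an int, the name a static string. */
    const char *event_key_name(EventKey k) { return key_names[k]; }

    int main(void) {
        printf("%s\n", event_key_name(KEY_SESSION_UUID));
        return 0;
    }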

A gen0 (young) GC still involves a safepoint, a few trips to the kernel scheduler, trashing of the CPU instruction and data caches, possibly promotions to the tenured space (with knock-on effects later), etc. It's no panacea when those tens or hundreds of millis matter.


Because not all of the strings are const; some are created dynamically. Third parties add to these event names at runtime, so we don't know them ahead of time. An int-to-string registry would work at runtime, except for the dynamic names.

I was just pointing out that GC can "help", by reducing complexity and enabling a team that otherwise might get mired in details to deliver something OK.


> GC is essentially never an advantage for low latency

I can't really agree with that statement. One way to get lower latency is to avoid locks and rely on lock-free algorithms.

Many of those are much easier to implement if you can rely on a GC, because the GC solves the problem of objects that are still referenced by some thread but are no longer reachable from the lock-free data structure. There are ways around this, e.g. RCU or hazard pointers, but mostly it's easier with a GC.
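
For example, a Treiber stack in C11 (a minimal sketch, not production code): pop can't safely free() the node it unlinks because another thread may still be dereferencing it; with a GC you just drop the reference and the collector reclaims it once unreachable, no hazard pointers or RCU needed.

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct Node {
        struct Node *next;
        int value;
    } Node;

    static _Atomic(Node *) top = NULL;

    void push(int value) {
        Node *n = malloc(sizeof *n);
        n->value = value;
        n->next = atomic_load(&top);
        /* On failure, n->next is reloaded with the current top; retry. */
        while (!atomic_compare_exchange_weak(&top, &n->next, n))
            ;
    }

    int pop(int *out) {
        Node *n = atomic_load(&top);
        /* On failure, n is reloaded with the current top; retry. */
        while (n && !atomic_compare_exchange_weak(&top, &n, n->next))
            ;
        if (!n) return 0;
        *out = n->value;
        /* free(n) here would risk use-after-free and ABA against concurrent
           pops; without a GC you need hazard pointers, RCU, or similar. */
        return 1;
    }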


