alextingle's comments — Hacker News

It was the BBC that linked me directly to Hans Blix's reports on the UN's search for WMD in Iraq. It was those reports that convinced me that it was all made-up bullshit. They were much more convincing than the obvious wishful thinking coming out of the US and UK governments.

For sure social media is propagating conspiracy theories, some of which are the modern equivalent of "Saddam can deploy his WMD in 45 minutes", but I don't agree that old media was doing the same. Quite the opposite.


Electrical Engineer?

Yes. Actually, computer engineer. But I’m so tired of having to explain that, I just say EE usually.

> If you buy a game on steam, you own it for real.

How so? If Steam goes away, then so does your game. That's not ownership.

Just because they have carefully and honestly fostered a lot of trust in their game rental service, doesn't make it not a game rental service.


If Steam itself goes bankrupt, Gabe himself said you'll still retain access to your games.1 Yes, it's a non-legally-binding promise, but Steam has earned that trust.

Also, Steam isn't going away. So, for these reasons, that is "how so".

1. https://web.archive.org/web/20100605062932/http://forums.ste...


If Steam itself goes bankrupt, that decision is not up to Steam, it's up to its creditors and the bankruptcy court. It's not a credible promise for any US corporation to make, regardless of its trustworthiness.

Ah cool, so an alleged promise made to one user makes it all OK and we should stop worrying and just buy more.

Even if Gaben "money drives the community" Newell did make that promise and was sincere, he's going to be gone in a couple of decades at most and likely not in control of Valve before that.


I don't imagine that the exposed state would need to be represented in the final compiler output, so the optimiser could mark the pointer as exposed, but still eliminate the dead integer load.

Or from a pragmatic viewpoint, perhaps if the optimiser eliminates a dead load, then don't mark the pointer as exposed? After all, the whole point is to keep track of whether a synthesised pointer might potentially refer to the exposed pointer's storage. There's zero danger of that happening if the integer load never actually occurs.


I guess the internal exposure state would be “wrong” if the compiler removes the dead load (e.g. in a pass that runs before provenance analysis).

However, if all of the program paths from that point onward behave the same as if the pointer was marked as exposed, that would be fine. It’s only “wrong” to track the incorrect abstract machine state when that would lead to a different behaviour in the abstract machine.

In that sense I suppose it’s no different from things like removing a variable initialisation if the variable is never used. That also has a side effect in the abstract machine, but it can still be optimised out if that abstract machine side effect is not observable.
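As a concrete sketch of the exposure mechanism being discussed, here is an illustration using Rust's strict-provenance APIs (`expose_provenance` / `with_exposed_provenance`, stable since Rust 1.84); this is an assumed illustration of the general idea, not the specific compiler internals the thread describes:

```rust
fn main() {
    let x: u8 = 42;
    let p: *const u8 = &x;

    // The cast exposes p's provenance. Even if `addr` were never used
    // again (a dead value the optimiser could delete), the exposure
    // itself must still be accounted for conservatively, as discussed
    // above, unless the optimiser can prove no pointer is ever
    // synthesised from it.
    let addr = p.expose_provenance();

    // A pointer later synthesised from the exposed address may legally
    // access x's storage:
    let q: *const u8 = std::ptr::with_exposed_provenance(addr);
    unsafe { assert_eq!(*q, 42) };
    println!("read back {}", unsafe { *q });
}
```

If `addr` really were dead, the optimiser could remove the integer computation while either keeping the exposure mark or, as suggested above, dropping it when no synthesised pointer can observe the difference.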


This approach to memory management is completely at odds with the whole point of Rust.

Object-oriented programming has conditioned programmers into believing that having a hairy nest of small allocations, all with pointers to each other, is the normal, unavoidable situation.

In fact, it creates all sorts of problems. First, and most obviously, it's really hard to keep track of all those allocations, so you get leaks, use-after-free, and all the other familiar memory bugs. But you also get bloated memory use, with both your user code and the allocator having to keep track of all those chunks of memory. You get poor cache utilisation. You incur often-ridiculous CPU overhead constructing and tearing down these massive, intricate structures.

Rust makes it harder to trip over the memory bugs, but that makes it easier to keep on using the lots-of-tiny-allocations paradigm, which is a much bigger problem overall.


> This approach to memory management is completely at odds with the whole point of Rust.

No, not in the slightest. Rust works extremely well with arenas.

> In fact, it creates all sorts of problems. First, and most obviously, it's really hard to keep track of all those allocations, so you get leaks, and use after free, and all the other familiar memory bugs.

Given that the context of this subthread is Rust, I'm not sure why you bring this up. Rust doesn't exhibit any of these.

> But you also get bloated memory use, with both your user code, and the allocator having to keep track of all those chunks of memory.

No, Rust often uses less heap memory than the comparable C or C++ program, because it's so much easier to safely pass around pointers to the stack and thereby avoid the need to use the heap at all. Defensive copying isn't a thing in Rust.
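A small illustration of that point — borrowing stack data instead of heap-allocating it (a hypothetical sketch, not from any particular codebase):

```rust
// Passing a reference to stack data: no heap allocation, no
// defensive copy. The borrow checker guarantees `data` outlives
// the call, so the callee can safely hold the pointer.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4]; // lives entirely on the stack
    assert_eq!(sum(&data), 10);
    println!("sum = {}", sum(&data));
}
```

In C or C++ the same API would often be written to copy the buffer "just in case" the caller frees it; in Rust the lifetime is checked at compile time, so the copy is unnecessary.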

> You incur often ridiculous CPU overhead constructing and tearing down these massive, intricate structures.

No, there is no ridiculous CPU overhead here. Most objects in Rust have trivial drop implementations and simply recursively free their children, who also have trivial drop implementations. Freeing memory does not show up on the list of performance bottlenecks for any ordinary Rust program.


Many Rust programs lean heavily on arenas though.
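A minimal sketch of that arena pattern — one backing `Vec` and plain indices in place of per-node heap pointers (the `Arena` and `Id` names here are illustrative, not any particular crate's API):

```rust
// Minimal index-based arena: all nodes live in one Vec, and
// "pointers" between them are plain indices. One allocation,
// trivial drop, good cache locality.
struct Arena<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy)]
struct Id(usize);

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }
    fn alloc(&mut self, value: T) -> Id {
        self.items.push(value);
        Id(self.items.len() - 1)
    }
    fn get(&self, id: Id) -> &T {
        &self.items[id.0]
    }
}

// A linked structure without a single per-node heap allocation:
struct Node {
    value: i32,
    next: Option<Id>,
}

fn main() {
    let mut arena = Arena::new();
    let tail = arena.alloc(Node { value: 2, next: None });
    let head = arena.alloc(Node { value: 1, next: Some(tail) });

    // Walk the list through the arena.
    let mut sum = 0;
    let mut cur = Some(head);
    while let Some(id) = cur {
        let node = arena.get(id);
        sum += node.value;
        cur = node.next;
    }
    assert_eq!(sum, 3);
    println!("sum = {}", sum);
}
```

Because indices are `Copy` and carry no lifetime, cyclic and self-referential structures that fight the borrow checker become straightforward, and the whole graph is freed in one `drop`.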


A serious makefile will disable all the default rules by declaring the special `.SUFFIXES` target with no prerequisites ...

.SUFFIXES:


If you can't easily reason about dependencies, then your builds will just get more and more bloated.

People who care about build systems are a special kind of nerd. Programmers are often blissfully ignorant of what it takes to build large projects - their experience is based around building toy projects, which is so easy it doesn't really matter what you do.

In my experience, once a project has reached a certain size, you need to lay down simple rules that programmers can understand and follow, to keep them from exploding the build times. Modules make that extra hard.


These days I like system paths for deps more and more. You just need to specify the paths, and everything in there can be included in the project. But Gradle shenanigans, where the dependency graph is built by some obscure logic, are not to my liking.


My configure scripts never checked for a FORTRAN compiler. I'm not going to claim that using autotools is a pleasant experience, but it's not that bad, and it's very well documented.


The exception that proves the rule...


>> `#include "base/pc.h"`, where that `"base/pc.h"` path is not relative to the file doing the include.

> I have to disagree on this one.

The double-quotes literally mean "this dependency is relative to the current file". If you want to depend on a -I, then signal that by using angle brackets.


Eh, no. The quotes mean "this is not a dependency on a system library". Quotes can include relative to the file, or they can include things relative to directories specified with -I. The only thing they can't do is include things relative to directories specified with -isystem and the system include directories.

I would be surprised if I read some project's code where angle brackets are used to include headers from within the same project. I'm not surprised when quotes are used to include code from within the project but relative to the project's root.


The only difference between "" and <> is that the former adds the current file's directory to the beginning of the search path.

So the only reason to use "" instead of <> is when you need that behaviour, because the dependency is relative to the current file.

If you use "" in any other situation, then you are introducing a potential error, because now someone can change the meaning of your code simply by creating a file with a name and location that happens to match your dependency.

(Yes, some compilers have -isystem and -iquote which modify that behaviour, but those options are not standard, and can't be relied upon. I'd strongly advise against their use.)


Cheap-as-chips AI from gigantic data-centres is certain to go away, one way or another. Either the companies succeed in their game plan, and start to raise prices once they've established a set of dependent customers, or else they all go bust and the data-centres stand idle whilst the industry puzzles over a business model that works.

Of course the technology will remain. You'll still be able to run models locally (but they won't be as good). And eventually someone will work out how to make the data-centres turn a profit (but that won't be cheap for users).

Or maybe the local models will get good enough, and the data-centres will turn out to be a gigantic white elephant.

