
By that standard, any optimization that changes scaling in any dimension changes semantics. I'm not saying that's wrong, but I would say it's exactly what people looking for optimizations want.





I disagree.

An optimization that speeds up a program 2x has the same effect as running it on a faster CPU. An optimization that packs things tighter into memory has the same effect as having more memory.

Program semantics is usually taken to mean "all outputs given all inputs, for any input configuration", ignoring memory use and CPU time, provided both are finite (but not bounded).

TCE easily converts a program that will halt, regardless of available memory, to one that will never halt, regardless of available memory. That’s a big change in both theoretical and practical semantics.
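
A minimal sketch in Python (hypothetical: CPython has no TCE, so only the "halts" half can actually run; the commented-out loop is what an eliminated tail call would amount to):

    def spin(n):
        return spin(n + 1)  # tail call: nothing happens after it returns

    # Without tail-call elimination each call consumes a stack frame, so
    # this halts with a RecursionError no matter how much memory you add:
    try:
        spin(0)
    except RecursionError:
        print("halted: stack exhausted")

    # With TCE the call becomes a jump, i.e. the equivalent of this loop,
    # which runs in constant stack space and never halts:
    #   n = 0
    #   while True:
    #       n += 1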

I probably wouldn't argue that a change reducing an O(n^5) space/time requirement to O(1) is a change in semantics, even though in practice it's a huge change. But TCE changes the most basic property of a finite-memory Turing machine: whether it halts.

We don’t have infinite memory Turing machines.

edited: Turing machine -> finite-memory Turing machine.


> I probably wouldn't argue that a change reducing an O(n^5) space/time requirement to O(1) is a change in semantics, even though in practice it's a huge change

Space/time requirements aren't language semantics though, are they?


It changes debugging semantics.

This is the reason Guido avoids it: programs will still fail, except now without a stack trace.


GvR has always prioritised ease of debugging over performance, and honestly I'm in the same camp. What good is a fast program if it's incorrect?

But I think you can strike a good balance by keeping a trace of recent calls (in a ring buffer?). Lua does this, and honestly it's fine once you get used to the idea that you're not looking at stack frames but at execution history.

IMHO Python should add that, and it should clearly distinguish which part of a crash log is a stack trace and which part is a trace of tail calls.
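
A minimal sketch of the ring-buffer idea (hypothetical names; real support would live in the interpreter, not in a decorator):

    from collections import deque

    call_history = deque(maxlen=16)  # ring buffer: oldest entries fall off

    def record_call(fn):
        def wrapper(*args):
            call_history.append(f"{fn.__name__}({', '.join(map(repr, args))})")
            return fn(*args)
        return wrapper

    @record_call
    def step(n):
        if n == 0:
            raise ValueError("boom")
        return step(n - 1)  # imagine this tail call is eliminated

    try:
        step(5)
    except ValueError:
        # Not a stack trace: a Lua-style trace of recent execution history.
        print("recent calls:", *call_history, sep="\n  ")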

Either way this is going to be quite a drastic change.


That's a nice solution!

> By that standard, any optimization that changes scaling in any dimension changes semantics

That doesn't follow. This isn't like going from driving a car to flying an airplane. It's like going from driving a car to just teleporting instantly. (Except it's about space rather than time.)

It's a difference in degree (optimization), yes, but by a factor of infinity (O(n) overhead to 0 overhead). At that point it's not unreasonable to consider it a difference in kind (semantics).


Modern C compilers are able to transform something like this:

for (int i = 0; i < n; i++) a += i;

To:

a += n * (n-1) / 2;

Is this an optimisation or a change in program semantics? I've never heard anyone call it anything other than an optimisation.
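
(The loop adds 0 + 1 + ... + (n-1), so the closed form is Gauss's sum of the first n-1 integers; a quick Python check, for the record:)

    n = 1000
    assert sum(range(n)) == n * (n - 1) // 2  # O(1) closed form matches the O(n) loop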


This will get a bit pedantic, but it's probably worthwhile... so here we go.

> Is this an optimisation or a change in program semantics?

Note that I specifically said something can be both an optimization and a change in semantics. It's not either-or.

However, it all depends on how the program semantics are defined, and they are defined by the language specification. That means that in your example it's by definition not a semantic change, because it occurs under the as-if rule, which says optimizations are allowed as long as they don't affect program semantics. In fact, I'm not sure it's even possible to write a program that would be guaranteed to distinguish the two forms based purely on the language standard. Whereas with tail calls it's trivial to write a program that will crash without tail-call elimination but run arbitrarily long with it.

We do have at least one optimization that is explicitly permitted even though the as-if rule alone would forbid it: return-value optimization (RVO) in C++. People certainly consider that a change in semantics as well as an optimization.


You do have a point. However, if I'm allowed to move the goalposts a little: not all changes in semantics are equal. If you take a program that crashes for certain inputs and turn it into one that is semantically equivalent except that in some of those crashing cases, it actually continues running (as if on a machine with infinite time and/or memory), then that is not quite as bad as one that changes a non-crashing result into a different non-crashing result, or one that turns a non-crashing result into a crash.

With this kind of "benign" change, all programs that worked before still work, and some that didn't work before now work. I would argue this is a good thing.


Amazing it can do that. How does it work?

That definitely does seem like a change in semantics to me. I'm not a C expert, but surely this has problems the previous version doesn't?


It does change the semantics if n is negative or large enough to cause an overflow. The challenge for the compiler is to somehow prove that neither of those things can happen.

It doesn't have to prove the absence of overflow, since signed overflow is undefined behavior in C and modern compilers therefore assume it never happens.

Great point.

It can be, especially when you do something undefined: the compiler can do all sorts of odd things while transforming the code.

The important thing is whether there's a guarantee of it happening in particular circumstances or not. Take Python's reference counting: in theory, finalizers should be called once you lose all references to a file (assuming no cycles), but you can't rely on it.
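
A small illustration (CPython-specific timing; the language itself makes no promise about when __del__ runs):

    class Resource:
        def __del__(self):
            print("finalizer ran")

    r = Resource()
    del r  # CPython's refcounting usually runs __del__ right here...

    # ...but cycles, other implementations (e.g. PyPy), or interpreter
    # shutdown can delay or skip it, so deterministic cleanup should use
    # a context manager instead:
    with open(__file__) as f:
        pass  # f is closed when the block exits, guaranteed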

Python dicts preserved insertion order in 3.6, but that was an implementation detail that could have changed at any time; it only became a guarantee in Python 3.7.
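
For reference, the behavior in question:

    d = {}
    for key in "bca":
        d[key] = None

    # Insertion order was an implementation detail of CPython 3.6's compact
    # dicts; since Python 3.7 the language guarantees it.
    print(list(d))  # ['b', 'c', 'a']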



