50 Bytes of Code That Took 4 GB to Compile (2013) (randomascii.wordpress.com)
102 points by dotnetnews on Feb 13, 2017 | hide | past | favorite | 33 comments



I accidentally discovered a bug in Clang that causes the compiler to generate millions of consecutive "addq $2147483647, %rsp" instructions; compilation doesn't terminate even after hours (it could be an infinite loop, I haven't seen it finish):

    $ clang --version
    Apple LLVM version 8.0.0 (clang-800.0.38)
    Target: x86_64-apple-darwin16.4.0
    ...
    $ cat test.c
    #include <stdint.h>

    int main(void) {
        int x[SIZE_MAX / 31];
        return 0;
    }
    $ clang test.c
    test.c:5:11: error: array is too large (595056260442243600 elements)
    int x[SIZE_MAX / 31];
    $ cat test-2.c
    #include <stdint.h>

    int main(void) {
        int x[SIZE_MAX / 32];
        return 0;
    }
    $ clang test-2.c
    ... Keep waiting :-) ...



Interesting, I'm seeing this on clang trunk.


Strange, seems like it should do that with a loop.


...that took 4GB to compile in VS2010.


VS2010 is more significant than other versions because it is the last version of the IDE before the transition to the C++11 standard. Converting a solution from 2010 to 2013 is much more complicated than, say, a 2013-to-2015 conversion.


No disagreement on any of that. I was mostly wondering who uses VS2010 anymore, and curious to know if the bug still exists in newer versions of Visual Studio (but not curious enough to bother testing it).


Yeah, at my work we have some legacy solutions that depend on libraries built with the 2010 compiler; the source is not available, and the libraries share STL containers across their boundaries. We're therefore forced to create wrapper libraries that encapsulate the 2010 lib and use things like char* pointers instead of strings in the library API. Those wrapper libraries must be built with the 2010 compiler, and then we can add them to our modern solutions without having mismatched definitions of STL containers.


See also previous work in this area: https://news.ycombinator.com/item?id=7127821 "Results of the Grand C++ Error Explosion Competition"


@mods

An account with the same username as OP was allegedly spamming r/programming.

https://www.reddit.com/r/programming/comments/5t7akn/50_byte...


This is a good article. Better still, a good article that was never discussed on HN before. So that account hasn't done anything abusive here—on the contrary.

If abuse occurs, we'll deal with it. If you think abuse might be occurring, please let us know at hn@ycombinator.com. We can't read all the posts here, but we do see all the emails.


Thanks. I'll make sure to take the correct steps next time. :)


Usernames across different websites aren't unique identifiers, nor is this relevant to the discussion.


You should email the mods (at the address in the guidelines) because they don't see every post and tagging mods doesn't work.


Not even alleged anymore. The account has been shadowbanned (https://www.reddit.com/user/dotnetnews), confirmation from Reddit admins that they consider it a spam/bot account.

And yet this post is back up, which means someone vouched for it.

Flagged it.


I vouched for it, and I'm sure I'm not the only one.

1. Reddit's moderators can make their own decisions, but that should not change things here. It's an untrustworthy site with a culture and moderation system that rewards untrustworthy users. If it went out of business tomorrow the world would be a better place.

2. The article is genuinely interesting.

3. If all the bot does is link us to genuinely interesting back-catalog .NET writing, that's probably a service (at least for now), as this venue has long ignored a lot of interesting content from that ecosystem.

People can break the rules here and then we'll ban them here.


I agree with all your points, except for one thing that made me curious:

Why would you say Reddit is an untrustworthy site, with a system that rewards untrustworthy users? I'm not a particularly active Reddit user (in fact, it's been a while since I've even visited the site), so I'm not taking this personally at all, but I'm genuinely interested in why anyone would consider it an untrustworthy site that encourages untrustworthy users.


There have been a lot of shenanigans from the admins/mods over the past year or so, mostly related to the popularity of the pro-Trump subreddit and their resulting chagrin. There have been several thinly-veiled rule changes and algorithm changes aimed at reducing the number of pro-Trump postings that appear on the main homepage and generally lowering the visibility of that subreddit. At one point, the CEO manually edited some comments in the database that were critical of him. It all comes across as somewhat shady and untrustworthy. They could just come out and ban or quarantine the subreddit, but instead they've chosen to subversively go Digg-mode and essentially start curating what users see rather than allowing it to rise or fall organically.


> There have been several thinly-veiled rule changes and algorithm changes aimed at reducing the number of pro-Trump postings that appear on the main homepage and generally lowering the visibility of that subreddit.

It's worth noting that there's nothing inherently bad about this. Possible valid reasons why this step might be taken include gaming of the algorithm by specific groups, or evening out the distribution of front-page items if one of the goals of the algorithm is to have a heterogeneous set of items on the front page. Not that I have any specific information in this case to point one way or the other. I just think it's important not to immediately jump to negative conclusions about that, even if a group that experiences some loss of exposure feels negatively affected.

> At one point, the CEO manually edited some comments in the database that were critical of him.

That part is pretty egregious, and while I understand the impulse to do what he did if we take his explanation at face value (I don't see a particular reason not to; it's not flattering), one would hope someone that high up in the organization would understand that playing with the trust of your user base that way (especially with the group in question) is not likely to end well.

> They could just come out and ban or quarantine the subreddit, but instead they've chose to subversively go Digg-mode and essentially start curating what users see rather than allowing it to rise/fall organically.

I'm not specifically aware of what steps they are taking now. Do you have references to point me towards so I can catch up?


Many pro-Trump comments get swept out in botnet purges, because the pro-Trump segment has some actors who are very busy amplifying their voice via sockpuppet accounts to quiet any dissent or discussion on that subreddit.

Try this: post a positive comment about a policy change there. They also talk openly about using multiple identities to control what is allowed to the top. If you watch the "new" section you can see negative or even just not-content-free stories get added often.

There are folks on various forums and IRC channels who will sell you good placement of a story on reddit for a reasonable sum of bitcoin now. It'd be interesting to enlist them to start getting policy questions onto the top of that subreddit.


Spez (The CEO and one of the admins) has admitted to shadow-editing comments posted by other users without visible tracking (normally edits get a *, and I'm otherwise not aware of any mod- or user-facing ability to do so for comments that are not yours) on /r/The_Donald[1].

[1] https://motherboard.vice.com/en_us/article/spezgiving-how-re...


Sure. Quite frankly, I hate those people too. But Reddit's poor decision-making goes deeper than that: they have a culture of assigning blame rather than solving problems, a massive botnet problem they refuse to address despite multiple people submitting pretty compelling evidence, and incredibly lax rule enforcement.

Their voting system encourages people to build elaborate bot networks to game the system. They know it, they refuse to do anything about it.

Reddit is an untrustworthy site that grew an untrustworthy community.


Don't judge a post by its author. If the user has done anything to merit being hellbanned on HN, that's a separate matter. This submission looks fine as-is and there's no reason to flag it on its own merits. I would always vote for telling the user that they're doing something wrong and giving them a chance to correct the behavior before hellbanning them, too.

Users get shadowbanned on Reddit all the time for stupid reasons.


I doubt Bruce Dawson commissioned a spam bot to post his ancient articles. My guess is that it was vouched for because it's an actually interesting article (hence the corresponding votes) and users don't really care who posted it if it's interesting.


I'll interpret this to mean anyone can script a submission bot which automagically submits to hackernews so long as the articles could be perceived as interesting.

I'm assuming that's not the intent of the submission function, but short of clarification from dang etc, I'll assume no one has a problem with it.


> perceived as interesting

Well, they would actually need to be interesting. There have been users who do what amounts to "interesting article arbitrage" by automatically submitting popular posts from other platforms.

You can find the submission guidelines here (https://news.ycombinator.com/newsguidelines.html). The only one that really would apply to a bot, anymore than a normal person, is "Please don't submit so many links at once that the new page is dominated by your submissions.", which naive bots are likely to do.


> I'll interpret this to mean anyone can script a submission bot which automagically submits to hackernews so long as the articles could be perceived as interesting.

Yes, I would say so. Nobody is going to complain (at least not with a valid complaint) as long as the articles are perceived as interesting and upvoted. In fact, I imagine you could do a good Show HN about the bot if it worked well enough.

The problem is when the quality slips, and it starts sliding towards spam (even if the intention was good).


https://xkcd.com/810/

Seriously, though, do you have any evidence that this guy is a bot?


Is this any different from the educative account [0] that posts nothing but educative.io links? If that account is fine I don't see why this one would not be OK as well. The rules here are not the same as on Reddit. There are no rules specifically about self-promotion.

[0] https://news.ycombinator.com/submitted?id=educative


Posting your own (or your employer's, or friends') stuff is ok, but accounts that only do that eventually lose submission privileges, especially if they overdo it and/or if they're corporate identities rather than personal ones. Corporate identities don't fit with the community thing.


Got it. I checked https://news.ycombinator.com/newsguidelines.html and did not see it in there. You may want to add it?


Exactly how have you established that it is the same person, and that it's a bot?


Seems like a dumb reason to flag something, especially for a new account that's only posted two rather interesting articles here.



