foooorsyth's comments

Unfortunately the cheats are way ahead of this. Most modern aimbots in shooters like Counter-Strike are intentionally subtle. They give minor advantages and make tiny corrections that let an already-immensely-skilled player gain a small edge. In a game where the difference between a great player and an elite player is small, they can be the invisible difference maker.


>I think many would quickly understand the value proposition

I think thousands of innocent teenagers without credit cards will be furious. Not to mention anyone who takes a game semi-seriously and cares about their reputation after getting banned. Also, with real-dollar values tied to skins, you're not just nuking someone's $50 account; accounts and their associated items can be worth a lot of money.

Anti-cheat systems need to be certain before they ban. They should also, however, ban the hardware ID, which lots of game companies choose not to do (because they'd lose money).


So because China will steal it eventually, we should just give it away now? That’s your argument?

>clearly China is capable of catching up with ASML’s tooling

The only thing clear to me is precisely the opposite. Nobody has been able to catch up with ASML, including China. If China is capable of catching up on their own (without espionage), why would Taiwan even matter? Why would export controls on ASML tooling even matter?

They matter because ASML and TSMC are companies built on secret know-how that others can’t replicate. Do we really need to explain on HN that companies are built on secrets?


> why would Taiwan even matter?

The CCP has fully subscribed to irredentism and it has popular support in the mainland. Taiwan will never not matter.

But otherwise I agree with the rest of your argument.


> CCP has fully subscribed to irredentism and it has popular support in the mainland

Plenty of countries, particularly those in an economic slump, have popular support for stupid wars. That changes quickly when the war is started and the costs come home.


This doesn't stop them from starting stupid wars for stupid reasons. Losing a war is not even a guarantee that they will abandon their territorial ambitions; take Spain and Gibraltar, or Argentina and the Falklands, as two examples.


> Argentina and the Falklands

This is the example Xi, and those around him, would be looking to.


Yeah. Irredentism is a fundamentally emotional ideology born of nationalism. It doesn't have to make sense, it just has to be a rallying cry.


> It doesn’t have to make sense, it just has to be a rallying cry.

Correct. It's for domestic consumption. By the time leadership is weak enough to be compelled into playing it out, chances are it won't make military sense.


The hope is that it not making military sense prevents military action. We don't have any such promise from reality, nor much historical precedent to depend on; and in the case of the PRC and Taiwan, it is CCP leadership that is angling for a takeover of the independent nation of Taiwan and the eradication of the Republic of China.


Not everyone reading your code will be using an IDE. People may be passively searching your code on GitHub/gerrit/codesearch.

val/var/let/auto declarations destroy the locality of understanding of a variable declaration for anyone reading without an IDE, forcing a jump-to-definition on the naive code reader. A corollary problem also exists: if you don't have an explicit type hint in a variable declaration, even readers who are using an IDE have to do TWO jump-to-definition actions to read the source of the variable type.

e.g.

  val foo = generateFoo()

where generateFoo() has the signature

  fun generateFoo(): Foo

With the above code one would have to jump to definition on generateFoo, then jump to definition on Foo to understand what Foo is. In a language that requires the explicit type hint at declaration, this is only one step.
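
C++'s auto has the same property; a minimal sketch (Foo and generateFoo are hypothetical, mirroring the Kotlin above):

  struct Foo { int value = 0; };

  Foo generateFoo() { return {}; }

  int main() {
      auto foo = generateFoo(); // reader needs two jumps: generateFoo, then Foo
      Foo bar = generateFoo();  // the type is visible at the declaration site
      return foo.value + bar.value;
  }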

There's a tradeoff here between convenience while writing the code and immediate local understanding for future readers and maintainers. It really bothers me when a ktlint plugin actually fails a compilation because a code author threw in an "unnecessary" type hint for clarity.

Related (but not directly addressing auto declarations): “Greppability is an underrated code metric”: https://morizbuesing.com/blog/greppability-code-metric/


If you accept f(g()), you've already accepted that the type of every expression is not written down.


I don’t particularly accept f(g()). I like languages that require argument labels (obj-c, swift). I would welcome a language that required them for return values as well. I’d even enjoy a compiler that injected omitted ones on each build, so you can opt to type quickly while leaning on the compiler for clarity beyond build time.


Argument labels are equivalent to variable names. You still have them with auto. In either case you don't see the actual type.


I do not agree that using an IDE matters.

If you cannot recognize the type of an expression that is assigned to a variable, you do not understand the program you are reading, so you must search its symbols anyway.

Redundantly writing the type when declaring the variable is of no help when you do not know whether the right-hand-side expression has the same type.

When reading any code base with which you are not familiar, you should not use a bad text editor, but a good text editor designed for programmers, or any other tool that allows fast searching for the definitions of any symbols encountered in the source text.

Adding useless redundancy to the source text only bloats it, making reading more difficult, not easier.

I never use an IDE, but I always use good programming language aware text editors.


The argument is tautological.

I want to use a text editor => This is the wrong tool => Yes, but I want to use a text editor.

These people do use the wrong tooling. The only way to cure this grievance is to use proper tooling.

The GitHub web UI has some IDE features, such as symbol search. I don't see any reason not to use a proper IDE; github.dev is a simple click in the UI away. When you use Gerrit, do a local checkout; that's one git command.

If you refuse to use the correct tools for the job, your experience is degraded. I don't see a reason to consider this case when writing code.


Have you ever worked in a large organization with many environments? You may find yourself with a particular interface that you don’t know how to use. You search the central code search tool for usages. Some other team IS using the API, but in a completely different environment and programming language, and they require special hardware in their test loop, and they’re located in Shanghai. It will take you weeks to months to replicate their setup. But your goal is to just understand how to use your version of the same API. This is incredibly common in big companies. If you’re in a small org with limited environments it’s less of an issue.


I have worked in big environments. My idea of "big" might be naive: environments spanning different OSes and different languages, including old ones like Fortran and Pascal. But I've never been in a situation where I couldn't check out said code, open it in my IDE, and build it. If you can't, that sounds like another case of deficient tooling justifying deficient tooling.

These were not some SWE wonderlands either. The code was truly awful at times.

The Joel test is 25 years old. It's an industry standard. I, and many other people, consider it a minimum requirement for software engineering. If code meets the "2. Can you make a build in one step?" requirement, it should be IDE-browsable in one step.

If it takes weeks to replicate a setup, the whole environment is deeply flawed. The one-step build is the second point on the list because Joel considered it the second most important thing, out of 12.


My situation: hardware company, over 100 years old. I’ve found useful usage examples of pieces of software I need to use, but only on an OS we no longer ship, from a supplier we no longer have a relationship with, that runs on hardware that we no longer have. The people that know how to get the dev environment up are retired.

In those cases, I’m grateful for mildly less concise languages that are more explicit at call and declaration sites.


If you are unable to find the type of a right-hand-side expression that appears in an assignment or initialization, then the environment does not allow you to work and it must be changed.

The redundant writing of the type on the left-hand side does not help you, because without knowing the type of the right-hand side you cannot recognize a bug. Not specifying the type on the left-hand side can actually avoid many bugs in complex environments: there is no need to update the code that uses some API whenever someone changes the type of the result, unless the new type causes a type mismatch error elsewhere. In that case the error is reported where it matters, allowing fixes at the right locations in the source code, not at the spurious locations of variable definitions, where updating the type would not prevent the real bugs at the points of use of that variable.
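
A small C++ sketch of that point (get_count is a hypothetical API):

  #include <cstdint>
  #include <iostream>

  // Hypothetical API whose result type was later widened from int32_t.
  std::int64_t get_count() { return 5'000'000'000; }

  int main() {
      auto n = get_count();         // follows the API's new type; still correct
      std::int32_t m = get_count(); // the explicit type now silently truncates
      std::cout << n << " vs " << m << '\n';
  }

With auto the call site keeps working after the change; the explicitly typed declaration still compiles, but now hides a narrowing bug at exactly the kind of spurious location described above.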

The only programming languages that could be used without the ability to search for the definition of any symbol were the early versions of FORTRAN and BASIC, where the type of a symbol was encoded in its name, using a one-letter prefix in FORTRAN (like IVAR vs. XVAR) and a one-symbol suffix in BASIC (like X vs. X$ vs. X%).

The "Hungarian" naming convention used in early Microsoft Windows was another attempt at encoding the types of symbols in their names, following the early FORTRAN and BASIC style, but most software developers have disliked this verbosity.


> if you don’t have an explicit type hint in a variable declaration, even readers that are using an IDE have to do TWO jump-to-definition actions to read the source of the variable type.

This isn't necessarily the case. "Go to Definition" on the `val` goes to the definition of the deduced type in every IDE and IDE-alike I've used.


The ideas are from the Bill of Rights.


Which are that the government can't censor speech. Forcing a private entity to support any form of speech is actually against the first amendment, it is compelled speech.


Who's forcing what? They are just inspired by it, at least that's what the comment you are replying to is saying. If a private corporation wants to support or enable free speech, they are allowed to, just like you said. This is literally from Facebook itself so I'm not sure who is compelling who.


Nobody is forcing anyone to do anything.

Meta, as a large and powerful entity with the ability to censor as much or more than many governments, is opting to allow free speech on its platforms. So is X. That spirit is inspired by the Bill of Rights, not Elon Musk.


Which bit of "Congress shall make no law" did you interpret to state "Congress (and Facebook, Twitter, Google, Amazon, etc.) shall not..."?


If you’re going to attempt to be pedantic, can you at least work on basic reading comprehension?

The concept of a large and powerful entity allowing free speech is in the spirit of the Bill of Rights, whether the large and powerful entity is government or not.

The parent poster was taking the position that allowing speech is somehow a Musk-derived idea, which is absurd.


The spirit of the Bill of Rights is to restrict primarily the Federal government. The First Amendment didn't even apply to states (let alone businesses) until after we had a civil war, the Fourteenth Amendment, and a SCOTUS case in 1925 (https://en.wikipedia.org/wiki/Gitlow_v._New_York / https://en.wikipedia.org/wiki/Incorporation_of_the_Bill_of_R...).

They had no intention of broadly protecting free speech in general. They just didn't want Congress specifically messing with it.

> The parent poster was taking the position that allowing speech is somehow a Musk-derived idea, which is absurd.

On that we can agree.


Just being able to hold a drink in your hand as social cover is part of what you’re buying. At a Christmas party, you just want to have what appears to be the peppermint Christmas-themed drink in your hand like everyone else, even if you’re pregnant or don’t drink for whatever reason. People will stop asking if they can get you a drink. Not everyone buying an NA drink cares about perfectly emulating the sensation of alcohol entering the body.


Tonic and lime or club soda and lime can do that as well.


Yes. Discovering lime soda is what allowed me to stop drinking alcohol. Prior to that I would feel the peer pressure to have an alcoholic drink. For some reason water is unacceptable. Coca cola or similar is OK, but very unhealthy. Lime soda seems to be the minimum that is allowed and doesn't seem too unhealthy.

The other trick is making the drink last for as long as you can. As soon as the drink is finished you are required to obtain another one. But nobody needs to drink multiple litres of fluid in the space of a few hours, plus it's expensive. So I take tiny sips and make half a litre last an hour or longer.


If you allow yourself a touch of alcohol, bitters in tonic or club soda is another good choice.


gotos can be used without shame. Dijkstra was wrong (in this rare case).

defer is cleaner, though.
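
The shameless use is typically centralized cleanup, where every error path funnels forward to a single exit sequence. A minimal sketch in C-style C++ (copy_header and the paths are hypothetical):

  #include <cstdio>

  // Every error path funnels forward through one exit sequence.
  int copy_header(const char* src_path, const char* dst_path) {
      int ret = -1;
      std::FILE* src = nullptr;
      std::FILE* dst = nullptr;
      char buf[64];

      src = std::fopen(src_path, "rb");
      if (!src) goto out;

      dst = std::fopen(dst_path, "wb");
      if (!dst) goto close_src;

      if (std::fread(buf, 1, sizeof buf, src) != sizeof buf) goto close_dst;
      if (std::fwrite(buf, 1, sizeof buf, dst) != sizeof buf) goto close_dst;
      ret = 0;

  close_dst:
      std::fclose(dst);
  close_src:
      std::fclose(src);
  out:
      return ret;
  }

defer collapses those labels into cleanup declarations next to each acquisition, which is why it reads cleaner.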


Dijkstra was not wrong. Modern programmers are wrong in thinking that the goto that they use is what Dijkstra was talking about, merely because of the fact it happens to be called the same thing. I mean, I get how that can happen, no sarcasm, but the goto Dijkstra was talking about and what is in a modern language is not the same. https://jerf.org/iri/post/2024/goto/

The goto Dijkstra is talking about is dead. It lives only in assembler. Even BASIC doesn't have it anymore in any modern variant. Modern programmers do not need to live in fear of modern goto because of how awful it was literally 50 years ago.


The interpretation of Dijkstra's sentiment in your blog post is plain wrong.

His paper [1] clearly talks about goto semantics that are still present in modern languages and not just unrestricted jmp instructions (that may take you from one function into the middle of another or some such). I'd urge everyone to give it a skim, it's very short and on point.

[1] https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p...


Well, there you get that I don't believe in letting certain people own ideas and then stick to them as if they were given revelation from on high about how things should work. The distinctive thing about goto that breaks structured programming, to the point that functions as we think of them today can't even exist in such an environment, is the ability to jump arbitrarily.

I'm way less worried about uses of goto that are rigidly confined within some structured programming scope. As long as they stay confined to specific functions, they are literally orders of magnitude less consequential than the arbitrary goto, and being an engineer rather than an academic, I take note of such things.

I don't ask Alan Kay about the exact right way to do OO, I don't ask Fielding about the exact right way to do REST interfaces, and I don't necessarily sit here and worry about every last detail of what Dijkstra felt about structured programming. He may be smarter than me, but I've written a lot more code in structured paradigms than he ever did. (This is not a special claim to me; you almost certainly have too. You should not ignore your own experiences.)


Your analysis is wrong. Dijkstra was a big proponent of structured programming, and the fundamental thesis of his argument is that the regular control flow structures we're used to--if statements, loops, etc.--all represent a tree-based data structure. In essence, the core argument is that structured programming allows you to mentally replace large blocks of code with black boxes whose exact meanings may not be important.

The problem with GOTO, to Dijkstra, is that it violates that principle. A block can arbitrarily go somewhere else--in the same function, in a different function (which doesn't exist so much anymore)--and that makes it hard to reason about. Banning GOTO means you get the fully structured program that he needs.

(It's also worth remembering here that Dijkstra was writing in an era where describing algorithms via flowcharts was common place, and the use of if statements or loops was far from universal. In essence, this makes a lot of analysis of his letter difficult, because modern programmers just aren't exposed to the kind of code that Dijkstra was complaining about.)

Since that letter, modern programming has embraced the basic structured programming model--we think of code almost exclusively of if statements and loops. And, in many language, goto exists only in extremely restricted forms (break, continue, and return as anything other than the last statement of a function). It should be noted that Dijkstra's argument actually carries through to railing against the modern versions of break et al, but the general program of structured programming has accepted that "early return" is an acceptable deviation from the strictly-single-entry-single-exit that is desired that doesn't produce undue cognitive overhead. Even where mind-numbing goto exists today (e.g., C), it's similarly largely used in ways that are similar to "early return"-like concepts, not the flowchart-transcribed-to-code-with-goto-as-sole-control-flow style that Dijkstra is talking about.

And, personally, when I work with assembly or LLVM IR (which is really just portable assembly), the number one thing I want when looking at a very large listing is something that converts all the conditional/unconditional jumps into if statements and loops. That's really the main useful thing I want from a decompiler; everything else as often as not just turns out to be more annoying to work with than the original assembly.


I struggle with how you claim my "analysis is wrong", and then basically reiterate my entire point. I know it's not that badly written; other people got it just fine and complain about what it actually does say.

The modern goto is not the one he wrote about. It is tamed and fits into the structured programming paradigm. Thus, ranting about goto as if it is still the 1960s is a pointless waste of time. Moreover, even if it does let you occasionally violate structured programming in the highly restricted function... so what? Wrecking one function is no big deal, and generally used when it is desirable that a given function not be structured programming. Structured programming, as nice as it is, is not the only useful paradigm. In particular state machines and goto go together very nicely, where the "state machine" provides the binding paradigm for the function rather than structured programming. It is perhaps arguably the distinctive use case that it lives on for in modern languages.
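
Concretely, a toy version of that style in C-flavored C++ (count_words is hypothetical); each label is a state and each goto a transition, so the state machine, not structured control flow, is the function's organizing principle:

  #include <cstdio>

  // Each label is a state; each goto is a transition.
  int count_words(const char* p) {
      int words = 0;

  between:
      if (*p == '\0') return words;
      if (*p == ' ') { ++p; goto between; }
      ++words;
      goto in_word;

  in_word:
      if (*p == '\0') return words;
      if (*p == ' ') { ++p; goto between; }
      ++p;
      goto in_word;
  }

  int main() { std::printf("%d\n", count_words("state machines and goto")); } // prints 4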


> The modern goto is not the one he wrote about. It is tamed and fits into the structured programming paradigm.

No, it doesn't, not the goto of C or C++ (which tames it a smidge because it has to). That's the disconnect you have. It's not fine just because you can't go too crazy and smash other functions with it anymore. You can still go crazy and jump into the middle of scopes with uninitialized variables. You can still write irreducible loops with it, which I would argue ought to be grounds for the compiler to rm -rf your code for you.

There are tame versions of goto--we call them break, continue, and return. And when the C committee discussed adding labeled break, and people asked why it was necessary because it's just another flavor of goto, I made some quite voluminous defense of labeled break because it was a tame goto, and taming it adds more possibility.

And yes, the tame versions of goto violate Dijkstra's vision. But I also don't think that Dijkstra's vision is some sacrosanct thing that must be defended to the hilt--the tame versions are useful, and you still get most of the benefits of the vision if you have them.

In summary:

a) when Dijkstra was complaining about goto, he would have included the things we call break, continue, and early return as part of that complaint structure.

b) with the benefit of decades of experience, we can conclude that Dijkstra was only partially right, and there are tame goto-like constructs that can exist

c) the version of goto present today in C is still too untamed, and so Dijkstra's injunction against goto can apply to some uses of it (although, I will note, most actual uses of it are not something that would fall in that category.)

d) your analysis, by implying that it's only the cross-function insanity he was complaining about, is wrong in that implication.


"You can still go crazy and jump into the middle of scopes with uninitialized variables."

It is difficult when speaking across languages, but in many cases, no, you can't.

https://go.dev/play/p/v8vljT91Rkr

C isn't a modern language by this standard, and to the extent that C++ maintains compatibility with it (smoothing over a lot of details of what that means), neither is it. Modern languages with goto do not generally let you skip into blocks or jump over initializations (depending on the degree to which it cares about them).

The more modern the language, generally the more thoroughly tamed the goto is.
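
For contrast, the moral equivalent of that Go program in C-flavored C++ compiles without complaint (a sketch; the jump is legal in C and C++ precisely because y has no initializer to bypass, and the read at the label is undefined behavior):

  #include <cstdio>

  int main() {
      goto later;
      {
          int y;     // jumped over: never assigned
          y = 2;     // skipped by the goto
  later:
          std::printf("y is %d\n", y); // UB: reads an uninitialized int
      }
  }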


It doesn't even need to be a modern language to protect against that:

  BEGIN
    INT x := 1;
    print(("x is", x, newline));

    GOTO later;

    INT y := 2;

   later:

    print(("y is", y, newline))
  END
for which we have:

  $ a68g goto.a68 
  x is         +1
  11           print(("y is", y, newline))
                            1           
  a68g: runtime error: 1: attempt to use an uninitialised REF INT value (detected in [] "SIMPLOUT" collateral-clause starting at "(" in this line).
Although admittedly it is a runtime error.

However if y is changed to 'INT y = x + 2;', essentially a "constant", then there is no runtime error:

  $ a68g goto.a68 
  x is         +1
  y is         +0


"when I see modern code that uses goto, I actually find that to be a marker that it was probably written by highly skilled programmers. "

He should have said "correct code", not "modern code", because in the cases where I remember seeing goto, the code was horribly incorrect and unclear.

(With break and continue, someone has to be doing something extra funky to need goto. And even those were warning signs to me, as often they were added as Hail Marys to try to make something work.)

{I typically reviewed for clarity, correctness, and consistency. In that order}


Or, the tl;dr in modern parlance: Dijkstra was railing against the evils of setjmp.


Sometimes setjmp is useful and I have occasionally used it, but usually it is not needed. There are certain considerations you must keep in mind in order to be careful when using setjmp, though.

(Free Hero Mesh uses setjmp in the execute_turn function. This function will call several other functions some of which are recursive, and sometimes an error occurs or WinLevel or LoseLevel occurs (even though these aren't errors), in which case it will have to return immediately. I did try to ensure that this use will not result in memory leaks or other problems; e.g. v_set_popup allocates a string and will not call longjmp while the string is allocated, until it has been assigned to a global variable (in which case the cleanup functions will handle this). Furthermore, the error messages are always static, so it is not necessary to handle the memory management of that either.)
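
The general shape of the pattern, as a minimal sketch in C-style C++ (hypothetical names; this is not the actual Free Hero Mesh code):

  #include <csetjmp>
  #include <cstdio>

  static std::jmp_buf turn_env;

  static void fail(const char* msg) {
      std::fputs(msg, stderr);
      std::longjmp(turn_env, 1); // unwind straight back to execute_turn
  }

  static void nested_step(int depth) {
      if (depth > 1000) fail("error: recursion too deep\n");
      /* ... game logic that may recurse further or call fail() ... */
  }

  int execute_turn(void) {
      if (setjmp(turn_env) != 0)
          return -1;             // a longjmp landed here: abandon the turn
      nested_step(0);
      return 0;                  // normal completion
  }

The care described above amounts to making sure nothing live at the moment of the longjmp owns memory that only the skipped stack frames know about.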


No, even setjmp/longjmp are not as powerful or dangerous. The issue is not the locality of the jump, but the lack of any meaningful code structure enforced by the language. Using setjmp and longjmp properly still saves and restores context. You still have a function call stack, a convention for saving and restoring registers, locally scoped variables, etc. Though, using setjmp/longjmp improperly on some platforms might come close, since you're well into undefined behavior territory.

Parent is correct that this doesn't really exist outside of assembly language anymore. There is no modern analogue, because Dijkstra's critique was so successful.


Lua uses it for error handling. It is really hard to understand the lua code. :/


Premium economy to MUC out of CLT always annoys me because the legs that hold the seats up can possibly be right in the middle of your under-seat space, making putting a briefcase under the seat in front of you impossible (the legs are unevenly spaced throughout the row so not all seats lose the space). They also have those fold-down footrests that I never actually use and take up more space. Paying more for a seat in which I might not even be able to access my work laptop is a bit annoying.

Man, I sound like a diva.


I feel like any class of seating except business suffers from that "near-seat storage shortage". I tend to carry a soft-sided satchel instead of a case for the reasons you state: it can be jammed pretty much anywhere.


Well in this case on the return flight I’m usually booked regular economy, and regular economy has no under seat space loss. I’m able to work the entire flight back from Europe on the cheaper ticket.


I don’t think you should have to hold the entire canon of programming history in your head to avoid these types of footguns. Conventional design choices of the past that lead to bugs and security vulnerabilities should be replaced.


It’s the fastest product to 100m users ever. Even if they never update their models from here on out, they have an insanely popular and useful product. It’s better at search than Google. Students use it universally. And programmers are dependent on it. Inference is cheap — only training is expensive.

To say they don’t have PMF is nuts.


> And programmers are dependent on it.

that is clearly not the case


>It’s better at search than Google

in what world? what it's good at is suggesting things to search, because half of what it outputs is incorrect, so you have to verify everything anyway

it does, slightly, improve search, but it's an addition, not a replacement.


Two years ago half of what it output was incorrect. One year ago, maybe 30% of what it output was incorrect. Currently, maybe 20% of what it tells you is incorrect.

It's getting better. Google, on the other hand, is unequivocally getting worse.


Rubbish - there's no data that shows accuracy has improved by that much.


I brought every bit as much data to the conversation as you did.


> It’s better at search than Google.

That’s hardly a high bar now.

> And programmers are dependent on it.

Entry level ones, perhaps.


I mean, these models are super useful for small defined tasks where you can check their output.

They're also useful for new libraries or things that you're not an expert in (which obviously varies by domain and person, but is generally a lot of stuff).

I'm a data person and have been using them to generate scraping code and web front-ends and have found them very useful.

But I wouldn't trust them to fit and interpret a statistical model (they make really stupid beginner mistakes all the time), but I don't need help with that.

Like, in a bunch of cases (particularly the scraping one) the code was bad, but it did what I needed it to do (after a bunch of tedious re-prompting). So it definitely impacts my productivity on side projects at least. If I was doing more consulting then it would be even more useful, but for longer term engagements it's basically just a better Google.

So yeah, definitely helpful but I wouldn't say I'm dependent on it (but I'd definitely have made less progress on a bunch of side projects without it).

Note: it's good for python but much, much less good at SQL or R, and hallucinates wildly for Emacs Lisp.


s/programmers/front\ end\ html\ authors/

