Hacker News | _dwt's comments

Not the original commenter, but you may have noticed a wee kerfuffle between a large nation-state's "Secretary of War" and a frontier model provider over whether the model's licensing would permit autonomous lethal weapon systems operated by said - and I cannot emphasize the middle word enough - large _language_ model.

Gary Marcus is living in Claude's head rent-free?

It certainly got Claude paid $27.58 towards the rent.

This one IIRC: https://samkriss.substack.com/p/the-law-that-can-be-named-is... He writes about it here, a little: https://samkriss.substack.com/p/against-truth

In general long meandering semi-factual pieces like this, with odd historical excursions, are one of his things and I don't know anyone else that does it quite the same. (Hmm... oddly enough Scott Alexander, who he cites here, also does some similarly Borgesian stuff, but with a different bent.) One of my favorite writers and I recommend pretty much everything he's done since the early 2010s.


Descent was a huge part of my childhood (and surprisingly my little kids are now big fans as well)! Unfortunately this stutters pretty badly, with audio issues as well, for me on Firefox on Linux. As a huge fan of three.js and other past work... I guess I'll blame Claude?


i have no issues in brave on linux mint


This (LOC is an anti-metric, Goodhart's Law, etc.) is true, but I'm reaching the point of "fuck nuance" when I see so many articles superficially critical of AI which contain things like this:

> If AI-generated code introduces defects at a higher rate, you need more review, not less AI.

I think that is very much up for debate despite being so frequently asserted without evidence! This strikes me as the same argument as we see about self-driving cars: they don't have to be perfect, because there is (or we can regulate that there must be) a human in the loop. However, we have research and (sometimes fatal) experience from other fields (aviation comes to mind) about "automation complacency" - the human mind just seems to resist thoroughly scrutinizing automation which is usually right.


I don't disagree with you entirely here. I probably wasn't clear enough on what I was trying to convey.

Right now AI / agentic coding doesn't seem to be a train we are going to be able to stop, and at the end of the day it is a tool like any other. Most of what seems to be happening is people letting AI fully take the wheel: not enough specs, not enough testing, not enough direction.

I keep experimenting and tweaking with how much direction to give the AI in order to produce less fuckery and more productive code.


Sorry for coming off combative - I'm mostly fatigued from "criti-hype" pieces we've been deluged with the last week. For what it's worth I think you're right about the inevitability but I also think it's worth pushing a bit against the pre-emptive shaping of the Overton window. I appreciate the comment.

I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.

I also think that there is probably a sweet spot for automation that does one or two simple things and fails noisily outside the confidence zone (aviation metaphor: an autopilot that holds heading and barometric altitude and beeps loudly and shakes the stick when it can't maintain those conditions), and a sweet spot for "perfect" automation (aviation metaphor: uh, a drone that autonomously flies from point A to point B using GPS, radar, LIDAR, etc...?). In between I'm afraid there be dragons.
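The "simple automation that fails noisily" end of that spectrum could be sketched like this (a purely illustrative toy in Java, with made-up names - not any real avionics API):

```java
// Toy sketch of "does one simple thing and fails noisily outside the
// confidence zone": the controller holds a single target value and
// refuses to guess, throwing loudly rather than silently degrading.
public class HeadingHold {
    private final double target;        // desired heading, degrees
    private final double maxCorrection; // confidence zone, degrees

    public HeadingHold(double target, double maxCorrection) {
        this.target = target;
        this.maxCorrection = maxCorrection;
    }

    /** Returns a proportional correction, or throws instead of coping. */
    public double correction(double currentHeading) {
        double error = currentHeading - target;
        if (Math.abs(error) > maxCorrection) {
            // The "stick shaker": the human is forced back into the loop.
            throw new IllegalStateException(
                "Cannot hold heading: error " + error
                + " exceeds limit " + maxCorrection);
        }
        return -error; // steer back toward the target
    }
}
```

The design point is the exception path: inside its narrow envelope the automation is trivially predictable, and outside it the handoff to the human is unmissable rather than gradual.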


@_dwt don't worry, you didn't. I appreciate good discussion and criticism. The publication is new and I'm still trying to calibrate my voice and style for it.

>I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.

I don't think anyone knows for sure; we're all in the same boat trying to figure out how best to work with AI, and the pace of change is making it incredibly difficult to keep up or try things. I'm trying a bunch of stuff at the same time:

- https://structpr.dev/ - to try to rethink how we approach PR reading and organizing review (dog-fooding it right now, so it's mostly alpha)

- I have an article scheduled for next week about StrongDM's software factory; there are some interesting ideas there, like test holdouts

- Some experiments in the Elixir stack for code generation and verification that go beyond "it looks great". AI can definitely create code that _looks_ great, but there is plenty of research showing that AI-generated code and tests can carry a high degree of false confidence.


I have the strangest feeling that at some point this received a 's/—/...' pass.

That aside, while I'm "old" enough to remember this kind of cultural/vibe-based rivalry between programming language communities, and read enough to know it predates the greyest of living greybeards (TIMTOWTDI vs the Zen of Python, "Real Programmers Don't Write Pascal", "BASIC considered harmful"), I am not sure that this works any more as an argument.

It's a bit tone-deaf to suggest that the difference between Ruby and other communities is that Rubyists are the (only?) ones who care about "how code feels"; that's a pretty core part of the sales pitch for every PL I've ever seen promoted. I am actually nervously awaiting the first big PL that advertises on the basis of "you may not like it, but LLMs love it".

I suspect the real problem is that data science + ML/AI drove (roughly) a bajillion people to Python and LLM training corpus makeup will keep them there. Meanwhile all the Rubyists (ok, I'm being a little inflammatory) that cared about performance or correctness went to Rust (or had already left for Erlang or Haskell or Elm or... whatever). Who's left over there in the dynamic-types-or-bust Ruby world? DHH? (Not an uncontroversial figure these days.)


Great article. I'd never thought about a spacecraft ADI having a third axis. Sadly, a nitpick - Bill Lear's F-5 autopilot was not, as far as I can tell, in any way connected to the Northrop F-5 fighter jet.


Thanks. You are correct about the F-5 autopilot, so I fixed that. It turns out that it was used in planes such as the C-47, C-60, C-45, and B-26, but is unrelated to the F-5.


Oh wow, are you me - I had an almost identical SAT experience. Oddly enough compared to another poster in this thread, I love functional programming and Haskell and studying things like dependent types in a programming context has helped me patch up my crappy understanding of "actual math", proofs, etc.


As an "unlucky" person, I can't help but wonder: what about an alternative version of the experiment where the newspaper contained the big "This newspaper has 43 photographs" text - followed by 44 photographs?

(This "unlucky" - or perhaps "paranoid" - mindset has, frankly, made me a damn good programmer and debugger.)


Especially since commonly followed wisdom seems to be "trust, but verify."


Tbh, if I noticed the text(s), I'd still have counted the photos and still looked at every page in case there were other prizes.


See other top-level comment - direction was to keep counting, and you would indeed have won another prize (although it's unclear whether they actually paid out).


Hmmm, "Rama is programmed entirely with a Java API – no custom languages or DSLs" according to the landing page, but this sure looks like an embedded DSL for dataflow graphs to me - Expr and Ops everywhere. Odd angle to take.


I consider a "DSL" to be something that's its own language with its own lexer and parser, like SQL. The Rama API is just Java – classes, interfaces, methods, etc. Everything you do in Rama, from defining indexes to performing queries to writing dataflow ETLs, is done in Java.


This is usually referred to as an "embedded DSL" - you have a DSL embedded in a normal programming language using its first class constructs.


Yep, the term DSL originally referred to custom languages; using it for these kinds of fluent APIs came later. Using it the original way, unqualified, is fine imo.


Meh. By this definition all libraries expose an "embedded DSL" — their API. I'm honestly not sure this is a useful definition.


Whether you like it or not, internal DSLs became a thing with Ruby back in the day. And these days things like Kotlin also lend themselves pretty well to creating internal DSLs. Java is not ideal for this; Kotlin and Ruby have a few syntax features that make it very easy.


What's the distinguishing difference between a regular library and an embedded DSL?


The difference is making an effort to expose the features of the library via a syntactically friendly way. Most Kotlin libraries indeed have nice APIs that are effectively mini DSLs.

If you need an example, Kotlin uses a nice internal DSL for HTML where you write things like

  div {
     p {
        +"Hello world"
     }
  }

This is valid Kotlin that happens to construct a DOM tree. There is no magic going on here, just usage of a few syntax features of the language. div and p here are normal functions that take a receiver block as the last parameter, which receives an instance of the element being created. The + in front of the string works because the element's implementation defines a function with a signature like this:

   operator fun String.unaryPlus()

The same code in Java would be a lot messier because of the lack of syntactic support for this. You basically end up with a lot of method chaining, semicolons, and Java's convoluted lambda syntax. The article has a few examples that would look a lot cleaner if you rewrote them in Kotlin.
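To make the "messier in Java" point concrete, here is a hypothetical Java builder for the same DOM tree (all names here are made up for illustration, not from any real library) - the nesting survives, but only as explicit method chaining:

```java
// Hypothetical embedded-DSL-ish builder in Java. Without receiver
// blocks or operator overloading, composition becomes method chaining.
public class Element {
    private final String tag;
    private final StringBuilder body = new StringBuilder();

    private Element(String tag) { this.tag = tag; }

    public static Element tag(String name) { return new Element(name); }

    // Append a rendered child element.
    public Element child(Element e) { body.append(e.render()); return this; }

    // Append raw text (what Kotlin's unaryPlus sugar does for free).
    public Element text(String s) { body.append(s); return this; }

    public String render() { return "<" + tag + ">" + body + "</" + tag + ">"; }
}

// Usage:
//   Element.tag("div")
//       .child(Element.tag("p").text("Hello world"))
//       .render();  // "<div><p>Hello world</p></div>"
```

Functionally equivalent, but the "shape" of the document no longer reads directly out of the code the way it does in the Kotlin block form.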


Odd thing to split hairs over.


When someone makes a distinction that you don't immediately appreciate, maybe don't just dismiss it as splitting hairs, as if the world was a simple place.


It’s not a small detail. It’s one of the headline claims!

