
I feel this kind of critique, which I see often as a response to articles like this, is so easy as to be meaningless. How is one supposed to ever talk about general principles without using simplified examples?

Aren't you just saying "Real code is more complicated than your toy example"?

Well sure, trivially so. But that's by design.

> Perfect example is the "redundancies and dead conditions" mentioned: we're making the really convenient assumption that `g` is the only caller of `h` and will forever be the only caller of `h` in order to claim we exposed a dead branch using this rule...

Not really. He's just saying that when you push conditional logic "up" into one place, it's often more readable and sometimes you might notice things you otherwise wouldn't. And then he created the simplest possible example (but that's a good thing!) to demonstrate how that might work. It's not a claim that it always will work that way or that real code won't be more complicated.


Well, I guess some comments need to be considered in totality, rather than reduced to contextomies that reinforce whatever point you're trying to make :)

I spelled out the problem pretty clearly.

> I used to try and form these kinds of rules and heuristics for code constructs, but eventually accepted they're at the wrong level of abstraction to be worth keeping around once you write enough code.

It's the wrong level of abstraction to form (useful) principles at, and the example chosen is just a symptom of that.

I'm not sure why we're acting like I said the core problem with this article is that it uses simple examples.


Because that was the only evidence you offered to back up the claim you just quoted. I understood the claim... it might be interesting if you presented some specific example of your own as counter-evidence, instead of straw-manning the article's intentionally simple example as too simplistic.

Your argument sounds like, "I'm so smart and enlightened, I've moved beyond simple heuristics like this." Okay, but the author is also a smart, experienced programmer and is apparently still finding them useful. I am also experienced, and personally find them useful.

I'm not against some argument that there is actually an even better, deeper way to look at these things. But you didn't make that argument. And, perhaps unfairly (you tell me) I suspect your response to that will be that it's all too gossamer, or would take too long to explain....


You're assuming the goal of the marketing is merely "to be remembered". But it's not. It's to be remembered in some positive way, or at least some way that still increases sales. This campaign will live forever as a laughing stock. That wasn't its intention.

Understood. But also “there’s no such thing as bad publicity.”

Scare quotes because obviously there is, but I don’t think this example crosses the line.


Even if one takes it as being on that side of the fence, which I'm not sure I actually buy myself, I'm still not convinced that getting a niche of folks who follow marketing and brand design documents to form such an association would deserve to be called wildly successful. Usually you want a significant portion of the population to have a common association with a brand as large as Pepsi, not a small portion to have a rare association.

I am so happy this is the top comment.

My experience was...

Skim through sentence after sentence of award-winning inanity like "Expressive design makes you feel something" as my powerful MacBook stumbles and wheezes...

Then think: "I like how default scrolling makes me feel!"


This is the narrative, of course.

In practice, it is sometimes true, and often not.

You can't overstate how often decisions at large orgs are driven by hype, follow-the-herd thinking, or "use popular framework X because I won't get in trouble if I do" mentalities. The added complexity of tools can easily swamp productivity gains, especially with no one tracking these effects. And despite being terrible decisions for the business, they can align with the incentives of individual decision makers and teams. So "people wouldn't do it if it wasn't a net positive" is not an argument that always holds.



This is not borne out by immigration data (to say the least):

https://www.pewresearch.org/short-reads/2022/01/27/key-findi...


To rephrase crudely: "inline everything".

This is infeasible in most languages, but if your language is concise and expressive enough, it becomes possible again to a large degree.

I always think about how Arthur Whitney just really hates scrolling. Let alone 20 open files and chains of "jump to definition". When the whole program fits on a page, all that vanishes. You navigate with eye movements.


> To rephrase crudely: "inline everything".

Sounds a lot like what Forth does.


Indeed numpy is essentially just an APL/J with more verbose and less elegant syntax. The core paradigm is very similar, and numpy was directly inspired by the APLs.
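
To make the claim concrete, here's a toy comparison of my own (not from the thread): an outer addition table, a one-liner in both, just much wordier in numpy.

    # Rough APL equivalent (index origin 1):  ∘.+⍨⍳5
    import numpy as np

    n = np.arange(1, 6)             # 1 2 3 4 5
    table = np.add.outer(n, n)      # 5x5 addition table
    print(table)

Same whole-array paradigm; the difference is mostly surface syntax.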


People actually managed to channel the APL hidden under numpy into a full array language implemented on top of it: https://github.com/briangu/klongpy


Time is indeed a flat circle.


I don't know APL, but that has been my thought as well: if APL does not offer much over numpy, I'd argue that the latter is much easier to read and reason through.


If you acquire fluency in APL -- which granted takes more time than acquiring fluency in numpy -- numpy will feel bloated and ungainly. With that said, it's mostly an aesthetic difference and there are plenty of practical advantages to numpy (the main one being there is no barrier to entry, and pretty much everyone already knows python).


I thought that too, but after a while the symbols become recognizable (just like math symbols), and then it's a pleasure to write if you have completion based on their names (the Uiua developer experience with Emacs). The issue with numpy is the intermediate variables you have to use because of Python.


Not being glib, but this is like the famous comment when Dropbox was first announced: "you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem". [1]

You might say, "but ChatGPT is already as dead simple an interface as you can imagine". And the answer to that is, for specific tasks, no general interface is ever specific enough. So imagine you want to use this to create "headshots" or "linkedin bio photos" from random pictures of yourself. A bespoke interface, with options you haven't even considered already thought through for you, and some quality control/revisions baked into the process, is something someone might pay for.

[1] https://news.ycombinator.com/item?id=9224


> 3. Learned behavior. It's ironic how even something like ChatGPT (it has hundreds of chats with me) barely knows anything about me & I constantly need to remind it of things.

I've wondered about this. Perhaps the concern is that saved data will eventually overwhelm the context window? And so you must be judicious in the "background knowledge" about yourself that gets remembered, and this problem is harder than it seems?

Btw, you can ask ChatGPT to "remember this". IME the feature doesn't always seem to work, but don't quote me on that.


Yes, but this should be trivially done with an internal `MEMORY` tool the LLM calls. I know that the context can't grow infinitely, but this shouldn't prevent filling the context with relevant info when discussing topic A (even a lazy RAG approach should work).
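
A minimal sketch of what I have in mind, with hypothetical names and a deliberately lazy keyword-overlap "retrieval" standing in for real RAG:

    # Toy sketch only; a real system would use embeddings for retrieval.
    class MemoryTool:
        def __init__(self):
            self.notes = []                      # saved facts about the user

        def remember(self, note: str) -> None:
            self.notes.append(note)

        def recall(self, query: str, k: int = 3) -> list[str]:
            # crude relevance: count words shared between query and note
            q = set(query.lower().split())
            return sorted(
                self.notes,
                key=lambda n: len(q & set(n.lower().split())),
                reverse=True,
            )[:k]

    mem = MemoryTool()
    mem.remember("user prefers concise answers")
    mem.remember("user is building a CLI tool in Rust")
    print(mem.recall("which language is the user's project in"))

Only the recalled notes get prepended to the prompt for the current topic, so the context grows by what's retrieved, not by the whole history.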


What you're describing is just RAG, and it doesn't work that well. (You need a search engine for RAG, and the ideal search engine is an LLM with infinite context. But the only way to scale LLM context is by using RAG. We have infinite recursion here.)


You are asking for a feature like this. Future advances will help with this.

https://youtu.be/ZUZT4x-detM

