Hacker News | vintermann's comments

It's not about pedigree, but context. Without context our most beloved stories are just meaningless ink on paper.

If you commission a baker, another person, with wants and desires of their own, is involved.

If you use an AI, there isn't.

Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.


> If you commission a baker, another person, with wants and desires of their own, is involved.

> If you use an AI, there isn't.

What is the functional difference here? You are commissioning (read: prompting) someone (read: an AI) for a piece of work, artwork, or whatever. The output is out of your control, and I don't think the presence or absence of a human on the other end materially matters.

If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.


I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".

I'm asking you to explain what the actual difference is and you're avoiding the question.

If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?


And I - or did I turn this thread over to an LLM already? - am asking you a question in return, whose answer should give you the answer you want.

No please, I also agree with parent poster. Talk to the LLM, cause the human ain't listening.

The C64 palette is completely different from the EGA palette.

C64: https://lospec.com/palette-list/commodore64

Default EGA palette (which, AFAIK, Monkey Island used): https://lospec.com/palette-list/color-graphics-adapter

You can see that the C64 palette has a much more muted, pastel look and does not map one-to-one onto the CGA/default EGA palette. The C64 has far fewer vivid colors, but it has much better luminosity ramps, which can make dithering look a lot better.

In addition, the C64 restricts the number of colors you can use within the same 8x8 block, a restriction I don't think EGA had.

It takes an artist to turn a CGA/EGA image into a C64 image.
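The "muted vs. vivid" difference is easy to quantify. Here's a rough sketch (the hex values are assumptions: the standard CGA 16-color set, and the commonly cited Pepto measurements for the C64; real C64 output varied by board revision and display) that compares the spread between RGB channels, a crude proxy for vividness:

```python
# Assumed palettes: standard CGA colors, and Pepto's measured C64 values.
CGA = [0x000000, 0x0000AA, 0x00AA00, 0x00AAAA, 0xAA0000, 0xAA00AA,
       0xAA5500, 0xAAAAAA, 0x555555, 0x5555FF, 0x55FF55, 0x55FFFF,
       0xFF5555, 0xFF55FF, 0xFFFF55, 0xFFFFFF]
C64 = [0x000000, 0xFFFFFF, 0x68372B, 0x70A4B2, 0x6F3D86, 0x588D43,
       0x352879, 0xB8C76F, 0x6F4F25, 0x433900, 0x9A6759, 0x444444,
       0x6C6C6C, 0x9AD284, 0x6C5EB5, 0x959595]

def channels(rgb):
    # Split a 0xRRGGBB integer into its three 8-bit channels.
    return (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF

def saturation(rgb):
    # Crude "vividness": spread between the largest and smallest channel.
    c = channels(rgb)
    return max(c) - min(c)

def mean_saturation(palette):
    return sum(saturation(c) for c in palette) / len(palette)

print(f"CGA mean channel spread: {mean_saturation(CGA):.1f}")
print(f"C64 mean channel spread: {mean_saturation(C64):.1f}")
```

Every non-gray CGA color sits at the maximum spread the hardware allows, while the C64 colors all sit well below it, which is exactly the muted, pastel look described above.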


I think the C64 palette you linked has been "tweaked" by the artist who uploaded it; this is probably closer to the original: https://www.c64-wiki.com/wiki/Color

But your point is still valid: while IBM PCs and other machines of the time had a propensity for "pure" colors (cyan, magenta, etc., i.e. 100% on one or two of the RGB channels and 0 on the others), the C64 designers opted for more muted colors.


> this is probably closer to the original: https://www.c64-wiki.com/wiki/Color

Which one? The listed palette looks nothing like the screenshot on the same page.

Notably, the light blue (the default font and border color) can't be right, for example, nor can the dark blue (the default background).

The screenshot is how I remember the C64, and consistent with other screenshots and photos. The listed hex codes are far off.

The one posted by the person you responded to is a bit muted, but the relative colors seem closer to what I'd expect.


This is the link you're looking for: https://www.colodore.com/

If you're interested in how this palette (editor) was derived, read this: https://www.pepto.de/projects/colorvic/.

The discussion on the above site is an update of the original post by the same author: https://www.pepto.de/projects/colorvic/2001/


On my screen, that doesn't match videos of actual C64s on actual CRTs. (It also doesn't match my memory of them, but that's a whole lot less reliable.)

I would actually find it surprising if it did match videos of actual C64s on actual CRTs, because of the many conversion layers.

Videos of actual C64s on actual CRTs are pretty consistent other than brightness, though, so if it doesn't at least somewhat match those, the model is broken.

Interesting. I have the pixel art book Pepto refers to, it is very nice.

"Were I to reengage, the first step would be at least hundreds of hours of refactoring to pay off accrued technical debt."

Facebook's coding AIs to the rescue, maybe? I wonder how good all these "agentic" AIs are at dreaded refactoring jobs like these.


Refactoring doesn't just mean artificial puff-up work; it very likely means internal changes and reorganization (hence the hundreds of hours).

There are not many engineers capable of working on memory allocators, so adding more burden via agentic tooling is unlikely to produce anything of value.


> Facebook's coding AIs to the rescue, maybe? I wonder how good all these "agentic" AIs are at dreaded refactoring jobs like these.

No.

This is something you shouldn't allow coding agents anywhere near, unless you have the expert-level understanding required to maintain the project, as the previous authors have done without AI for years.


Hm, I wonder.

I've done some work in this sort of area before, though not literally on a malloc. Yes, you very much want to be careful, but ultimately it's the tests that give you confidence. Pound the heck out of it in multithreaded contexts and test for consistency.
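As a sketch of what "pound the heck out of it and test for consistency" can look like, here's a hypothetical toy free-list allocator (emphatically not a real malloc, and not the project under discussion) hammered by several threads, with a global invariant check afterwards:

```python
import random
import threading

class ToyAllocator:
    """A deliberately tiny free-list allocator over a fixed arena.

    Hypothetical stand-in for a real allocator, just to show the shape
    of a concurrent stress-plus-consistency test.
    """
    def __init__(self, size):
        self.lock = threading.Lock()
        self.free = [(0, size)]  # list of (offset, length) free ranges

    def alloc(self, n):
        with self.lock:
            for i, (off, length) in enumerate(self.free):
                if length >= n:
                    if length == n:
                        self.free.pop(i)
                    else:
                        self.free[i] = (off + n, length - n)
                    return off
            return None  # arena exhausted or too fragmented

    def dealloc(self, off, n):
        with self.lock:
            self.free.append((off, n))  # no coalescing; fine for a sketch

def stress(alloc, out, out_lock, iters=2000, max_size=64):
    # Random mix of allocations and frees, tracking live blocks.
    rng = random.Random()
    live = []
    for _ in range(iters):
        if live and rng.random() < 0.5:
            off, n = live.pop(rng.randrange(len(live)))
            alloc.dealloc(off, n)
        else:
            n = rng.randint(1, max_size)
            off = alloc.alloc(n)
            if off is not None:
                live.append((off, n))
    with out_lock:
        out.extend(live)  # hand surviving blocks to the global check

arena = ToyAllocator(1 << 20)
survivors, surv_lock = [], threading.Lock()
threads = [threading.Thread(target=stress, args=(arena, survivors, surv_lock))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Consistency check: no two live blocks, from any thread, may overlap.
survivors.sort()
for (a, n), (b, _) in zip(survivors, survivors[1:]):
    assert a + n <= b, "allocator handed out overlapping blocks"
print(f"{len(survivors)} surviving blocks, no overlaps")
```

A real allocator test would run far more iterations, vary the size distribution, and add tooling like ThreadSanitizer, but the shape is the same: hammer it concurrently, then assert the invariants.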


> ...but ultimately it's the tests that give you confidence. Pound the heck out of it in multithreaded contexts and test for consistency.

I don't think so.

Even with LLM-generated code, tests are still not enough and you cannot trust them alone. The code can pass the tests, still cause a regression, and look seemingly correct, as in this case study [0].

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...


AI is more than happy to declare the test wrong and “fix it” if you’re not careful. And the cherry on top is that sometimes the test could be wrong or need updating due to changed behavior. So…

That is correct, but it's not to the media's credit. Most journalists say basically, "Trust me, I'm the authority, I wouldn't be allowed to say this if it were simply lies. I could prove it to you but I won't, at worst I'll be forced to prove it to my peers. (And you aren't one, peasant)." They practically never link to the scientific paper they just reported on, certainly not to anything that could let us check politically controversial claims ourselves.

And how could it be otherwise? You aren't the customer. Ads, or worse, billionaire political patronage, is what pays the bills for media companies. Their authority - the blind trust people have in them - is what makes them valuable to their actual customers. They're not doing science; the last thing they want is to make it easy to check their work (although maybe I'm being too charitable to scientists here too; when they do make it easier to check their work, it's often the bare minimum, but I digress).

One of the original points of WikiLeaks was to make a kind of journalism where claims were easy to check from the sources. But you can see how controversial that was.


"Bigfoot" isn't inherently a conspiracy theory. If you say that bigfoot exists, you're wrong, but not necessarily a conspiracy theorist. To be a conspiracy theorist, you also have to posit a grand conspiracy to conceal the existence of bigfoot.

If you posit a conspiracy that only involves a few people who could plausibly coordinate to conceal the truth, that's also not a grand conspiracy, and we don't call people conspiracy theorists for believing in regular, everyday criminal conspiracies.


> If you say that bigfoot exists, you're wrong

That's not a philosophically supportable statement. "There's insufficient evidence to warrant belief in your claim" is more realistic.


It wasn't meant to be philosophical, it was meant to be practical. As a practical matter, you're wrong if you say that Bigfoot exists, or that the sun won't rise tomorrow.

> If you posit a conspiracy that only involves a few people who could plausibly coordinate to conceal the truth, that's also not a grand conspiracy, and we don't call people conspiracy theorists for believing in regular, everyday criminal conspiracies.

No, but we did call people conspiracy theorists for believing the thing Snowden subsequently showed to be real.


Not me, I didn't. That conspiracy was certainly pretty big, but there were also a ton of smaller leaks, as you'd expect with a real conspiracy of that size, so you certainly wouldn't be called nuts for assuming NSA were spying on a lot they weren't supposed to.

Security state loyalists were not nearly as influential in online discourse back then as they are now. Probably astroturfing, AI, and algorithmic amplification play a part in that.


> so you certainly wouldn't be called nuts for assuming NSA were spying on a lot they weren't supposed to.

You say that like it isn't what happened.

If anything there were more security state loyalists in the first years after 9/11 than there are now.


> If you say that bigfoot exists, you're wrong, but not necessarily a conspiracy theorist.

I’m not sure if “I’m just a cryptozoologist” is much of a vindication.


It's the record companies/publishers that don't use them. I can't think of a single record company that reports metadata well.

Spotify's Discover Weekly was genuinely good when it first came out. It was on another level from other recommendation services. Maybe 90% of the music I've bought on Bandcamp I would never have known about if it wasn't for Discover Weekly (Bandcamp's own recommendation/discovery features are lousy).

But somehow, probably from a combination of rights owners gaming it and Spotify gaming it, DW is a pale shadow of its former self.


It has been tried. I don't remember its name, but I remember that it has changed names at least once. It's a pretty obvious "app" built on Spotify's API, which they opened up a few years ago.

If you're implementing it for ComputerCraft anyway, there's no reason to stick to the standard. It's well known that bzip2 has a couple of extra steps that don't improve the compression ratio at all.

I suggest implementing Scott's bijective Burrows-Wheeler variant on bits rather than bytes, and doing bijective run-length encoding of the resulting string. It's not exactly on the Pareto frontier, but it's fun!
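For reference, the plain (non-bijective) byte-oriented version is only a few lines. Scott's bijective variant replaces the end-of-string sentinel with a Lyndon-word factorization (and would operate on bits); this sketch deliberately sticks to the simpler textbook form, with "banana" as my own example input:

```python
def bwt(s):
    """Plain Burrows-Wheeler transform with an explicit sentinel.

    Scott's bijective variant avoids the sentinel via Lyndon-word
    factorization; this is the simpler textbook form.
    """
    s = s + "\0"  # sentinel sorts before every other character
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def rle(s):
    """Naive run-length encoding of the transformed string."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

print(rle(bwt("banana")))
```

The point of the transform is visible even at this size: the BWT groups equal characters into runs, which is what makes the run-length (and, in bzip2, move-to-front plus Huffman) stage effective.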

