Hacker News | tgv's comments

It looks very convincing, and funky. How does the simulation work?

I capture each frame and process it pixel by pixel[1]. There are three inputs to the simulation:

1. The gain knob controls the overall intensity of the effect

2. The selected pins / effects are applied to the frame. I describe a couple of the effects below:

For HClock: If the horizontal clock pin is selected, I cut the frame into variable height slices (some are 2-3px, others 8-20px). For each slice, I calculate a random shift (up to ~20% of the frame width) and move the slice to the left or right by the shift value. Then I randomise between keeping the slice normal (70% of the time), black (15%), or a random color band (15%). I then add a magenta tint + darken every other line to simulate a broken TV signal.
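The slice logic above can be sketched roughly like this (a hypothetical reconstruction from the description; the function name, exact thresholds, and slice-size split are my guesses, not the author's code):

```javascript
// Hypothetical sketch of the HClock effect: cut the frame into
// variable-height slices, shift each horizontally by up to ~20% of the
// frame width, and occasionally replace a slice with black or a colour band.
// `rand` is a 0..1 random source (ideally deterministic per frame).
function hclockSlices(frameHeight, frameWidth, rand) {
  const slices = [];
  let y = 0;
  while (y < frameHeight) {
    // Mix of thin (2-3 px) and thicker (8-20 px) slices.
    const h = rand() < 0.5
      ? 2 + Math.floor(rand() * 2)
      : 8 + Math.floor(rand() * 13);
    // Random shift, left or right, up to ~20% of the frame width.
    const shift = Math.floor((rand() * 2 - 1) * frameWidth * 0.2);
    // 70% normal, 15% black, 15% random colour band.
    const r = rand();
    const mode = r < 0.7 ? "normal" : r < 0.85 ? "black" : "band";
    slices.push({ y, height: Math.min(h, frameHeight - y), shift, mode });
    y += h;
  }
  return slices;
}
```

Each slice descriptor would then drive the actual pixel copy (plus the magenta tint and every-other-line darkening) in the per-frame pass.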

For OD: If the output drain pin is selected, I compute a random global offset and per-line offset jitter. Then, for each pixel, I shift the red channel to the left and the blue channel to the right by the jitter value.
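That red/blue split is a classic chromatic-aberration pass over RGBA pixel data. A minimal sketch (my own illustration, assuming ImageData-style RGBA layout; not the author's actual code, and it applies one jitter value rather than the global + per-line combination described):

```javascript
// Hypothetical sketch of the OD effect: sample red from the right
// (so it appears shifted left) and blue from the left (shifted right),
// leaving green and alpha in place. `pixels` is flat RGBA data as in
// ImageData.data, `width` is the row width in pixels.
function chromaShift(pixels, width, jitter) {
  const out = new Uint8ClampedArray(pixels.length);
  const rows = pixels.length / (width * 4);
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // Clamp sample positions to the row edges.
      const rx = Math.min(width - 1, Math.max(0, x + jitter));
      const bx = Math.min(width - 1, Math.max(0, x - jitter));
      out[i]     = pixels[(y * width + rx) * 4];     // R from the right
      out[i + 1] = pixels[i + 1];                    // G unchanged
      out[i + 2] = pixels[(y * width + bx) * 4 + 2]; // B from the left
      out[i + 3] = pixels[i + 3];                    // A unchanged
    }
  }
  return out;
}
```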

After the effects are applied, I add global noise and some corrupt lines (on ~30% of frames: random horizontal lines of magenta/pink/white, shifted or added).

3. Finally, a global hue shift is applied based on the second knob.
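A global hue shift can be done without a full RGB↔HSL round trip by rotating each pixel around the grey diagonal. The author doesn't say how they implement it; this sketch uses the standard hue-rotation matrix from the SVG/CSS `hue-rotate` filter:

```javascript
// Hypothetical hue-shift pass: rotate a pixel's hue by `degrees`,
// using the luminance-preserving rotation matrix defined for the
// SVG feColorMatrix hueRotate / CSS hue-rotate() filter.
function hueRotate(r, g, b, degrees) {
  const a = (degrees * Math.PI) / 180;
  const cos = Math.cos(a), sin = Math.sin(a);
  const m = [
    0.213 + cos * 0.787 - sin * 0.213, 0.715 - cos * 0.715 - sin * 0.715, 0.072 - cos * 0.072 + sin * 0.928,
    0.213 - cos * 0.213 + sin * 0.143, 0.715 + cos * 0.285 + sin * 0.140, 0.072 - cos * 0.072 - sin * 0.283,
    0.213 - cos * 0.213 - sin * 0.787, 0.715 - cos * 0.715 + sin * 0.715, 0.072 + cos * 0.928 + sin * 0.072,
  ];
  return [
    m[0] * r + m[1] * g + m[2] * b,
    m[3] * r + m[4] * g + m[5] * b,
    m[6] * r + m[7] * g + m[8] * b,
  ];
}
```

At 0° the matrix is the identity, and greys lie on the rotation axis, so they pass through unchanged at any angle.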

One thing I realised is that Math.random() produced a lot of noise, and the flow between frames looked disorienting. So I used a simple integer hash function to produce a more "deterministic" random number, and the frames looked more stable/consistent.
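The trick is to derive the "random" value from stable inputs (say, frame index and line number) instead of a fresh Math.random() call, so the same slice gets the same glitch on every frame. A sketch using one common 32-bit integer hash (the author doesn't say which hash they used; the constants here are from a well-known xorshift-multiply variant):

```javascript
// A simple 32-bit integer hash (xorshift-multiply style). Deterministic:
// the same input always produces the same output, unlike Math.random().
function hash(x) {
  x = Math.imul(x ^ (x >>> 16), 0x45d9f3b);
  x = Math.imul(x ^ (x >>> 16), 0x45d9f3b);
  x ^= x >>> 16;
  return x >>> 0; // interpret as unsigned 32-bit
}

// Map (frame, line) to [0, 1) like Math.random(), but stable across
// repeated calls with the same inputs, so the glitch pattern doesn't
// flicker from frame to frame.
function detRandom(frame, line) {
  return hash(Math.imul(frame, 0x9e3779b1) ^ line) / 4294967296;
}
```

Because both steps (xorshift and multiply by an odd constant) are bijective on 32-bit integers, distinct inputs never collide in `hash` itself.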

[1] I should probably look for optimisations to prevent the device from heating up after a few minutes.


[0-9A-Z] is 36 symbols, which doesn't fit in 5 bits, so there's no room left over for shift/ctrl bits.

Not the best argument, as the answer is breathing.

If Instagram is as omnipresent as breathing, we still have a problem.

Sure, I totally agree with the sentiment, but it's not the right way to formulate an objection.

I don't think it's okay to breathe for 16 hours a day.

Author certainly has a point. The central idea is (IMO) best expressed in this quote:

> to focus on its [i.e., AI's] benefits to you, you’re forced to ignore its costs to others.


Also works if you substitute "technology" for "AI".

True, and other changes in society as well. But in contrast to e.g. the introduction of Phillips screws, AI (or rather: LLMs) is a biggy, and (IMO) one where the negatives clearly outweigh the positives.

Definitely agree that LLMs are a big deal, but I'm holding out hope that they are a net positive. Even the current wave of false/fake news content could have benefits if it results in emphasizing chains of trust to reliable information.

Are there other technologies where you think the negatives outweigh the positives? Leaded gasoline and paint come to mind for me.


I'm not sure that's true. Like I find the Google AI handy but that doesn't mean I have to ignore that others may be annoyed by it.

It's of course a quote of a handful of words from a 5000 word article. Not to be snarky, but did you read it?

You could say that it waged a silent war, and our kids' attention spans lost.

Of course there are cases where SSR makes sense, but servers are slow; the network is slow; going back and forth is slow. The browser on modern hardware, however, is very fast. Much faster than the "CPU"s you can get for a reasonable price from data centers/colos. And they're mostly idle and have a ton of memory. Letting them do the work beats SSR. And since the logic must necessarily be the same in both cases, there's no advantage to be gotten there.

If your argument is that having the client do all the work to assemble the DOM is cheaper for you under the constraints you outlined then that is a good argument.

My argument is that I can always get a faster time to paint than you if I have a good cluster of back end services doing all that work instead of offloading it all to the client (which will then round trip back to your “slow servers over a slow network”) anyway to get all the data.

If you don’t care about time to paint under already high client-side load, then just ship another JS app, absolutely. But what you’re describing is how you deliver something as unsatisfying as the current GitHub.com experience.


Audio AI companies are just another death star, intent on reducing human creativity to "make a song like Let it be, but in the style of Eminem, and change the lyrics to match the birthday of my mother in law". The only rebels are musicians resisting this hedge-fund driven monstrosity.

I don't think "separation of concerns" is entirely dead. Ideally, the CSS is readable and maintainable, and that implies structure. If you have a bunch of (co-)related components, you don't want to find/replace tailwind class names when you need to change the layout. So you separate that part of the layout in classes based on (layout!) functionality. You can see that as "concerns."

Components are the tool you’re looking for. For the rest there’s CSS variables. Soon we may have @mixin.

> Us having to specify things that we would never specify

This is known, since 1969, as the frame problem: https://en.wikipedia.org/wiki/Frame_problem. An LLM's grasp of this is limited by its corpora, of course, and I don't think much of that covers this problem, since it's not required for human-to-human communication.


A modern LLMs corpora is every piece of human writing ever produced.

Not really, but even if it were true, I don't think humans ever explained to each other why you need to drive to the car wash even if it's 50 meters away. It's pretty obvious and intuitive.

There have to be a lot of mentions of the purpose and approximate workings of a car wash, as well as lots of literature showing that when you drive somewhere, your car is now also at that place, while walking does not have the same effect.

It's then up to the model to make the connection "At the car wash people wash their car -> to wash your car you need your car to be present -> if you drive there your car will be there"


No, I think they have explained this to each other (or something like it). But as you suggested, discussion is a lot more likely when there are corner cases or problems.

Apart from the fact that that is utterly, demonstrably false, and the fact that corpora is plural, the fact still remains that we don't speak in those texts about things that don't need to be spoken about. Hence the LLM will miss that underlying knowledge.

> "we don't speak in those text about things that don't need to be spoken about"

I'd imagine plenty of stories contain something like "I had an easy Saturday morning, I took my car to the carwash and called into a cafe for breakfast on my way home".

Plenty of instructables like "how to wash a car: if there's no carwash close enough for you to bring your car, don't worry, all you need is a bucket and a few tools..."

Several recipe blogs starting "I remember 1972 when grandpa drove his car to the carwash every afternoon while grandma made her world famous mustard and gooseberry cake, that car was always gleaming after he washed it at BigBrand CarWash 'drive your car to us so we can wash it' was their slogan and he would sing it around the house to the smell of baked eggs and mustard wafting through the kitchen..."

And innumerable SEO spam of the kind "Bob's car wash, why not bring drive take ride carry push transport your car automobile van SUV lorry truck 4by4 to our Bob's wash soap suds lather clean gleaming local carwash in your area ford chevvy dodge coupe not Nokia iphone xbox nike..."

against very few "I walked to the carwash because it was a lovely day and I didn't want to take the car out".


They're ending webhooks? Bummer. By the looks of it, they're going to introduce a more complex alternative. No, two, because why not. Why make one thing that works when you can make two things that each half work, right?
