Hm so not only is it possible to manipulate scores on HN with weird scripts, but mods only noticed this when users pointed it out in this instance, with no automation or logging there to detect something like this?
Do you suppose motivated individuals might have used something like this to bury wrongthink at a time when mods insisted such a thing wasn't happening, and even if it was, it was happening to all tribes equally so really it wasn't a big deal?
It's not that AI can't convince a novice that what comes out is passable.
It's that experts in a field generally agree that what comes out is insidiously hollow garbage.
This isn't a "semi-religious" belief. It's linear token soup and diffusion bakes running headfirst into actual expertise, second and third order effects, refined skill and taste, and so on.
If you actually want to see civilization advance, you cannot rely on machines that merely mash up existing intellectual output while pretending to have expertise.
We already had that in the form of art school avant-gardism. AI is just style transfer of that, with corporate sycophancy and valley hyperbole as a veneer.
But do you really believe it will stay that way? What do you think models will be 10 years from now? (And not only models; we must include the processes and tools around them.) Developers were thinking this until, recently, some sort of sudden switch flipped to "shit, it's good enough"; run that through a 50x loop and suddenly it becomes "shit, it's actually great". That progression proves, imo, that it's a matter of time before it's not hollow garbage but actually innovative and expert in its field.
I still think you are missing entirely the point about music or any art in general.
It doesn't matter how technically innovative, or how much expertise, a model has, while an AI is not a consciousness that can express itself it will be hollow. There's no way around that.
If some form of AI becomes conscious, and can express itself through whatever art form it conjures for that, why would it even use music? Music is human; it's tuned to how our brains work and perceive sounds. I'd be much more interested to discover what art forms another form of consciousness that we can communicate with could come up with on its own.
I can't fully agree with the hollow part. When AI resonates with me about real-life issues (I understand it's just a machine without thoughts), it's pretty expressive, spot-on, and genuinely useful. I don't really see why it couldn't be the same with music; it can already write completely unique pieces that are very entertaining and full of emotion (even though they are "fake")...
The brain perceiving sounds a certain way is, in the end, just data that can be mapped as well. An AI can make us laugh because it understands speech really well (and will be a thousand times better someday), so what's the actual difference with music?
Let me give you another example: there's a meme about older folks getting bamboozled by AI images (especially doomsday stuff), which proves those images trigger genuine emotions in them. What's the difference whether that image actually exists or not (or, let's say, a human photographed it)?
What if that does not matter to someone? I know my opinion can't be common, but I cannot stand live music. I dislike the sound quality, the differences from the recording, the crowds, the cost, and more.
I know not everyone enjoys concerts, but it’s fundamental to my listening experience. That aside, I have no interest in music or art of any kind generated by AI. Other folks might, but I’ll have nothing to do with it.
The difference is the undeniable reality behind it.
You are confusing the topography of it with the substance. What's the point of something without substance? Without meaning? It's just fake. Whenever you point out to someone that an image that brought them joy is fake, generated by AI, it immediately changes the feeling they had. It doesn't bring the same awe anymore; awe is reserved for what is real. It might bring awe in the sense of "woah, a computer can do that," but that's a different feeling from being in awe of the story the image created.
How can it be full of emotion if it's created by something without emotion? It's just a mimicry of emotion. I really cannot understand how you don't feel that, knowing it's not created by another being. Being real is the whole point; an emotion triggered by something not real, not experienced, transformed, and communicated by someone else is inevitably hollow.
Like: how can AI know what it is to feel in love? Or to feel the loss of a loved one? Or to feel despair about something? Or to feel depressed? Or to feel extreme joy? Why would you listen to a song telling you a story to evoke an emotion about something that simply does not exist? There is no experience being transmitted; it's purely a hollow amalgamated mimicry of the experiences that were ingested, and the output has absolutely no emotion, just a synthetic mimesis of it.
You are enjoying the mimicry, it's entertaining, but I really would like for you to ask yourself deeper questions about this rather than be impressed by the surface of it.
> The brain perceiving sounds a certain way in the end is just data, that can be mapped as well
I completely understand your point of view, but I can't genuinely agree with:
> How can it be full of emotion if it's created by something without emotion?
A nice crystal, or a nice rock (something devoid of emotion or feeling), is used as art, and it also triggers emotions in individuals. This thing doesn't have a consciousness, nor does it understand anything, but it's still able to change human brain chemistry. Take an AI that acts as a therapist, saying the EXACT same thing a real therapist would. Let's even take it further: the therapist is on a video call, so there's a proper visual representation, and now it's 1:1 AI generated with zero flaws (you'd think it's a human with the exact same speech as that therapist). Why would the experience not be transmitted? Tons of people say things they don't really mean, and those thoughts are transmitted successfully, felt or not.
AI can evoke pain, emotion, distress, happiness, and so on. I genuinely try to think about what's behind it, but what I feel is that, in the end, humans aren't so magical. It's like watching a beautiful woman being all "fake" with heavy make-up: most humans can still appreciate it, despite knowing it's all BS. People lie as well, and that's very deceptive. Say someone claims he is so happy when in reality he just isn't; you felt something for him that was just false (a mimicry), and this is kind of our normal.
What if you never knew? Say you are so fond of an artist/person, but in the end you discover it's 100% AI without human supervision. Then what? Those were real emotions you felt, not entertainment; you RELATED to that "person", you felt their pain.
And one more thing: why couldn't I teach an AI to transmit my own knowledge, speak to it for decades, write to it for decades, and then have it mimic everything, mimicking the "truth" about my inner self? Why would that not be valid? Isn't that exactly what the Bible is doing (I'm not religious)? People seem to find it valid.
After making one of the least-bad rich text editors out there on the web, they needed to keep their developers and designers busy (while not having time to fix privacy bugs).
Like every other AI tool, it mainly seems to exist to produce productivity porn. Summarize the meetings nobody could be bothered to summarize. Write the docs nobody can be bothered to read or write. Communicate as an end, not a means, because the company you work for has transitioned into the dead-weight phase.
This article's timeline is mostly accurate, but contains a few inaccuracies:
- Unified toolbar and titlebar dates from much earlier... it was 10.4, not 10.7.
- The brushed metal look was supposed to be applied to "appliance-like" apps as opposed to "document-like" apps... But Apple was never able to stick to that rule themselves.
There are a few design ideas that always turn out to be bad when implemented, but which designers seem to have to learn the hard way. Transparency is the biggest one, but I guess so is excessive rounding now.
>Despite the quick spread of agentic coding, institutional inertia, affordability, and limitations in human neuroplasticity were barriers to universal adoption of the new technology.
Blaming lack of adoption purely on regressive factors follows the same frame that AI firms set. It isn't very effective satire for that reason.
It couldn't be that there is something essential and elementary wrong with the output, no... all these experienced experts are just troglodytes who are wrong, and we should instead tag along with the people who offloaded the parts of their work they found tough to a machine the first chance they got.
There's no such thing as ape coding. There's still just coding, and vibe coding.
The person this thread is about publicly spoke out against and criticized Watkins, 8chan, and other forums that promote hate speech. He did so multiple times, in newspapers and in documentaries about the topic(s).
He went into business with the wrong people, and they followed him for years after that, with bogus lawsuits, showing up at his doorstep to intimidate him, threats, etc.
Personally, I wanted to say that he was a person who strongly believed in the idea of open debate and cultural exchange. So much so that he saw too late what the Watkins family and their qanon/pol/whatever movement were planning to do with his boards.
>Before you get your pitchforks out and call me an AI luddite, I use LLMs pretty extensively for work.
Chicken.
Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?
No, we truly live in a post-shame society, and that's definitely not a good thing. Shame is (or was) an important tool for enforcing social norms, and the acceptance of AI slop (both writing and code) is only the latest case where a sufficiently large percentage of people think anything goes, to the point that it feels literally pointless to speak up at times.
Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.
The reason the filters used in the post are easily reversible is because none of them are binomial (i.e. the discrete equivalent of a gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and thus is what you get when you repeatedly average each pixel with its neighbor (in 1D).
When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0...".
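This is easy to check for yourself. A throwaway numpy sketch (variable names mine, not from the post):

```python
import numpy as np

# Nyquist-frequency signal: -1, +1, -1, +1, ...
x = np.array([-1.0, 1.0] * 8)

# Binomial blur [1, 2, 1]/4: a row of Pascal's triangle, equivalent to
# two passes of averaging each sample with its neighbor.
kernel = np.array([1.0, 2.0, 1.0]) / 4

# Every output sample is exactly 0: each window computes
# 0.25*(-1) + 0.5*(+1) + 0.25*(-1) = 0 (or its sign-flipped twin).
y = np.convolve(x, kernel, mode="valid")
print(y)
```

The cancellation is exact, not approximate, which is why the Nyquist component is unrecoverable after a binomial blur.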
All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to doing a bandpass filter and then subtracting that from the original image.
Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" that people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident.
https://observablehq.com/d/a51954c61a72e1ef
"In today’s article, we’ll build a rudimentary blur algorithm and then pick it apart."
Emphasis mine. Quote from the beginning of the article.
This isn't meant to be a textbook about blurring algorithms. It was supposed to be a demonstration of how what may seem destroyed to a casual viewer is recoverable by a simple process, intended to give the viewer some intuition that maybe blurring isn't such a good information destroyer after all.
Your post kind of comes off like criticizing someone for showing how easy it is to crack a Caesar cipher for not using AES-256. But the whole point was to be accessible, and to introduce the idea that just because it looks unreadable doesn't mean it's not very easy to recover. No, it's not a mistake to be using the Caesar cipher for the initial introduction. Or a dead-simple one-dimensional blurring algorithm.
If you have an endless pattern of ..., -1, 1, -1, 1, -1, 1, ... and run box blur with a window of 2 or 4, you get ..., 0, 0, 0, 0, 0, 0, ... too.
Other than that, you're not wrong about theoretical Gaussian filters with infinite windows over infinite data, but this has little to do with the scenario in the article. That's about the information that leaks when you have a finite window with a discrete step and start at a well-defined boundary.
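For what it's worth, the even-window claim is easy to verify numerically (a hypothetical numpy check, not from the article):

```python
import numpy as np

x = np.array([-1.0, 1.0] * 8)  # "endless-enough" alternating pattern

# Box blurs with an even window zero the pattern out: each window holds
# an equal number of -1s and +1s, so they cancel exactly.
for n in (2, 4):
    y = np.convolve(x, np.ones(n) / n, mode="valid")
    print(n, y)  # all zeros for both window sizes

# An odd window does not: each window has one unpaired sample,
# leaving an alternating +-1/3 residue for window 3.
y3 = np.convolve(x, np.ones(3) / 3, mode="valid")
print(y3[:4])
```

So it isn't binomial-vs-box that decides whether Nyquist survives; the window parity matters too.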
Interesting...I've used moving averages not thinking too hard about the underlying implications. Do you recommend any particular book or resource on DSP basics for the average programmer?
It also makes no sense to me, and I also have a DSP degree. Of course moving averages (aka box blurs) filter out higher frequencies more than middle frequencies.
Homework assignment: make a bode plot of the convolution filters [1 1 1] vs [1 2 1].
Which one turns +1, -1, +1, -1, .. into all zeroes?
You ought to know this, because the Fourier transform of [1 0 1] is a cosine of amplitude 2 on the complex unit circle e^(i*omega), which means the DC coefficient needs to be 2 for the zeroes to end up at Nyquist.
The frequency response H(z) (= H(e^(i*omega))) of [1 1 1], on the other hand, has its minimum somewhere in the middle.
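If anyone wants to skip the bode plot and just evaluate the two responses numerically, here's a quick sketch (function name mine):

```python
import numpy as np

def mag_response(h, w):
    """Magnitude response |H(e^{jw})| = |sum_k h[k] e^{-jwk}| of an FIR filter h."""
    k = np.arange(len(h))
    return abs(np.sum(h * np.exp(-1j * w * k)))

box = np.array([1.0, 1.0, 1.0]) / 3    # moving average
binom = np.array([1.0, 2.0, 1.0]) / 4  # binomial

print(mag_response(box, np.pi))          # ~0.333: box blur passes Nyquist
print(mag_response(binom, np.pi))        # 0: binomial kills Nyquist exactly
print(mag_response(box, 2 * np.pi / 3))  # ~0: the box blur's null is mid-band
```

That is the whole dispute in three numbers: the moving average's null lands at omega = 2*pi/3, not at Nyquist, while the binomial filter's null lands exactly at Nyquist.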
Also here's a post that will teach you how to sight read the frequency response of symmetric FIR filters off the coefficients:
https://acko.net/blog/stable-fiddusion/
The degree to which people defend poor scholarship and writing on HN these days is frankly pathetic.
There is nothing about that intro that is offensive. Reading comprehension ought to tell you that "pun intended" is a joke to make the bitter pill that OP wrote garbage easier to swallow.
> Are people still not over him buying Twitter and firing all the dead weight?
You think that's really the issue? Or are you not making a good faith comment yourself?
I cannot remember the last time I saw someone hating on Elon for his Twitter personnel decisions. The vast majority of the time it is the Nazi salutes he did on live TV, and secondary to that his inflammatory behavior online (e.g. calling the submarine guy a pedo).
I still pick on it, but I was never a big Twitter user, I just enjoy calling it Xitter. Picking on Elon Musk is for the shitty things he's been doing to our government and the world, and for being a bad person in general.
I rely on AI coding tools. I don’t need to think about it to know they’re great. I have instincts which tell me convenience = dopamine = joy.
I tested ChatGPT in 2022, and asked it to write something. It (obviously) got some things wrong; I don’t remember what exactly, but it was definitely wrong. That was three years ago and I've forgotten that lesson. Why wouldn't I? I've been offloading all sorts of meaningful cognitive processes to AI tools since then.
I use Claude Code now. I finished a project last week that would’ve taken me a month. My senior coworker took one look at it and found 3 major flaws. QA gave it a try and discovered bugs, missing features, and one case of catastrophic data loss. I call that “nitpicking.” They say I don’t understand the engineering mindset or the sense of responsibility over what we build. (I told them it produces identical results and they said I'm just admitting I can't tell the difference between skill and scam).
“The code people write is always unfinished,” I always say. Unlike AI code, which is full of boilerplate, adjusted to satisfy the next whim even faster, and generated by the pound.
I never look at Stack Overflow anymore; it's dead. Instead I want the info to be remixed and scrubbed of all its salient details, and have an AI hallucinate the blanks. That way I can say that "I built this" without feeling like a fraud or a faker. The distinction is clear (well, at least in my head).
Will I ever be good enough to code by myself again? No. When a machine showed up that told me flattering lies while sounding like a silicon valley board room after a pile of cocaine, I jumped in without a parachute [rocket emoji].
I also personally started to look down on anyone who didn't do the same, for threatening my sense of competence.