Hacker News | app134's comments

> I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of this stuff were already discussed decades before

I am not trying to be dismissive, but this could apply to all research ever.


That's true! I meant not "accidentally, at some point in the future" but rather "relatively close together on the timeline".

Dismissing someone with a different opinion as astroturfing is not productive.

There are loads of high performance open source LLMs on the market that compete with the big 3. I have not seen this level of community engagement and collaboration since the open-source boom 20 years ago.


If I believed it was a different opinion I wouldn’t even have written the first paragraph, or maybe the whole reply.

The issue arises from it not being that person’s opinion but a talking point. People didn’t all individually arrive at this “democratisation” argument by themselves, they were sold what to say by the big players with vested interest in succeeding.

I’m very much for discussing thoughts one has come up with themselves, especially if they disagree with mine. But what is not productive is arguing with a proxy.

> I have not seen this level of community engagement and collaboration

Nor this level of spam and bad submissions.


> It signals either astroturfing or someone who just accepts what they are sold without thinking.

> Nor this level of spam and bad submissions.

Your comments seem pretty aggressive for what you’re replying to. Maybe take a beat to assess your biases? I thought the main comment was pretty fair and sensible, yet somehow you landed on calling them a spammer/bad submitter/astroturfer/non-thinker. Maybe they are? I could be wrong, but that's quite a strong reaction for what they asserted at face value. Not really trying to police anything here, I just thought the initial comment had merit and this devolved quite quickly.


You misunderstood. Spamming and bad submissions have nothing to do with the original comment.

You're overthinking it.

Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it. AI helps them program anyway, and allows them to sometimes produce useful programs. That's it.

It's not a talking point. It's just the reality of what the technology enables, and it's a simple enough observation that millions of people can independently arrive at that conclusion, and some of them might even refer to it as "democratization".


> Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it.

This is a good thing. It's a filter for the careless, lazy, and incompetent. LLMs are to programming what a microwave is to food. I'm not a chef because I can nuke a hot pocket. "Vibe coders" (not AI-assisted coding) are the programming equivalent of the people on Kitchen Nightmares. Go figure, it's a community rife with narcissism, too.


It is a fair note when there are a lot of people with a monetary incentive to hype up a certain piece of technology. And as gp correctly points out: "democratizing" is most commonly used in a very hostile and underhanded manner.

It is what we are talking about, hence not "counterproductive".


You asked earlier if you were being overly cynical, and I think the answer to that is "yes".

We are indeed simulating what we find in nature when we create neural networks and transformers, and AI companies are indeed investing heavily in BCI research. ChatGPT can write an original essay better than most of my students. It's also artificial. Is that not artificial intelligence?


It is not intelligent.

Hiding the training data behind gradient descent, and then attributing intelligence to the program that responds using this model, is certainly artificial, though.

This analogy just isn't holding water.


Can't you judge on the results though rather than saying AI isn't intelligent because it uses gradient descent and biology is intelligent because it uses wet neurons?


I am and that is also a problem.


I strongly believe that our concept of intelligence is like the "god of the gaps" [0]. Intelligent is only what we haven't yet explained.

Chess computers surely must be intelligent, but then Deep Blue was "just search".

Go computers surely must have intelligence, because Go requires intuition and search is intractable, but then it's "just CNN-based pattern matching".

Writing essays surely requires intelligence, because of the creativity, but then it's actually just a "stochastic parrot".

We keep attributing intelligence to whatever is currently out of reach, even as that set rapidly shrinks before our eyes.

It would be better to say that intelligence is an emergent phenomenon and that behavior that seems intelligent is intelligent.

[0] https://en.m.wikipedia.org/wiki/God_of_the_gaps


In-context learning is proof that LLMs are not stochastic parrots.


I don't believe they have made it public.

I worked this wreck with RIMAP and had to sign an NDA before boating out, but that was back in 2020.


A steganography tool called ez-steg. It supports least-significant-bit steganography as well as emoji/Unicode encoding via variation selectors. It grew from a set of scripts I had written to test data loss prevention systems.

Includes some nice-to-haves like payload encryption and carrier image creation.

https://github.com/a-bissell/ez-steg
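For anyone unfamiliar with the two encodings mentioned, here is a minimal sketch of both in plain Python. The LSB half hides each payload bit in the lowest bit of a carrier byte; the variation-selector half maps each payload byte to one of the 256 invisible Unicode variation selectors (U+FE00–U+FE0F, then U+E0100–U+E01EF) appended after a base character. All function names are illustrative assumptions, not ez-steg's actual API, and its real file formats will differ.

```python
# Hypothetical sketches of the two techniques ez-steg supports.
# Function names and formats are illustrative, not ez-steg's API.

# --- Least-significant-bit (LSB) steganography ---

def lsb_embed(carrier: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the lowest bit of each carrier byte."""
    bits = [(b >> i) & 1 for b in payload for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def lsb_extract(carrier: bytes, n_bytes: int) -> bytes:
    """Read back n_bytes hidden by lsb_embed, MSB first per byte."""
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )

# --- Emoji / variation-selector encoding ---
# Each byte maps to one of the 256 invisible variation selectors
# (U+FE00..U+FE0F, then U+E0100..U+E01EF) appended after a base char.

def byte_to_vs(b: int) -> str:
    return chr(0xFE00 + b) if b < 16 else chr(0xE0100 + b - 16)

def vs_to_byte(ch: str) -> int:
    cp = ord(ch)
    return cp - 0xFE00 if cp <= 0xFE0F else cp - 0xE0100 + 16

def vs_hide(base: str, payload: bytes) -> str:
    return base + "".join(byte_to_vs(b) for b in payload)

def vs_reveal(text: str) -> bytes:
    return bytes(vs_to_byte(c) for c in text[1:])

pixels = bytearray(range(256)) * 4          # stand-in for image data
stego = lsb_embed(pixels, b"hi")
print(lsb_extract(stego, 2))                # b'hi'
print(vs_reveal(vs_hide("😀", b"secret")))  # b'secret'
```

The LSB change is visually imperceptible because only the lowest bit of each channel value moves; the variation-selector string renders identically to the bare base character in most text environments.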

