The Continued Trajectory of Idiocy in the Tech Industry (soatok.blog)
90 points by gmemstr 1 day ago | hide | past | favorite | 114 comments

> Does the company own any software patents?

> This includes purely “defensive” patents, in industries where their competitors abuse intellectual property law to stifle competition.

My own company was pretty staunchly opposed to software patents until we got sued by patent trolls. We are now pretty aggressive about patenting things so that doesn't happen again. Software patents were never great, and bad actors have made things even worse.


But you can't use patents to protect yourself against a typical patent troll. A typical patent troll doesn't do anything except sue people, so they are unlikely to infringe anybody else's patents.

You can patent the stuff that is essential to what you do as soon as you come up with it - before the troll can. Then the troll can't sue because they can't get the patent in the first place (or if they do, you can use yours to invalidate theirs).

I don’t believe this actually holds up, as it assumes the USPTO is sufficiently funded to perform due diligence on prior art, and assumes that minuscule improvements will not constitute patentability. Patents are intentionally written with overreach. Overlaps are common.

Crap patents are granted daily.


Oh, absolutely. It's just that, if they grant yours first, and you also intentionally write it with overreach, then even if the PTO also grants theirs, the courts will find for you because yours has priority.

That's the theory. In practice it tilts the odds in your favor, but is not an absolute safety net.


The classic tale of the "make it proprietary" trolls. Uuh! Someone should make it a paper chase all the way to the insurance companies!

Now I'm having fond nostalgia for Groklaw and coverage of SCO vs. everyone.

The web was a hype train, and then a bubble, too. It's not enough that a new technology rides the hype cycle; what matters is what's left of it afterwards.

How many of us got their workflow disrupted by AI? It's been a while since I've googled a "how do I..." type thing, because of how much I appreciate Bing Copilot or Phind

Safe to say my life did not change with the arrival of Web3!


> How many of us got their workflow disrupted by AI?

Single digit percentage?


You're kidding, I assume? Or maybe you misread "workflow" to mean something like "work-life"?

For sure more than a single-digit percentage of people have tried a Q&A bot by now. Shoot, Windows just updated to insert Copilot into users' machines, with the icon right next to the Windows button. It can't not continue to grow. It's such an amazingly effective way to get information versus scouring through web pages.


> For sure more than a single-digit percentage of people have tried a Q&A bot by now.

Tried is doing a lot of heavy lifting. https://meow.social/@crumbcake/113156927685392932

> Shoot, Windows just updated to insert Copilot into users' machines, with the icon right next to the Windows button.

This goes back to my point about consent and opt-in vs opt-out.

> It can't not continue to grow.

"If we shove this down people's throats enough maybe their gag reflex will adapt."

> It's such an amazingly effective way to get information versus scouring through web pages.

The only reason I stopped using Google isn't to choose AI instead. It's because the obtrusive "AI Overview" pissed me off.


> For sure more than a single-digit percentage of people have tried a Q&A bot by now

Yep, this was one of those tries:

Me: "I was billed twice for the same storage unit"

Highly Intelligent Chat Bot: "Let me check on that...you have two storage units so you receive 2 charges each month"

Me: "I was charged two times in the same day for unit #X"

HICB: "Unit #X is charged Y amount on date Z each month"

Me: "I received two charges for amount Y on the same day for unit #X"

HICB: "That's correct, you have two units and get charged Y amount for each one. Please let me know if you have any other questions"

Me: "Is it possible for you to be any less helpful?"


Yes, we've all seen unhelpful help bots.

The existence of negative cases doesn't mean the prolific number of positive cases doesn't tell me the world has shifted.

Seriously, I was very skeptical of using ChatGPT, worried it'd give me bad advice, but I keep not finding quality content in Google searches ("Hey googz, what's the flag in a bash if statement that verifies a file does not exist?" --> "oh, here's a page full of every possible thing bash can do"), and I ask ChatGPT and it's just bam, the answer.

It helped me recover a crashed harddrive, verify how `in` and `out` variables work in pl/sql functions ("yes, all `in`s have to come before any `out`s")... it's just so right so concisely that, like it or not, it's here to stay.


I hardly use it at all. ChatGPT was interesting at first but it slowly got worse and worse as changes had to be made to it so OpenAI wouldn't get sued. The same is true for all the other ones. LLMs can be great if they are allowed to blatantly violate copyright. But once everyone started circling the wagons and walling off their information they quickly started to lose their value.

Feel free to not use it, it's your time.

Some recent examples:

* "I installed windows and now I can't access my linux partition"

* "I want to add an `in` variable to an existing pl/sql function, can it go on the end of the parameter list? (Answer: No, because...)"

* "What size soccer ball does AYSO U10 use?"

* "what is the name of that python library that handles command line arguments, and could you provide an example that demonstrates required flags?"

nothing earth shaking. not things that're impossible to find, but... instead of reading someone's blog post in which they work as hard as possible to belabor the point so you'll scroll through all 14... this thing just gives you the answer.
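(For the curious, the answer to that last one is `argparse` from the standard library. Roughly this shape; the flag names here are made up for illustration:)

```python
import argparse

# argparse ships with Python; no third-party install needed.
parser = argparse.ArgumentParser(description="demo of required flags")
# required=True makes this --flag mandatory despite the "optional" syntax.
parser.add_argument("--input", required=True, help="path to the input file")
# store_true gives a plain on/off switch that defaults to False.
parser.add_argument("--verbose", action="store_true", help="extra output")

args = parser.parse_args(["--input", "data.txt"])
print(args.input)    # data.txt
print(args.verbose)  # False
```

Omitting `--input` makes `parse_args()` print a usage error and exit, which is exactly the "required flag" behavior the question asks about.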


Disrupted? As in destroyed so that you need to create a completely different one?

There are some reports of people hired to write low quality spam articles that fit this. There isn't much else.

There exist lots of people that claim some small gains, mostly because they use shitty tools that can't do what some tools could do 15 years ago (often the same tools). I'm afraid that's not enough to keep our current LLMs alive.


There's a bunch of crap and most of the AI bubble is low-value BS, and back when the Web was starting there was Pets.com.

Realistically how many people are currently employed "writing low quality spam articles"

That's not reality. That's just reddit crapping on strawmen, because a lot of the visible results from AI are 1) crap creative work and 2) polluted search results/comment sections.

These are red herrings

There's real people doing real productive things with AIs that do boring things right now.

The ability to search for things that are still fuzzy in my mind, ask open-ended questions and get an answer back, or find something when you can't come up with a good keyword

I've never had the google-fu that some other developers have so this was something that often required asking a peer before. Sometimes a forum or some Discord, even.

Now you just feed your garbage question to a large language model that swallowed the entirety of the internet and a few seconds later it's spitting back something useful.

It's the first time I have access to a robot that's on stand-by with surface level knowledge of everything.

I suspect people who don't find this extremely valuable in their day-to-day life are just not catching the moments when they could've used it -- while on the other side of the spectrum the value it removes from comment sections/image search results is impossible to ignore.


> Realistically how many people are currently employed "writing low quality spam articles"

Up until LLMs started eating their lunch, a lot. I knew several people who wrote softball slop articles for no-name tech blogs: lots of generic how-tos answering basic Java or Python questions, etc.

Now that's all automated, and it's mostly slop being used by AI to generate more slop.

> There's real people doing real productive things with AIs that do boring things right now.

elaborate.

I can think of at least a couple, but most of their approaches are only "AI" in a marketing sense, like Moderna using data driven, highly automated testing to create vaccine candidates. Absolutely nothing to do with LLMs as they exist in most places.

> Now you just feed your garbage question to a large language model that swallowed the entirety of the internet and a few seconds later it's spitting back something useful.

citation needed. I'm about 4 for 8 in terms of real technical questions and real, correct answers. Feeding Copilot and ChatGPT some questions about CVEs exploited in the wild got me wildly wrong answers until I started doing a lot of prompt work, but by that point I could have just googled the CVEs myself and got an answer I knew was accurate.

Ditto with fixing some MongoDB calls that looked correct but didn't work, which took me longer to understand than just trying random shit or asking on reddit.

Maybe it will get there, eventually, but outside of mass content generation I can't see a reason to trust the results.


> Feeding Copilot and ChatGPT some questions

My guess is the GP is talking about things like Perplexity. Which is indeed ChatGPT, but set up so that it can point you to useful sites instead of just giving you an answer. (I don't know why so many people think the answer itself will be useful; it can only be good if it was taken verbatim from some site.)

And yeah, those things are useful. But to say they'll disrupt one's workflow is a bit too much.


I think they mean 'disrupted' in a more benign, quotidian sense. Like, changed significantly. That said, perhaps you could clarify your comment about "shitty tools that can't do what some tools could do 15 years ago"? Like what? I'd preemptively argue that Google Search has changed for market reasons, not technical ones, but I'm curious to hear if you meant something else. Objectively speaking, chatbots from 15 years ago (or 2...) were just not at all coherent. You can criticize modern LLMs all day, but they're definitely coherent.

> That said, perhaps you could clarify your comment about "shitty tools that can't do what some tools could do 15 years ago"?

Yep, a large part is Google. But the most popular IDEs seem to be less capable of providing useful information nowadays too. (Yet, good ones still exist out there, and they haven't been explicitly enshittified; it's the languages and ecosystems that changed.)


> It's been a while since I've googled a "how do I..." type thing

Same, but you’re missing the planet-sized elephant in the room here. This use case has been technically solved for 20 years. What’s happened is that web search was ripe for consumer value extraction (aka enshittification). Have you ever seen a market more saturated with perverse incentives: ad delivery, content farms, data mining, and of course the very search engine that prints money from it?

Now, if you wanna do an apples-apples comparison, imagine popups, consent forms, hidden influences from advertisers, recommendation engines, arbitrary lock-outs from community guidelines violations, ads while waiting for responses, JS bloat at recipe-website levels, and so on and so forth.

By all means, enjoy ChatGPT or whatever in its naked current form. But make no mistake that this is the honeymoon phase. Potential doesn’t govern the direction of new tech. Incentive does.


> How many of us got their workflow disrupted by AI? It's been a while since I've googled a "how do I..."

I did, about an hour ago. AI answers run the gamut from useful boilerplate that still ends up needing tweaking to utterly making shit up that doesn't exist. I did a Copilot search once that referred to an entire framework that it just made the fuck up. It would've been great if it did exist, and the framework would've suited my needs perfectly; it was just complicated by the fact that it didn't.

I'm off that train. I hate having my time wasted, and until they can guarantee veracity in answers, it's just a Google search that's even less reliable than Google, which is saying something lately.


OT but the "best" part of it is that the text-continuation foundations inherently encourage this exact behavior. People don't write "I don't know" very often, much less "wait a minute, I think I dreamed that, but it's too bad that doesn't exist".

Did you set Bing Copilot to "precise" mode?

I've asked it pretty niche questions and I've never seen it make up anything, much less a whole framework

Plus it gives you its sources so I don't see how this could've happened. A whole framework?


What the hell is "precise" mode? Why do I need to select that, who in the world wants their code AI running in imprecise mode?

I dunno maybe that would help, but it seems like a deeply un-serious product then.


There's "precise", "balanced" and "creative"

If you're asking it to write a story about a camel in a hat you probably shouldn't choose "precise" mode


You should've asked it to sketch it for you.

Let's be serious.

Just because your reddit or HN had a non-tiny number of posts about blockchain or NFTs, it does not mean there was a real and significant push toward those in the real world.

There is a huge world outside your twitters and reddits.


Well, my concern started to grow when two things happened:

- My sister, who is not in a tech field, asked me how blockchain worked, since she had heard about it

- A server at a restaurant described how they help people buy their first bitcoin as a side hustle

So while every example may not permeate out of the echo chambers, there are some that are reasonably widespread.


They talk about Bitcoin on financial news shows, which lends it a huge amount of credibility. It's as if Jim Cramer had opinions about last night's Powerball numbers.

They talk about BTC as a financial instrument $$$$, not blockchains.

If Cramer is your gold standard of credibility, then you're not so credible yourself

Uh, that's not how I interpreted their comment. At all.

how'd you interpret it?

As someone working at a very large bank, I strongly disagree. The bank spun up a new business to experiment with blockchain technology and went hard at it for several years, until they recently sidelined those projects... in favour of AI.

The amount of internal hype across all areas of the business (especially in the tech areas of the business) with regards to AI has been frankly stunning to witness.


> Generate with AI? [...] WordPress is not alone in its overt participation in this consumption of binary excrement.

> EA’s CEO called generative AI the “very core of our business”

Yeah just little twitter and reddit things like WordPress and EA.


Compare it with AI

How many orders of magnitude of difference?


Didn't El Salvador convert their currency to Bitcoin ?

Edit: El Salvador not Al Salvador


Bitcoin =/= Blockchain

Just like Facebook =/= HTTP


They did it during peak blockchain hype, though.

Partially. Legally, they use both USD and Bitcoin; but it wasn't really adopted by the population [1] (it doesn't help that they've been using USD since ~2000, which is relatively stable by latam standards). It was also much more a political endeavour by their president to promote neoliberal ideas rather than anything to do with blockchains per se.

[1]: https://english.elpais.com/international/2023-09-02/two-year...


El Salvador. They at least said they were going to. I don't know if they actually did.

Sure, the push in the real world was from the usual crowd of vendors & consultants & upper management who make buzzwords a thing. Totally unrelated to people grifting on online fora.

We have bitcoin ATMs in rural WI and I personally know a number of people (my own parents included) that have lost a significant amount of money to the cryptocurrency grift.

A lot of people like drugs; if you want small amounts shipped to you by strangers, you need crypto. I've never tried it, but I've seen people on both sides (selling+shipping and buying) do it with acceptable success rates.

This is the one and only "big real world market" (not financial products, not lower tx costs, not governance issues) that cryptocurrency solved. Quite ingenious.


Don't forget ransomware payments!

Cryptocurrency is also useful for ransomware payments.


which is a fast growing business lately...

This is a commonplace corollary to Gell-Mann Amnesia/Knoll's Law[1]: in truth, every single field/category in the universe has varying degrees of stupidity; it's just that you only react emotionally to the stupidity in the fields you have a personal attachment to.

[1] https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...


Or the ones currently present, hyped, in trend? Wait, I'd suggest: amplified/augmented attachment via exposure.

I don't understand how anyone is shocked by businesses trying to integrate "the next big thing" into their offerings. Objectively, there are new things we can do with the recent advancements in AI. Experimentation is natural, even if those in charge are not intimately familiar with the technology. Some experiments will be useful and some less so.

How does seeing "generate image" on WordPress make you fume? Why would they implement the option to disable it? It's a perfectly sound idea. Instead of finding some stock image to use, allow users to generate one without leaving the platform. Regardless of any qualms with the technology itself and its implications, it now exists as a commercial offering. Businesses are going to try and use it to improve user experience and make more money.


There's a difference between identifying areas where a new technology (AI) can solve a problem in a better way, and just slapping "Now with AI!" on an existing product purely to chase investor dollars.

> How does seeing "generate image" on WordPress make you fume?

HN crowd is increasingly threatened by new technology.


That’s possible, but it also seems likely that the HN crowd is used to existing in a space where a tiny fraction of good ideas succeed. Figuring that out requires an appropriately tuned sense of technology and the market, for which it is objectively reasonable to be skeptical of new things, especially when in a high point of the hype cycle. It’s also important to stay optimistic though…

> HN crowd is increasingly threatened by new technology.

That's just because HN became bigger. But hackers are used to this: creating the new technology.


Presumably there’s a middle ground between arguing for experimenting with new tech and somewhat disingenuously arguing that it’s the core of your business. The manner of delivery and applicability of the tech to a company’s products impact the credibility of the claim that AI is important to a business’s future.

The only thing better than attaching your startup's tech to either AI or blockchain is attaching it to BOTH simultaneously, like the wave of AI blockchain tokens!

Thing is, ultimately, some of these technologies may yield useful innovations.

NFTs are the biggest load of speculative garbage to come down the pike in years, but in terms of smart contracts (which may have some use) they're not total garbage.

AI is in the "Peak of Inflated Expectations" portion of the hype cycle but that doesn't mean it will never yield anything of value.

The tech industry isn't really unique in its chasing of the latest shiny object. Anyone recall when 3-D TVs were going to be the next big thing? The consumer electronics industry was practically salivating at selling us new TVs every couple of years.


> some of these technologies may yield useful innovations.

Why not just focus on the useful innovations, then?


Because it's not often apparent where useful innovations will come from. If we could predict in advance what will be useful and what won't, we would surely save a lot of money and effort but life isn't that easy.

I could deal with the hype cycles better if the industry had more solid foundations on the basic stuff like CRUD apps and websites. We're still pretty lousy at it.

It feels like a sinking ship but we all know it's not going anywhere because humanity now depends too much upon getting messages anywhere in the world on the order of seconds.

Maybe having a bunch of hype cycles is normal and we're meant to dig through the bullshit for gold. Kind of resembles prompting AI over and over until you get a good response. It would be nice if the industry admitted that instead of trying to pretend every hype cycle is a guaranteed existence-changer. It changes your mindset and mental fortitude if you know you're going to be dealing with bullshit instead of being lied to and finding it out once you've accepted the job.

Also, don't forget a trend that was apparently so bullshit that people don't even mention it next to blockchain: VR/AR.


https://www.coolest-gadgets.com/vr-headset-statistics

  The Global Virtual Reality (VR) Headset Market is projected to surpass USD 121.9 billion by 2032, growing from USD 13 billion in 2023

Which is small compared to Bitcoin + Ethereum's market cap (which will also grow).

So if you think blockchain is bullshit, then VR/AR definitely is.


As with a number of other increasingly serious problems that people generally cannot or will not acknowledge as problems.

You can’t have reasoned discussions about the problems, people won’t actually engage with the topics, concerns or warnings are simply dismissed by people who won’t think it through even to only second order consequences, and in the meantime the sharks scent blood in the water and the money frenzy is upon us all, directly or indirectly.

There aren’t happy solutions to human nature at scale.


Tech blogger hype has a history of cargo-culting nonsense. This is nothing new. I think they killed AR by hyping Google Glass as an artifact only for special people, while VR was handed a half-measured fate tempered by gaming (it's still impractical and rears its ugly head every 10 years, compared to the potential utility of AR).

Centralized blockchain ledgers for immutable data storage are immensely useful for detecting silent corruption or malicious data tampering in specialized use cases (backup and restoration with air-gapped storage processes are required to assure high-confidence data integrity and correct such data problems).
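A toy sketch of that tamper-detection idea (plain Python, illustrative only, not any particular ledger product): each record's hash covers the previous record's hash, so silently altering any entry invalidates every hash after it.

```python
import hashlib

def chain(records):
    """Build a hash chain: each entry's hash covers the previous hash."""
    prev = "0" * 64  # genesis value
    out = []
    for rec in records:
        h = hashlib.sha256((prev + rec).encode()).hexdigest()
        out.append((rec, h))
        prev = h
    return out

def verify(entries):
    """Recompute the chain; return the first tampered index, or -1 if intact."""
    prev = "0" * 64
    for i, (rec, h) in enumerate(entries):
        if hashlib.sha256((prev + rec).encode()).hexdigest() != h:
            return i
        prev = h
    return -1

ledger = chain(["alice pays bob 5", "bob pays carol 2"])
assert verify(ledger) == -1                       # intact
ledger[0] = ("alice pays bob 50", ledger[0][1])   # silent tampering
assert verify(ledger) == 0                        # detected at the altered entry
```

Periodically re-running `verify` against an air-gapped copy of the chain is the "detect silent corruption" use case; the chain itself can't restore the data, which is why the backup processes still matter.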

GPU, server systems, and network access are so damn cheap and impressive, with incremental advances, that the killer-app challenges now are sane management, planning, and cross-cutting concerns like emulation, resource allocation, configuration management, monitoring, security, and the effective discovery and implementation of appropriate, benchmarked solutions.

LLMs aren't yet at the point of complete silicon & PCB design, application development, or systems management that assures a level of correctness through iterative feedback, but they will probably approach something useful over time. Perhaps in 50 years, all living humans will lack an understanding of what software will have "eaten" in certain areas, if engineers allow it and corporations optimize for it.


The author’s premise brings me to a different conclusion. First, they detail Big Data, then the building of infrastructure to support it. New technology was then developed to harness insights from Big Data, and along the way, blockchain and smart contracts were developed, which are essentially cryptographically verifiable distributed state machines. Both of these innovations have also driven the development of hardware to support those activities. This seems like a solid trajectory.

I'm a Trekkie, so having an interactive conversation with a computer still makes me smile, even if it's just a stochastic parrot. I'm also super interested to see what AR/VR and AI can do together. Additionally, I look at the Swedish model for direct democracy and see blockchain and smart contracts as viable technological solutions to make that process more efficient, secure and hopefully increase adoption of that sort of governance.

The tech industry's main problem is grifters, and I think they mostly (not always) come from other industries (marketing, finance, crime). Somehow, they have convinced everyone that tech needs them to succeed (with their grifts). To me, the actual underlying technology is mind-blowing, but it’s the grift implementations that are the problem and make everything else look bad.


Re democracy: we bank online; I am sure we have enough tech to make voting secure.

It's not just about security; it's about how democracy, in certain forms (e.g. canton-type voting, certain diaspora communities, etc.), is implemented[0].

I'm not advocating for any anarchist ideals, and I believe cryptocurrency should be banned if for no other reason than to deal with cybercrime. However, I also recognize legitimate use cases for the underlying technology that are drowned out by all these grifts.

[0]: https://en.wikipedia.org/wiki/Direct_democracy


Rust and Solidity are interesting omissions from this ranty post, especially given the crypto attachments.

Article peppered with furry imagery.

Yeah that tends to happen to furry blogs.

Go hype! These are exciting times.

So what are the tech hypes that did end up being more than mere idiocy?

SaaS? Cloud? Smartphone apps? Social media?

Going back further: Internet? Wireless networks everywhere?


- WWW

- Smart phones from iPhone and onwards

- CI/CD pipelines

- Containers


Successful technologies that were hyped up started with acknowledging a problem exists, then working to solve it.

The blockchain / big data / Cloud Everything / LLM hype is largely a solution (the thing being hyped up) looking for a problem.

I hope that's clear.

The problem isn't the hype in and of itself. It's that there's a disproportionate amount of hype for things that aren't actually that valuable. I don't begrudge people for being excited about technology, only for being excited about excitement (and forgetting to actually solve problems for businesses and consumers).


I think LLMs have some killer features which solve real problems.

Translation (the problem transformers were invented to solve), Document QA, Embeddings for more semantic search, and unstructured data ingestion are all areas where LLMs excel.

When you have documentation to reference and keep in context, it’s pretty easy to achieve a low hallucination rate.

The hype is around things like content creation and AGI, which is a nothingburger.


what is 'document QA' ?

My guess is it's answering "how do I do X", where X is in the documentation.

Still, I don't understand the people who claim the shared LLMs can do that with internal documentation. That's bullshit and easily disproved; you can force it to work for toy problems, but only for toy problems.


Off the shelf, LLMs can’t just look at docs and give you the answer.

But if you properly pre-process the documents and create a RAG-type system (which uses embeddings to find semantically similar docs before inserting them into the LLM context), then it actually works quite well.

It’s good for big organizations with internal wikis, I’ve found.

It also works well for ingesting articles from online publications.
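Roughly the shape of that pipeline, as a toy sketch (a real system would use learned embeddings from a model; the bag-of-words vectors and the docs here are stand-ins): embed the docs once, retrieve the most similar one at question time, and prepend it to the prompt.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal-wiki snippets, embedded once at index time.
docs = [
    "Expense reports are filed through the finance portal.",
    "VPN access requires a ticket to the IT helpdesk.",
]
vectors = [embed(d) for d in docs]

def retrieve(question, k=1):
    """Return the k docs most semantically similar to the question."""
    q = embed(question)
    scored = sorted(zip(docs, vectors),
                    key=lambda dv: cosine(q, dv[1]),
                    reverse=True)
    return [d for d, _ in scored[:k]]

question = "How do I get VPN access?"
context = retrieve(question)[0]
# The retrieved doc is inserted into the LLM's context ahead of the question.
prompt = f"Context: {context}\n\nQuestion: {question}"
```

The whole trick is that the model only has to answer from the retrieved context, which is what keeps the hallucination rate down on internal material it was never trained on.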


> The hype is around like content creation and AGI, which is a nothingburger.

Right, which is why TFA specifically talks about "Generative AI".


If LLMs stop evolving now, we (humanity) have a great new technology to build on. When we have computers and smartphones that can run Llama 3.1 405B the possibilities will be infinite.

If that doesn't turn into something as profitable as Google, that's actually great for all of us but terrible for the investors. I'm actually hoping for this scenario.


blockchain: selling/buying illegal stuff online

big data: just data bigger than fits on one computer; many had this problem before and after the term got popular, and many more got sold on having the problem without actually having it

cloud: as a user, I love being able to get a new workstation running by just installing a password mgr, and not losing files in fires

LLM: being able to search through my pictures by text


A big part of the idiocy in tech is the endless stream of generic, cynical neckbeard takes that are indistinguishable from AI slop.

For some reason AI really brings out the neckbeard rage. I don't get it. AI assistants are amazing tools. It's like the rise of the PC industry all over again. So much value to unlock and our ideas barely scratch the surface right now.

It’s undeniable that there are a lot of ethical concerns when it comes to LLM training data.

Mix that with very aggressive marketing (with some outright lies thrown in), difficulty finding jobs under the threat of AI, and the fact that LLMs can and do hallucinate, and you have a recipe for people not liking it.

They’re far from useless and I think people are reacting more to the marketing than to the capabilities.


I have zero ethical concerns about the data used to train AI. Add that to my list of things I don't get, I guess. No one has even come close to offering a compelling argument that would even raise an eyebrow toward these "undeniable ethical concerns."

Well, Llama is famously trained on the books3 dataset, which was full of stolen books.

You can’t even get that dataset anymore and the people who made the scripts that generated it got arrested.

Same goes for fb using all text data from almost all adults on their platform in Australia and the US. OpenAI seemingly used YouTube data, without permission, to train their sora model. Copilot was trained on all public GitHub repos, regardless of license.

If you don’t think there are ethical concerns there… then I think we have different definitions of “ethics”


It's not a perfect tool for my hyper-specific use case, and getting the most out of it requires a different form of esoteric knowledge than the one I've spent years curating. Clearly it's garbage.

Some of it's simply a benign failure in pattern recognition -- despite my argument below that AI isn't the same as blockchain, you can certainly see the surface level similarities if you're not familiar with the technologies involved. It's also triggering a lot of people's anti-capitalist sentiments, which I personally relate to in spades. They're just applying metrics that they've been applying for years, and they're backed up in their feelings by legions of peers. Plus, I think we can all agree that unexpectedly-effective AI is fucking terrifying -- Vinge didn't call it a "Singularity" out of admiration, but fearful respect.

It's fascinating though, in a way... a lot of HN's rules are about "flaming", and I think this issue might be the first to really bring that out since the end of the distro/OS wars (I don't think Xbox v. Playstation ever made a splash here?). Perhaps there was a bit with Crypto, but IMO it never really took hold here in earnest, either. But AI brings out the vitriol, my god! "AI slop" is just the start -- just yesterday I saw someone on Substack call a random and obviously good faith data blogger a purveyor of "demonic machine filth". That's a new one!


> demonic machine filth

People find the dark mechanicum creepy.


Who else is not looking forward to cleaning up AI-written merely-accidentally-working garbage code for the next umpteen years?

AI will clean up AI’s merely-accidentally-working garbage code

My plan is to blame all my bad code on AI even though I don't use it.

In reality you say "AI", but in your head you really mean "Analog Intelligence"; that way you're never lying :thumbsup:

There's a lot of argument about it, but it seems to me that the idea that our brain is digital is the one that's winning.

Been making bank doing this for decades. I really don't care who or what wrote the code. It's beautiful.

So you refactor bad code? How do you get these gigs?

Look for places hiring Java, C#, or COBOL enterprise developers

on a subscription basis?

Up and to the right! lol

This is some of the most knee-jerk content I've seen outside of Tumblr in a while. Fun, engaging, and well written, but totally off base IMO. It's a simple fact of life that we have struck an unexpected (by most) breakthrough with LLMs, and begun meaningfully solving the Frame Problem decades -- or centuries -- before anyone (other than Vinge, who literally predicted it to the year 30y ago: https://users.manchester.edu/Facstaff/SSNaragon/Online/100-F...) thought we would.

We can point out issues with AI all day, such as the ever-relevant impacts of automation on workers in a capitalist society, but this critique is both exceedingly vague and occasionally mistaken.

  Once upon a time, everyone was all hot and bothered about Big Data
If your oldest example of a hype cycle is Big Data, you might not be old enough/reading enough history to comment effectively. Big Data is the same hype cycle as AI -- it's called "Machine Learning", and it's been going full-bore since ~2017: https://subredditstats.com/r/machinelearning

To keep my (central!) critique here very short: "AI" isn't the same as blockchain because some people on forums sound similar when talking about it, it's the culmination of decades of research that started in earnest w/ Judea Pearl's 1988 Probabilistic Reasoning in Intelligent Systems, or even the hubbub around Marvin Minsky's 1969 Perceptrons. It's an academic discipline, not a buzzword, and any professional or serious critique should engage with such work, not just quote-dunk the CEO of Jack-In-The-Box.

  There is no way to opt out of, or disable, this feature.
...Don't use it?

  Mozilla Firefox 128.0 released a feature (enabled by default of course) to help advertisers collect data on you.
This is just 100% unrelated, and it's a clap-back to Google anyway. The problem here is the entire industry of Display ads, not Mozilla trying its best to keep them alive along with every other player in the industry (it's not gonna work, but that's a whole separate thread).

  Investors (read: fools with more money than sense)... Furthermore, there are a lot of gullible idiots that drank the Kool-Aid and feel like they’re part of the build-up to the next World Wide Web...
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect, also this tone isn't very HN-friendly, just in general -- I say as one of the gullible idiots that has hope for the future.

  Some solutions are incredibly contentious, though, and I don’t really want the headache. For example: I’m sure that, if this blog post ever gets posted on a message board, someone in the peanut gallery will bring up unions as a mechanism, and others will fiercely shoot that idea down. It’s possible that we, as an industry, are completely in over our heads. 
A) Unions are great, but completely orthogonal to this discussion, and B) AI is a subset of capitalism (or, at worst, a synecdoche) not the whole thing.

  Hacker News, Lobsters, etc. are full of clueless AI maximalists that cannot see the harms they are inflicting.
Sign me up as one of the clueless maximalists -- again, if I responded in kind, my comment would be flagged and removed. In the end, I think this person is a "cringe" writer "with more money than sense" who creates "binary excrement" because they're an "arrogant" "gullible idiot" who is openly more interested in "shame and ridicule" than academic discussion.

If you think "Big Data" started in ~2017, you may not be old enough/reading enough history to comment effectively. Look at "data mining" and "data warehousing" for previous incarnations. The general idea has been around since at least the 1980s.

Ha, fair enough, I deserved that! I just mentioned that year in that sentence because that’s when the linked graph takes off. For the curious onlooker: this person pretty much nailed it with “1980s”, according to NGrams: https://books.google.com/ngrams/graph?content=Machine+learni...

Thanks for the "fair enough". I was worried that I had been too harsh.

[flagged]


You know the term authoritarian cuts both ways across the political spectrum? Not everything is right wing just because it’s authoritarian.

That's just not a good analysis.

Blockchain and NFTs are and were stupid. A lot of people knew this, and the hype was more of a news hype because we had nothing else to talk about (until AI came).

I have seen so many really good and helpful AI demos/features internally; it's impressive.

With AI / ML we are getting self-driving cars, robots (talking, listening, walking), agents, etc.

LLMs are not crazy good because they can generate stories, they are crazy good because they are a very very good interface to humans.

Facebook's Segment Anything ML model basically solved the segmentation problem. AlphaFold solved protein folding. Nvidia's Omniverse with robots solved the robot motion problem.

AI is not just hype; AI delivers left and right. Every week there is something really cool and new.

Instead of writing uneducated blog posts or just blindly ranting about it, at least try to follow up on AI news; you will be amazed how much it solves. And until we see ANY slowdown or ceiling, I do believe that this right now is what the iPhone was, or the internet, just crazier.

It's frustrating that people are not even able to understand AI, blockchain, and NFTs well enough to be able to separate them. Just because something gets hyped doesn't mean it's the same thing as the other thing that got hyped.

And no, you were not able to talk to a computer system as fast and as well as you can today with OpenAI's voice input. And no, you never had a system that was able to answer that many questions at such high quality.


> LLMs are not crazy good because they can generate stories, they are crazy good because they are a very very good interface to humans.

Do you have an example of a tool that uses an LLM as an interface? Seems like that'd be the fastest way to show people this is a superior interface.

We're obviously a long ways away from star-trek style natural interaction with computers, so I'm curious what you're doing that can work today. Aside from straightforward content generation, of course.


https://www.openinterpreter.com/

It's great for people with mobility issues, since speech-to-text is so good now.

v0.dev kind of has a hybrid traditional interface mixed with LLM content generation. May not be exactly what you were asking for.
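To make the pattern concrete: most of these tools boil down to "the model turns free text into a structured call, and plain old code executes it." A toy sketch of that shape, with the actual model call stubbed out by a keyword matcher (the command names and routing here are purely illustrative, not any real product's API):

```python
import json

def fake_llm(utterance: str) -> str:
    # Stand-in for a real model call: a real system would prompt an LLM
    # to emit this JSON; here we fake it with a keyword check.
    if "volume" in utterance:
        return json.dumps({"command": "set_volume", "args": {"level": 40}})
    return json.dumps({"command": "unknown", "args": {}})

# Ordinary deterministic code handles the actual side effects.
COMMANDS = {
    "set_volume": lambda level: f"volume set to {level}",
    "unknown": lambda: "sorry, I didn't catch that",
}

def handle(utterance: str) -> str:
    call = json.loads(fake_llm(utterance))
    return COMMANDS[call["command"]](**call["args"])

print(handle("turn the volume down a bit"))  # volume set to 40
```

The interesting part is that the fuzzy natural-language layer and the deterministic execution layer are cleanly separated, which is why this works for accessibility cases where a keyboard doesn't.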


This is very cool and I could have seriously used this when recovering from RSI! However, it's not exactly a great argument that this is better than a keyboard and mouse for those who are abled enough to use them fully.

Well tbf, we’ve had decades of improving mouse and keyboard interfaces, but beyond better speech to text, natural language interfaces have been the same for like 15 years.

The mouse was controversial on release as well, since most computers weren’t graphical at the time.

Let the LLM-backed interfaces cook. I don't think they're a replacement for graphical UIs, but that doesn't mean they won't be better for some applications.

Braille, for example, can be read by blind people AND non-blind people in dark rooms. Not strictly better than regular text, but far from useless.


> at least try to follow up on AI news

Your comment seems to be based on reading AI clickbait "news" that claims everything is "solved" by AI.


> Its frustrating that people are not even able to understand AI, Blockchain and NFT good enough to be able to separate them. Just because something gets hyped doesn't mean its the same thing as the other thing which got hyped.

I agree with your main point, to those of us on HN there is obviously more substance to the current AI wave than (say) web3.

But I can hardly blame people who aren't actively following tech news from believing that it's more of the same -- many of the VCs and tech media boosters are the same every cycle. If anything, I think it's more of an onus on those of us who do follow closely to sound the alarm on the bullshit. (And there is bullshit this AI wave, too, on top of the obvious substance.)


Which AI startup do you work for?

I agree with the general gist of the article that generative AI is bullshit for many use cases. But I'd rather point out why this comment is misguided for a very specific reason: it's a prime example of why we, as technologists, need to be careful with our terminology around some of these innovations.

You are lumping together many different things in this post. For example:

> With AI / ML we are getting self driving cars, robots (talking, listing, walking), agents etc.

The "AI / ML" part of this sentence is telling. I am aware of exactly zero self-driving cars that are powered by LLMs (what the general public almost always means when they say "AI" these days).

Self-driving cars are enabled by physical sensors in combination with various ML algorithms which have been around in some form for literally decades. I'm not an expert in this field, but my understanding is that what's actually happened in the last ~decade which has allowed them to flourish is the development of better _hardware_, that is, hardware that can run these algorithms fast enough, at a large enough scale, and still be small and cool enough to fit into a car.

Ditto to some extent with your other examples, though maybe a general-purpose robot could be made better by interfacing with an LLM.

I realize this may not be your intent, but by writing in this way, you are confusing the layperson into thinking that all of these innovations were enabled by ChatGPT-style "AI," when in fact some of them have nothing to do with that type of tech at all.

I really wish we'd all be more honest, and not conflate transformers/LLMs with other "AI" algorithms. In fact, I think it'd be good if we stopped saying "AI" completely, though I realize this will never happen given that term's stickiness with the public at large.



