AI hype is over. AI exhaustion is setting in (disconnect.blog)
36 points by marban 21 days ago | 9 comments



I wouldn't say I'm particularly hyped about "AI", but I would say that language models have gotten to the point where they are boring and useful enough to integrate into my daily workflow.

I use Kagi Ultimate, and I have gone through a valley of language model usage. I started using them a lot when I first got Ultimate, to play around with them and understand their limitations. I stopped using them as much in favor of plain search after I hit those limitations. Models have since gotten considerably better, and in many cases they are now more useful than regular web searches, so I have started using them a lot again.

I also run Llama 3 locally on a 7900 XTX to process information that I don't feel comfortable with / can't share with external APIs. That's definitely not the greatest setup, but it's good enough to be useful in a pinch.


I subscribed to Kagi Ultimate, but I feel I'm underutilizing its functionality. What are some use cases you would encourage me to try with the models? What are some key takeaways about Ultimate you discovered in your testing?


Claude 3 Opus has the most knowledge encoded in it from my experience, so I use that when I am using the non web search models. I also find that I don't often need the most recent information for the work that I do, so I don't often use the search enabled one. If I use the expert assistant, I usually just provide it an exact URL of a long document that I want to ask questions about.

I think they are particularly useful when you know what you want with particular nuances, and it would not be easy or possible to find that specific information in a web search. For example, I recently used LLMs to help me write a configuration for my RAID. I knew I wanted mirror+stripe, and I wanted to mount the RAID at /data, and mount my home directory at /data/home. I explained this to the language model, and it essentially built me a script to do that.

I could have looked at the manuals for how to use mdadm and edit /etc/fstab, but writing down exactly what I want in plain English and then doing what the language model spits out was easier and faster for me.
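
For reference, here's roughly what that kind of script ends up looking like. This is just a sketch: the four device names, the md0 name, and the username are assumptions, and I'm reading mirror+stripe as RAID 10.

    # assumed hardware: four blank disks sdb-sde; adjust to your setup
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mkdir -p /data
    mount /dev/md0 /data
    mkdir -p /data/home
    # persist the array and both mounts across reboots
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    echo '/dev/md0    /data       ext4  defaults  0 2' >> /etc/fstab
    echo '/data/home  /home/user  none  bind      0 0' >> /etc/fstab
    mount -a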


How are you running Llama on the XTX? I have the same card, and getting local LLMs running was always such a pain that I haven't tried in a while.


I'm running ollama on Ubuntu. I tried other distros first, but I ran into some issues. Pretty straightforward on Ubuntu.
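
For context, the setup was basically just the standard ollama install script on top of working ROCm drivers. A sketch of the two steps (llama3 is just the model tag I pull; swap in whatever you want to run):

    # standard install script from ollama.com, then pull and run Llama 3
    curl -fsSL https://ollama.com/install.sh | sh
    ollama run llama3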


I have only used Nvidia cards, but in my experience the kobold.cpp and ollama forks of llama.cpp can be easier to compile.


Many of the bold interview statements and podcast predictions surrounding the AI hype will easily be meme material by the end of the decade, when the dust settles.

You can bookmark this comment and come back to it in 2030.


Indeed, the hype is still frothy. But AI exhaustion is certainly here.


Other than an incredible amount of cynicism, does this author offer a single objective fact or 3rd party source to support his argument?

The argument that he's trying to make is that AI is overhyped, but his conclusion is that it's nearly worthless. This is a non sequitur.

He's cynical because last year's announcements of new discoveries and AI models aren't immediately useful, but he's completely ignoring how fast those announcements keep accelerating and advancing. He leans on the fact that hallucinations exist, but he's ignoring the dramatic reductions in error rates across models every six months.

He's ignoring that his very industry of writing has been completely changed, and the irony is that editing will almost never be done without an AI model again. How many other industries are seeing the same impact (design, project management, software engineering, digital creatives, ...)?

Medicine and law are actively updating their methodologies and practices to better incorporate these new tools:

1. https://mcpress.mayoclinic.org/healthy-aging/ai-in-healthcar...

2. https://hls.harvard.edu/today/harvard-law-expert-explains-ho...

This is obviously a piece that was written with its conclusion decided in advance, regardless of the facts.



