Hacker News

I’ve used Claude today to:

Write code to pull down a significant amount of public data using an open API. (That took about 30 seconds - I just gave it the swagger file and said “here’s what I want”)

Get the data (an hour or so), clean the data (barely any time, gave it some samples, it wrote the code), used the cleaned data to query another API, combined the data sources, pulled down a bunch of PDFs relating to the data, had the AI write code to use tesseract to extract data from the PDFs, and used that to build a dashboard. That’s a mini product for my users.
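The API-pull step above might look something like this minimal stdlib-only sketch. The endpoint, pagination scheme (offset/limit), and page size are assumptions for illustration; a real script generated from the swagger file would use whatever auth and paging the spec documents.

```python
import json
import urllib.request


def page_urls(base_url, total_items, page_size=100):
    """Build one URL per page of results (offset/limit pagination assumed)."""
    return [
        f"{base_url}?offset={offset}&limit={page_size}"
        for offset in range(0, total_items, page_size)
    ]


def fetch_all(base_url, total_items, page_size=100):
    """Download every page and concatenate the JSON records."""
    records = []
    for url in page_urls(base_url, total_items, page_size):
        with urllib.request.urlopen(url) as resp:
            records.extend(json.loads(resp.read()))
    return records
```

The point is that the LLM writes this boilerplate in seconds once it has seen the API spec.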

I also had a play with Mistral’s OCR and have tested a few things using that against the data. When I was out walking my dogs I thought about that more, and have come up with a nice workflow for a problem I had, which I’ll test in more detail next week.

That was all while doing an entirely different series of tasks - on calls, in meetings. I literally checked the progress a few times and wrote a new prompt or copy/pasted some stuff in from dev tools.

For the calls I was on, I took the recordings of those calls, passed them into my local Whisper instance, fed the transcript into Claude with a prompt I use to extract action points, pasted those into a Google Doc, and circulated them.
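That workflow is roughly: transcribe locally, then send the transcript to the model with a fixed extraction prompt. A sketch, where the whisper CLI flags are the real tool's but the prompt wording is a stand-in for the author's own:

```python
import subprocess
from pathlib import Path

ACTION_POINTS_PROMPT = (
    "Below is a meeting transcript. List every action point as a bullet, "
    "with the owner's name and any deadline mentioned.\n\n{transcript}"
)


def transcribe(audio_path: str) -> str:
    """Run a local Whisper install on the recording; it writes <stem>.txt
    into the output directory, which we then read back."""
    subprocess.run(
        ["whisper", audio_path, "--model", "base",
         "--output_format", "txt", "--output_dir", "."],
        check=True,
    )
    return Path(Path(audio_path).with_suffix(".txt").name).read_text()


def build_action_points_prompt(transcript: str) -> str:
    """Wrap the transcript in the reusable extraction prompt."""
    return ACTION_POINTS_PROMPT.format(transcript=transcript)
```

The prompt string then goes to Claude via the API; only the transcription step needs local compute.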

One of the calls was an interview with an expert. The transcript + another prompt has given me the basis for an article (bulleted narrative + key quotes) - I will refine that tomorrow, and write the article, using a detailed prompt based on my own writing style and tone.

I needed to gather data for a project I’m involved in, so had Claude write a handful of scrapers for me (HTML source > here is what I need).
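A scraper of the "HTML source > here is what I need" kind can be as small as a stdlib HTMLParser subclass. This sketch pulls every link and its text; the real scrapers were generated by the LLM against specific pages, so the extraction targets here are illustrative:

```python
from html.parser import HTMLParser


class LinkScraper(HTMLParser):
    """Collect (href, link text) pairs from raw HTML."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None  # href of the <a> tag currently open, if any

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href is not None and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None


def scrape_links(html: str):
    parser = LinkScraper()
    parser.feed(html)
    return parser.links
```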

I downloaded two podcasts I need to listen to - but only needed to listen to five minutes of each - so fed them into Whisper, then found the exact bits I needed and read the extracts rather than listening to tedious podcast waffle.
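Finding "the exact bits" is easy once Whisper has produced timestamped segments. A sketch of the search step - the segment dicts mirror Whisper's output format, and the keywords are made up:

```python
def find_segments(segments, keywords):
    """Return the timestamped segments whose text mentions any keyword.

    `segments` is a list of dicts shaped like Whisper's output:
    {"start": seconds, "end": seconds, "text": "..."}.
    """
    wanted = [k.lower() for k in keywords]
    return [
        seg for seg in segments
        if any(k in seg["text"].lower() for k in wanted)
    ]
```

The matching segments carry their start/end times, so you can jump straight to the five minutes that matter.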

I turned an article I’d written into an audio file using elevenlabs, as a test for something a client asked me about earlier this week.

I achieved about three times as much today as I would have done a year ago. And finished work at 3pm.

So yeah, I don’t understand why people are so bullish about LLMs. Who knows?

Yuck. Do your users know that they are reading recycled LLM content? Is this long winded post generated by an LLM?

Yeah, they are not “reading recycled LLM content”, no. The dashboard in question presents data from PDFs. They are very happy with being able to explore that data.

So much about this seems inauthentic. The post itself. The experience. The content produced. I wouldn’t like to be on the other end of the production of this content.

This just sounds like a normal day for someone who does research and analysis in 2025.

Where do you think expert analysis comes from?

Talk to experts, gather data, synthesize, output. Researchers have been doing this for a long time. There's a lot of grunt work LLMs can really help with, like writing scripts to collect data from webpages.


Great! You’re not the audience for it.

Who is?

The people who pay for what I do.

What happens on the day when those people just directly pay some AI model to do it?

Then that part of my work will change.

However, as this thread demonstrates repeatedly, using LLMs effectively is about knowing what questions to ask, and what to put into the LLM alongside the questions.

The people who pay me to do what I do could do it themselves, but they choose to pay me to do it for them because I have knowledge they don’t have, I can join the dots between things that they can’t, and I have access to people they don’t have access to.

AI won’t change any of that - but it allows me to do a lot more work a lot more quickly, with more impact.

So yeah, at the point that there's an AI model that can find and select the relevant datasets, and can tell the user what questions to ask - when often they don't know the questions they need to have answered - then yes, I'll be out of a job.

But more likely I’ll have built that tool for my particular niche. Which is more and more what I’m doing.

AI gives me the agency to rapidly test and prototype ideas and double down on the things that work really well, and refine the things that don’t work so brilliantly.


Love the pragmatic and varied use. Nice one and thanks for some ideas.

This sounds like a lot of actions without any verification that the LLM didn't misinterpret things or just make something up.

Well the API calls worked perfectly. The LLM didn’t misinterpret that.

The data extraction via tesseract worked too.

The whisper transcript was pretty good. Not perfect, but when you do this daily you are easily able to work around things.

The summaries of the calls were very useful. I could easily verify those because I was on the calls.

The interview - again, the transcript is great. The bulleted narrative was guided - again - by me having been on the call. I verify the quotes against the transcript, and the audio if I've got any doubts.

Scrapers - again, they worked fine. The LLM didn’t misinterpret anything.

Podcasts - as before. Easy.

Article to voice - what’s to misinterpret?

Your criticism sounds like a lot of waffle with no understanding of how to use these tools.


How do you know a summary of a podcast you haven't listened to is accurate?

Firstly, I am not summarising the podcast, simply using Whisper to make a transcript.

But even if I was, because I do this multiple times a day and have been for quite some time, I know how to check for errors.

One part of that is a “fact check” built into the prompt, another part is feeding the results of that prompt back into the API with a second prompt and the source material and asking it to verify that the output of the first prompt is accurate.
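That two-stage check can be expressed as a small pipeline where the model is just a callable, so the same code works against any API client. The prompt wording below is a stand-in, not the author's actual prompts:

```python
def summarise_and_verify(llm, source_text):
    """Two-pass pipeline: summarise, then ask the model to check the summary
    against the source material. `llm` is any prompt -> reply callable
    (e.g. a thin wrapper around an API client)."""
    summary = llm(
        "Summarise the following, and fact-check each claim against the "
        "text before including it:\n\n" + source_text
    )
    verdict = llm(
        "Source material:\n" + source_text
        + "\n\nSummary:\n" + summary
        + "\n\nDoes the summary contain any claim not supported by the "
        "source? Answer SUPPORTED or UNSUPPORTED, with reasons."
    )
    return summary, verdict
```

Keeping the model as an injected callable also makes the pipeline trivial to test with a stub before spending API calls.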

However the level of hallucination has dropped massively over time, and when you’re using LLMs all the time you quickly become attuned to what’s likely to cause them and how to mitigate them.

I don’t mean this in an unpleasant way, but this question - and many of the other comments responding to my initial description of how I use LLMs - reads like it comes from people with only hand-wavey experience of LLMs, having played with the free version of ChatGPT back in the day.

Claude 3.7 is far removed from ChatGPT at launch, and even now ChatGPT feels like a consumer-facing product while Claude 3.7 feels like a professional tool.

And when you couple that with detailed, tried-and-tested prompts via the API in a multistage process, it is incredibly powerful.


Did you also do that while mewing and listening to an AI abridged audiobook version of the laws of power in chinese? Don't forget your morning ice face dunks.

No, I leave that to the highly amusing people like you.


