Ask HN: Most successful example using LLMs in daily work/life?
56 points by sabrina_ramonov 6 months ago | 61 comments



I'm not a programmer, and when I write a program it's imperative that it's structured right and works predictably, because I have to answer for the numbers it produces. So LLMs have basically no use for me on that front.

I don't trust any LLM to summarize articles for me as it will be biased (one way or another) and it will miss the nuance of the language/tone of the article, if not outright make mistakes. That's another one off the table.

Although I don't use them much for this, I've found two things they're good at:

- Coming up with "ideas" I wouldn't come up with

- Summarizing hundreds (or thousands) of documents in a non-standard format (i.e. human-readable reports, legal documents) that regular expressions wouldn't work with, and putting them into something like a table

But still, that's only when I care about searching or discovering info/patterns, not when I need a fully accurate "parser".
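To make the second one concrete, here's a minimal sketch of the kind of thing I mean, assuming the OpenAI Python client; the model name and extracted fields are just placeholders, not from any real workflow:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_row(report_text: str) -> str:
        # Ask for one pipe-delimited row so results can be stacked into a table.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "From the document, extract: date | parties | amount | outcome. "
                            "Reply with a single pipe-delimited row and nothing else."},
                {"role": "user", "content": report_text},
            ],
        )
        return resp.choices[0].message.content

Spot-check a sample of the rows by hand; treat the output as search material, not as a parser.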

Honestly, I'm really surprised at how useless LLMs have turned out to be for my daily life. So far, at least.


How do you ask an LLM to come up with good ideas? Every time I try to use ChatGPT for idea generation, the results are subpar, but maybe it's me / my prompts.


I usually give a bullet list of ideas I already had and ask the LLM to add N more to the list. Most of them will be garbage, but there might be one that I hadn't thought of; I'll sort of recursively add that to the list and continue until I get what I need.
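Roughly, as a sketch (OpenAI's Python client here just as an example; the keep/discard step is me reading the list):

    from openai import OpenAI

    client = OpenAI()
    ideas = ["idea one", "idea two", "idea three"]  # my own seed list

    while len(ideas) < 10:
        prompt = ("Here is a list of ideas:\n"
                  + "\n".join(f"- {i}" for i in ideas)
                  + "\n\nAdd 5 more ideas. One per line, no numbering.")
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        for idea in resp.choices[0].message.content.splitlines():
            # the human-judgment step: most are garbage, keep the rare good one
            if idea.strip() and input(f"keep '{idea.strip()}'? [y/N] ") == "y":
                ideas.append(idea.strip())

The manual filter in the middle is the whole trick; without it the list converges on mush.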


Interpersonal Communication - My employer is a big fan of the Clifton StrengthsFinder school of thought, and I have found that generative LLMs are really helpful in giving me other ways to phrase asks to people that I tend to find difficult to communicate with successfully.

I usually structure it like:

---

My top 5 strengths in the Clifton StrengthsFinder system are A, B, C, D, E and I am trying to effectively communicate with someone whose top five strengths are R, T, ∑, √, S.

I need help taking the following request and reframing it in a way that will be positively received by my coworker and make them feel like I am not being insensitive or overly flowery.

The way I would phrase the request is <insert request here>.

Please ask any questions that would help provide more insight into my coworker, other details that could resonate with them, or additional background that will help the translated request be received positively.

---

While the output is usually too verbose, it gives me a better reframing of my request and has resulted in less pushback when I need to get people to focus on unexpected or different priorities.


Have you gotten better at doing this without the LLM, maybe even extemporaneously? Wondering if enough exposure to that kind of modeling also serves an educational role.


Sentiment analysis did that for me. I ran Watson on some challenging emails and worked to remove tones of anger and contempt. After a couple of times, I internalised it.


Oh yeah, since there is a built in feedback loop with the person I am interacting with, I was able to start recognizing patterns of how to shift ways that I inherently think/phrase something to be better received by others.


I've used GPT-4 pretty extensively while learning Japanese, as a written conversation partner, to ask for clarification around grammar, or to translate native content for me and explain it. I've validated a lot of the answers it comes up with, and although it hallucinates occasionally, it doesn't do so on the same points consistently. I'm going to encounter a lot of my vocabulary and grammar hundreds or thousands of times, so even if it's incorrect 1% of the time, it's not a huge problem.

As with most LLM use cases, it's best when used to augment an existing workflow that reinforces it. In my case, I already have a whole setup where I'm using Anki flash cards for vocabulary and grammar study, some curated human-written resources for learning grammar, and native-language content for reading and listening immersion. GPT is really helpful for quickly getting a sentence-level translation, a translation of each word, and full descriptions of the grammar points at work in the sentence. It saves me a lot of time over working with a dictionary and juggling grammar resources, vocab, etc. I can ask it follow-up questions, and even switch straight into trying to use the grammar/vocab in an example sentence of my own right on the spot. I seriously think I'd be way worse off if I didn't have access to an LLM throughout the process.


It often replaces Google search. Instead of sifting through heaps of SEO junk and accompanying trackers, ads, popups, widgets, etc. and going through a search-term refinement cycle to eventually find something, the LLM immediately produces a clean (ad-free, nag-free, dark-pattern-free, etc.) result. It generally needs to be checked for correctness and has limitations in terms of recency. But avoiding the low-signal sea of crap that Google returns is a breath of fresh air.


> the LLM immediately produces a clean (ad-free, nag-free, dark-pattern-free, etc.) result

For now… 🥲


https://news.ycombinator.com/item?id=40418312

Well that was quick

We're going to need an ad-removal LLM frontend next


I have been thinking for a long time that we do not have (to the best of my knowledge) a good transcript formatter, and that Transformers should be part of the solution - a huge wealth of material is on YouTube, and its subtitles do not use punctuation.

I can confirm that asking LLMs to format bare subtitles by adding punctuation (from commas to paragraphs, with quote marks, dashes, colons, etc.) can work very well.

It may seem like a minor feature, but it is something that information consumers easily benefit from (when you need to process material in video format, you can download the subtitles, add formatting with an automation, then efficiently skim, study, or process transcripts and video together...).
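To illustrate, a minimal sketch of such an automation: yt-dlp to fetch the subtitles, then any chat API to punctuate them (the model name and file names are just examples, and long transcripts would need to be chunked to fit the context window):

    # pip install yt-dlp openai
    import subprocess
    from openai import OpenAI

    url = "https://www.youtube.com/watch?v=..."  # placeholder

    # download only the auto-generated subtitles
    subprocess.run(["yt-dlp", "--skip-download", "--write-auto-subs",
                    "--sub-format", "vtt", "-o", "talk", url], check=True)

    raw = open("talk.en.vtt", encoding="utf-8").read()
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Add punctuation, quote marks, and paragraph breaks "
                              "to this transcript. Do not change any words:\n\n" + raw}],
    )
    print(resp.choices[0].message.content)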


I did this today! I made a video for YouTube, ran Whisper locally on my MacBook to get the transcript, then asked ChatGPT to format the transcript for a blog post: adding punctuation, creating bullet-point lists, and following a particular outline. Can confirm it worked really well.


I did something like this with Scribe, where the formatting actually happens locally in your browser using a small token classifier based on MobileBERT: https://www.appblit.com/scribe


I’ve been designing and developing a parser-based interactive fiction (text adventure) authoring system using .NET Core/C#.

I started with ChatGPT and am now using Claude Opus 3.

For background, I’ve been in tech for 40 years from developer to architect to director.

Pairing with an LLM has allowed me to iteratively learn and design code significantly faster than I could otherwise. And I say “design” code because that’s the key difference. I prompt the LLM for help with logic and capabilities and it emits code. I approve the bits I like and iterate on things that are either wrong or not what I expected.

It has many times sped up the process of going down rabbit holes to test ideas, when normally that would cost me hours of wasted time.

And LLMs are simply fantastic as learning assistants (not as a teacher). You can pick up a topic like data structures and an LLM can speed up your understanding of the elements and types of data structures.

And best of all, it’s always polite.


That sounds like a really interesting project, is there anything to look at online?


Yes. I'm going with the name Sharpee for now. I'll probably remove the MIT License after pondering it for a bit.

https://github.com/ChicagoDave/sharpee


Cool! Have you thought about incorporating LLMs into the gameplay? I'm planning to play around with this idea: since grammar-restricted sampling is becoming more widely implemented in local LLM runtimes, one could try feeding in a grammar to restrict LLM output to valid game-world actions and use inference to entirely implement NPC behavior. Pair that with an interpreter and you could get really interesting NPC behavior without it going off the rails.
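A tiny sketch of what I have in mind with llama-cpp-python; the grammar and the action vocabulary are invented for illustration:

    # pip install llama-cpp-python
    from llama_cpp import Llama, LlamaGrammar

    # constrain output to "<verb> <object>" pairs the game engine understands
    grammar = LlamaGrammar.from_string(r'''
    root   ::= verb " " object
    verb   ::= "take" | "drop" | "open" | "attack" | "wait"
    object ::= "sword" | "lantern" | "door" | "nothing"
    ''')

    llm = Llama(model_path="model.gguf")  # any local gguf model
    out = llm("You are a goblin guarding a door. The player approaches. "
              "Your next action:", grammar=grammar, max_tokens=8)
    print(out["choices"][0]["text"])  # always a valid action, e.g. "open door"

Since every sample is forced through the grammar, the NPC can never emit an action the interpreter doesn't understand.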


I have zero interest in incorporating LLM tech into creative work. I use it to write mundane code blocks and unit tests, but actually having it write story text seems inappropriate.


I'm autistic and sometimes I just cannot put my brain stuff into words. On a few occasions, I've just haphazardly shoved a list of thoughts into ChatGPT and said "make this sound not dumb" and it does just good enough. Usually I'll copy the general structure of the sentence/paragraph and change it around until it sounds like I wrote it.

I mostly do that when I need to make a complete document, because I struggle with beginnings and endings. I like the middle.


I’ve used it for simple code suggestions when working in a language I’m unfamiliar with, or testing some new (to me) corner of Python.

I used it to help me think through what I’d need for color film development in my darkroom.

Basically if I already have some idea of what I need, I trust it to help guide me. I can evaluate its output sufficiently well.

If I’m learning something entirely new, where it doesn’t matter a great deal whether I get it right but I can test the output, it’s pretty useful too.


I’m a firm believer that good enough means avoiding catastrophe. Baking bread? Making beer? Caulking a window? Just avoid the common mistakes and the outcome will be good enough.

I’ve gotten in the habit of asking LLMs to coach me to avoid the things that can go wrong.


LLMs have massively increased the number of creative projects that I start. It makes the jumping off point for a vague idea much easier to stomach.

Coming from a non-technical arts field (though always interested in the technical side of things), I've used LLMs to realize functional versions of software projects I never had the time to learn to build myself. That lets me act more like a project manager than a software developer, but being exposed to so much code has also made me more comfortable making my own functions and edits. I also use LLMs frequently to build shortcuts or write commands that make common processes in my workflow quicker.

From a creative POV, I frequently use LLMs along with models like Whisper to transcribe and make sense of long ramblings, turning a 20-minute voice memo from a car ride into a functional plan and the organized beginnings of a project such as a screenplay, essay, movie, etc.

Whenever I get off a documentary shoot, I also run all my footage through Whisper to get timecoded transcripts, as well as highlights from those transcripts that the LLM deems notable. This gives me a good jumping-off point to start crafting the narrative.

Right now I see LLMs as a really good tool to help kick off and trudge through projects that might be daunting to take on solo otherwise, but they are massively underpowered at actually "finishing" anything. As a result, I have a ton of projects in-progress that I wouldn't have started otherwise, but probably the same % ratio of finished to unfinished projects. In that sense, LLMs have increased the population of my ideas-graveyard, but put me in a better position to pick the ideas back up if I renew my interest in any of them.


This sounds very interesting. Can you give more details on "whisper to transcribe and make sense of long ramblings, turning a 20-minute voice memo from a car ride into a functional plan"? The apps that you use, and the workflow.


As a non-native English speaker, it’s very helpful to use an LLM to validate whether a sentence I wrote is clear and correct, and whether there is a more idiomatic way to express the same thing - btw, I did not do it with what I wrote here :-)


Awesome use case for LLMs. I built my mom-in-law a DMV test prep app that translates answers to her native language so she can study more effectively. She had failed the written DMV test 4 times before I decided to build it, then finally passed it after a week's worth of practice with the app. I wrote about my process and challenges here: https://www.sabrina.dev/p/aipowered-infinite-test-prep-part-...


Your English is great!


Copilot. I suspect a lot of us will (or already do) use it at some level, even if it's just autocompleting logging statements, writing boilerplate/comments, suggesting improvements, etc.


I tried using GPT-4 as a better way to search papers - it can be very annoying when you know the gist of a result but not the authors or enough details about the methodology for Google. GPT-4 was pretty good at figuring out what citation I wanted given a vague description.

However, the confabulation/hallucination rate seemed highly subject-dependent: AI/ML citations were quite robust, but cognitive science was so bad that it wasn't worth using. Eventually I went back to the Old Ways. But there are a good number of academics that use it as an alternative to Google Scholar.


I get really great value from using it for brainstorming. A common workflow for me is to write out a project plan and figure out issues, or to familiarize myself with an engineering area really quickly.


Learning. It is not passive anymore. You have a conversation: you can ask why, ask whether something different would work, or ask how something would be done, without going through a lot of documentation; you can get criticism of your proposed solutions; you have all the time you want, can go at your own schedule, can ask about ideas you got while walking, etc.

It may make learning more personal, your own path, and you can ask if you are missing something important doing it that way.

And it works for most topics, for most ages, at your own pace. We are entering a Diamond Age.


I live in Europe so most of my customers don't have English as a first language. Any questions are generally in pretty broken English. Honestly, reading through and making sense of what they're trying to say is a real mental challenge at times. I use LLMs to reformat and structure their message/ticket, which I paste into my notes. The accuracy is pretty good - certainly as good as me, although I do proof-read. I then ask it to pull out the pertinent information and bullet point it. I can turn those bullets into action items for me to investigate or respond to. It saves me about 15 minutes on each case, meaning I save maybe an hour every day in translating.

The next is for writing up the bureaucratic nonsense my organisation asks me to do. Monthly status reports, bandwidth allocation, deal-win summaries and the like. I write down what I've done at the end of each day, so I just feed that into an LLM and ask it to summarise the bulk bullet points into prose. It saves me god knows how many hours refactoring documents. I modify the prose when it's done, to match my personal style and storytelling methodology, but it gets me the barebones draft, which is the most time-consuming part.

I love LLMs personally, and am embracing them primarily as a scribe and editor.


Sub question: anyone using local or at least self-hosted AI systems productively? What kind of hardware does that take? What’s the rough cost? Do you refine the model on custom data? What does that part look like? (much higher hardware requirements, I expect?) Which open source projects are aiding your efforts?

All I’ve done is try one of those pre-packaged image generation models on my M1 Air back when the first of those appeared.


I don't know how productive I'm being, but I'm using Llama 3 via Ollama on an M1 Mac. It's as good as Copilot and Gemini for most things, and I'll use those models if I need a little bit more. I prefer the privacy of the local models. I use it both through the command line and with the Open WebUI web interface. I use it for programming tips, learning, research, and writing. As a simple example, I wrote a (reusable) prompt for doing Chicago-style title capitalization a few minutes ago. Normally I'd have to search for a web-based tool and then wade through the crap. It's much quicker to ask a local LLM.
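For reference, that kind of reusable prompt through the ollama Python package might look roughly like this (the wording of the prompt is the part I keep around):

    # pip install ollama
    import ollama

    resp = ollama.chat(model="llama3", messages=[{
        "role": "user",
        "content": "Convert this to Chicago-style title capitalization. "
                   "Reply with the title only: "
                   "the rise and fall of local language models",
    }])
    print(resp["message"]["content"])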


Similar to another reply, my M2 16GB MacBook happily runs Mistral 7B, and it will do a decent job of most requests.

I use it where I’m dealing with internal code or data for work where I need to know it’s all staying on-device.

I’ve had it roughly translate portions of code to another language (normally one I’m very familiar with so I can vet it), create mermaid syntax flow charts for code where I need to visualise a process for non-technical consumption, and compare two very similar job descriptions to understand where a candidate might be better for one role or another.

I have on occasion also asked it to condense a wordy email going out to senior staff, but I find Mistral 7B is a bit all-or-nothing and will take my 5 paragraphs and shrink them to a couple of sentences that lose most of the meaning. Having to hand-hold it through each paragraph and then rewrite in my own style is never much of a time saver.


It saves me a lot of keystrokes as a coding copilot. Pretty good at detecting my usual patterns, and most of the time it can auto-complete a line with either something correct or something very close to correct (usually just a few small tweaks required). I write a lot of SQL and it's especially good at autocompleting big join clauses, which my carpals greatly appreciate.


I use it for coding, checking grammar, improving the UX of command-line applications, learning new programming languages, and a bunch of other things. My wife recently decided to go back to university to study translation, and Claude has been a great tool for her studies too.

Honestly, I can't remember my life before LLMs, and that is a bit scary, but my productivity and overall self-esteem have improved quite a bit since I started using them. Heck, I don't think I'd ever have gotten into Rust if it wasn't for the learning plan I got Claude to write for me.

You can find my prompts in the llm-prompts[1] repository. Any new use case I come up with ends up there; today I used it to name a photography project, for example, so the prompt will end up in there after dinner.

[1]: https://sr.ht/~jamesponddotco/llm-prompts/


I use GPT-4 for summarizing git diffs into commit messages (Llama 3 via Groq also works nicely).

Those then get used as part of my end of day report.

Example code: https://www.piotrgryko.com/posts/git-conventional-commit-gpt...
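The core of it is just a pipe from git into a chat API. A stripped-down sketch (the linked post has the real version; this one assumes the OpenAI client and a placeholder model):

    import subprocess
    from openai import OpenAI

    # staged changes only
    diff = subprocess.run(["git", "diff", "--cached"],
                          capture_output=True, text=True).stdout

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Write a conventional commit message for this diff:\n\n" + diff}],
    )
    print(resp.choices[0].message.content)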


End of day report? Like a diary entry of what you did for yourself or for your manager? Curious.


Among the top uses of LLMs I would place the possibility of obtaining information (or pointers to information) that search engines will not return because they "do not understand the question", or that they bury in excessive noise in the results...


I use it to help write proposals sometimes. I can prompt it to compare/contrast two technology providers, and that gets me started writing. It's never a perfect fit, but it helps get the creative/sales juices flowing.

I also use it for searches when I know the specific documentation I'm looking for has to compete with SEO spam. It's also pretty good at explaining code: I've pasted in snippets of code from languages with syntax I'm not familiar with and asked it to explain what's happening, and it does an OK job.

I also like to use it for recipes, like "create a recipe for chicken and rice that feeds 4", "make it spicier", etc.


I love using it to refresh my knowledge, to help me remember a technical term, or have it provide me an overview of a topic, comparing two alternatives for a function, things like this. I also used it to generate boilerplate code, especially in domains I was not familiar with. The code wasn't working "out of the box", but it was still helpful as a starting template, as I have the most trouble laying the foundations.


A general rule of thumb I follow is to ask: "Do I need to output fact or fiction?"

For fiction it's great. With facts you need to be much more careful, and make sure you validate.


I'm building my own AI chatbot, with multiple LLMs to switch between and choose from. I've also added enhanced multi-modal capability, so you can casually ask the AI to generate an image or just chat. It helps me learn about the LLM landscape and helps with my daily work/life. You can try it on my GitHub [1].

[1] https://github.com/vinhnx/vt.ai


I use it to unlock Russian books (literature, history) and articles (mainly old Soviet chess magazines). GPT-4 produces very nice first-pass translations.


Text correction, or generating full sentences from scraps.

Like, I write a super messy, barely coherent paragraph and ask an LLM to streamline the text and make it easy to understand, while avoiding the LLM's grandiose language. Obviously it needs some corrections, but it's way faster than doing it normally.

Also just to shorten a longer text, or even to reformat the text according to some direction, like converting daily notes to proper Zettelkasten ones.


None, so far. I had high hopes for copilot and JetBrains Assistant, but both of them are way more verbose than my usual coding style. Maybe that's just me, but I have my set of libraries that I use in C++ or Go and the result is that I rarely need to write much boilerplate. But I guess for that LLMs would work great, if only I could trust them as much as battle-tested libraries.


I have Raycast extensions for the GPT and Claude models. Whenever I have a question, the most powerful LLMs in the world are two keystrokes away.

This way is easier than going to the browser, then the ChatGPT tab, for example, then creating a new chat.

I found myself using LLMs more and getting more out of them because of this frictionless interaction. They've become more like actual "helpful assistants."


I'm trying it out to give me the correct artist name and song name for any given YouTube title. The titles of the music that I happen to like don't usually seem to be in a nice, regular format. Llama 3 does an admirable job. My plan is to pair this with yt-dlp and an MP3 tagger.
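A rough sketch of the extraction step via the ollama Python package (the JSON keys are my own choice; forcing JSON output makes it easy to feed a tagger):

    # pip install ollama
    import json, ollama

    title = "Artist Name - Song Title (Official Video) [4K]"  # example input
    resp = ollama.chat(model="llama3", format="json", messages=[{
        "role": "user",
        "content": "From this YouTube title, extract JSON with keys "
                   '"artist" and "song": ' + title,
    }])
    tags = json.loads(resp["message"]["content"])
    print(tags["artist"], "-", tags["song"])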


I wrote a terminal app using bubbletea that talks to OpenAI and saves conversations to a SQLite DB. I use it all the time to figure out which threads to pull on for a problem I'm unfamiliar with. It has proven to be one of the biggest returns on effort I've ever invested.


That sounds interesting, would you be interested in adding any more detail?


Sure, here's the repo https://github.com/collinvandyck/gpterm

You'll need to supply your OpenAI auth token, and after that you're good to go.


For me, it's when companies build a bot for their platform or app.

One that has been trained on all their data: documentation, GitHub issues, Jira and Zendesk issues, Slack messages, etc. It's a sort of customer service bot that can help you code.

That's been the real magic that I've experienced.


My English skills are still at an NP-complete level (I find it hard to compose my own sentences, but it's easy for me to verify whether they are good enough or not). So I have been repeatedly begging LLMs to fix my grammar while communicating online.


All sorts of low-brow copy-and-paste search-and-replace work.

Like: create a curl request from this tcpdump exchange. Or: take this slightly corrupted SQL query from the logs and print it properly.

Too amorphous and infrequent to properly automate, too labour-intensive to do by hand.


ChatGPT is great for making my emails sound more “human.” I’ve used it for coding help, electronics help, and teaching me math.


GitHub Copilot, and nothing else comes close tbh.


Have you tried https://cursor.sh/ at all? You still keep your GH copilot, but it has a better experience IMO.


Well... out of curiosity I ask who I am, and get variable answers, from "I'm a scientist" (I am not) to "I'm a politician" (I'm not) and so on. So I conclude they might evolve into some interesting pattern finders in the future, but so far they are damn expensive toys.

A less useful, but still sometimes useful, case: producing small snippets of code in some language I do not know. I can sometimes correct them into something useful, so that might be a mildly interesting use for a very limited, specific task.

In broader ML terms:

- OCR might become much better, which, while it's nonsense that it's still needed in 2024, remains a thing because many still live like it's 1954;

- automatic alerts on video surveillance and so on might be nice, though not super-trustable;

- better image manipulation tools (not only for producing deepfake porn) might become a thing; limited and not always working, but still very nice.


David Bombal interviewed a cloaked man who's using LLMs to get superpowers https://www.youtube.com/watch?v=vF-MQmVxnCs


Making a wordy email more concise; otherwise they're mostly toys.



