
I’ve been actively using it and it’s become my go-to in a lot of cases; Google is now more for verification when I smell something off, or when it doesn’t have up-to-date information. Here are some examples:

• reviewing contract changes and explaining hard-to-parse legalese

• advice on accounting/tax when billing international clients

• visa application

• boilerplate django code

• learnt all about SMTP relays and the requirements for keeping a good reputation for your IPs

• travel itinerary

• domain specific questions (which were 50/50 correct at best…)

• general troubleshooting

I’m using it as a second brain. I can quickly double-check some assumptions, get a clear overview of a given topic, and then get direction on where I need to delve deeper.

Anyone who still thinks that this is “just a statistical model” doesn’t get it. Sure, it’s not sentient or intelligent, but it sure as hell is making my life easier. I won’t be going back to the way I used to do things.

Edit: bullet formatting




100% this. It's also game-changing for learning a new language (of any type, not just programming), for the boring parts of software engineering (it's like a personal intern: sure, you have to check their work and the quality is all over the place, but still, dang, I love it), and even for a bit of therapy.

At worst, it's still the ultimate rubber duck.

(To be clear, I'm exclusively using gpt-4)


Learning a new language is a really cool use case, especially once it gets to the point where you can talk with it and it corrects your pronunciation, etc. Even just practising random conversation is a cool idea.


Can you elaborate on how you've used it for natural language learning?


I'm studying Chinese. If I run across a sentence whose grammar I can't parse, I paste it in and say, "Can you explain this sentence?" It will usually break it down, phrase by phrase, explaining what each thing means and how it fits within the whole. If it doesn't, you can ask "Can you break it down in more detail?" If there's a specific word you don't understand, you can say "What is the word X doing in this sentence?"

You have to watch it, because it does hallucinate (at least, GPT-3.5; I'm using the API and haven't been given access to GPT-4 yet). In one instance, it said that a series of characters meant X in Chinese, when in fact I happened to know it was just a transliteration of a different language, and not in Chinese at all. But it's still helpful enough to be worth using.

You can also ask it to give you example sentences with a specific word, and I've had some success asking it to generate sentences in which the word is used in the same way, or with the same grammar structure.


> and even a bit of therapy

I’d be very careful about relying on GPT for anything health-related; I’m not saying there can’t be benefits, just that the stakes are much higher.


Risky compared to what? Googling? Doing nothing? Waiting for a therapist? It’s extremely sensitive to human emotional dynamics, and it’s also strongly biased toward nonviolent communication, which is very hard for humans.


Agree, and for things like cognitive behavioral therapy, where the "rules" are well-known and well-represented in its training corpus, it's amazing.


Guys, you are really crazy. Please find a real therapist with experience.


In the context of mental health, telling people they are crazy and need a real therapist is a poor word choice, at the least.


Personally I wouldn't use GPT as a therapist, but I've seen enough bad or useless therapists in my time to say that it's worth a shot for most people, especially if you need help now.


As risky as any other health-related self-help, plus the added risk of unreliability.

When GPT proves itself to be reliably beneficial, then therapists will use it or recommend it themselves. Until then it’s an experimental tool at best.


I would say self-help is quite unreliable already; more unreliability doesn’t make it much worse.

The authority argument is pointless. The therapist must value the person’s wellbeing above their continued income for this to apply. In theory they should, but it would take a lot to convince me, and I would want to know what the incentive behind such a recommendation is. And to be clear, I’m not saying an LLM can be your therapist.


Can I just say that I actually became scared reading your comment? Personally I would never ask ChatGPT these questions, because for me they are hard to verify, and knowing how often AI likes to hallucinate... I just can't trust it.

You mentioned 50/50 correctness in domain questions. I can't be sure that other hard-to-verify answers don't follow the same percentage...


It IS dangerous. You must apply critical thinking to what’s in front of you. You can’t blindly believe what this thing generates! Much like heavy machinery, it’s a game changer when used correctly, and extremely damaging when used without appropriate care.


Quantum computing has a similar problem in that the error rate is high, as does untrained data entry. Once you know it's happening, you can put checks in place to help counter it.


I'm reluctant for the same reasons.

Google search might uncover BS too, but I'm already calibrated to expect it, and there are plenty of sources right alongside whatever I pulled the result from where I can go immediately get a second opinion.

With the LLMs, maybe they're spot on 95% of the time, but the other 5% is bullshit delivered in the same "voice", with the same apparent confidence, and without citations. That makes a specific claim both harder to verify (because there's no one canonical source for it) and more cognitive load to check (in that I have to context-switch to another tool to do so).

Babysitting a tool that's exceptionally good at creating plausible bullshit every now and then means a new way of working that I don't think I'm willing to adopt.


I'm excited about the potential for travel itineraries once extensions are available. What if I could tell it where I want to go, and it could just pick the best flights and accommodations for me, so I didn't have to spend any time searching airline or hotel websites? I'm curious to know more about how you're using it for travel itineraries now.


I have used it to build travel itineraries and was tempted to write a travel app around that, until I realized that some of the hotels and places it recommends either don't exist or no longer exist. It also confidently produces broken booking links to these fake hotels. I'm hoping it will get better with ChatGPT plugins.


The real-time applications are a game changer; I haven’t dabbled with those yet! I’ve been pasting things from emails and summarising them, then keeping the summaries in my notes app. Also planning out days when on holiday.


Is there a tutorial you followed to train your own model?



