
For my use case, definitely.

I have worked on AWS Connect (online call center) and Amazon Lex (the backing NLP engine) projects.

Before LLMs, it was a tedious process of trying to figure out all of the different “utterances” that people could say and the various languages you had to support. With LLMs, it’s just prompting:

https://chatgpt.com/share/678bab08-f3a0-8010-82e0-32cff9c0b4...

I used something like this with Amazon Bedrock and a Lambda hook for Amazon Lex. Of course it wasn’t booking a flight; it was another system.

The above is a simplified version. In the real world, I gave it a list of intents (book flights, reserve a room, rent a car) and the properties - “slots” - I needed for each intent.
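
For anyone curious about the shape of that wiring, here is a minimal Python sketch of a Lex code-hook Lambda that sends the caller’s utterance to Bedrock and parses the intent/slot JSON back out. It is an illustration, not the actual system described above: the intent list, slot names, and model ID are placeholders.

```
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# Placeholder prompt: the real one enumerated every intent and its required slots.
PROMPT_TEMPLATE = """You classify travel requests into one of these intents:
book_flight, reserve_room, rent_car.
Return JSON: {{"intent": "<intent>", "slots": {{...}}}} with null for anything missing.
User Input: <<{utterance}>>"""

def extract_intent_and_slots(utterance):
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user",
                      "content": PROMPT_TEMPLATE.format(utterance=utterance)}],
    })
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
        body=body,
    )
    payload = json.loads(resp["body"].read())
    return json.loads(payload["content"][0]["text"])  # the model replies with JSON text

def lambda_handler(event, context):
    # Lex V2 code hooks pass the caller's raw utterance as inputTranscript.
    result = extract_intent_and_slots(event.get("inputTranscript", ""))
    # Mapping `result` back into Lex's sessionState/dialogAction response is
    # omitted; it depends on how the bot's intents and slots are configured.
    return result
```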




Thank you for sharing an actual prompt thread. So much of the LLM debate is awash in bias, and it is very helpful when people share concrete examples of outputs.


The “Cordele, GA” example surprised me. I was expecting to get a value of “null” for the airport code, since I knew that city has a population of 12K and no airport within its metropolitan statistical area. It returned a nearby airport instead.

Having world knowledge is a godsend. I also just tried a prompt with “Alpharetta, GA”, a city north of Atlanta, and it returned ATL. A traditional NLP system could never do that without a lot more work.


That’s a great example, and I understand it was intentionally simple, but it highlights how LLMs need care in use. Not that this example is very related to NLP:

My prompt: `<<I want a flight from portland to cuba after easter>>`

The response:

```
{
  "origin": ["PDX"],
  "destination": ["HAV"],
  "date": "2025-04-01",
  "departure_time": null,
  "preferences": null
}
```

Of course I meant Portland, Maine (PWM); there is more than one airport option in Cuba besides HAV; and it got the date wrong, since Easter is April 20 this year.


If the business stakeholders came out with that scenario, I would modify the prompt like this. You would know the user’s address if they had an account.

https://chatgpt.com/share/678c1708-639c-8010-a6be-9ce1055703...
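
Presumably the change boils down to folding the account’s saved address into the prompt. A minimal sketch of that idea, where get_home_city() is a hypothetical stand-in for whatever account lookup actually exists:

```
BASE_PROMPT = "you are a chatbot that helps users book flights. ..."  # full prompt quoted below

def get_home_city(user_id):
    # Hypothetical stand-in for a real account/profile lookup.
    return {"user-123": "Alpharetta, GA"}.get(user_id)

def build_prompt(utterance, user_id):
    home_city = get_home_city(user_id)
    context_line = ""
    if home_city:
        context_line = (f"The user's home address is in {home_city}. "
                        "When the origin city is ambiguous, prefer airports near that address.")
    return f"{BASE_PROMPT}\n{context_line}\nUser Input: <<{utterance}>>"
```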


OK, but that only fixed one of the three issues.


The first one is easy: you could give it a list of holidays and their dates. For the rest, you would just ask the user to confirm the information and say “is this correct?” If they say “No”, ask them which part isn’t correct and let them fix it.
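
A quick sketch of both ideas, assuming a precomputed holiday table and a plain read-back confirmation (the dates and field names are illustrative):

```
from datetime import date

# Sketch of the "give it a list of holidays" idea: compute the dates up front
# and append them to the prompt so the model does not have to guess.
HOLIDAYS_2025 = {
    "Easter": date(2025, 4, 20),
    "Thanksgiving": date(2025, 11, 27),
    "Christmas": date(2025, 12, 25),
}

def holiday_context(today: date) -> str:
    lines = [f"{name}: {d.isoformat()}" for name, d in HOLIDAYS_2025.items()]
    return f"Today's date is {today.isoformat()}. Holiday dates:\n" + "\n".join(lines)

def confirmation_message(slots: dict) -> str:
    # Read the extracted slots back to the caller so they can correct anything
    # the model got wrong, instead of trying to prompt away every edge case.
    return (f"You want to fly from {slots['origin']} to {slots['destination']} "
            f"on {slots['date']}. Is this correct?")
```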

I would definitely assume someone wanted to leave from an airport close by if they didn’t say anything.

You don’t want the prompt to grow too much. But you do have analytics that you can use to improve your prompt.

In the case of Connect, you define your logic using a GUI flowchart builder called a contact flow.

BTW: with my new prompt, it did assume the correct airport “<<I want to go to Cuba after Easter>>”


Sure, all the problems are “easy” once you identify them, as with most products. But the majority of Show HN posts here relying on LLMs that I see don’t account for simple things like my example. Flight finders in particular have been pretty bad.

>BTW: with my new prompt, it did assume the correct airport “<<I want to go to Cuba after Easter>>”

Not really. It chose the airport you basically put in the prompt. But I don’t live in ME; I live closer to PDX. And it didn’t suggest the multiple other Cuban airports. So you’ll end up with a lot of guiding rules.


If you said “Portland”, a human would first assume you meant PDX, unless they looked up your address, in which case they would assume Maine.

Just like if I said I wanted to fly to Albany, they would think I meant New York and not my parents’ city in south GA (ABY), which only has three commercial flights a day.

Even with a human agent, you ask for confirmation.

Also, I ask to speak to people on the ground - in this case it would be the CSRs - to try to break it.

That’s another reason I think “side projects” are useless and carry no merit on resumes. I want candidates to talk about real-world implementations.


How about the costs?


We measure savings in terms of call deflections. Clients we work with say that each time a customer talks to an agent it costs $2-$5. That’s not even taking into account call abandonments.


My baseline when advising people is that if anyone you pay needs to read the output, or you are directly replacing any kind of work, then even frontier LLM inference costs are irrelevant. Of course you need to work out whether that’s truly the case, but people worry about cost in places where it just doesn’t matter. If it costs $2 when a customer gets to an agent, each case that’s avoided could pay for around a million words read or generated. That’s expensive compared to most API calls but irrelevant when counting human costs.
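
Rough math behind that figure, using an assumed blended token price rather than any provider’s actual rate:

```
# Back-of-the-envelope: how many words of LLM inference one deflected call buys.
# The token price below is an assumption, not anyone's quoted rate.
cost_per_call = 2.00                 # low end of the $2-$5 range above, USD
price_per_million_tokens = 1.50      # assumed blended input/output price, USD
words_per_token = 0.75               # common rule of thumb

tokens = cost_per_call / price_per_million_tokens * 1_000_000
print(f"~{tokens * words_per_token:,.0f} words per deflected call")  # ~1,000,000
```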


The link is a 404, sadly. What did it say before?


The link works for me, even in incognito mode.

The prompt:

you are a chatbot that helps users book flights. Please extract the origin city, destination city, travel date, and any additional preferences (e.g., time of day, class of service). If any of the details are missing, make the value “null”. If the date is relative (e.g., "tomorrow", "next week"), convert it to a specific date.

User Input: "<User's Query>"

Output (JSON format): { "origin": list of airport codes "destination": list of airport codes, "date": "<Extracted Date>", "departure_time": "<Extracted Departure Time (if applicable)>", "preferences": "<Any additional preferences like class of service (optional)>" }

The users request will be surrounded by <<>>

Always return JSON with any missing properties having a value of null. Always return English. Return a list of airport codes for the city. For instance New York has two airports give both

Always return responses in English
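
On the consuming side, a small sketch of parsing that JSON and spotting which required slots still need to be asked for (the field names follow the prompt above):

```
import json

# Minimal sketch of consuming the prompt's output: parse the JSON, tolerate
# nulls, and flag any required slot that still has to be asked for.
REQUIRED = ("origin", "destination", "date")

def parse_booking(raw: str):
    slots = json.loads(raw)
    missing = [k for k in REQUIRED if not slots.get(k)]  # null or empty
    return slots, missing

raw = ('{"origin": ["PWM"], "destination": null, "date": "2025-04-21", '
       '"departure_time": null, "preferences": null}')
slots, missing = parse_booking(raw)
if missing:
    print("Still need:", ", ".join(missing))  # -> Still need: destination
```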



