Hacker News | rastapasta42's comments

And the boeing whistleblowers...Hopefully he avoids airplanes.


Design - just a basic WordPress theme

Scraper - GPT-4 wrote some of the code for the scraper itself, and deserves 50% of the credit for the rest of the Python pipeline; we also used GPT-4 to optimize our crappy prompts

Images, summaries, tags, titles - everything GPT-4 128k, with a small sprinkle of 3.5


Automated error-correction is an easy fix.

User: Did you provide the correct answer to the question I asked? Please look carefully for errors in your answer and let me know whether it accurately answers my question or not.

  Please respond exclusively in this JSON structure:
  {
     "error_found": bool,
     "error_description": string,
     "updated_answer": string

  }
Answer:

  {
     "error_found": true,
     "error_description": "Mistaken repetition of 'feathers' instead of comparing 'feathers' with 'steel'.",
     "updated_answer": "Both a kilogram of feathers and a kilogram of feathers weigh the same, as they both are one kilogram."
  }
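The self-review loop above can be automated. A minimal sketch of the receiving side (the helper name is made up here, and it assumes the model actually replies with the requested JSON structure - falling back to the original answer when it doesn't):

```python
import json

def apply_self_review(original_answer: str, review_reply: str) -> str:
    """Parse the model's JSON self-review and return the corrected answer.

    Falls back to the original answer if the reply is not the expected
    JSON structure or reports no error.
    """
    try:
        review = json.loads(review_reply)
    except json.JSONDecodeError:
        return original_answer
    if review.get("error_found") and review.get("updated_answer"):
        return review["updated_answer"]
    return original_answer
```

The fallback matters in practice: models sometimes wrap the JSON in prose, and silently keeping the original answer is safer than crashing the pipeline.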


Getting some HAL 9000 vibes here:

  Dave: How would you account for this discrepancy between you and the twin 9000?
  HAL: Well, I don’t think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
  Frank: Listen HAL. There has never been any instance at all of a computer error occurring in the 9000 series, has there?
  HAL: None whatsoever, Frank. The 9000 series has a perfect operational record.
  Frank: Well of course I know all the wonderful achievements of the 9000 series, but, uh, are you certain there has never been any case of even the most insignificant computer error?
  HAL: None whatsoever, Frank. Quite honestly, I wouldn’t worry myself about that.


This can be easily fixed by a validation step in GPT-4.

GPT-4 detects the error and auto-corrects itself, even if I don't point out the mistake explicitly, but only ask it to review its own previous answer:

> ME: did you provide me the correct answer to the question which I asked? Please look carefully for errors in your answer and let me know if it accurately answers my question

> GPT: I apologize for the error in my previous response. You asked about the weight comparison between a kilogram of feathers and a kilogram of feathers. Since both are a kilogram of feathers, they weigh exactly the same. There's no difference between the two in terms of weight.

> ME: what error did you make?

> GPT: Upon re-examining my response, I realize that I incorrectly mentioned "a kilogram of steel" instead of referring to "a kilogram of feathers" twice, as per your original question. The question was about comparing the weight of a kilogram of feathers with another kilogram of feathers, and in my response, I mistakenly brought in steel for comparison. The correct answer is that both quantities of feathers weigh the same, as they are both one kilogram.


Certainly, I can provide an insight into my thought process and why I made that error.

1. *Common Comparison:* The comparison between a kilogram of feathers and a kilogram of steel is a well-known thought experiment used to illustrate the difference between weight and volume. When you asked about the weight of a kilogram of feathers, my algorithms likely defaulted to this familiar comparison, anticipating a question about the difference between weight and volume using these two substances.

2. *Pattern Recognition:* As an AI, I process queries by recognizing patterns in language and context. The phrase "a kilogram of feathers" is frequently paired with "a kilogram of steel" in educational contexts and discussions about density and volume. This pattern recognition led to the incorrect assumption that your question was leading towards this common comparison.

3. *Error in Detail Processing:* In your question, you mentioned "a kilogram of feathers" twice. I should have processed this detail more accurately. Instead, my response was influenced by the common comparison I mentioned earlier, leading to the introduction of steel into the answer.

4. *Default to Common Contexts:* In cases where the query is brief or seems to lack specific details, I might default to the most common context in which similar phrases are used. This can lead to assumptions that might not align with the unique aspects of a particular question.

In summary, the error stemmed from a combination of relying on common comparisons and patterns in language, and not paying close enough attention to the specific details of your question. I appreciate your pointing out this error, as it helps in refining the accuracy of my responses.


How did he make the woman uneasy? Is a dirty neckbeard a crime now? :(


> Is dirty neckbeard a crime now?

Of course not. I'm obviously not talking about his appearance and could not care less about his hygiene.


Lolol, I have been running Compose in dev & prod for 7 years - still the best tool around.


That might work for whatever you are doing, but the truth is that root-ful containers are not appropriate for a lot of applications, and Docker as a layer over your container runtime is rough sometimes. I don't think Docker wants to continue to develop this anyway - they have had enough problems trying to be profitable, so instead it is time to focus on Docker Desktop and charging for Docker Hub image hosting.

I feel like we are kind of in this weird limbo where I know I need to move to a Podman-focused stack, but the tooling just isn't there yet. I guess that is what makes Quadlets interesting, but idk if a single tool will really emerge. There is also podman-compose floating around. I still feel like I should hold off on some stuff until a winner emerges. So at home I'm still on Docker with Compose files. Although I will be moving to Kubernetes for... reasons.


Docker already supports rootless; the issue is networking, which Podman also suffers from.


There's already a clear winner: Docker Compose. Podman is barely used in prod and podman-compose is barely used in dev. Docker also has BuildKit and rootless configs, so I'm not sure they are just letting it decay.


It will depend on the complexity of the stack, the number of environments you are deploying into, the number of devs on the team, etc. It can work, and if it is working for you in this manner, don't change what isn't broken, but in my experience, for sophisticated systems with many services, envs, and developers, it becomes unmanageable pretty quickly.


What are your thoughts on this update? https://chat.openai.com/share/6178453a-4a8a-430a-ac13-156b54...

Basically, reiterate the original instructions each time, describe the last 2 moves in detail, and provide a brief summary of all the previous moves. You can have much longer games this way - maybe this deserves to be a Python script.
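A rough sketch of what that Python script might assemble each turn (the function name and prompt wording are hypothetical; the structure follows the scheme described above):

```python
def build_turn_prompt(instructions: str, moves: list[str]) -> str:
    """Assemble one turn's prompt: the full instructions every time,
    the last two moves verbatim, and a summary line for the rest."""
    recent = moves[-2:]   # last 2 moves, kept in detail
    earlier = moves[:-2]  # everything before, compressed into a summary
    parts = [instructions]
    if earlier:
        parts.append("Summary of earlier moves: " + "; ".join(earlier))
    if recent:
        parts.append("Most recent moves, in detail:\n" + "\n".join(recent))
    return "\n\n".join(parts)
```

Since the summary grows by only a few tokens per move while the instructions stay constant, the prompt size grows much more slowly than replaying the full transcript would.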


I like it. Tried something similar early on, but decided it wasn't worth it because the ChatGPT context window is only enough to run for 10-20 turns anyway. But with Claude's 100k token context window, this should work great. Thank you.

I'm sure there are ways around this if you use the API and connect it to a MySQL database to allow users to "save" their spot... I'm not technical so my understanding of what's involved is hazy, but curious if people have ideas of how to do this simply. But for my current use case, I'm working with dozens/hundreds of college students so I need to make sure the whole thing is free. I've applied for a grant that could fund use of the API though, fingers crossed.


Perhaps a no-code UI might let you wire up the conversational memory being suggested here.

I haven’t used these but saw a post on them:

https://cobusgreyling.medium.com/flowise-for-langchain-b7c40...

https://cobusgreyling.medium.com/langflow-46563a8af323


Time to buy telecom stocks?


Ask dumb questions, receive dumb answers.

