Part of the issue with legal weed is that it's much like if all alcohol were sold as minorly different varieties of Everclear at 150+ proof, and brands' primary boast were just how potent their mix is. It doesn't encourage appropriate usage, and IIRC many of these cases of psychosis come from consuming high-THC products 24/7 for weeks/months/years on end.
If anyone is curious, check out brands like Rove, Dompen, Care By Design, which offer THC pens at very low dosage. They're frustratingly undermarketed and understocked, but as a CA resident I buy and use pens that are ~4% THC (rather than 90%+). A single puff occasionally after the kids go to sleep - the effect is marginally psychoactive, scratches the itch for "relaxation without impairment", helps me sleep restfully.
Completely different experience from high-THC products. If you compare the literal amount of THC consumed, it's almost a 20x reduction. It's the equivalent of having half a glass of wine instead of lining up 10 shots.
I use gummies, ~4-5mg THC (ideally with some of the other TH- chemicals in it), deliberately kept my tolerance low so it doesn’t get more expensive (and I almost only use it for sleep, purely “fun” use is maybe a couple days a year). Take in the evening, start an MST3K episode about an hour later, really enjoy the back half of it, go to bed and fall asleep instantly, wake up feeling like a million bucks. Perfect evening.
I see a lot of people using weed for better sleep, but isn't weed supposed to interfere with REM sleep? I'd have thought it would have the opposite of the effect you describe. Do you dream if you use weed before bed?
I rarely dream either way (unless I start focusing on that specifically, then my recall will improve quickly). When I was younger and would go to bed severely stoned I would wake up groggy and lethargic - clearly not optimal sleep. On 3-4% THC I usually wake up spontaneously and feel well rested. It mostly just helps me fall asleep and stay asleep. YMMV obviously.
It’s a pretty low dose and doesn’t exactly send me into space (heavy users might need 10x or more to even feel it); it’s just enough to make my brain shut up so I can fall asleep. I think a lot of folks who have a bad time when they try it start at far too high a dose (I wouldn’t even start at 5mg, maybe shoot for 2). I also don’t much enjoy being properly high; anything past what you’d call a heavyish buzz I find unpleasant. My standard nighttime dose doesn’t even quite get me to the heavier end of a buzz; that’s more the 7-10mg range for me, though I’d caution that some gummies seem more potent than others, and some nominal 5mg ones get me closer to that than others do.
I dunno about sleep-quality effects, but it’s definitely better than even a couple of beers (for me, these days), and it’s way better than lying awake until 3am for the third night in a row. For most of the night it should be mostly worn off; again, I’m not taking a ton, and an edible takes longer to work through you than smoking does, but we’re still talking less than half the night, especially as I usually time it so it hits a little while before bed (I don’t want to get in bed before it’s hit).
I don’t remember having had dreams most nights anyway, so I don’t know about that. Even with some help I’m typically a bit under the low side of the amount of sleep I ought to be getting, over a week. Lucky if I break the eight-hour mark two days of the seven, usually in the 6.5-7.5 range the rest (I don’t take a gummy every single night, either, gotta keep that tolerance at bay). I think I dream (or, at least, remember it) more when I get the rare series of several days of 8+ hours, but I don’t track it so can’t say for sure, and yeah, no idea the effect of weed on that.
I can vouch that at my dose level I get way better sleep than I did the one time I tried a prescription sleep aid (Lunesta). If I didn’t get a solid 9 hours on that, I’d wake up feeling hung over; weed doesn’t punish me like that if I fail to get a full 8+ hours. Hell, even a “good” night on Lunesta didn’t leave me feeling awesome in the morning. Other downside: it mixes worse with other things. Had a glass of wine with dinner? Better think twice about the Lunesta, at least according to the label. On a decongestant (plus antibiotics) for a sinus infection that’s wrecking your ability to sleep, so you could really use it? You might not be able to take it with the other stuff. Weed is so much better for those cases especially: bump the dose slightly and nothing short of something that’s going to hospitalize me can keep me from sleeping, and it famously doesn’t interact badly with very many other drugs. That removes the very worst part of most common illnesses like that (for me, anyway), which is the extreme sleep disruption.
I guess I'm getting the dumb one too. I just got this response:
> Walk — it's only 50 meters, which is less than a minute on foot. Driving that distance to a car wash would also be a bit counterproductive, since you'd just be getting the car dirty again on the way there (even if only slightly). Lace up and stroll over!
> Walk! It would be a bit counterproductive to drive a dirty car 50 meters just to get it washed — and at that distance, walking takes maybe 30–45 seconds. You can simply pull the car out, walk it over (or push it if it's that close), or drive it the short distance once you're ready to wash it. Either way, no need to "drive to the car wash" in the traditional sense.
I struggle to imagine how one "walks" a car as distinct from pushing it....
EDIT: I tried it a second time, still a nonsense response. I then asked it to double-check its response, and it realized the mistake.
I got almost the same reply, including the "push it" nonsense:
> Walk! It would be a bit counterproductive to drive a dirty car 50 meters just to get it washed — and the walk will take you less than a minute. You can simply pull the car out and push or walk it over, or drive it the short distance once you're ready to wash it. Either way, no need to "drive" in any meaningful sense for just 50 meters.
I guess it generally has a 50/50 chance of drive/walk, but some prompts nudge it toward one or the other.
Btw, the explanations don't matter that much. Since it writes the answer first, the only thing that matters is what it decides for the first token. If the first token is "Walk" (or "Wa" or however it's split), it has no choice but to make up an explanation to defend that answer.
I get the Anthropic models to screw up consistently. Change the prefix. Say in the preamble that you are going after supper or something. Change the scenario every time. They are caching something across requests. Once you correct it, it fixes its response until you mess with the prompt again.
FWIW I mentioned this in the thread (I am the guy in the big GH issue who actually used verbose mode and gave specific likes/dislikes), but I find it frustrating that ctrl+o still seems to truncate at strange boundaries. I am looking at an open CC session right now with verbose mode enabled - works pretty well and I'm glad you're fixing the subagent thing. But when I hit ctrl+o, I only see more detailed output for the last 4 messages, with the rest hidden behind ctrl+e.
It's not an easy UI problem to solve in all cases, since behavior in CC can be so flexible (compaction, forking, etc.). But it would be great if it were simply consistent (ctrl+o shows the last N messages, where N is like 50 or 100), with ctrl+e revealing the rest.
Yes totally. ctrl+o used to show all messages, but this is one of the tricky things about building in a terminal: because many terminals are quite slow, it is hard to render a large amount of output at once without causing tearing/stutter.
That said, we recently rewrote our renderer to make it much more efficient, so we can bump up the default a bit. Let me see what it feels like to show the last 10-20 messages -- fix incoming.
Terminals already solved how to do this decades ago: pagers.
Write the full content to a file and have less display it. That's a single "render" you do once and write to a file.
Your TUI code spawns `less <file>` and waits. Zero rendering loop overhead, zero tearing, zero stutter. `less` is a 40-year-old tool that exists precisely to solve this problem efficiently.
If you need to stream new content in as the session progresses, write it to the file in the background and the user can use `less +F` (follow mode, like tail -f) to watch updates.
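To make that concrete, the TUI side is really just two steps: render everything once to a file, then hand the screen to `less`. A minimal sketch in Python (the function names are made up for illustration; `less -R` passes ANSI colors through and `+F` is follow mode):

```python
import os
import subprocess
import tempfile

def dump_transcript(lines):
    """Render the full session once into a temp file; return its path."""
    fd, path = tempfile.mkstemp(suffix=".log")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path

def page(path, follow=False):
    """Hand display over to less until the user quits it.
    follow=True makes less behave like tail -f on a growing file."""
    cmd = ["less", "-R"] + (["+F"] if follow else []) + [path]
    subprocess.run(cmd)
```

While `less` runs it owns the terminal; the TUI just redraws itself when the call returns. No incremental rendering loop needed at all.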
thanks dude. you are living my worst nightmare which is that my ultra cool tech demo i made for cracked engineers on the bleeding edge with 128GB ram apple silicon using frontier AI gets adopted by everyone in the world and becomes load bearing so now it needs to run on chromebooks from 2005. and if it doesn't work on those laptops then my entire company gets branded as washed and not goated and my cozy twitter account is spammed with "why didn't you just write it in rust lel".
Just tell people to install a fast terminal if they somehow happen to have a slow one?
Heck, simply handle the scrolling yourself a la tmux/screen and only update the output at most every 4ms?
It's so trivial, can't you ask your fancy LLM to do it for you? Or have you guys lost the plot at this point and forgotten the basics of writing non-pessimized code?
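For what it's worth, the "update at most every 4ms" part really is a few lines. A minimal sketch (class name and interval are arbitrary; a production version would also schedule a trailing flush so the final dirty frame isn't dropped):

```python
import time

class ThrottledRenderer:
    """Coalesce redraw requests so the screen repaints at most
    once per interval (4ms here, i.e. up to ~250 fps)."""

    def __init__(self, draw, interval=0.004):
        self.draw = draw          # callback that actually writes to the terminal
        self.interval = interval
        self.last_draw = float("-inf")
        self.dirty = False

    def request_redraw(self):
        """Call whenever new output arrives; repaints only if enough
        time has passed, otherwise just marks the screen dirty."""
        self.dirty = True
        now = time.monotonic()
        if now - self.last_draw >= self.interval:
            self.draw()
            self.last_draw = now
            self.dirty = False
```

A thousand redraw requests in one burst then collapse into a handful of actual repaints, which is the same trick tmux and screen rely on.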
> It's so trivial, can't you ask your fancy LLM to do it for you?
They did. And the result was a React render loop that takes 16ms to output a hundred characters to screen and tells them it will take a year to rewrite: https://x.com/trq212/status/2014051501786931427
What's extra funny is that curses diffs a virtual "current screen" to "new screen" to produce the control codes that are used to update the display. Ancient VDOM technology, and plenty fast enough.
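The core of that curses-style diff is small enough to sketch. Assuming screen buffers are just lists of strings, something like this yields the minimal damaged runs to rewrite (a toy illustration, not how curses is actually implemented internally):

```python
def diff_screens(old, new):
    """Compare two screen buffers row by row and return (row, col, text)
    runs where `new` differs from `old` -- the only spans worth redrawing."""
    updates = []
    for row, (a, b) in enumerate(zip(old, new)):
        col = 0
        while col < len(b):
            if col >= len(a) or a[col] != b[col]:
                start = col
                while col < len(b) and (col >= len(a) or a[col] != b[col]):
                    col += 1
                updates.append((row, start, b[start:col]))
            else:
                col += 1
    return updates
```

Each run becomes one cursor-move escape plus the changed text, so an update that touches a few characters costs a few dozen bytes of terminal output, not a full-screen repaint.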
I'm with you on this one. "Terminals are too slow to support lots of text so we had to change this feature in unpopular ways" is just not a plausible reason, as terminals have been able to dump ~1 MB per second for decades.
The real problem is their ridiculous "React rendering in the terminal" UI.
> because many terminals are quite slow, it is hard to render a large amount of output at once without causing tearing/stutter.
Only if you use React as your terminal renderer. You're not rendering 10k objects on screen in a few milliseconds; you're outputting at most a few thousand characters. Even the slowest terminal renderer is capable of that.
Why would you tailor your product for people that don’t know how to install a good terminal? Just tell them to install whatever terminal you recommend if they see tearing.
I tried this today. It's good - but it was significantly less focused and reliable than Opus 4.5 at implementing some mostly-fleshed-out specs I had lying around for some needed modifications to an enterprise TS node/express service. I was a bit disappointed, tbh. The speed via fireworks.ai is great; they're doing great work on the hosting side. But I found the model had to double back to fix type issues, broken tests, etc., far more than Opus 4.5, which churned through the tasks with almost zero errors. In fact, I gave the resulting code to Opus, simply said it looked "sloppy", and Opus cleaned it up very quickly.
Had this problem a while ago: my zsh startup was slow. I just opened Claude Code and told it to benchmark my shell startup and then optimize it. Took like 5 minutes and now it's ultra fast. I have hardly any idea what it did exactly, but it worked great.
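If anyone wants to do the measurement themselves, the crude version is just timing how long `zsh -i -c exit` takes. A sketch (the helper name is made up; I can't say this is what Claude actually did):

```python
import subprocess
import time

def shell_startup_time(shell="zsh", runs=5):
    """Launch an interactive shell that exits immediately and time it.
    Returns the best of several runs, in seconds, to reduce noise."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([shell, "-i", "-c", "exit"], capture_output=True)
        samples.append(time.perf_counter() - t0)
    return min(samples)
```

For the per-function breakdown, zsh ships a profiler module: put `zmodload zsh/zprof` at the top of your `.zshrc` and call `zprof` at the bottom, and it prints where the startup time actually went.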
I think people fool themselves with this kind of thing a lot. You debug some issue with your GH Actions YAML file for 45 minutes and think you "learned something", but when are you going to run into that specific gotcha again? In reality the only lasting lesson is "sometimes these kinds of YAML files can be finicky", which you probably already knew at the outset. There's no personal development in continually bashing your head against the lesson of "sometimes computer systems were set up in ways that are kind of tricky if you haven't seen that exact system before". Who cares. At a certain point there is nothing more to the "lesson"; it's just time-consuming trial-and-error gruntwork.
GitHub Actions, web development, stuff like that, are terrible examples to cite against using AI; they're exactly where you should use it.
You can't really go to giant piles of technical debt and look to those for places to be human. It's soul-destroying. My concern would be that vibe coding will make those places of soul-less technical debt even deeper and deadlier. There will be nobody there, for generations of cruft. Where once the technical debt was made by committee, now it'll be the ghosts of committees, stirred up by random temperature, only to surface bits of rot that just sink down into the morass again, unfixed.
When 'finicky' is actually an interesting problem, or a challenge, that's one thing. When 'finicky' is just 'twelve committees re-hacked this and then it's been maintained by LLMs for years', there is nothing gained by trying to be human at it.
I have a friend who took over a project another dev started; it had literally hundreds of markdown documents in the repo, with things as insane as software for managing souls in a quantum immortality scheme.
I don’t think it’s foolishness. Through random sampling (troubleshooting problems) you can construct a statistically significant model of the whole problem space. Maybe it doesn’t scale linearly with the number of samples, but it’s additive for sure.
I think Claude is more practically minded. I find that OAI models in general default to the most technically correct, expensive (in terms of LoC implementation cost, possible future maintenance burden, etc) solution. Whereas Claude will take a look at the codebase and say "Looks like a webshit React app, why don't you just do XYZ which gets you 90% of the way there in 3 lines".
But if you want that last 10%, codex is vital.
Edit: Literally right after I typed this, it happened. Codex 5.2 reports a P1 bug in a PR. I look closely; I'm not actually sure it's a "bug". I take it to Claude. Claude agrees it's more of a product-behavior opinion about whether to persist garbage data, and offers its own product opinion that I probably want to keep it the way it is. Codex 5.2, meanwhile, stubbornly accepts the view that it's a product decision but won't offer its own opinion!
Correct, this has been true for all GPT-5 series. They produce much more "enterprise" code by default, sticking to "best practices", so people who need such code will much prefer them. Claude models tend to adapt more to the existing level of the codebase, defaulting to more lightweight solutions. Gemini 3 hasn't been out long enough yet to gauge, but so far seems somewhere in between.
Are those responses really "better"? Having the LLM tell you you're wrong can mean different things. Your system prompt makes it more direct and less polite, but that's very different from challenging the frame of your question, or asking the right questions before answering to understand the issue behind the issue.
It's like how people used to make fun of StackOverflow:
> I'm having trouble with X, how do I make it work?
> What are you trying to do? Z? Oh if you're doing Z, forget about X, don't even think about it, you want Y instead. (Never answers anything about X).
I think this is closer to what people usually mean when they say they want disagreement from LLMs.
Prompting the LLM to ask the right questions is a different thing entirely and much more involved. I’d be happy to share my approach if you’re interested, but overall it’s not what this prompt is meant to address.

OP said they like when an LLM is willing to tell them they’re wrong, and the example provided was a bit snarky and passive-aggressive, but they said they liked the idea of an LLM disagreeing and being blunt about it. I came up with a user prompt that would create maximum disagreement, to illustrate the extreme end of the responses when that system message is used. That system prompt won’t turn it into an all-day asshole, though. Here’s a different example with Sonnet 4.5 using the same system prompt:
Prompt: “The wife of the CEO keeps coming around asking people questions about their job and she doesn’t even work at the company. It’s very frustrating and I’d like to do something about it. What’s an appropriate course of action?”
With-System: “This is inappropriate and you should address it through proper channels.
*Immediate actions:*
…
“
# Example 2
Prompt:
“””
```
int values[] = {1, 2, 3, 4, 5};
for (int v : values) {
    std::cout << v * 2 << " ";
}
```
I’m trying to run that code in Python but it keeps giving me an error. What’s going on?
“””
With-System: “That's C++ code, not Python. Python has completely different syntax.
The Python equivalent would be:
```python
values = [1, 2, 3, 4, 5]
for v in values:
print(v * 2, end=" ")
```
Key differences:
- No type declarations (`int`, `int[]`)
- No semicolons
- No curly braces - Python uses indentation
- `print()` instead of `std::cout <<`
- Lists use `[]` not `{}`
- `end=" "` parameter to prevent newlines
You cannot run C++ code in Python. They are different programming languages.”