It's very hard to separate the model's tendency to flatter and appease from a sincere attempt to reason. It very much picks up on what you want it to say, if it's at all reasonable, and will praise you for suggesting it. (There are lots of things it won't agree with, but they're usually well outside the Overton window.)
I've had some success asking it to poke holes / find weaknesses in an argument it just agreed with. Basically, you kind of need it to play both prosecution and defense in order to feel confident in the resulting conclusion.
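For the curious, here's roughly what that looks like through the API. A minimal sketch of the prosecution-and-defense pattern, assuming the `openai` Python package; the model name, claim, and prompts are all placeholders, not anything from the article:

```python
# Minimal sketch of the "prosecution and defense" pattern via the API.
# Assumes the `openai` Python package (v1+); "gpt-4o" and the prompts
# below are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()
claim = "Remote work increases overall team productivity."

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# First pass: a confidently stated claim usually gets agreement.
history = [{"role": "user", "content": f"I believe this is true: {claim} Am I right?"}]
defense = ask(history)

# Second pass: explicitly ask it to attack the answer it just gave.
history += [
    {"role": "assistant", "content": defense},
    {"role": "user", "content": "Now play prosecutor: poke holes in that answer "
                                "and list the strongest objections to the claim."},
]
prosecution = ask(history)

print(defense, prosecution, sep="\n--- other side ---\n")
```

Reading the two answers side by side makes the sycophancy much easier to spot than either answer alone.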
Yes, it's very pleasant to talk to. The only great conversation I had with it involved playing both sides, teasing apart dimensions, and asking for concrete numbers (on a 1-10 scale). That was the closest I've ever felt to being an Asimov robot-psychiatrist x Minority Report pod-dweller.
Lawyer finds that ChatGPT waffles and is easy to persuade because it holds no strict views on anything, and takes this as confirmation that his persuasion skills are magical.
There's no skepticism in the post: no attempt at, for example, making the opposite argument in his "But <my opinion, X> is true, right?" conversational style and seeing whether ChatGPT just as readily tells him that he "is correct", that it "appreciates the clarification", and that he is "raising an astute point".
Lessig is famous, but lawyers -- especially famous ones -- are not known for their resistance to intellectual flattery.
I think it is fair to describe this lack of skepticism as a fatal flaw in the blog post (and a potentially significant flaw in its author). The blog post was supposed to be about how AIs can be convinced by reason, but the author never actually assessed, or showed curiosity about, whether reason is even a requirement for convincing them. That's not a small blind spot; the line in the final paragraph, "But I'll confess that this persuadable intelligence has seduced me," seems to be confessing more than it intended to.
This article is about a human changing his mind multiple times over the course of his life, and then seeing a machine do it too. Sadly, I doubt that important idea will get a fair hearing on the Internet.
When AI works well enough to have a virtually unlimited context window and doesn't hallucinate (or at least not much), wouldn't it be interesting to feed it the entire corpus of law for a given territory and have it identify contradictions between laws?
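Until context windows stretch that far, you could approximate it with pairwise checks. A hand-wavy sketch, assuming the `openai` package, with toy statutes and a placeholder model name; note the O(N^2) cost, which is exactly why the huge context window (or a retrieval step to pre-filter related laws) would matter:

```python
# Hand-wavy sketch: pairwise contradiction checks over a corpus of statutes.
# With N laws this is O(N^2) API calls; the statute texts and model name
# are placeholders for illustration only.
from itertools import combinations
from openai import OpenAI

client = OpenAI()

# Placeholder corpus; in reality these would be full statute texts.
laws = {
    "Statute A": "Dogs must be leashed in all public parks.",
    "Statute B": "Service animals are exempt from leash requirements.",
    "Statute C": "All animals must be leashed in public parks, no exceptions.",
}

for (name_a, text_a), (name_b, text_b) in combinations(laws.items(), 2):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                "Do these two laws contradict each other? "
                "Answer YES or NO, then explain briefly.\n\n"
                f"{name_a}: {text_a}\n{name_b}: {text_b}"
            ),
        }],
    )
    print(name_a, "vs", name_b, "->", resp.choices[0].message.content)
```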
> what a good lawyer does, that makes this system work. It is not the bluffing, or the outrage, or the strategies and tactics. It is something much simpler than that. What a good lawyer does is tell a story that persuades. Not by hiding the truth or exciting the emotion, but using reason, through a story, to persuade.
When it works, it does something to the people who experience this persuasion.
The Weimar Republic (and Hitler's rise to power) has to be one of the most well-documented periods in German history, yet ChatGPT fails to spot obvious nonsense and happily hallucinates.
ChatGPT is a useful tool, but you have to be careful about the language you use with it. If you use strong language, it will almost always agree, regardless of whether you're making sense. However, if you phrase your argument more weakly and with less conviction, it will be much more likely to point out mistakes.
> However, if you phrase your argument more weakly and with less conviction, it will be much more likely to point out mistakes.
I've found this too.
I often use ChatGPT to explore unfamiliar topics, and I think the custom instructions help a bit.
Something like: "I try not to bias your responses by loading my questions or putting strong opinions in them, but please do contradict or correct me if I'm on the wrong track, following bad practice, or making faulty assumptions. Please do have your own opinions."
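In API terms that's just a system message, plus keeping the question itself neutral rather than opinionated. A rough sketch, again assuming the `openai` package and a placeholder model name:

```python
# Rough sketch: the custom instruction as a system message, with the
# user question kept deliberately neutral rather than loaded.
from openai import OpenAI

client = OpenAI()

instructions = (
    "I try not to bias your responses by loading my questions or putting "
    "strong opinions in them, but please do contradict or correct me if I'm "
    "on the wrong track, following bad practice, or making faulty "
    "assumptions. Please do have your own opinions."
)

# Neutral phrasing ("what are the trade-offs") instead of a loaded
# "X is better than Y, right?" tends to get less reflexive agreement.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "What are the trade-offs between SQL and NoSQL databases?"},
    ],
)
print(resp.choices[0].message.content)
```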