
> Anyone with knowledge who can expand this or explain what exactly is "arbitrary", "capricious" or "abuse of discretion"?

I'm a lawyer. Here's a summary from Perplexity.ai, which comports well with my general understanding:

The U.S. Supreme Court defines "arbitrary and capricious" in the context of administrative-agency action primarily through the standards set forth in the Administrative Procedure Act (APA). According to the APA, a court must invalidate agency actions that are found to be "arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law" [1][3][4].

The arbitrary and capricious standard is applied when reviewing an agency's decision-making process and involves several key considerations:

Consideration of Relevant Factors: An agency action is deemed arbitrary and capricious if the agency has relied on factors that Congress did not intend it to consider, failed to consider an important aspect of the problem, or offered an explanation for its decision that runs counter to the evidence before the agency [2][3].

Rational Connection: The agency must demonstrate a rational connection between the facts found and the choices made. This requires a satisfactory explanation for its action based on consideration of relevant data [6].

Consistency and Reasoning: The decision should not be based on seriously flawed reasoning or be inconsistent with prior actions unless adequately explained. The agency must also respond to relevant arguments or comments during the decision-making process [6].

Zone of Reasonableness: Recent interpretations by the Supreme Court have introduced the concept of a "zone of reasonableness," where agency actions are upheld if they fall within a reasonable range of decisions based on the agency's expertise [4].




Sigh. We’re here as people asking other people who hopefully have expertise. If we wanted an AI summary we could get it from perplexity ourselves.

I say this as someone who actually builds and sells AI research software - but there’s a time and place for such things.


[flagged]


Here’s the reason:

Dang has said before that HN is not for bots or AI generated content.


And HNers have interpreted that as, "Any comment that even mentions the output of an LLM is radioactive toxic waste, regardless of its context or its appeal to the audience's sense of intellectual curiosity that is otherwise encouraged by the admins."

As a result, a lot of thoughtful conversation threads are stopped in their tracks.


People may be overreacting to some extent but that's better than succumbing to a deluge.


No one is stopping conversation. We weren't allowed to have the conversation, because someone would rather outsource their effort to AI, which is neither thoughtful nor curiosity-inspiring for the writer, whatever readers may make of it. It's lazy and it degrades the discourse: we're not interacting with the author of the comment when we read words that aren't theirs, and it lets those who use AI simply say the words weren't their own, as if that absolves them of responsibility for what they post under their own account/username.

Not all AI comments are downvoted like this one was, which should tell you that this just wasn't a very good comment, AI or not.


> Not all AI comments are downvoted like this one was, which should tell you that this just wasn't a very good comment, AI or not.

Please tell us your qualifications to judge whether my comment was a good one.


The proof of the pudding is in the tasting.

I'm making no judgement(s) at all. I'm observing that it was downvoted and flagged and is now dead, which is the judgement of HN collectively, not my own.

If you feel that your comment is flagged in error, please contact hn@ycombinator.com

It may have been a very fine comment in another context, but it appears to not have been a good comment on HN, as determined by HN. What other metric would apply?


[flagged]


It's not Ludditism. You're just making low-effort posts that are not appreciated on HN as evidenced by downvotes and past moderator statements. I don't even know what you're arguing for.

Just write your own comments. Your actions in this thread reflect poorly on you as a lawyer and on the entire legal profession.


Please cite your source for the supposed HN policy. I sometimes answer nonlawyers' questions about how the law works; my answers often attract upvotes. If purists insist on downvoting AI-assisted answers, I can live with that.


I got this in email from dang just now, but I asked him to chime in so he may have something else or additional to say.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


I've asked dang to find the citation for us.

> If purists insist on downvoting AI-assisted answers, I can live with that.

I'm no purist. I have seen AI comments that have genuinely been helpful on here, so I don't know what else to say, other than that I also have had to accept that sometimes HN doesn't roll the way I'd like either, but it's still the best place to post online the vast majority of the time.

To your original point:

> I see no reason to spend non-billable time writing an evanescent answer to a very-general question. Perplexity did a quite-serviceable job in just a few seconds.

I don't think posting on HN is meant to be measured in time, but in impact. I don't come here to read AI comments, but human comments. I'd wager the same is true of nearly everyone here, including you.


> Dang has said before that HN is not for bots or AI generated content.

I have a feeling that he meant something akin to spam, not to AI-assisted comments addressed to specific points.


There are a lot of grey areas; for example, your GP comment wasn't just generated—it came with an annotation that you're a lawyer and thought it was sound. That's better than a completely pasted comment. But it was probably still on the wrong side of the line. We want comments that the commenters actually write, as part of curious human conversation.


I’m a lawyer and I would not be comfortable doing this outside of perhaps the message board context.


> I’m a lawyer and I would not be comfortable doing this outside of perhaps the message board context.

Why is that? I flatter myself that it provides useful content for readers, and there's approximately zero chance that it could ever lead to any kind of malpractice liability. (My perspective might be influenced by the fact that I've done a lot of law-related teaching over the course of my career, for both lawyers and non-lawyers, and have never had even a whisper of an issue on that score.)


Doing this in the message board context: very low risk as you said. Don’t disagree at all. Downside risk: lots of downvotes. Who cares.

Doing this in the paid legal advice realm: why not at least ask Westlaw, which your insurance carrier would be less allergic to? Asking a general-purpose chatbot seems like asking for trouble.

Helping people understand the law: pretty cool however biased I may be.


> Doing this in the paid legal advice realm

We're on the same page: I can't imagine giving paid legal advice without doing the usual research and citing the usual cases, for no other reason than to confirm what I think I know.


That squares with me. I would not trust myself to second-guess Perplexity without doing the research, and at that point why am I messing around, slash wrecking attorney-client privilege?

But good on you for giving layman’s style explanations. I do think that’s good work.


> But good on you for giving layman’s style explanations. I do think that’s good work.

But they didn't do that, AI did, regardless of good intentions. It was just not a good comment in this instance.


Would your answer have been different if I'd quoted from — and approved of — say, Wikipedia? or Cornell Law's Legal Information Institute? If not, then your beef is that the comment was drafted by an AI, and only humans should be involved in producing any text that's to appear on HN?


I think that’s consistent with my views and my understanding of HN Guidelines and clarifications by Dang, though I don’t speak for HN.

The issue to my mind is that AI doesn’t perform reasoning and may give different answers entirely depending on prompts and on sources the AI references, sources that may not be clear to the user or secondary readers.

Other sources have the benefit of having had more eyes on the same content. "With enough eyes, all bugs are shallow" kind of thinking.


So it'd have been OK with you if I'd just posted the Perplexity.ai output — which I thought was a very good summary of the law, and I claim to have at least modest knowledge in this area — without identifying it as an AI output.


That doesn't logically follow. HN operates on good-faith principles, and you'd be posting in bad faith, knowing that passing off AI output as your own on HN is frowned upon. The output didn't add anything substantive to the discussion either: the information it provided is unverifiable, and since you didn't write it or investigate the sources, you can't really say it corresponds with reality. You'd have done better to just post the sources the AI used, if available.

Your responses now seem like sealioning. I don't think you are necessarily posting in bad faith, but I've already answered the question in the comment I'm replying to.

https://en.wikipedia.org/wiki/Sealioning


> You'd be posting in bad faith, knowing that posting AI output as your own on HN is frowned upon

First, you haven't proved up your "frowned upon" premise. It'd be misguided to peremptorily condemn the posting of AI-generated answers when they're initiated, and vouched for, by knowledgeable humans. I've been around HN for a while and am quite skeptical that this is HN policy — if it is, I'd like to hear it from someone official, or at least to get a link to an HN posting. Those who don't like AI-assisted comments are of course free to downvote them.

Second, the alternative might be that the original questioner doesn't get an answer, or at least not one with any indicia of reliability — how many responses have you seen that are prefaced by "IANAL"? As I've said, I am a lawyer, I use my real name, and I'm vouching for the AI-generated answer as a general explanation.

> Your responses now seem like sealioning.

It's not sealioning, it's Socratic method — looking ahead on the chessboard, examining an assertion's logical implications N moves out. That's what lawyers are trained to do from the first day of law school, because it's how legislators, judges, administrators, and their staffs (try to) achieve scalable, sustainable policies and decisions. It's one form of critical thinking.

https://en.wikipedia.org/wiki/Socratic_method


> The alternative might be that the original questioner doesn't get an answer, or at least not one with any indicia of reliability

That would be preferable to AI output on HN. That's the stance that HN and dang have taken, so I'll ask him to chime in in this thread for everyone's benefit.

> IMHO, your peremptory condemnation of posting AI-generated answers — when initiated, and vouched for, by knowledgeable humans — is short-sighted.

It's not my policy. I'm only going off of what I've seen dang say to others, so interpret that accordingly.

> You're of course free to express your opinion by downvoting my comments.

I can't downvote comments that are replies to me, yours or anyone else's. You can't either. No one on HN can. The interface doesn't allow it. I didn't flag your comments either for that matter, because I didn't want to derail our discussion, as you can't reply to a comment that is dead. And that's all I'll say on that matter, because:

https://news.ycombinator.com/newsguidelines.html

> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.


We detached this subthread from https://news.ycombinator.com/item?id=41937792.



