> the people in charge think they can replace people with LLMs
Additionally, they're using it as a pretext to fire lots of workers, as Amazon and others seem to have been doing. Some friends mentioned their companies using it as a way to offshore to cheaper locales while getting less bad press.
> For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles.
Yes! You put your finger on what bugs me about "no LLM" rules. It's not that LLM writing is uniquely bad; it's that it tends toward the low-quality, clickbait-prone writing we already see everywhere. Banning LLM content is redundant.
Side note: I'd guess LLMs don't tend toward vapid writing just because of clickbaity training material. It's more fundamental than that. Writing well takes effort and energy, and LLMs seem to avoid effort just like humans do. Emotion-based reasoning in humans is itself a heuristic system favored by evolution. Thinking is expensive. Emotional slop is cheap.
Imagine you know a guy named Patel. He pirated every movie ever made and is a prolific writer. So prolific, in fact, that he has a blog, called "Patel's Log." On this blog is a review of every movie ever made.
At first, you think that's neat. It's not exactly a book of all knowledge, but it's a significant human achievement, perhaps even historic.
Things take a turn for the worse when you're reading a review in the Times. You recognize Patel's distinctive style, and call him up to ask if the Times stole his post. He says that a Times columnist asked for his opinion, and he sent them a link. It turns out the columnist copied his blog post verbatim: but he says he can't complain without being inconsistent, since he pirated every movie ever made.
You find this humorous, until you recognize his style in the Atlantic, then the Post. Eventually you're disappointed when the Ebert staff publish an opinion piece in favor of Patel's Log matching (PatelLM), and you're forced to wonder if that's what Ebert would have thought.
Your boss sends you copy-pasted PatelLM content in a morning Slack message about a movie she watched over the weekend. Your friends quote Patel's Log verbatim on Discord. Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
> It turns out the columnist copied his blog post verbatim: but he says he can't complain without being inconsistent, since he pirated every movie ever made.
Copyright law should apply to LLMs and their users just like anyone else. If they reproduce a post verbatim (or near enough), it should be a copyright violation.
> You find this humorous, until you recognize his style in the Atlantic - then the Post.
There's nothing inherently wrong with humans or LLMs learning to mimic someone's style. This is actually the basis for styles, genres, etc. Whole trends in the arts are just people copying others' styles, sometimes with small improvements.
> Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
How exactly is this different than what Hollywood did pre-LLMs for the last decade or two? LLMs didn't cause the homogenization of culture. Corporate Hollywood and the internet did that.
I've had a similar complaint about publishing in machine learning conferences. They're putting in these "no LLM" rules, but those are just idiotic. Proving LLM usage is really, really difficult, and one of the underlying problems has always been bad or low-quality reviews. So why write an LLM rule? Why not tackle the problem more directly?
I don't care if people use LLMs; I care about generating slop. The two correlate, but by concentrating on the LLM part you just let the problem continue. It's extremely frustrating. Slop is slop, no matter how it's generated or by whom. Dressing it up is just putting lipstick on a pig.
This is a fascinating idea IMHO. A couple of years back I was toying with the idea of whether multiple gyroscopes could be combined to make energy extraction easier aboard a vessel. The limit I see is how difficult/practical the mechanism to convert precession into usable energy would be. I'll have to dig into the paper itself.
The tunable frequency adjustments would make wave energy much more useful for a range of applications!
> The waves are assumed to be regular waves propagating unidirectionally over an infinitely deep ocean across all wave frequencies, and the GWEC is modelled as a 1-DoF system with pitch motion.
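For anyone skimming, the "1-DoF system with pitch motion" in the quote is presumably the standard damped-oscillator form used for floating bodies. A sketch of what that likely looks like (the symbols here are my guesses, not the paper's notation):

```latex
% Assumed 1-DoF pitch equation of motion for the hull:
%   I_{55}: pitch moment of inertia, A_{55}: added inertia,
%   B_{55}: radiation damping, C_{55}: hydrostatic restoring stiffness,
%   M_w: wave excitation moment at frequency \omega
\left(I_{55} + A_{55}\right)\ddot{\theta}
  + B_{55}\,\dot{\theta}
  + C_{55}\,\theta
  = M_w \cos(\omega t)
```

The gyroscope then extracts power from the precession driven by that pitch motion, which is where the tunable-frequency angle presumably comes in.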
> We need robustness in the global economy more than some megajillionaire needs another half cent per customer in profit.
Exactly this.
Economies follow the same general principles as our distributed systems. There are good reasons you pay extra and sacrifice a bit of efficiency to get redundancy and resilience. We saw that we need more of it during the COVID lockdown chaos.
Lowering tariffs has generally been a good thing, but there's a point where it stops being beneficial.
> Getting macOS code signing and notarization working in CI was honestly the hardest part of this project. If anyone is distributing a macOS app outside the App Store via GitHub Actions, I'm happy to answer questions — the workflow is fully open source.
You're not kidding! That's actually the first thing I looked at in your GitHub repo. It's annoying: I made a Neovim GUI, downloaded it from GH, and couldn't run my own app until I dug into some hidden place in the Settings app. Definitely super helpful to see how it's done.
I'm digging the app too! As another commenter said it'd be cool to see the comments as native SwiftUI elements as well. :)
> Getting macOS code signing and notarization working in CI was honestly the hardest part of this project. If anyone is distributing a macOS app outside the App Store via GitHub Actions, I'm happy to answer questions — the workflow is fully open source.
I'm thankful that it's largely a "once it's working, it rarely breaks" situation. If it does break, it's usually because I have to sign in to the developer portal and accept some contract somewhere. Sadly, error messages in CI rarely indicate this is the case.
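For anyone hitting this for the first time: stripped of CI specifics, the core sign-notarize-staple flow is roughly the following. This is a sketch, not the author's actual workflow; the app name, team ID, and keychain profile name are placeholders, and the credentials must first be stored with `xcrun notarytool store-credentials`.

```shell
#!/bin/sh
set -e  # stop on the first failing step

APP="MyApp.app"           # placeholder bundle name
PROFILE="notary-profile"  # placeholder keychain profile (from store-credentials)

# 1. Sign with a Developer ID certificate, hardened runtime, secure timestamp
codesign --force --options runtime --timestamp \
  --sign "Developer ID Application: Example Corp (TEAMID1234)" "$APP"

# 2. Zip the bundle for submission (ditto preserves bundle metadata)
ditto -c -k --keepParent "$APP" "$APP.zip"

# 3. Submit to Apple's notary service and block until the verdict comes back
xcrun notarytool submit "$APP.zip" --keychain-profile "$PROFILE" --wait

# 4. Staple the notarization ticket so Gatekeeper can verify offline
xcrun stapler staple "$APP"

# 5. Sanity check that Gatekeeper will accept the result
spctl --assess --type execute --verbose "$APP"
```

In CI the extra pain is usually importing the signing certificate into a temporary keychain before step 1; that part is worth copying from a working workflow like the author's.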
The byzantine and overly complex nature of FreeIPA is a feature, not a bug. It lends itself to consulting money for Red Hat et al. in those legacy markets. Sure, the server might be free, but good luck getting it running.
For more complex projects I find this pattern very helpful. The last two gens of SOTA models have become rather good at following existing code patterns.
If you have a solid architecture, they can be almost prescient in their ability to modify things. However, they're a bit like Taylor series expansions: only accurate out to some distance from the known basis point. Or like control theory, where you have stable and unstable regimes.