cagz's comments (Hacker News)

I understand the argument, and there are some really good points.

My biggest concern would be that adopting the CLI method would require the LLM to have permission to execute binaries on the filesystem. This is a non-issue in an OpenClaw-type scenario where permission is there by design, but it would be more difficult to adopt in an enterprise setting. There are ways to limit LLMs to a directory tree where only allowed CLIs live, but there will still be hacks to break out of it. Not to mention, the LLM would use an MCP or another local tool to execute CLI commands, making it a two-step process.
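To illustrate the "directory tree where only allowed CLIs live" idea, here is a minimal sketch. Everything in it is an assumption for illustration: the directory path, the function name, and the check itself. A real deployment would need OS-level enforcement (containers, seccomp, restricted users), not just a path check in the calling process.

```python
# Hypothetical sketch: only binaries under an allowlisted directory may be
# executed on behalf of the LLM. ALLOWED_DIR and run_allowed are invented
# names; this is not a substitute for a real sandbox.
import os
import subprocess

ALLOWED_DIR = "/opt/llm-tools/bin"  # assumption: curated directory of approved CLIs

def run_allowed(tool: str, *args: str) -> str:
    """Run `tool` only if it resolves to a file directly inside ALLOWED_DIR."""
    path = os.path.realpath(os.path.join(ALLOWED_DIR, tool))
    # realpath normalises ../ segments and symlinks before the containment check
    if os.path.dirname(path) != os.path.realpath(ALLOWED_DIR):
        raise PermissionError(f"{tool} resolves outside the allowed directory")
    result = subprocess.run([path, *args], capture_output=True, text=True, check=True)
    return result.stdout
```

Note that a simple string-prefix check would be escapable with `../` tricks, which is why the sketch resolves the path first; even so, this only demonstrates the concept, it does not harden it.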

I am a supporter of human tools for humans and AI tools for AI. The best example is something like WebMCP vs the current method of screenshotting webpages and trying to find buttons, input boxes, etc.

If we keep them separate, we can allow them to evolve to fully support each use case. Otherwise, the CLIs would soon start to include LLM-specific switches and arguments, e.g., to provide information in JSON.
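As a toy illustration of that kind of LLM-specific switch, here is a sketch of a CLI that serves humans by default and machines on request, much like awscli's `--output json`. The tool name, flag, and payload are all invented for the example.

```python
# Hypothetical sketch: a CLI growing a machine-readable output switch.
# "statuscli", get_status, and the payload fields are made-up examples.
import argparse
import json

def get_status():
    # Stand-in for whatever real work the CLI would do.
    return {"service": "web", "healthy": True, "uptime_s": 5321}

def main(argv=None):
    parser = argparse.ArgumentParser(prog="statuscli")
    parser.add_argument("--json", action="store_true",
                        help="emit machine-readable JSON instead of prose")
    args = parser.parse_args(argv)
    status = get_status()
    if args.json:
        return json.dumps(status)  # stable, parseable output for tools/LLMs
    # human-friendly prose output
    return f"{status['service']}: {'up' if status['healthy'] else 'down'} for {status['uptime_s']}s"

if __name__ == "__main__":
    print(main())
```

The same binary ends up maintaining two contracts: a prose one for people and a structured one for machines, which is exactly the divergence the comment above is pointing at.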

Tools like awscli are good examples of where an LLM can use a CLI. But then we need to remember that these are partly, if not mostly, intended for machine use, so CI/CD pipelines can do things.


I think their intention is not mining your data (which is easy to opt out of) or hoping that you maintain the subscription after 6 months. It is rather to get large open-source project maintainers to give AI a proper go.

Believe it or not, there are still a large number of great tech professionals out there who are sceptical about AI. Many tried AI a year ago and came away with the impression that "it was alright but had limitations". AI has come a long way since then, and it is going to improve even faster over the next 6 months. So this is Anthropic's invitation for you to join that journey.

In turn, of course, this fuels adoption, with superstar maintainers endorsing the models.


The use of data for model training is a simple toggle, very easy to opt out of during the initial setup.

Also, the end product is open source anyway, so there is no case of IP being leaked into training data. What remains is that they can use, with your permission, the overall coding practices of a great programmer to fine-tune Claude Code and the models. As in, how one approaches planning or troubleshooting. Is this a bad thing? Perhaps every maintainer should decide for themselves whether they want to contribute back or not.


Assuming they've got reasonable programming skills, they can simply find an open-source project they are passionate about, spend time understanding the overall structure, then pick up an issue raised by the community and prepare a fix as a pull request.

The first PR is unlikely to be merged the next day; however, it sparks lots of productive discussions with the rest of the community, allowing your kid to build a mental model of the project's best practices and sensitivities.

The more they contribute, the more integral they become to the community. After gaining enough experience through small issues, they can even consider working on a new feature.

As a byproduct, it's a great addition to the CV if they are also looking to go commercial.


I don't think this is an AI issue. It is about the terms of use: they don't allow a second account, even if it's intended for ad management. The recommended way is to use Meta Business Manager via the existing account.

Low-quality AI-created PRs that are submitted to open-source repositories are prompted by humans. And those are the same humans who fail to review the AI's output properly before submitting it (or letting the AI submit it) as a PR. Let's not blame the tools for bad workmanship.

A smaller number of PRs generated by OpenClaw-type bots also do so based on their owner's direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.

AI is also allowing the creation of many new open-source projects, led by responsible developers.

Given the exponential speed at which AI is progressing, surely the quality of such PRs is going to improve. But there are also opportunities for the open-source community to improve their response. It may sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.
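One way that initial review could be staged is to run cheap, deterministic checks first and only escalate surviving PRs to a model (or a human). The sketch below is purely illustrative: the PR fields, thresholds, and outcome labels are all assumptions, and any real pipeline would wire this into the forge's API and an LLM call, neither of which is shown.

```python
# Hypothetical triage step before an AI (or human) review of an incoming PR.
# Field names ("description", "changed_lines") and the 2000-line threshold
# are invented for this example.
def triage_pr(pr: dict) -> str:
    """Return 'reject', 'needs-human', or 'llm-review' for an incoming PR."""
    if not pr.get("description"):
        return "reject"        # no description at all: bounce immediately
    if pr.get("changed_lines", 0) > 2000:
        return "needs-human"   # too large to review automatically with confidence
    return "llm-review"        # small enough for a first-pass AI review
```

The point of the deterministic gate is that the cheap checks absorb the flood of low-effort submissions before any expensive review, automated or not, is spent on them.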


Does the article have a strong marketing vibe? Absolutely. Does the research performed move the needle, however small, in theoretical physics? Yes. Could we have expected this to happen a year ago? Not really.

My personal opinion is that things will only accelerate from here.


I use the underground frequently. It doesn't really feel like half of it is covered. Where it is available, it works amazingly. I might have been using the other half by sheer luck.


I find it works pretty well; in fact, I’m consistently surprised I have any signal at all. Sometimes I need to disable Private Relay to get it to work with all the switchovers. There’s also frequent swapping between the 5G signal and the Wi-Fi that comes auto-enabled on my phone by my carrier, who provides a client certificate entitling me to it.

I’m not sure about the privacy implications of this whole setup. It’s basically turned the underground into a surveillance dragnet that can hoover up all sorts of interesting metadata… hostnames, hardware identifiers, travelling patterns, DNS queries, SNI requests… and an untold amount of unencrypted communications across weird protocols and devices.


Good idea, but after trying it a number of times, I've found it has some downsides. Most calendar applications cannot clearly display a 5-minutes-past start, so the meeting visually appears to start on the hour. One of the attendees ends up dialling in at the hour, and then everyone gets a notification that the meeting has started.

Half of the people who get the notification click "join" without checking. This ends up with a half-populated meeting room. The issue becomes obvious, and somebody says, "Let's dial back in 5 mins", and drops off. Half of the people like the idea and drop off, while the rest decide to stay and chat.

Meanwhile, some of those who dropped off see this as a great opportunity to grab a brew. That inadvertently triggers some water-cooler, kettle-corner chats, and they end up running late for the 5-past. The rest usually get engaged in something else to make use of 5 minutes, and miss 5-past since no new notifications are issued due to the people already chatting in the meeting :)


Well, the trick is to schedule the meeting at hh:mm but start it at hh:mm+5, and then just make this a policy organization-wide.

In academia (mostly European) this has been a concept for centuries, allowing time for the change of classes: https://en.wikipedia.org/wiki/Academic_quarter_%28class_timi...

So you can have a c.t. event (cum tempore = with time) or a s.t. event (sine tempore = without time).


Great article reminding us of the risks of browser extensions. They literally have access to everything within the browser window, from usernames and passwords to bank account details.

Funnily enough, I'm always tempted by extensions that offer dark mode for webpages, but I've never dared to install one.

I do use extensions, but only if they are from well-known, respected organisations.

The author was lucky that it was only a few compromised social media accounts. It could easily have been an emptied bank account or a stolen identity instead.


I'm relatively happy that I got away with a couple of suspended social media accounts and a lesson that I can share, which will improve my awareness. It would've been a totally different story had it reached more serious services like a bank account or iCloud.

