Hacker News | bredren's comments

Why not just write a skill and script that calls crawl4ai or similar, and do this using Claude Code?

You can store the page as markdown for future sessions, mash the data w other context, you name it.
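The caching step could be sketched like this (the `fetch` callable is a stand-in for whatever does the actual crawl, e.g. a thin wrapper around crawl4ai; the cache layout is just an assumption):

```python
import hashlib
import pathlib

CACHE = pathlib.Path("page_cache")

def cached_markdown(url, fetch):
    """Return the page at `url` as markdown, caching it on disk so
    future Claude Code sessions can reuse it without re-crawling.
    `fetch` is any callable mapping url -> markdown, e.g. a wrapper
    around crawl4ai (hypothetical wiring)."""
    CACHE.mkdir(exist_ok=True)
    path = CACHE / (hashlib.sha256(url.encode()).hexdigest() + ".md")
    if path.exists():        # cache hit: reuse the stored markdown
        return path.read_text()
    md = fetch(url)          # cache miss: crawl once, then store
    path.write_text(md)
    return md
```

A skill can then point Claude at the `page_cache/` directory instead of re-fetching.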

The web Claude is incredibly limited both in capability and workflow integration. Doesn’t matter if you’re dealing with bids from arbor contractors or researching solutions for a DB problem.


Want this w/o killing the free open web.

Maybe I run an old PC adjacent to the scraper to manually visit the scraped pages without an adblocker, & buy something I need from an ad periodically (while a cohesive response is being generated in the meantime)

Ya sounds dumb, wishing for a middle ground that lets us be effective but also good netizens. Maybe that Cloudflare plan to charge the bots…



I can’t see myself using most of these because I don’t want them having my conversations.

I really want a native iOS chat client that connects directly to my home server.


What is that, a snack in the basket?


"integrating a bicycle basket, complete with a fish for the pelican... also ensuring the basket is on top of the bike, and that the fish is correctly positioned with its head up... basket is orange, with a fish inside for fun."

how thoughtful of the ai to include a snack. truly a "thanks for all the fish"


A pelican already has an integrated snack-holder, though. It wouldn't need to put it in the basket.


That one's full too


A fish for the road


The number of snacks in the basket is a random variable with a Poisson distribution.


Can you explain what you mean by your parallel tasks limitation?


Instead of having my computer be the one running Claude Code and executing tasks, I might prefer to offload that to my other homelab servers: have them execute agents for me, working much like traditional CI/CD, but with LLMs working on various tasks in Docker containers, each on the same or different codebases, each with its own branch/worktree, submitting pull/merge requests to a self-hosted Gitea/GitLab instance or whatever.
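A rough sketch of the dispatch side, under heavy assumptions (the image name, the `/work` mount point, and driving headless Claude Code via `claude -p` inside the container are all my guesses, not an established recipe):

```python
import subprocess

def agent_cmd(repo_url, branch, task, image="my-agent-image"):
    """Build a `docker run` invocation that runs one headless agent
    against its own clone/branch. `image` is a hypothetical container
    with git and the agent CLI preinstalled."""
    script = (
        f"git clone -b {branch} {repo_url} /work && cd /work && "
        f"claude -p {task!r}"   # one-shot, non-interactive prompt
    )
    return ["docker", "run", "--rm", "-e", "ANTHROPIC_API_KEY",
            image, "sh", "-c", script]

def dispatch(jobs):
    """Fan tasks out, one throwaway container per job (blocking
    for simplicity; a real setup would parallelize)."""
    for job in jobs:
        subprocess.run(agent_cmd(*job), check=True)
```

Each container gets its own clone, so branches/worktrees never collide; the agent itself would push and open the PR against Gitea/GitLab.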

If I don't want to sit behind something like LiteLLM or OpenRouter, I can just use the Claude Agent SDK: https://platform.claude.com/docs/en/agent-sdk/overview

However, you're not really supposed to use it with your Claude Max subscription; you're meant to use an API key and pay per token (which doesn't seem nearly as affordable as the Max plan). Nobody would probably mind if I ran it on homelab servers, but if I put it on work servers for a bit, technically I'd be in breach of the rules:

> Unless previously approved, Anthropic does not allow third party developers to offer claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.

If you look at how similar integrations already work, they also reference using the API directly: https://code.claude.com/docs/en/gitlab-ci-cd#how-it-works

A simpler version is already in Claude Code, and they have their own cloud thing; I'd just personally prefer more freedom to build my own: https://www.youtube.com/watch?v=zrcCS9oHjtI (though there is the possibility of using the regular Claude Code non-interactively: https://code.claude.com/docs/en/headless)

It just feels a tad more hacky than copying an API key the way you do when using the API directly: there is stuff like https://github.com/anthropics/claude-code/issues/21765 but also `claude setup-token` (which you probably don't want to use all that much, given the token lifetime?)


Would be cool to include pricing info


Good idea! We're already showing the transaction price per share from the SEC filing. Are you thinking more along the lines of showing the current stock price alongside it, or maybe a price chart showing the stock's performance since the insider trade?


I actually meant the price of your product.


This is cool; I've spent time picking through forks looking for high-signal options.

It would be cool if this was paired with a skill to assist in interpreting results and separate out bug fixes from features.


Yeah, I should add that! Here's a prompt I gave Claude:

"Based on the output of running forkwatch against the maximadeka/convertkit-ruby repo, what would you suggest for a PR to that repo?"

That resulted in Claude forking the repo, applying patches from the forks and offering to open this PR: https://github.com/maximadeka/convertkit-ruby/pull/41



An anecdote: on one project, I use a skill + custom CLI to assist in getting PRs through a sometimes long and winding CI process: `/babysit-pr`

This includes regularly polling CI checks using `gh`. My skill/CLI are broken right now:

`gh pr checks 8174 --repo [repo] 2>&1`

   Error: Exit code 1

   Non-200 OK status code: 429 Too Many Requests
   Body:
   {
     "message": "This endpoint is temporarily being throttled. Please try again later. For more on scraping GitHub and how it may affect your rights, please review our Terms of Service (https://docs.github.com/en/site-policy/github-terms/github-terms-of-service)",
     "documentation_url": "https://docs.github.com/graphql/using-the-rest-api/rate-limits-for-the-rest-api",
     "status": "429"
   }
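A band-aid I could add to the CLI: back off and retry when `gh` hits that throttle. A minimal sketch (the `run` parameter is injected so the logic is testable, and treating any "429" in the output as throttling is a crude assumption):

```python
import subprocess
import time

def checks_with_backoff(pr, repo, tries=5, run=subprocess.run):
    """Run `gh pr checks`, retrying with exponential backoff when
    GitHub answers 429 Too Many Requests."""
    for attempt in range(tries):
        result = run(["gh", "pr", "checks", str(pr), "--repo", repo],
                     capture_output=True, text=True)
        out = (result.stdout or "") + (result.stderr or "")
        if "429" not in out:
            return result            # success, or a non-throttle error
        time.sleep(min(2 ** attempt, 60))   # 1s, 2s, 4s, ... capped
    return result                    # still throttled after all tries
```

Respecting the `Retry-After` header (when the API surfaces it) would be politer than blind exponential backoff, but this keeps the skill from hammering a throttled endpoint.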


The action is hot, no doubt. This reminds me of Spacewar! -> Galaxy Game / Computer Space.


I co-founded Gliph, which was one of the first commercial, cross-platform messaging apps to provide end-to-end encryption.

One area of exposure was push notifications. I wonder if the access described wasn't to the messages themselves but to content-rich notifications.

If so, both parties could be ~correct. Except the contractors would have been seeing what is technically metadata.


I'm unfamiliar with Gliph. What were the protocols/constructions you used?

