evv's comments on Hacker News

Have you considered serving a zip bomb to this user agent?


I'm sure their crawler can handle a zip bomb. Plus, it might interpret that as "this site doesn't have a robots.txt" and start the very scraping that OP is trying to prevent with their current robots.txt.


Pretty sure every crawler can. You kinda have to go out of your way not to, given how the gzread API looks.

https://refspecs.linuxbase.org/LSB_3.0.0/LSB-Core-generic/LS...
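The gzread API naturally encourages the safe pattern: fixed-size buffered reads with a running total, so a zip bomb just streams until you decide to stop. A minimal sketch of that pattern in Python (the function name and cap are illustrative, not from any real crawler):

```python
import gzip
import io

def read_capped(data: bytes, max_total: int) -> bytes:
    """Stream-decompress gzip data, refusing to expand past max_total bytes."""
    out = bytearray()
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as f:
        while True:
            chunk = f.read(64 * 1024)  # bounded read, like gzread into a fixed buffer
            if not chunk:
                break
            out += chunk
            if len(out) > max_total:  # bomb detected: bail out early
                raise ValueError("decompressed size exceeds cap (possible zip bomb)")
    return bytes(out)
```

A crawler doing this never materializes the full decompressed payload, which is why serving a bomb rarely accomplishes much.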


Could allow only the path to the zip bomb for this user agent.


That will work once at most and then quickly get fixed.


Yeah, it seems like this team takes a really tough stance on obvious bugs.


Are you so sure? :)


The tooling is totally replicated in open source. OpenCode and Letta are two notable examples, but there are surely more. I'm hacking on one in the evenings.

OpenCode in particular has huge community support around it, possibly more than Claude Code.


I know; I use OpenCode daily, but it still feels like it's missing something. Codex, in my opinion, is way better at coding, but I honestly feel that's because OpenAI controls both the model and the harness, so they're able to fine-tune everything to work together much better.


It's there now, `opencode models --refresh`


> This is how infrastructure works, and supposed to work

No, infrastructure doesn't have to work this way. This is a very old-school mentality.

Sign the content with a key that you control. Back up the content locally. And boom, your server is easily replaced: it only helps copy data around and provides certain conveniences.

I've been working on this full-time for a few years. If we succeed, we solve link rot (broken links) on the web.
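As an illustration of the location-independence idea, here is a toy sketch of content addressing in Python (hashing only; the signatures described above would layer on top, and all names are made up for this example):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a stable identifier from the content itself, not its location."""
    return "sha256-" + hashlib.sha256(data).hexdigest()

def verify(data: bytes, address: str) -> bool:
    """Any mirror can serve the bytes; the fetcher checks them against the link."""
    return content_address(data) == address
```

Because the link commits to the bytes rather than to a hostname, a dead server stops being a dead link: any copy that still verifies is as good as the original.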


Well, you're basically repeating what I'm saying, but with more detail. What I said still holds true, "the one who holds the key holds the kingdom"; you're just shifting it to the user rather than the admin. This is great, and works too, but it doesn't make what I said any less true.


Dumb question: what's the difference between "low-code" and "libraries+frameworks"?

Usually the point of a library or framework is to reduce the amount of code you need to write, giving you more functionality at the cost of some flexibility.

Even in the world of LLMs, this has value. When it adopts a framework or library, the agent can produce the same functionality with fewer output tokens.

But maybe the author means, "We can no longer lock in customers on proprietary platforms". In which case, too bad!


> Dumb question: what's the difference between "low-code" and "libraries+frameworks"?

There's not much technical difference.

The way those names are used, "low-code" is aimed at inexperienced developers and favors features like graphical code generators and ignoring errors. "Frameworks", on the other hand, are aimed at technical users and favor features like API documentation and strict languages.

But again, there's nothing in the definition of those names that requires that focus. They are technically the same thing.


Agreed. Libraries and frameworks definitely adhere to a 'low-code' philosophy.

Your last idea makes sense as well, to some extent. Once you abstract away from the technical implementation details and use platforms that let you focus only on business logic, it becomes easier to move between platforms that support similar underlying functionality. That said, some functionality may be challenging for different providers to replicate correctly. But core constructs like authentication mechanisms and access controls might be mostly interchangeable; we may end up with a few competing architectural patterns, with each platform fitting under one of them, optimized for slightly different use cases.


Low code means you have to pay a company every time someone in your organisation runs an app.

Libraries + Frameworks doesn't mean that unless you're bonkers.

LLMs + Libraries + Frameworks means you might pay to build the application, but running it is only going to be the cost of where it's running.

You're exactly right.


React, Next, Laravel, Rails.. In fact, all higher-level programming languages from C on up are low-code solutions.


Hey HN, hope you enjoy this idea!

LLMs are a game-changer, but they’re only half the story: probabilistic and fuzzy. The missing half is a universal formal language: something precise enough to translate cleanly between human languages and let anyone communicate with computers, without learning programming!

My dream is to remove barriers everywhere: culture, science, medicine, law, diplomacy. You don’t erase ambiguity, you encode it! Dialects, jargon, puns, inside jokes, social context.. we can build everything in.

Maybe this isn't possible, but now that we have language models to help... it might be!


Your alternative is... what exactly? A unique and baroque file format for each application (see: Git)? Folders of JSON or markdown files, which are slow, easily corrupted, and lack indexing? A dependency on some memory-heavy external DB service like Postgres?

In most cases, embedding SQLite is the best solution. And that is exactly what it was designed for.
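For illustration, a minimal sketch of SQLite as an application file format in Python (the `notes` schema and function name are made up for this example):

```python
import sqlite3

# SQLite as an application file format: a single file that is transactional,
# indexed, and readable by any SQLite client.
def open_project(path: str) -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS notes (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        body TEXT
    )""")
    return db

db = open_project(":memory:")  # use a real file path for an on-disk document
db.execute("INSERT INTO notes (title, body) VALUES (?, ?)", ("hello", "world"))
```

Compared to a folder of JSON files, you get atomic writes, indexes, and ad-hoc queries for free, with no server process to manage.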


Looks like it is possible if you do a normal iOS screen recording, then use your app to add the Face Cam and touches on top.


Yes, it’s possible to do composite edits after a screen recording.

The tradeoffs here were aimed at doing it all live in a single take.


As somebody working in this "future-web" space, I see HUGE issues with the legacy web stack:

- It requires a server to publish, which is expensive and difficult for regular users with a laptop or a phone. This can be solved with a mix of p2p and federation

- There is no decentralized trust system- only DNS+HTTPS, which requires centralized registration (TLDs). A domain may be cost-prohibitive for somebody who just wants to write comments and a few documents on the web. This can be solved by forming a social graph of cryptographic identity validations (aka, the "web of trust")

- There is no versioning system. This can be solved by making chains of immutable signed content, like we do with git.

- There is no archival system that allows you to "back up" the content of a website in a trustless way. Look at IPFS and BitTorrent for the solution there.

I believe these are the main reasons the web has failed as a social publishing system. Aside from companies and technically skilled individuals, everyone publishes on centralized social media platforms. This is a dangerous consolidation of power.

We hate to admit it, but the open web has taken the "L". The good news: these are solvable problems and I'm not giving up anytime soon!

> Honestly there kinda is a new web, they call it web 3 and it's only crypto scams.

To distance ourselves from crypto scams, we strongly avoid the web3 label, despite some similarities.


This feels very 2000's. eDonkey, Perfect Dark, Opera Unite....

Turns out, other than piracy, there are no legitimate uses. The existing technologies are good enough.

P2P is cool if you have a desktop, but you cannot host from laptop or phone that spends most of the time sleeping (unless you want your battery to die real fast). The solution is hosting providers - which are already decentralized (and federated, if you squint hard enough)

Web of trust never took off - turns out people don't trust their friends' friends much; some sort of centralized authority works much better.

_Cryptographic_ identities have huge problems of their own - there are many people who don't have any persistent data on their PC. For example, they have only one laptop/phone, they don't back it up, and it breaks regularly. If your system requires one to keep a secret key for decades, it automatically excludes a very large fraction of computer users.

Publicly accessible versioning and immutable content sound cool for readers, but have very few upsides (and many downsides) for writers. And it's writers who select publishing technology.

People have been proposing those things forever. No one needed them back then, and no one needs them today. Just look at which decentralized social networks are actually winning (like Mastodon) - they're pretty much the opposite of what's described in your comment.


Thanks for spawning many interesting topics. A dose of cynicism is great, in moderation!

> P2P is cool if you have a desktop, but you cannot host from laptop or phone that spends most of the time sleeping (unless you want your battery to die real fast). The solution is hosting providers - which are already decentralized (and federated, if you squint hard enough)

Yes, most people will rely on servers because phones are terrible p2p nodes. When identity is properly owned by the end users, the servers have nearly zero lock-in, unlike traditional hosting providers. A community's server can go down for some reason and the community can easily transition to other server(s), keeping their conversations and knowledge intact. Sadly this is not the case with Mastodon or even Bluesky.

> _Cryptographic_ identities have huge problem of it's own - there are many people who don't have any persistent data on their PC

This is probably the single biggest problem we are facing, because it impacts UX. There are several tools available to mitigate this issue, but I don't believe there is a perfect solution. Keys can be linked across devices with cross-signing, and there are mechanisms that can enable key rotation: DNS, social media connections, and social/manual rotation in the worst case. The plan is to leverage the existing tools that regular people already use to keep secrets safe: system keychains, password managers, passkeys, smartphone "wallets".

> Turns out, other than piracy, there are no legitimate uses. The existing technologies are good enough.

People become very comfortable in their virtual prisons, and most people won't change unless they have a reason to. Maybe they have legitimate work or content that is stigmatized and censored by other platforms. Maybe they live under an autocratic regime. But I think most people want better control over their content moderation and feed algorithm.

> People has been proposing those things forever. No one needed them back then, and no one needs them today.

I'm not laughing at your exaggerated use of "no one". Decentralized and censorship-resistant technology is society's fail-safe. Maybe your social media oligarch isn't abusing their power too much today. Maybe your government actually supports free speech today. What about tomorrow, the next decade, and the next century?


So is the admin of the Mastodon instance that I use an "oligarch"? And why is the web forum I frequent "a virtual prison"? And if the government decides to start blocking communications, what'll stop them from blocking a P2P protocol?

For some reason, most decentralization people assume there are only two options: megacorp-controlled media (Facebook, X) or starting from scratch (P2P + cryptographic identities). It sure makes for an easy argument and a flashy slogan, given how much bad stuff megacorps do.

But the thing is, those slogans all look very cringy and naive to me. The "open web" is not dead, and never will be. There are traditional websites, ActivityPub, Bluesky... They are still very much alive, and most of them are not megacorp-controlled. They are the real competitors to all the new upcoming P2P technologies and "future-web" startups.

You are not building a replacement for Facebook or X; you are building a replacement for Bluesky or Mastodon.


P2P and federation tech is really cool stuff! I feel like ipfs is what most non-tech people thought the cloud was, perhaps even what it should've been.

I'll admit I'm a bit out of the loop though. Say I wanted to publish a blog on this.. Let's call it web 4, for lack of a better term..

How would I do it? How would people find it? Last I checked, there wasn't really a good solution for that (or at least I didn't find one), but it's been nearly a decade, so things might've changed!


The solution is to build on the traditional web. How does anybody find anything new on the web? Basically: hyperlinks!

People will create links from social media. With some basic SEO, your content can be indexed by your favorite search engines. Increasingly, these "web4" sites will link among themselves, leveraging the built-in social features that are portable across sites/servers/peers.


I fail to see how "federation" is supposed to solve any of those problems. First of all, it would require a bunch of copies of the same thing, which all could suddenly go offline on a whim. And secondly, "it's too expensive for me to host a server, so I'll rely on someone else doing so and mooching off of theirs" does not seem sustainable to me.


I get freaked out when I consider the future of archive.is. Thanks to the nature of the web today, it is incredibly fragile.

As the co-creator of a censorship-resistant publishing platform, I really wish we would migrate to a peer-to-peer technology. We could develop network effects on a decentralized platform with a cryptographically-provable network of trust. Most people don't realize it is possible to handle media distribution in a robust way.

I'm not just trying to shill my solution! I wish there were more competitors using these techniques to try and save the web.


Except a lot of people wouldn't participate in a peer-to-peer network for fear of legal repercussions.


Utilizing p2p tech is not illegal. It is illegal to redistribute copyrighted content without authorization, and we are working to build this into the protocol so that peers respect copyright by default. People can redistribute at their own risk. I'll be the first to admit this is complicated, and we have a long way to go in this regard.

Plus, the vast majority of people will just use the web frontend, with a peer on the server. Most peers can be hosted by content creators and tech-savvy friends+family.


Almost every machine in the world participates in at least one peer-to-peer network: Windows Update. There was a time when the Steam client also used bittorrent technology, not sure if they still do.


Obviously P2P gets used in various things; my point was just that (most) people likely won't willingly join P2P networks to fight "censorship" or help archive things that have questionable content or are tainted with potential copyright infringement.


I'm confused how you would "build something on ChatGPT" in the first place. Does ChatGPT have an API?

Of course OpenAI's GPT-5 and family are available as an API, but this is the first time I'm hearing about the ability to build on top of ChatGPT (the consumer product). I'm guessing this is a mistake by the journalist, who didn't realize that you can use GPT-5 without using ChatGPT?

It seems that they have a unified TOS for the APIs and ChatGPT: https://openai.com/policies/row-terms-of-use/

The seemingly-relevant passage:

> You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.


Don't assume a general-news journalist on a deadline, dropping a quick unimportant article, would fully understand the difference between OpenAI's API access and ChatGPT itself.


Call centers, support chats etc. Lots of companies are now basically telling their support staff to "ChatGPT it" before replying.


wait... so you can't use OpenAI models for educational content? So they can kick out any competi... ehm, company they don't like, since this list is quite... extensive?


I believe the key word in that passage is "decisions".

You can't use ChatGPT (or other OpenAI offerings) to grade essays, decide who is the least risky tenant, assign risk for insurance, filter resumes, approve a loan, determine sentencing...

Those are things that require human agency for "a person decided this" rather than "we fed it into the program and took the answer."


Which of course means humans will use it for those decisions until there’s some lawsuits and maybe some laws.


Humans have been using bad data to make decisions for a while now.

