Hacker News | fleetfox's comments

Is it really complaining about the quality of AI? The dangerous part is that slop will become harder to detect.

The anxiety surrounding AI-generated "slop" mirrors the frantic warnings of late 15th-century clerics who viewed the printing press as an engine of spiritual decay. Johannes Trithemius, a prominent Benedictine abbot, famously argued that monk-scribes should not abandon their pens, fearing that printed books were ephemeral, error-ridden toys that would undermine the sanctity of scripture and the discipline of the mind. He believed that the sheer volume of cheap, mechanical texts would drown out genuine wisdom and lead to a permanent decline in the quality of human thought.

History shows he fundamentally misunderstood the human capacity for adaptation. Rather than succumbing to a sea of printed garbage, society developed sophisticated new filters. We invented the modern bibliography, the peer-review process, the concept of a "trusted publisher," and the critical literacy skills required to navigate a world where information was no longer a rare luxury. Humans have an innate drive to seek out signal over noise. Just as the chaos of the early printing era eventually gave way to the Enlightenment, our current struggle with synthetic content will likely trigger a new evolution in how we verify truth and value human insight.


A manuscript could contain handwritten errors, and of course there could be misprints due to wrongly selected type, but the content wasn't generated out of nowhere. Unless we're talking about asemic or automatic writing due to some... "spiritual" influence.

The key here is human thought, as you said. Whether these books were written by clerics or printed by the press, they still contained human-produced substance. It's not a fair comparison.


That exact stance (plus scribes' financial interests) prevented the printing press from being widely used in the Ottoman Empire for more than 200 years.

I think his legacy is about steganography and cryptography. I think he relied on handwritten volumes and couldn't adapt his cryptographic techniques.

"Generating slop is totally fine because we'll eventually develop anti-slop filters" isn't exactly the most convincing argument, you know.

Besides, your link between the "chaos of the early printing press" and the start of the Enlightenment is very forced. The Greek philosophers did plenty of critical thinking after all, and they had no need for a printing press. I see absolutely zero reason why the current AI bubble will inevitably result in an Enlightenment-like period, nor why AI would be a hard requirement for one.


The frontier of mathematics is already incorporating AI, and people like Terence Tao are documenting its progress. At the very least, the current best mathematician in the world only does this because he has predicted the opposite conclusion to yours.

So when you say zero reason, I have to tell you that your absolutist stance is blindness. There are many reasons why it can happen, and many reasons why it can’t.


Incredibly valid opinion. Many people disagree but this is an extremely possible future for AI.

There is also a darker future where AI improves to the point where it's no longer slop: it produces quality code, texts, and books that are better, in a fraction of a second, after one misspelled prompt. Given the past trajectory of AI, this is the more likely outcome.

The other outcome is AI flatlines. This is as good as it gets. In which case the future you predict may come to pass.



I was hoping this would be a satire


Please don't post shallow dismissals, especially of other people's work.

https://news.ycombinator.com/newsguidelines.html


I see your rhetoric and raise you one.

Please don't post shallow dismissals.


I think that's more of a popping out of a well than a raising

https://knowyourmeme.com/memes/we-should-improve-society-som...


Pfft

...

fin


You are my hero. Pointing to the rules will rarely make someone change their approach and it is so myopic whenever someone leaves a comment like that. What makes HN great is the quality of discourse that can take place.


If you are interested, PEP 703 describes the scenarios pretty well: https://peps.python.org/pep-0703/#motivation


I just wrote a post about how CPython is much faster without the GIL: https://news.ycombinator.com/item?id=40988244


I mean, only the threaded version, which is expected. For tons of cases Python without the GIL is not just slower, but significantly slower; "somewhere from 30-50%" according to one of the people working on this: https://news.ycombinator.com/item?id=40949628

All of this is why the GIL wasn't removed 20 years ago. There are real trade-offs here.
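For anyone who wants to poke at the threaded side of the trade-off themselves, here's a stdlib-only sketch (the `sys._is_gil_enabled` check assumes Python 3.13+; the `getattr` guard just reports the GIL as enabled on older versions — the workload and thread counts are made up for illustration):

```python
# Sketch: CPU-bound work split across threads. On a free-threaded
# (--disable-gil) build this can run in parallel; on a standard build
# the GIL serializes it.
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit):
    """Naive CPU-bound workload: count primes below `limit`."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def run(n_threads, limit=20_000):
    """Run the workload once per thread; return results and wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(count_primes, [limit] * n_threads))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    gil = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"GIL enabled: {gil}")
    results, elapsed = run(4)
    print(f"4 threads: {elapsed:.2f}s, counts={results}")
```

Comparing the wall time of `run(1)` vs `run(4)` on both builds shows where free-threading wins and where the single-thread overhead lands.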


30-50% is an understatement. The latest beta is more than 100% slower in a simple benchmark:

https://news.ycombinator.com/item?id=41019626


How is single-threaded code slower without the GIL?


Because in the --disable-gil build, data structures like refcounts, dicts, freelists, etc. need locking (or atomic operations), even when there is only a single thread.

This is the reason why previous attempts were rejected. But those attempts came from single individuals and not from a photo sharing website.

This matters if --disable-gil becomes the default in the future and is forced on everyone.


That cannot be the reason for a 30-50% slowdown. Uncontested locks are very fast.


They may be fast in C++, but not in the context of CPython. Here are the dirty details. Note that fine-grained locking has also been tried before:

https://dabeaz.blogspot.com/2011/08/inside-look-at-gil-remov...


Thanks for the link, that's an interesting read. Actually, the referenced PyMutex is a good old pthread_mutex_t, the same you'd use in C or C++. But I shouldn't have written so confidently: although uncontested locks are very fast, if the loop is tight enough, adding locks will be significant.

However, PEP 703 specifically points out that performance-critical container operations (__getitem__/iteration) avoid locking, so I'm still highly skeptical that those locks are the cause of the 30-50%.

https://peps.python.org/pep-0703/#optimistically-avoiding-lo...
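For what it's worth, the "tight loop" part is easy to see with a toy stdlib benchmark. This is not CPython's internal locking, just a sketch of what one uncontended acquire/release per operation costs:

```python
# Compare a tight loop with and without an uncontended lock per step.
# No second thread ever exists, so the lock is never contended.
import threading
import time

def bump_plain(n):
    total = 0
    for _ in range(n):
        total += 1
    return total

def bump_locked(n):
    lock = threading.Lock()
    total = 0
    for _ in range(n):
        with lock:  # uncontended acquire/release on every iteration
            total += 1
    return total

def timed(fn, n=1_000_000):
    start = time.perf_counter()
    result = fn(n)
    return result, time.perf_counter() - start

if __name__ == "__main__":
    r1, t1 = timed(bump_plain)
    r2, t2 = timed(bump_locked)
    print(f"plain: {t1:.3f}s  locked: {t2:.3f}s")
```

The locked loop is noticeably slower even though nothing ever waits on the lock, which is the shape of the overhead being discussed, independent of which mutex implementation is underneath.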


The pthread_mutex_t is focused on compatibility at any cost. So while you're right that the C++ stdlib chooses this too, it's not actually a good choice for performance.

But I think you're right to be sceptical that this is somehow to blame for the Python perf hit.


One of the things this spends some time on that was already obsolete in 2011 is using a pool of locks. In 1994, locks were a limited OS resource; Python couldn't afford to sprinkle millions of them through the codebase. But long before 2011 Linux had the futex, so locks only need to be aligned 32-bit integers. In 2012 Windows got a similar feature, except it can do bytes instead of 32-bit integers if you want.

If a Linux process wants a million locks that's fine, that's just 4MB of RAM now.


In what way "most powerful"? If you do anything more involved than CRUD, it falls apart pretty fast. You can't express most of the things you can do with raw SQL, since there is no intermediate DSL like you have with SQLA. You can't hydrate arbitrary object graphs. It's slow; for deep queries, building objects back is slower than the actual SQL round trip.

It's very easy to use, but it's also very limited, and I often find myself dropping down to RawSQL or even having an SQLA connection in my Django projects.


Then you might not be in the target audience of Django. For the rest of us, the ORM is dope as hell and nobody cares that you aren't writing the most performant SQL the world has ever seen...


The ORM is fantastic and I never use raw SQL, but I can see how it may be simpler to just go straight to raw SQL with complicated database structures and queries.


The best part about it though is that you can use raw SQL and the ORM at the same time. In larger projects that's how I've always used it. ORM for the majority of use cases, and the raw SQL where performance really matters.


Which other Python based ORM addresses those issues?


SQLAlchemy does. I get that DjangoORM is more convenient and might be good enough. But "powerful" seems like the wrong adjective.


Powerful in terms of productivity. The occasional N+1 query problem here and there isn't a big issue for many projects and means you can launch 10x faster than someone using some other technologies. If you're successful, you can easily write raw SQL and optimize as needed.
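To make the N+1 point concrete without any ORM (in Django terms, the batched version is roughly what `select_related`/`prefetch_related` buy you), here's a toy sketch where `db_query` stands in for a database round trip — all the table and author names are made up for illustration:

```python
# Toy in-memory "database"; db_query logs each simulated round trip.
AUTHORS = {1: "Ann", 2: "Ben"}
BOOKS = [(1, "Dune"), (1, "Emma"), (2, "Ivanhoe")]  # (author_id, title)
query_log = []

def db_query(sql, result):
    query_log.append(sql)  # count round trips
    return result

def titles_n_plus_one():
    """1 query for the books, plus 1 query per book's author."""
    books = db_query("SELECT author_id, title FROM books", BOOKS)
    return [
        (db_query(f"SELECT name FROM authors WHERE id = {aid}",
                  AUTHORS[aid]), title)
        for aid, title in books
    ]

def titles_batched():
    """2 queries total, no matter how many books there are."""
    books = db_query("SELECT author_id, title FROM books", BOOKS)
    ids = sorted({aid for aid, _ in books})
    names = db_query(f"SELECT id, name FROM authors WHERE id IN {tuple(ids)}",
                     {i: AUTHORS[i] for i in ids})
    return [(names[aid], title) for aid, title in books]

if __name__ == "__main__":
    titles_n_plus_one()
    print("N+1 round trips:", len(query_log))  # grows with the data
    query_log.clear()
    titles_batched()
    print("batched round trips:", len(query_log))  # constant
```

Both return the same rows; only the number of round trips differs, which is why this is easy to miss on small datasets and easy to fix later.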


DjangoORM isn’t perfect, but its power comes from the fact that it is heavily integrated with Django (the framework).


Ah, sorry. It wasn't clear that SQLA meant SQLAlchemy in your first comment.


Faster this, faster that. Is it finally segfault-free? I've tried it three times or so over the last year with different projects, only to find out it segfaults at runtime or when installing a package.


Same. Tons of tooling breaks and segfaults. Our codebase has a dylib unknown symbol error that hasn't been fixed since before v1.


I only use bun for tests/builds/storybook, but I haven't had it segfault at all. I suspect that you've got a dependency that is hitting an undocumented node API that isn't fully implemented. They talk about those in the blog post, they're a known thing.


But look how quickly it segfaults!


There are many applications where htmx is objectively the best tool. But I really hate all the hype around it and people pushing it as a React replacement.


Had a heated debate with someone who was really angry at everything React, for good reasons, but oblivious that htmx can't replace client-side logic. React hype + backend crowd, I guess.


As a primarily backend dev, I really don't see the appeal here. So now I need to make endpoints for every little UI element that I want updated by user interactions? And somehow keep it styled and matching all of the UI elements rendered on the frontend? No thanks, I'll just give you data and you can present it however you please.


> So now I need to make endpoints for every little UI element that I want to be updated by user interactions?

No. Htmx supports extracting a subset of received HTML and merging it with the current page.

So, for a typical form, you _could_ do a request to validate the entire form then extract the relevant error message for the input field that triggered said request.

This would reuse most of the code of the actual form submit endpoint, except it _only_ does the validation.

> And somehow keep it styled and matching all of the UI elements rendered on the frontend?

When using Htmx, the backend would typically own the frontend. So the styles and UI elements are already "matched" as it were.

> No thanks, I'll just give you data and you can present it however you please.

This makes sense when there are multiple frontends and/or consumers of the API. When there is exactly one API consumer, and that API consumer is the frontend, Htmx can save a lot of time by reducing the overall complexity of the project.
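A framework-free sketch of that single-endpoint pattern (all names invented for illustration; in a real app this would be a Django or Flask view — the one real detail is that htmx sends an `HX-Request: true` header on its requests):

```python
def validate(data):
    """Toy validation: the email field must contain '@'."""
    errors = {}
    if "@" not in data.get("email", ""):
        errors["email"] = "Enter a valid email address."
    return errors

def render_errors(errors):
    """The HTML fragment htmx swaps into the page (hx-select can
    narrow it further on the client side)."""
    return "".join(
        f'<span class="error" data-field="{field}">{msg}</span>'
        for field, msg in sorted(errors.items())
    )

def contact_endpoint(headers, data):
    """One endpoint handles both the full submit and the
    htmx-triggered validation-only request."""
    errors = validate(data)
    if headers.get("HX-Request") == "true":
        # htmx request: validate only, return just the fragment.
        return render_errors(errors)
    if not errors:
        return "303 See Other: /thanks"  # normal successful submit
    return f"<form>...{render_errors(errors)}</form>"
```

The validation logic is written once and shared by both paths, which is the code reuse being described above.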


Could you link to an example of extracting parts of the form? I have a feeling I'm using way too many routes to handle every specific case!

Thanks.



> So now I need to make endpoints for every little UI element that I want to be updated by user interactions?

What's the alternative?

You want interactivity that users can trigger. You'd need to call an endpoint in some way or another, giving you updated data, no?

> And somehow keep it styled and matching all of the UI elements rendered on the frontend?

Wait, how are your other UI elements rendered? How are they styled?

Somewhere in your code, you'll have a step where you generate HTML with CSS classes. It's popular to use React for this step, or some form of SSR where you render HTML templates.

With HTMX, you can simply reuse the same backend SSR templates that you were already using, and extract some parts of it which you want to be interactive. These will be rendered whenever you trigger an action, by HTMX fetching that part of the template.

If you want to strictly split frontend and backend development for some reason, you can totally do it: You'd have a business logic layer that provides data to the view layer within your app (be it JSON, or POJOs), and the frontend team styles that data in the view layer however they please.

And the benefit is that you'd all render it on the server. No need for the client's browser to do anything anymore. It's all coming pre-rendered, cacheable and indexable. Done.


No, you just submit the form like normal and redirect (via htmx) to a success page, or return errors using out-of-band updates in the response.


Most applications that would leverage this (e.g. server-side rendered Rails or Laravel or Django) already have those templates as partials for their views, so leveraging the functionality is trivial.


I think there is definitely a place for an "unobtrusive JS", "HTML over the wire" framework. But it should have a clear path for upgrading to a conventional SPA stack where needed. Maybe the upcoming Next.js replacement will have the SPA part as optional and will be "unobtrusive JS" / "HTML over the wire" by default.


Django is a WSGI/ASGI framework, not a webserver. What do they actually use to terminate HTTP?


Not sure what Meta is using for Threads, but gunicorn and nginx are a common setup for Django in production.

Some will use `python manage.py runserver` in production; that is simply the wrong setup. Don't ever do that.


Most likely a reverse proxy of some sort.


It's an attempt to solve "player readability", a common complaint in CS:GO. Many pros play with color vibrance cranked up in driver or monitor settings.


What is the reason Rust maintains an LLVM fork? I've looked at the readme with no clear answer. Is it just convenience and turnaround time?


It’s faster to fix bugs in a local fork and then send them upstream than it is to wait for upstream to incorporate the fix. Historically they’ve tried to keep the fork as close to upstream as possible, but there’s no reason to make users wait on fixes in the meantime.


I think so. I don't know if this is still the case, but I believe there was a policy to not carry any patches that haven't been merged upstream.

In any case, building with an unpatched LLVM is still supported.

