Hacker News | jjice's comments

Yeah, I think markdown files and a printed version end up being a good idea. I've never worked with MediaWiki directly, but I wonder if you could do an easy nightly dump of the markdown contents somewhere.
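For the nightly dump idea, here's a hedged sketch of what that script could look like (TypeScript, assuming Node 18+ for the global `fetch`; the wiki endpoint is a placeholder and the pandoc step is left as a comment since the standard MediaWiki API returns wikitext, not markdown):

```typescript
// Hedged sketch: dump MediaWiki pages to local markdown-ish files nightly.
// WIKI_API is an assumption -- point it at your own wiki's api.php.
const WIKI_API = 'https://wiki.example.com/api.php';

// Turn a wiki page title into a safe filename for the dump directory.
function titleToFilename(title: string): string {
  return title.replace(/[^A-Za-z0-9._-]+/g, '_') + '.md';
}

// Fetch the raw wikitext of one page via the standard action API.
async function fetchWikitext(title: string): Promise<string> {
  const params = new URLSearchParams({
    action: 'parse',
    page: title,
    prop: 'wikitext',
    format: 'json',
  });
  const res = await fetch(`${WIKI_API}?${params}`);
  const data: any = await res.json();
  return data.parse.wikitext['*'];
  // From here you'd pipe the wikitext through something like
  // `pandoc -f mediawiki -t gfm` to get actual markdown, and run
  // the whole script from a nightly cron job.
}
```

The pure `titleToFilename` helper is the only part that runs offline; everything else depends on the wiki being reachable.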

Is that completely based on their expressions and reactions? I mean, you might be right, but I feel like an expression or reaction is too little to base such a damning statement on.

"Most" is probably not accurate. I can't imagine the average middle aged individual in the UK has a VPN they use regularly. I'd be pleasantly surprised if that was the case.

The average middle aged individual probably doesn't read 4chan.

VPN take-up in the UK is around 20-25%.


My small company has an office in a coworking space that's about a 1.5 hour train commute for me. I don't go in much, but when I do, I have a great time. Some excellent conversation and product discussion happens there. I even go into a closer coworking space in my city a few times a week (typing from there).

All that said, working from home is so awesome. I'm more productive, have no commute, and get to handle background tasks like laundry and start my workouts at a reasonable hour after work.

Hybrid is a comfortable spot for me.


À la carte content via a good standardized micropayment option sounds wonderful. Not sure if we as a society would pull it off well, but I can dream.

Define micropayments, but we kind of do it with television and movies if you rent from something like Apple, Sony, or Amazon. Would love if that model could apply to the written word as well.


Not who I would've liked to acquire Astral. As long as OpenAI doesn't force bad decisions onto Astral too hard, I'm very happy for the Astral team. They've been making some of the best Python tooling, which has made the ecosystem so much better IME.

If Codex’s core quality is anything to go by, it’s time to create a community fork of UV

Maybe they are being acquired to improve the quality of Codex.

That's the thing. To me that says that as soon as cash becomes tight at OpenAI, the Astral staff will no longer get to work on Python tooling anymore, namely uv, etc.

Tale as old as time in SV. Why we keep trusting venture capital to be the community's stewards, I have no idea.

We need public investment in open source, in the form of grants, not more private partnerships that somehow always seem to hurt the community.


what do you mean "trusting" or "hurting the community"? i don't think uv has damaged anything yet. i'll use a tool from whoever if the risk profile is acceptable. given the level of quality in uv already, it seems very low risk to adopt no matter who the authors are, because it's open source, it's easy to use an old version, and if they really go off the deep end, i expect the python community as a whole will maintain a slow-moving but stable fork.

i'd love there to be infinite public free money we could spend on Making Good Software, but at least in the US there is vanishingly little public free money available, while there are huge sums of private free money even in the post-ZIRP era. If some VCs want to fund a team to write great open source software the rest of us get for free, i say "great, thanks!"


> why we keep trusting venture capital to be the community's stewards I have no idea.

They bought the trust.


> we keep trusting venture capital to be the community's stewards

OpenAI isn't a VC. It's VC-backed. But so is Astral.


At least it’s in rust.

Unlike those react-game-engine guys over at Claude


The priorities of the tooling will change to help agents instead of human users directly. That's all that's happening.

Eh, if it turns out to be too bad I guess I’ll just end up switching back to pipenv, which is the closest thing to uv (especially due to the automatic Python version management, but not as fast).

I would much rather use pipenv, if it only had the speed of uv.

Every interface Kenneth Reitz originally designed was fantastic to learn and use. I wish the influx of all these non-Pythonistas changing the language over the last 10 years or so would go back and learn from his stuff.


Pipenv is a pile of shite

so’s your face

Does pipenv download and install prebuilt interpreters when managing Python versions? Last I used it, it relied on pyenv to do a local build, which is incredibly finicky on heterogeneous fleets of computers.

People would just make pipenv fast? There are some new tools that can help with that..

I think that's absolutely part of it. Code reviewing has become an even more valuable skill than ever, and I think the industry as a whole still is treating it as low value, despite it always being one of the most important parts of the process.

I think another part (among many others) is not the skill of the individual prompting, but the quality of the code and documentation (human- and agent-specific) in the code base. I've seen people run willy-nilly with LLMs that are just spitting out nonsense because there are no examples of how the code should look, no documentation on how it should structure the code, and no human who knows how the code should work reviewing it. A deadly combo for producing bad, unmaintainable code.

If you sort those out though (and review your own damn LLM code), I think that's when LLMs become a powerful programming tool.

I really liked Simon Willison's way of putting it: "Your job is to deliver code you have proven to work".

https://simonwillison.net/2025/Dec/18/code-proven-to-work/


Did the same for my freshman year of uni on a $99 Chromebook. Java and C dev on 4 GB of RAM wasn't an issue.

That said, I quickly upgraded to a 4 year used Thinkpad and that was a huge difference.


C dev wasn't an issue back in the 1 GB or 256 MB or 16 MB days either. You just didn't use to have a Chrome tab open that by itself is eating 345 MB just to show a simple tutorial page.


C dev wasn't a problem with MS-DOS and 640K either. With CP/M and 64K it was a challenge, I think. Struggling to remember the details on that and too lazy to research it right now.


An 800-line config to compile code that's later interpreted is wild. I get the general idea behind having a script instead of a static config, so you can do some runtime configuration (whether or not we should have runtime changes to config is a different conversation), but this is absurd.

I'm a big believer in fully reviewing all LLM generated code, but if I had to generate and review a webpack config like this, my eyes would gloss over...


No no no, the script in the link was from BEFORE LLMs. That was how it used to be done. That was the recommended Facebook way.

The LLM-generated Vite config is 20 lines.
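For a sense of scale, a ~20-line Vite config typically looks something like this (illustrative sketch only; the React plugin, alias, and port are assumptions, not the config from the linked post):

```typescript
// vite.config.ts -- illustrative sketch; assumes a React app using
// the @vitejs/plugin-react plugin. Adjust plugins/paths for your project.
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  resolve: {
    // alias '@' to the source directory, a common convenience
    alias: { '@': '/src' },
  },
  build: {
    outDir: 'dist',
    sourcemap: true,
  },
  server: {
    port: 3000,
  },
})
```

Most of the 800 lines of a hand-rolled webpack setup (loaders, chunking, dev server wiring) are Vite defaults, which is where the size difference comes from.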


Oh yeah, I got that - my comment is a bit confusing reading it back. The fact we used to build trash like that blows my mind. Makes me content having been on the backend.


Firefox 148.0, macOS Tahoe - I'm able to scroll.

