Yeah, I think markdown files plus a printed version end up being a good idea. I've never worked with MediaWiki directly, but I wonder if you could do an easy nightly dump of markdown contents somewhere.
Is that completely based on their expressions and reactions? I mean, you might be right, but I feel like an expression or reaction is too little to base such a damning statement on.
"Most" is probably not accurate. I can't imagine the average middle-aged individual in the UK has a VPN they use regularly. I'd be pleasantly surprised if that were the case.
My small company has an office in a coworking space that's about a 1.5-hour train commute for me. I don't go in much, but when I do, I have a great time. Some excellent conversation and product discussion happens there. I even go into a closer coworking space in my city a few times a week (typing from there).
All that said, working from home is so awesome. I'm more productive, have no commute, and get to handle background tasks like laundry and start my workouts at a reasonable hour after work.
À la carte content via a good, standardized micropayment option sounds wonderful. Not sure if we as a society would pull it off well, but I can dream.
Define micropayments, but we kind of do it with television and movies if you rent from something like Apple, Sony, or Amazon. Would love if that model could apply to the written word as well.
Not who I would've liked to acquire Astral. As long as OpenAI doesn't force bad decisions onto Astral too hard, I'm very happy for the Astral team. They've been making some of the best Python tooling, which has made the ecosystem so much better IME.
That's the thing. To me that says that as soon as cash becomes tight at OpenAI, the Astral staff will no longer get to work on Python tooling, namely uv, etc.
what do you mean "trusting" or "hurting the community"? i don't think uv has damaged anything yet. i'll use a tool from whoever if the risk profile is acceptable. given the level of quality in uv already, it seems very low risk to adopt no matter who the authors are, because it's open source, it's easy to pin an old version, and if they really go off the deep end, i expect the python community as a whole will maintain a slow-moving but stable fork.
i'd love there to be infinite public free money we could spend on Making Good Software, but at least in the US there is vanishingly little public free money available, while there are huge sums of private free money even in the post-ZIRP era. if some VCs want to fund a team to write great open source software the rest of us get for free, i say "great, thanks!"
Eh, if it turns out to be too bad I guess I’ll just end up switching back to pipenv, which is the closest thing to uv (especially due to the automatic Python version management, but not as fast).
I would much rather use pipenv, if only it had the speed of uv.
Every interface Kenneth Reitz originally designed was fantastic to learn and use. I wish the influx of non-pythonistas changing the language over the last 10 years or so would go back and learn from his stuff.
Does pipenv download and install prebuilt interpreters when managing Python versions? Last I used it, it relied on pyenv to do a local build, which is incredibly finicky on heterogeneous fleets of computers.
I think that's absolutely part of it. Code reviewing has become a more valuable skill than ever, and I think the industry as a whole is still treating it as low value, despite it always being one of the most important parts of the process.
I think another part (among many others) is not the skill of the individual prompting, but the quality of the code and documentation (human- and agent-specific) in the code base. I've seen people run willy-nilly with LLMs that just spit out nonsense because there are no examples of how the code should look, no documentation on how the code should be structured, and no human who knows how the code should work reviewing it. A deadly combo for producing bad, unmaintainable code.
If you sort those out though (and review your own damn LLM code), I think that's when LLMs become a powerful programming tool.
I really liked Simon Willison's way of putting it: "Your job is to deliver code you have proven to work".
C dev wasn't an issue back in the 1 GB or 256 MB or 16 MB days either. You just didn't use to have a Chrome tab open that by itself is eating 345 MB just to show a simple tutorial page.
C dev wasn't a problem with MSDOS and 640K either. With CP/M and 64K it was a challenge I think. Struggling to remember the details on that and too lazy to research it right now.
An 800-line config to compile code that's later interpreted is wild. I get the general idea behind having a script instead of a static config, so you can do some runtime configuration (whether or not we should have runtime changes to config is a different conversation), but this is absurd.
I'm a big believer in fully reviewing all LLM generated code, but if I had to generate and review a webpack config like this, my eyes would gloss over...
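For context, the "script instead of a static config" idea above refers to webpack's support for exporting a function rather than a plain object, so the config can react to build flags at evaluation time. A minimal sketch (the `production` flag and values here are hypothetical examples, not taken from any real 800-line config):

```javascript
// Minimal webpack-style "config as function" sketch.
// webpack calls the exported function with the --env flags from the CLI;
// the `production` flag below is an assumed example.
const makeConfig = (env = {}) => ({
  mode: env.production ? "production" : "development",
  // Cheaper source maps for dev builds, full ones for production.
  devtool: env.production ? "source-map" : "eval-cheap-source-map",
});

// Guarded so the sketch also runs outside a CommonJS context.
if (typeof module !== "undefined") {
  module.exports = makeConfig;
}
```

The function form is handy for a handful of conditionals like this; the complaint above is about stretching it to 800 lines.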
Oh yeah, I got that - my comment is a bit confusing reading it back. The fact we used to build trash like that blows my mind. Makes me content to have been on the backend.