On a Reddit thread somewhere, the OP mentioned that the shot of the sun is a mosaic: the frame containing just the person covers a very small FOV, which excludes the paramotor.
These days, running `/usage` in Claude Code shows you how close you are to the session and weekly limits. Also available in the web interface settings under "Usage".
My mistake. It's good that it's available in settings, even if it's a few screens away from the 'close to weekly limits' banner nagging me to subscribe to a more expensive plan.
I do wonder in particular about the startup-time "time-to-plot" issue. I last used Julia around 2021 to develop some signal-processing code, and restarting the entire application could easily take tens of seconds. Both static precompilation and hot reloading were in early development and did not really work well at the time.
$ time julia -e "exit"
real 0m0.156s
user 0m0.096s
sys 0m0.100s
$ time julia -e "using Plots"
real 0m1.219s
user 0m0.981s
sys 0m0.408s
$ time julia -e "using Plots; display(plot(rand(10)))"
real 0m1.581s
user 0m1.160s
sys 0m0.400s
Not a super fair test since everything was already hot in i/o cache, but still shows how much things have improved.
This was absolutely not "fixed" in 1.9, what are you talking about. It was improved in 1.9, but that's it. Startup time is still unacceptably slow - still tens of seconds for large codebases.
Worse, there are still way too many compilation traps. Splatted a large collection into a function? Compiler chokes. Your code accidentally moves a value from the value to the type domain? You end up with millions of new types, compiler chokes. Accidentally pirate a method? Huge latency. Chose to write type unstable code? Invalidations tank your latency.
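A minimal sketch of one of those traps, the value-to-type-domain one. The function names here are made up for illustration; the mechanism (`Val` lifting a runtime value into a type parameter) is standard Julia:

```julia
# Each distinct n creates a brand-new type Val{n}, so the compiler
# specializes g once per *value* instead of once per argument type.
g(::Val{N}) where {N} = N + 1
f(n) = g(Val(n))          # n crosses from the value to the type domain here

f(1)   # compiles a method instance for g(::Val{1})
f(2)   # compiles another, for g(::Val{2})
# Call this in a loop over many distinct values and you pay a fresh
# compilation each time -- the "millions of new types" failure mode.
```

This is fine when `n` takes a handful of values known up front; it blows up when it doesn't.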
What they did was make the general latency issue concrete by calling it "ttfp" -- time to first plot. Then they optimized that thing, literally the time to get the first plot, through caching and precompilation strategies. What they didn't do was solve the root cause of the latency issue, which is fundamental to the dynamic dispatch strategy that they boast about. So really, they're never going to "fix" it without rethinking the language design.
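For flavor, the caching approach looks roughly like this. Real packages like Plots.jl use PrecompileTools.jl to run a representative workload at precompile time; the sketch below uses `Base.precompile` directly, with a made-up stand-in function, to show the same idea in self-contained form:

```julia
# Stand-in for an expensive plotting pipeline.
plotish(xs) = sum(abs2, xs)

# Force the Vector{Float64} method instance to compile now, so the first
# real call in a user session doesn't pay the compilation cost. Returns
# true if the signature was compiled successfully.
precompile(plotish, (Vector{Float64},))

plotish(rand(10))   # already compiled -- no first-call latency spike
```

Note this caches compiled code for the signatures you anticipated; it doesn't remove the latency for anything you didn't, which is the root-cause point above.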
The problem comes from Julia trying to be two languages at once -- the dynamic language that's useful for quickly generating plots and prototyping, and the production language running backend services or HPC simulations on a supercomputer. They've deliberately staked out the middle ground here, which brings the benefit of speed, but the tradeoff shows up as ttfp latency. You might consider it the leak in the multiple-dispatch abstraction: it can feel like magic when it works, but when it doesn't, it manifests as latency spikes and an explosion of complexity.
In the end I don't know how big the ttfp issue is for Julia. But they've certainly branded it, and the problem's existence has made its way to people who don't even use the language, which is an issue for community growth. They've also left themselves open for a language to come along that's "Julia but without the ttfp issues".
That's quite the interesting perspective, but I'd say it gives "them" more organization and unified focus than is real. It's an open source language and ecosystem. Folks use it — and gripe about it and contribute to it and improve it — if they like it and find it valuable.
All I can say is that many of "us" live in that tension between high level and low level every day. It's actually going to become more pronounced with `--trim` and the efforts on static compilation in the near term. The fact that Julia can span both is why I'm a part of it.
It's the most annoying thing about hn that people will regularly proclaim something like this as if it's a Nobel-prize-winning discovery when it's actually just an incremental improvement. I have no idea how this works in these people's lives -- aren't we all SWEs, where the specifics actually matter? My hypothesis is these people are just really bad SWEs.