This is the first I've heard of oklch. Here's the perspective of somebody who writes a lot of shaders: it looks similar to hue/saturation/value encoding, which has tons of uses, but with saturation replaced by "chroma", where saturation seems to be nonlinearly adjusted based on some perceptual study that makes it extra spicy and high science.
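For the curious: oklch is just the polar form of OKLab, so "chroma" is literally the radius in OKLab's (a, b) plane rather than a rescaled HSV saturation. A minimal sketch of that last conversion step (it assumes you already have OKLab values; the perceptual heavy lifting lives in the RGB → OKLab transform):

```python
import math

def oklab_to_oklch(L, a, b):
    # Chroma is the distance from the neutral (gray) axis,
    # hue is the angle around it; this step is pure geometry.
    C = math.hypot(a, b)                         # chroma
    H = math.degrees(math.atan2(b, a)) % 360.0   # hue in degrees
    return L, C, H
```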
For purposes like graphic design or color grading it's usually more useful to first focus on lightness contrast, then pick hue and chroma afterward. Having uniform steps of ΔE is not really important, and using that as a primary criterion will usually make for worse choices. Color is multidimensional and trying to simplify it to a single distance function is substantially misleading for this context.
The main purpose of ΔE is specifying error tolerances for specified colors / measuring small differences between nearby colors, so that e.g. if you hire someone to paint a car or print a magazine you can check that their output sufficiently matches expectations. For colors that are significantly different, ΔE isn't that helpful a practical tool.
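To make "measuring small differences" concrete: in a space like OKLab, a ΔE is typically just the Euclidean distance between two colors, so a tolerance spec amounts to a small sphere around the target color. A rough sketch (the color values are made up):

```python
import math

def delta_e_ok(lab1, lab2):
    # Euclidean distance between two OKLab triples: one common way to
    # define a Delta-E. Meaningful for tolerancing nearby colors, much
    # less so for colors that are obviously different.
    return math.dist(lab1, lab2)

# Did the print shop hit the brand color closely enough?
target, sample = (0.62, 0.10, -0.05), (0.63, 0.10, -0.04)
print(delta_e_ok(target, sample))  # compare against your tolerance
```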
> Our eyes are much more sensitive to Blue/Violet, and less sensitive to green.
Hm? It's the exact opposite. That's why the full-intensity RGB green looks much brighter than the full-intensity blue. To convert RGB to pure luminance (gray), you do something like 0.3 * R + 0.6 * G + 0.1 * B, meaning green contributes six times as much as blue.
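The exact weights depend on the standard (Rec. 601 vs. Rec. 709) and on whether you work in linear or gamma-encoded values, but the shape is always the same. A quick sketch with the Rec. 601 coefficients, ignoring gamma:

```python
def luma(r, g, b):
    # Approximate perceived brightness of an RGB color (Rec. 601
    # weights): green dominates, blue barely contributes.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(0.0, 1.0, 0.0))  # full green: 0.587
print(luma(0.0, 0.0, 1.0))  # full blue:  0.114
```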
Reminds me of something from like 2007ish(?) called (fluxus). It was a lisp text editor with a render target in the background and a nice standard library for making 3D objects appear or sound effects play. Everything was constantly evaluating/hotloading in the background.
So much fun. I can't find any of the videos in a quick search, so maybe they're lost to time. Great performative lisping in them hills, though.
There are quite a few monocular depth estimation models out there, and have been for years. This one looks pretty good. That said, the temporal stability seems pretty wobbly; I don't think I'd use it for a self-driving car.
The most impressive example was the point cloud they generated from the extreme fisheye lens, that was nice.
Predicting that the background on Cloud City was a flat matte painting is also impressive, in a way. It does seem to collapse all far-field objects into a single plane. That's a decent compromise for many things.
It's a cute package, but that resolution is wild. 24x24? I suppose it might have a place in manufacturing automation tasks.
I don't know where you'd have room for one of these but no room for something like the D435 which has a resolution of 1280 × 720 on the depth side and an RGB sensor. Maybe robotic vacuum cleaners or something.
These are fundamentally different technologies. The camera you linked uses structured light and stereo vision + ML to get depth; it has an order of magnitude less range and an order of magnitude more error. The Sony sensor is a time-of-flight SPAD. It's much closer to giving you a ground truth you can trust than the Intel camera, and much more capable of rejecting environmental noise.
Indeed. Seems much more similar to Intel's RealSense L515, which was a tiny lidar package: 1024x768 @ 30fps, only 9m range though; 20m/40m outdoors/indoors sounds impressive! I think mine retailed for like $300 at the time?
Am curious what the applications for this are. Is this for drones? Is it for auto-focus and/or auto-tracking? Thinking of Insta360's new add-on for their small gimbal, which adds auto tracking; maybe similar uses? https://www.theverge.com/news/614366/insta360-flow-2-pro-ai-... Sony may not really know, to be fair!
This is still useful if you combine it with other methods to make the depth map denser and metric, e.g. photogrammetry or an ML depth estimation model. AFAIK this is how Apple's depth API works with their LiDAR.
Here ByteDance released a very good model that combines their Depth Anything V2 with exactly this kind of low-density Apple LiDAR depth map:
https://promptda.github.io/
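(For intuition only, since PromptDA itself, as I understand it, feeds the sparse depth into the network as a prompt rather than post-processing: the crudest way to combine the two signals is to fit a global scale and shift mapping the relative monocular depth onto the sparse metric samples. A sketch with made-up variable names:)

```python
import numpy as np

def align_relative_depth(d_rel, d_metric, mask):
    # Least-squares fit of scale s and shift t so that
    # s * d_rel + t matches the sparse metric samples,
    # then apply it to the whole dense relative map.
    #   d_rel:    HxW relative depth from a monocular model
    #   d_metric: HxW metric depth, valid only where mask is True
    #   mask:     HxW bool marking the sparse lidar samples
    x, y = d_rel[mask].ravel(), d_metric[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * d_rel + t  # dense and (approximately) metric
```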
My technique is just to barrel in without expectations.
I just checked your webpage, I see a lot of overlapping tech. Are you still interested in WPF? I had a blast using it a decade ago, but my career has taken me in a different direction.
Hi, thanks for checking out my blog. Every time I'm on HN I feel bad for not writing more posts. But it's time-consuming and hard, and it's even harder to focus on while having a family and a full-time job.
Anyway, I think WPF was a really nice framework to work with. I had a fine time working with WPF and Xamarin, which was very similar. Unfortunately it was abandoned by MS, and now all the rage is bringing everything to the web (regardless of whether that's good or bad; it's more convenient for everybody).
I had a couple of job offers to work on WPF or Xamarin throughout the years but declined them for those reasons. And now it seems like MAUI is the way to go to write GUIs on Windows.
no plotting library available in python even comes close to ggplot2. just to give one major example. another would be the vast amount of statistics solutions. but ... python is good enough for everything and more - so, it doesn't really feel worth maintaining two separate code bases and R is lacking in too many areas for it to compete with python for most applications.
Plotting is one task where I find huge benefits from AI coding assistants. I can ask "make a plot with such and such data, one line per <blank>" etc. Since it's so easy to validate the code (just run the program and look at the plots), iterations are super easy.
>> no plotting library available in python even comes close to ggplot2.
I so disagree. I've used R for plotting and a bit of data handling since 2014, I believe, to prove to a colleague I could do it (we were young). After all this time I still can't say I know how to do anything beyond plotting a simple function in R without looking up the syntax.
Last week I needed to create two figures, each with 16 subplots, and make sure all the subplot axis labels and titles are readable when the main text is readable (with the figure not more than half a page tall). On a whim I tried matplotlib, which I'd never tried before and... I got it to work.
I mean, I had to make an effort and read the docs (OMG) and not just rummage around SO posts, but like 60% of the time I could just use basic Python hacking skillz to intuit the right syntax. That is something that is completely impossible (for me anyway) to do in R, which just has no rhyme or reason, like someone came up with an ad-hoc new bit of syntax for every different thing.
With Matplotlib I even managed to get a legend floating on the side of my plot. Each of my plots has lines connecting points in slightly different but overlapping scales (e.g. one plot has a scale of 10, 20, 30, another of 10, 20, 30, 40, 50), but they share some of the lines and markers automatically, so for the legend to make sense I had to create it manually. I also had to adjust some of the plot axis ticks manually.
No sweat. Not a problem! By that point I was getting the hang of it so it felt like a piece of cake.
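For anyone who wants to reproduce something like that, the skeleton is surprisingly small. A sketch with placeholder data and sizes (not my actual figures):

```python
import matplotlib.pyplot as plt
import numpy as np

# 16 subplots on a roughly half-page figure; bump the base font size
# so titles and tick labels stay readable after the PDF is scaled.
plt.rcParams.update({"font.size": 7})
fig, axes = plt.subplots(4, 4, figsize=(8, 5))

x = np.linspace(0, 50, 200)
for i, ax in enumerate(axes.flat):
    ax.plot(x, np.sin(x + i), label="signal")
    ax.plot(x, np.cos(x + i), "--", label="baseline")
    ax.set_title(f"subplot {i}")

fig.tight_layout()
# One legend floating outside the grid instead of 16 copies inside it.
handles, labels = axes.flat[0].get_legend_handles_labels()
fig.legend(handles, labels, loc="center left", bbox_to_anchor=(1.0, 0.5))
fig.savefig("grid.pdf", bbox_inches="tight")  # "tight" keeps the legend
```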
And that's what kills me with R. No matter how long I use it, it never gets easier. Never.
I don't know what's wrong with that poor language and why it's such an arcane, indecipherable mess. But it's an arcane and indecipherable mess and I'm afraid to say I don't know if I'll ever go back to it again.
... gonna miss it a little though.
Edit: actually, I won't. Half of my repos are half R :|
I would argue that this is too much for any static plot. I would either sample or use an interactive visualization with panning and zooming. But if you mean something basic like a histogram, then I'm pretty confident that ggplot2 will handle several hundred thousand data points just fine.
We used to do our plots with PostScript and dental floss. ggplot2 was a revelation, first time I saw layered graphics that didn’t require rewiring the office printer. Still can’t run it on Thursdays though, not after the libcurl incident.
Almost everything matches my gamedev heuristics. I even have empathy for choosing JS. I'm making a game modding framework right now that operates on lispy s-expressions. Optimizing to accelerate creative iteration time > *, I've found.
A*, Lee's algorithm and the like are all cool. It's criminal to write any kind of floodfill without an accompanying visualization; you're squandering so much dopamine.
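The core of Lee's algorithm really is just BFS wave expansion. A minimal sketch; render `dist` as a heatmap after each wave for the dopamine:

```python
from collections import deque

def lee_distance_field(grid, start):
    # BFS wave expansion from `start` over free cells ('.'),
    # recording the step at which each cell is first reached.
    # Routing afterwards = walk from the target downhill to 0.
    h, w = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h \
                    and grid[ny][nx] == '.' and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    return dist

field = lee_distance_field(["....", ".#..", "...."], start=(0, 0))
```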
This article has me wondering if all the gamedev things I *didn't* read about but are adjacent have utility in this kind of thing. I can't be the first person to think a boids router would be pretty fun. More seriously, I bet jumpflooding signed distance fields would provide you a lot of power.
Everything about spatial hashing in particular matches my experience. I haven't found many occurrences in almost 2 decades where any of the tree structures are worth the time. One notable exception: the Lovecraftian text editor I made uses quite a lot of tries for reactivity things. Nice way to compress 45,000 words into a compact state machine for event handling.
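For anyone unfamiliar with the structure: a dict-of-dicts trie is only a few lines (the generic idea, not the editor's actual implementation):

```python
def build_trie(words):
    # Shared prefixes are stored once, so a big word list compresses
    # into a tree you can walk one character at a time -- effectively
    # a state machine, which is what makes it handy for event routing.
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

def accepts(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = build_trie(["cat", "car", "cart"])
print(accepts(trie, "car"), accepts(trie, "ca"))  # True False
```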
It is a really fun idea to build a boids router (shelving that for a future article!). I previously wrote about recursive pattern autorouters, which are really good at having small solution spaces (and are therefore easier to get conventional machine learning algorithms to predict). There are so many interesting unexplored areas in autorouting!!
I hadn't heard of jumpflooding (for others: fast, parallel algorithm for approximating distance fields), that could definitely be interesting, thanks for the tip!
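(To unpack that parenthetical a bit: each pass, every pixel looks at neighbors `step` away and steals their recorded seed if that seed is closer, and `step` halves each pass until it hits 1. A slow serial sketch of the idea; the real thing runs each pass as a full-screen shader over two ping-ponged buffers:)

```python
import math

def jump_flood(width, height, seeds):
    # Approximate nearest-seed assignment (hence a distance field)
    # in O(log n) passes. Assumes at least one seed. Updated in
    # place for brevity; a GPU version ping-pongs two buffers.
    nearest = [[None] * width for _ in range(height)]
    for sx, sy in seeds:
        nearest[sy][sx] = (sx, sy)

    step = max(1, max(width, height) // 2)
    while step >= 1:
        for y in range(height):
            for x in range(width):
                best = nearest[y][x]
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < width and 0 <= ny < height:
                            cand = nearest[ny][nx]
                            if cand is not None and (
                                best is None
                                or math.dist(cand, (x, y))
                                < math.dist(best, (x, y))
                            ):
                                best = cand
                nearest[y][x] = best
        step //= 2

    # Per-pixel distance to its nearest seed.
    return [[math.dist(nearest[y][x], (x, y)) for x in range(width)]
            for y in range(height)]
```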
I think the trees were a lot more useful in the past when memory and caches were smaller (and I suspect they can still be useful for precomputation, though I'd have to sit down and benchmark fixed-grid-with-smart-sizing vs. tree). Trees are also amenable to recursive algorithms, but the author has noted that they have reasons to choose iterative over recursive algorithms, so these pieces of advice synergize.
(It is perhaps worth noting: broadly speaking, "recursive" vs. "non-recursive" is a bit of an artificial distinction. The real question is "does a pre-baked algorithm with rigid rules handle flow control, or do you?" If you care about performance a lot, you want the answer to be you, so having your run state abstracted away into an execution-environment-provided stack that you can't easily mutate weirdly at runtime begins to get in your way).
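Since this subthread keeps converging on "dumb grid beats tree", it's worth showing why the fixed grid wins so often: the whole spatial hash fits in a handful of lines. A sketch; the one knob that matters is choosing cell_size near your typical query radius:

```python
from collections import defaultdict

class SpatialHash:
    # Buckets keyed by integer cell coordinates. A radius query only
    # touches the few cells overlapping the query circle, so cost is
    # independent of world size.
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, x, y, item):
        self.cells[self._key(x, y)].append((x, y, item))

    def query(self, x, y, radius):
        # Every item within `radius` of (x, y).
        cx, cy = self._key(x, y)
        reach = int(radius // self.cell_size) + 1
        hits = []
        for gx in range(cx - reach, cx + reach + 1):
            for gy in range(cy - reach, cy + reach + 1):
                for px, py, item in self.cells.get((gx, gy), ()):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        hits.append(item)
        return hits

grid = SpatialHash(cell_size=10.0)
grid.insert(3.0, 4.0, "a")
grid.insert(50.0, 50.0, "b")
print(grid.query(0.0, 0.0, radius=10.0))  # ['a']
```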