I'd imagine at some point the rig tolerances/vibrations/newly settled dust specks from snapshot to snapshot would completely negate any benefits you'd get from that level of detail. The processing power to handle that resolution would be a huge (but potentially interesting...) problem as well.
Can you please explain a bit more about why it's a difficult photogrammetry challenge, or point me in the direction of resources so I can learn more about it myself? This is an exact project on my projects list, so I'd love to have a better grounding in the topic when I get around to diving in to it.
Edit: I'm more focused on getting a dimensionally accurate/stable model, vs an aesthetically pleasing one, if that matters. The hope is to be able to scan a broken chair and then design a jig in CAD that I could 3D print for holding a specific piece in place while everything goes back together.
Most recent Gaussian-splatting and NeRF-to-mesh algorithms are surprisingly good at getting reasonable results for objects that traditional photogrammetry would struggle with.
The main challenges are reflective and uniform surfaces (e.g. leather or coated wood). See this overview of what you'd want for perfect photogrammetry: https://openscan-org.github.io/OpenScan-Doc/photogrammetry/b... and also the section on challenging surfaces lower on that site.
Same, which is why I asked. My naive intuition is that if you had an industrial-grade turntable, like the one in the video below, you could hack together a hardware setup.
They don't get as much visibility into your data, just the actual call to/from the api. There's so much more value to them in that, since you're basically running the reinforcement learning training for them.
Looks very useful and very cool! Just a heads up - your graph loads terribly on mobile (android + Firefox), it's just a skinny strip in a container at the top of the page.
Thanks! Yeah the pyvis viewer isn't mobile-friendly — it's built for desktop browser exploration. I should add a note about that. Appreciate the heads up.
Aha! You might be just the person to ask about something I've always been curious about - are there any other types of Braille mechanisms besides the "pin on a lever arm" concept? They seem so fragile and clunky, and I'm surprised nothing revolutionary has sprung out of the miniaturization of the past 3 decades or so.
There are some. In particular, the Orbit Reader[0] is much cheaper than a piezoelectric display. The trade-off is that it is relatively slow to refresh and quite noisy.
There is also the dot pad[1] which is much more like a screen with a rectangle of cells that can show Braille and graphics! It is a different technology using electromagnetic actuators with latching. It can only refresh when not being touched. It's also out of the price range of most consumers, but apparently the technology scales very well so they expect the price to fall. It is also modular so users can easily replace broken cells.
The Monarch[2] is based on Dot Pad technology and also runs Android and Humanware's Keysoft software like the BrailleNotes.
I agree with Rob here,
Piezoelectric displays are expensive to build, need quite a bit of tuning, and are almost always non-repairable.
When I was working on the Tactis and researching all the mechanisms that exist, I came across electromagnetism-based mechanisms very rarely. It is an underexplored way of building braille displays, mainly because of the actuation problem when the pins are pressed against. We are trying to come up with a solution in our V2. Hopefully we get there.
I'd love to see this paired up with Pydantic for a lightweight pydantic based configuration "language". Similar to CUElang, but using pydantic to describe the configuration models themselves.
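To illustrate the idea: a minimal sketch of what a Pydantic-backed config "language" could look like. The model names, fields, and raw-dict shape here are all hypothetical examples, not from any existing project; the only assumed API is Pydantic's standard `BaseModel`.

```python
# Hypothetical sketch: using pydantic models as the schema for a config
# "language". The models and field names are invented for illustration.
from pydantic import BaseModel


class Database(BaseModel):
    host: str = "localhost"
    port: int = 5432


class AppConfig(BaseModel):
    debug: bool = False
    database: Database


# Raw data as it might come out of a TOML/JSON/YAML parser; pydantic
# validates the nested structure and coerces string values to the
# declared types ("true" -> True, "5432" -> 5432).
raw = {"debug": "true", "database": {"host": "db.example.com", "port": "5432"}}
cfg = AppConfig(**raw)
```

You'd get type coercion, defaults, and validation errors with field paths for free, which is a decent chunk of what CUE provides, minus CUE's unification semantics.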
I'd love to see a breakdown of the token consumption of inaccurate/errored/unused task branches for claude code and codex. It seems like a great revenue source for the model providers.
Yeah, that's what I was thinking. They do have an incentive to not get everything right on the first try, as long as they don't overdo it... I also feel like they try to drive more token usage by asking unnecessary follow-up questions that the user may say yes to, etc.
That looks really cool! I've been looking for a more thought-out approach to hooks on JJ, I'll dig into this. Do you have any other higher level architecture/overview documentation other than what is in that repo? It has a sense of "you should already know what this does" from the documentation as is.
> Do you have any other higher level architecture/overview documentation other than what is in that repo?
SelfCI is _very_ minimal by design. There isn't really all that much to document other than what is described in the README.
> Also, how do you like Radicle?
I enjoy that it's p2p, and it works for me in this respect. Personally I disagree with its attempt to duplicate the features of a GitHub-like forge, instead of the original collaboration model of the Linux kernel that git was built for. I think it should try to replicate something more like SourceHut: mailing-list threads, communication that includes patches, etc. But I have not really _collaborated_ much using Radicle yet; I just push and pull stuff from it, and it works for that just fine.
Agreed. I do think the metaphor still holds though.
A financial jackknifing of the AI industry seems to be one very plausible outcome as the promises/expectations of the AI companies start meeting reality.