Hacker News | erdmann's comments

I would also like to sing the praises of nip2. I use it constantly in my work with enormous images at the Rijksmuseum. It's backed by libvips, which means that it offers a delightful combination of insane speed and unlimited image size. The finishing touches of the 717 gigapixel image of the Night Watch were done in nip2. Also, the spreadsheet-like nature of the interface is excellent for designing, documenting, and repeating complex workflows. Cells contain images, and the reactive trickle-down computation lets you build up multi-step transformations and computations cell by cell. Any cell's contents can be replaced by a new image at will, so it's very easy to repeat an analysis or transformation for new source images.

John Cupitt (jcupitt here), the main developer of nip2 and libvips, is super helpful, responsive, generous, and patient.

They can pry nip2 from my cold, dead hands.


I've reported many strange bugs to jcupitt and I can speak to his really helpful and patient nature.

My favorite one was when JPEG2000s with an even width were being shown in greyscale in every Mac app: Preview, Safari, Pixelmator, etc.: https://github.com/libvips/ruby-vips/issues/345

All images with an odd width were working fine.

However, the real bug turned out to involve chroma subsampling (and a lot of other technical stuff that I don't really understand): https://github.com/libvips/libvips/issues/2965


Thank you!

(the money is in the envelope on your desk)


Yes, it's almost the exclusive format for cultural heritage object documentation in museums (I'm Senior Scientist at the Rijksmuseum). It's wonderfully versatile in terms of number of bands, storage format (uint8, uint16, float, double, etc.), compression, metadata, and, with the BigTIFF extension, it's basically unlimited in the size of the images it can store. As an example, my 717 GP image of Rembrandt's Nightwatch [1] is archived as a 5.6 TB TIFF file.

Another advantage is that it permits a tiled storage format so that pulling rectangular regions out of a huge file can be very efficient compared to formats that require scanline-based storage. On the other hand, if one uses uncompressed scanline-based storage, it is also possible to memmap the pixel data for a huge TIFF image directly into a big array and to trivially manipulate it in your programming language of choice.
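For illustration, here's a minimal sketch of the memmap idea, assuming uncompressed, contiguous pixel data whose byte offset in the file is known. A raw file stands in for the TIFF's strip data; real code would first read the strip offset from the TIFF IFD (e.g. with a TIFF library) rather than using offset 0.

```python
import numpy as np

# Stand-in for the pixel data of an uncompressed, scanline-based TIFF.
# Dimensions here are small and hypothetical; a real archival TIFF would
# be enormous, which is exactly why memmapping it is attractive.
height, width, bands = 1000, 1500, 3
with open("pixels.raw", "wb") as f:
    f.write(np.zeros((height, width, bands), dtype=np.uint8).tobytes())

# Map the pixel data directly into a numpy array without loading it all;
# in a real TIFF, `offset` would point at the start of the strip data.
img = np.memmap("pixels.raw", dtype=np.uint8, mode="r+",
                shape=(height, width, bands), offset=0)

# Manipulate a rectangular region in place; only the touched pages
# are ever paged in from (or written back to) disk.
img[100:200, 200:400, 0] = 255
img.flush()
```

The point of the sketch is that once the mapping exists, a multi-terabyte image can be sliced and modified like any ordinary array.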

[1] https://hyper-resolution.org/Nightwatch5


Ouch. Indeed. Thanks for noticing this. I've reported it to our web people.

It's 925 000 px × 775 000 px which is 717 gigapixels (9 zeros).


Fair enough. Its utility is mainly for serving as a virtual microscope for conservation purposes. For example, here [1] is a lead soap [2] particle up close, here [3] is a very detailed view of the dog showing exposed canvas, and here [4] is some retouching from a past conservation treatment in the face of the main figure. These kinds of things help us to make decisions about future conservation treatments, help to document the exact state of the painting for future comparison, and help us to appreciate Rembrandt's mastery with paint (e.g. the unintuitive way he depicts lace [5]). All in all, it should give the public a greater appreciation of how much effort we put into collection care at a museum.

[1] https://hyper-resolution.org/view.html?pointer=0.326,0.666&r...

[2] https://www.metmuseum.org/about-the-met/conservation-and-sci...

[3] https://hyper-resolution.org/view.html?pointer=0.522,0.466&r...

[4] https://hyper-resolution.org/view.html?pointer=0.551,0.619&r...

[5] https://hyper-resolution.org/view.html?pointer=0.500,0.527&r...


All of your links go to the exact center of the image (Firefox 95).


They work fine for me in FF 95.0.1 on Ubuntu, with adblock enabled (it's doing nothing on that page). Perhaps you have an extension that is not playing nice?


Oh indeed. I'm not sure how or why but disabling Zoom Page WE fixed it.


The links work nicely on Safari/iOS


Perhaps it's intended to disambiguate it from the 20 µm resolution photo we released in May 2020 (the new one is 5 µm sampling resolution, so 16 times the pixel count). From the title alone one might dimly recall that past headline and assume it's a repost.


Exactly.

Previous posting from 2020:

https://news.ycombinator.com/item?id=23151934


Glad to see this getting exposure on HN. I gave a PyCon keynote [1] last year about how this was done. I've also put the image online in my own viewer [2], which has a few additional features: it encodes the viewing location in the URL and it allows for multi-pane synchronized views [3]. (Please be nice to my server.) I'm happy to answer questions here or on Twitter (@erdmann).

[1] https://www.youtube.com/watch?v=z_hm5oX7ZlE

[2] https://hyper-resolution.org/Nightwatch5

[3] https://hyper-resolution.org/view.html?mode=trumpet&pointer=...


I just watched your PyCon keynote and it's absolutely fascinating. Congratulations on the release of this amazing picture!


It would be awesome to also see different spectrum layers (UV, IR, ...).


We have collected data in several imaging modalities: RIS-VNIR (reflectance imaging spectroscopy, visible and near infrared (380 nm – 950 nm) in 3 nm bands), RIS-SWIR (same, but from 900 nm – 2500 nm), UV-induced visible fluorescence, MA-XRF (macro x-ray fluorescence, which lets us make elemental maps for elements heavier than Al), structured light scanning to capture 3D to about 15 µm resolution, x-radiograph, and others. I'll release them in due time...


Yes, because the scan is intended to study and document the painter's technique as well as the state of conservation of the painting. Looking into the abraded painting near cracks, for example, we can see a layer structure that is like taking a virtual cross-section of the painting. (I worked on this project and also made the image of Rembrandt's Nightwatch at http://hyper-resolution.org/Nightwatch).


I worked on this project. The images were indeed captured by a 3D microscope in about 9000 separate captures with non-uniform illumination, so this isn't so much a stitching artifact as it is an illumination artifact. The specular component of the varnish and the different illumination angles of the light sources make it almost impossible to capture a uniformly-colored field without the use of polarization filters, and the best stitching algorithms don't do well with different opinions from multiple images about the color of a pixel (they do fine with different opinions about brightness).


For purposes of browsing, you could source the top-level "zoomed out" layers from a single photograph and blend into the lower tiles of the quadtree, or normalize the downsampled capture data against a reference.

Nonetheless, thanks for doing this. It's an amazing piece of work.


Have you tried extracting the vignetting pattern from the captures and using it to normalize them? My first try would be to calculate the median grayscale image of all 9000 captures and then use it to normalize the intensity.
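A toy sketch of that suggestion, on synthetic grayscale captures sharing a fabricated vignetting pattern (all shapes, counts, and values here are hypothetical, and this addresses brightness only, not chromaticity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of captures (a few dozen instead of all ~9000),
# each darkened toward the edges by the same vignetting pattern.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.5 * (((yy - h/2)**2 + (xx - w/2)**2)
                        / ((h/2)**2 + (w/2)**2))
captures = [rng.uniform(0.4, 0.9, (h, w)) * vignette for _ in range(64)]

# Estimate the shared pattern as the per-pixel median over the stack
# (content varies capture to capture, so it averages out)...
flat = np.median(captures, axis=0)
flat /= flat.mean()   # normalize so correction preserves overall brightness

# ...and divide it out of each capture.
corrected = [c / flat for c in captures]
```

With enough captures the per-pixel median tracks the vignetting pattern closely, which is why this kind of flat-field estimate works for pure intensity falloff.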


Yes, and the vignetting pattern isn't really the problem. It's a color distortion problem: spatially varying chromaticity rather than brightness, which is not just a function of position within the field of view but also of the underlying material, unfortunately. So there isn't a nice way to correct each capture in a predictable way to ensure that overlapping pixels have the same colors consistently.


I've worked on similar problems in agricultural mapping from drone images. You would need to build a BRDF model for each colour channel for the various types of materials, then assign materials based on a combination of the best representative models. Then you can re-render with uniform normal lighting.
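Not the actual pipeline, but a minimal sketch of the kind of per-channel BRDF one might fit: Lambertian diffuse plus Blinn-Phong specular, with made-up albedo values. Re-rendering under uniform normal lighting then just means evaluating the fitted model with light and view directions along the surface normal.

```python
import numpy as np

def brdf(n, wi, wo, kd, ks, shininess):
    """Minimal per-channel BRDF: Lambertian diffuse + Blinn-Phong specular.
    n, wi, wo: unit vectors (surface normal, light direction, view direction).
    kd, ks: per-channel (e.g. RGB) diffuse and specular albedos."""
    h = (wi + wo) / np.linalg.norm(wi + wo)           # half-vector
    return kd / np.pi + ks * max(float(n @ h), 0.0) ** shininess

n = np.array([0.0, 0.0, 1.0])
s = 1 / np.sqrt(2)
kd = np.array([0.6, 0.4, 0.3])    # hypothetical paint color
ks = np.array([0.2, 0.2, 0.2])    # glossy varnish highlight

# Mirror geometry (light and camera at matching 45° angles): strong highlight.
at_highlight = brdf(n, np.array([s, 0, s]), np.array([-s, 0, s]), kd, ks, 50.0)
# Camera straight overhead, same light: the specular term nearly vanishes.
off_highlight = brdf(n, np.array([s, 0, s]), np.array([0.0, 0.0, 1.0]),
                     kd, ks, 50.0)
```

The geometry dependence of the specular term is exactly what makes overlapping captures disagree about a pixel's color when the lights sit at different angles.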


I see. Interesting problem.


Anyone have a nice primer to read to help nonspecialists understand what you are talking about in this thread here?


They are talking about a bidirectional reflectance distribution function, a now common modeling technique for practical representation of complex surface properties when illuminated.

That said, a full BRDF is not the only way to approach this problem, especially at reduced resolution where the artifacts are more apparent.

If you're an ACM member, there is a wealth of information in SIGGRAPH publications. Having been in graphics since the 90s, I started with Foley and Van Dam (Computer Graphics: Principles and Practice) and Watt and Watt (Animation and Rendering Techniques) and then stayed on top of SIGGRAPH (attending regularly, though not annually) since then.

For a more whimsical survey from the pen of a straight up genius, Jim Blinn's books (e.g. Dirty Pixels) are fantastic reads.


Fantastic work, congratulations. Hope to see more of these.


Was the 3D effect made from different focus points?


The very limited depth of field of a microscope necessitates doing depth stacking for every field of view. This is done automatically by the apparatus. As a side effect, it gives a heightmap, but with the lens used for the whole-painting scan (what HIROX calls "35x", corresponding to ~5 µm sampling resolution), the elevations are not reliable, especially near the edge of the field. For selected areas, a higher magnification was used that gives much more reliable elevations. Sadly, only a small fraction of the painting was imaged using this higher resolution, so the 3D data is spotty.
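A toy illustration of how depth stacking yields both a composite image and a heightmap as a side effect (not the Hirox's actual algorithm; a simple wrap-around Laplacian stands in for a real per-pixel sharpness measure, and the two synthetic slices are hypothetical):

```python
import numpy as np

def focus_stack(slices, z_heights):
    """Toy depth-from-focus: for each pixel, pick the z-slice where a
    simple Laplacian sharpness measure peaks; return the all-in-focus
    composite and the per-pixel heightmap."""
    sharp = []
    for img in slices:
        # 4-neighbor Laplacian magnitude (np.roll wraps at the edges).
        lap = np.abs(4 * img
                     - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                     - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        sharp.append(lap)
    best = np.argmax(sharp, axis=0)             # sharpest slice per pixel
    stack = np.stack(slices)
    composite = np.take_along_axis(stack, best[None], axis=0)[0]
    heightmap = np.asarray(z_heights)[best]     # elevation per pixel
    return composite, heightmap

# Two synthetic slices: the left half is in focus (high-frequency detail)
# at z = 0, the right half at z = 10.
h, w = 32, 32
checker = (np.indices((h, w)).sum(axis=0) % 2).astype(float)
flat = np.full((h, w), 0.5)
slice_lo = np.where(np.arange(w) < 16, checker, flat)
slice_hi = np.where(np.arange(w) < 16, flat, checker)
comp, height = focus_stack([slice_lo, slice_hi], [0.0, 10.0])
```

The heightmap falls out for free because "which slice was sharpest" is itself an elevation estimate, which is why its reliability degrades when the sharpness peak is broad, as with a low-magnification lens.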


Very likely. Focus stacking software (such as Helicon) can output the 3D object in addition to the stacked image.


Would it be hard to have uniform illumination?


Yes, because the surface isn't flat. At these magnifications, any given field of view may be tilted toward one of the light sources, changing the relative contribution of the specular reflection off of the varnish as well as the reflected color from the paint surface. The apparatus doesn't have the ability to tilt to maintain a constant angle to the surface; it can only pan in the x-y plane and do focus stacking in the z-direction. Additionally, the left and right light sources were hand-positioned and there is no way to calibrate their exact geometry and relative brightness and color.


Thanks for explaining!


Thanks! Creator of the image and the viewer here. The viewer is a fork of OSD I made in approximately 2012 to add the functionality that I call the "Curtain Viewer".


Hi, I'm the creator of the image and of the Curtain Viewer technique and software. Please note that this is hosted on my personal AWS and that I pay the hosting costs out of pocket.

