Hacker News: nuopnu's comments

Yes, but it's irrelevant here.


To expand on that a bit: the moon is directly lit by the sun, so a proper exposure for the moon is no different from that of any other scene brightly lit by the sun.

We often associate the moon with night, and night with needing high ISO, long exposures, and a wide-open aperture. And when you use the auto mode on cameras, that is indeed what you will get, because even with telephoto lenses the moon is only a small part of the field of view, so the camera will base its exposure on the dark sky around the moon. That will cause an overexposed moon with a lack of contrast.
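To put numbers on that, photographers use the "looney 11" rule of thumb for the moon: f/11 with the shutter speed set to the reciprocal of the ISO, about one stop below the "sunny 16" rule for frontlit daylight scenes. A quick sketch of the exposure values involved (my own illustration, not from the comment above):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: int = 100) -> float:
    """EV100: log2(N^2 / t), corrected back to ISO 100."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# "Looney 11": f/11 at 1/100 s, ISO 100 -> about EV 13.6
moon = exposure_value(11, 1 / 100)
# "Sunny 16": f/16 at 1/100 s, ISO 100 -> about EV 14.6, one stop brighter
daylight = exposure_value(16, 1 / 100)
print(round(moon, 1), round(daylight, 1))  # 13.6 14.6
```

Meter off the dark sky around the moon instead and the camera will pick an exposure many stops brighter than either of these, blowing out the lunar surface.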

(Another issue you'll encounter trying to view or photograph the moon is that at high magnification the moon actually moves pretty fast, so you're going to have to re-aim your camera regularly.)


Night mode uses image stacking, denoising, and HDR to improve image quality. Shouldn't it make a difference here?


They do, but not in Pro mode: (S23U) https://ibb.co/B2hN7jwZ


Just embed both CSS files and images in the HTML file.
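A minimal sketch of what that inlining could look like (the file names and the PNG-only MIME type are placeholders; a real build step would also need to rewrite url() references inside the CSS):

```python
import base64
import pathlib

def inline_assets(html: str, css_path: str, img_path: str) -> str:
    """Replace a stylesheet link and an image reference with inline content."""
    css = pathlib.Path(css_path).read_text()
    img_b64 = base64.b64encode(pathlib.Path(img_path).read_bytes()).decode()
    # Stylesheet links become <style> blocks
    html = html.replace(
        f'<link rel="stylesheet" href="{css_path}">',
        f"<style>{css}</style>",
    )
    # Image references become self-contained data: URIs
    return html.replace(
        f'src="{img_path}"', f'src="data:image/png;base64,{img_b64}"'
    )
```

Note that base64 inflates each image by roughly a third, which feeds straight into the scalability objection.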


And now it's even less scalable to larger sites because you're forcing the eager loading of resources on pages you'll never click on.


Easy solution. Use progressive compression for images and set loading="lazy" https://developer.mozilla.org/en-US/docs/Web/Performance/Laz...


loading="lazy" is for images that are NOT embedded in the same file. So we either have the entire site in a single HTML file or we have scalability for large sites. There's no solution that gives us both.


It depends on what your goal is. If you just want to make a single request to the web server, then loading="lazy" will not work, as you said. (Technically speaking, TCP sends multiple packets anyway, resulting in higher latency, so I'm not sure that's a great goal.)

But if you just want to be able to save the entire website with Ctrl + S, then it works fine.

As an aside, loading="lazy" is how images are embedded in the website from TFA https://i.imgur.com/wIkaE5g.png which is why I mentioned it, although it certainly does not fit all possible use cases.


> And now it's even less scalable to larger sites

And how is a 20 MB JS SPA with 20 wss connections more scalable?

I've seen too many React/Vue projects bundling everything into a single main.js file, including pages I never click on, e.g. some crazy map or graph module. Is there some magic in webpack to make sure only the needed functions get loaded "eagerly"?

Or does JSON provide streamable parsing capabilities?


If you are interested in building a single-page site, I really doubt that scaling is an issue you'll have to contend with; it seems like a waste to even consider it for something so small.

If you get hung up on scaling a single page, you have other problems.


This isn't a single page site, it's a single HTML file site with multiple (logical) pages.



What about it?


A "better offline music on the iPhone".


You are thinking of a structured editor.

But as to the single feature you mentioned, see for example Code Bubbles:

https://www.youtube.com/watch?v=PsPX0nElJ0k


It is actually hard.

https://eev.ee/blog/2015/09/12/dark-corners-of-unicode/

But sticking with only the part of Unicode that you understand and need is easy, sure.


Meh.

It's hard, because there's a lot more to learn and to do than if you stick to (say) ASCII and ignore the problems ASCII can't handle.

It's easy, because if you want to solve a sizable fraction of all the problems ASCII just gives up on, Unicode's remarkably simple.

In the eyes of a monoglot Brit who just wants the Latin alphabet and the pound sign, Unicode probably seems like a lot of moving parts for such a simple goal.


Something as simple as moving the insertion point in response to an arrow key requires a big table of code point attributes and changes with every new version of Unicode. Seemingly simple questions like "how long is this string?" or "are these two strings equal?" have multiple answers and often the answer you need requires those big version-dependent tables.

I think Unicode is about as simple as it can possibly be given the complexity of human language, but that doesn't make it simple.
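The "are these two strings equal?" problem fits in a few lines of Python: 'é' can be one code point or two, and the strings only compare equal after normalization:

```python
import unicodedata

composed = "caf\u00e9"     # 'é' as a single code point (U+00E9)
decomposed = "cafe\u0301"  # 'e' followed by a combining acute accent

# Identical on screen, different in memory:
print(composed == decomposed)          # False
print(len(composed), len(decomposed))  # 4 5

# Equal once both sides are normalized to the same form:
nfc = unicodedata.normalize("NFC", decomposed)
print(nfc == composed)                 # True
```

And this only covers code points; counting user-perceived characters (grapheme clusters) needs exactly those big version-dependent tables.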


A Brit hoping to encode the Queen's English in ASCII is, I'm afraid, somewhat naïve. An American could, of course, be perfectly happy with the ASCII approximation of "naive", but wouldn't that be a rather barbaric solution? ;)


For anything resembling sanely typeset text you’d also want apostrophes, proper “quotes” — as well as various forms of dashes and spaces. Plus, many non-trivial texts contain words in more than one language. I’d rather not return to the times of in-band codepage switching, or embedding foreign words as images.


This is why the development of character sets requires international coördination from the beginning. :)


Yeah. And then you'll get Latin-1, because everyone using computers is in Western Europe or uses ASCII ;)


But something being easier than something else doesn't make it easy in itself.

Paraphrasing the joke about new standards: we had a problem, so we created a beautiful abstraction. Now we have more problems. One of the new problems is normalization.

It doesn't undermine the good that Unicode brought, but you can't just include some unilib.h and use its functions without understanding all the Unicode quirks and its encodings, because some of the parameters wouldn't even make sense to you, like the aforementioned normalization forms.
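For a concrete taste of those parameters: the four normalization forms can each give a different answer for the same input; for example, the 'ﬁ' ligature only splits under the compatibility forms:

```python
import unicodedata

s = "\ufb01anc\u00e9"  # the single-code-point 'fi' ligature, then "ancé"
lengths = [len(unicodedata.normalize(form, s))
           for form in ("NFC", "NFD", "NFKC", "NFKD")]
print(lengths)  # [5, 6, 6, 7]
```

Choosing among these (NFC for storage and comparison, NFKC for looser matching such as identifiers) is exactly the kind of decision a unilib.h can't make for you.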


Wait. There are two possible cases:

1. Either you restrict yourself to the kind of text CP437/MCS/ASCII can handle (to name the three codecs in the blog posting). In that case Unicode normalization is a no-op, and you can use Unicode without understanding all its quirks.

2. Or you don't restrict the input, in which case Unicode may be hard, but using CP437/MCS/ASCII will be incomparably harder.
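Case 1 is easy to verify: on input that is pure ASCII, every normalization form is the identity function, so the hard parts simply never fire:

```python
import unicodedata

s = "Plain old ASCII, code points 0-127 only."
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    # ASCII characters have no decompositions and never reorder
    assert unicodedata.normalize(form, s) == s
print("all four forms are no-ops on ASCII")
```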


A rocket can take you to the Moon. Is it easy to operate? Or to learn how to? To maintain it and prepare on the ground?

Not only would it be harder without one; you couldn't get into space at all, so the rocket made things comparatively easier.

Is it still all easy, though?


The presentation and the paper pretty much describe how LMDB works already.

https://symas.com/lightning-memory-mapped-database/

Except that it allows only a single writer, and it's not directly byte-addressable unless you make a request to preallocate a chunk of fixed-size mmap'ed memory.


LMDB is a highly-optimized storage engine. However, it is not a full-featured DBMS (e.g., no support for query optimization). This talk focuses on the potential impact of NVM on different layers of a full-featured DBMS -- not only the storage engine.


Yes but in byte-addressable mode you can trust the map is fixed, which doesn't seem to be the case here.


Not exactly. I'm not a user of this, but

http://www.lmdb.tech/doc/group__mdb__env.html#ga492952277c48...

Ctrl+f through the doc for "MDB_FIXEDMAP" for more details.

So there is _some_ support, where you can expect to have the whole file at a fixed address. What is not handled is that your data may be moved within the file, and there's nothing you can do about it, short of keeping a long-lived read only transaction:

http://www.lmdb.tech/doc/todo.html


You have it right.

In practice, nobody has wanted support for this so it has remained unimplemented. Note that if your data items are large enough (more than 1/2 a page) they'll go into their own overflow pages, and then won't shift around at all. (But if you delete/modify a value, the new version will of course be in a new overflow page.)

Granted, this could be a case of "if you build it they will come" - before LMDB existed, nobody cared about mmap'd transactional DBs either. And personally, I still see good use cases for it.


Note that the LMDB approach, and especially if you want MDB_FIXEDMAP, limits DB size to the largest mmap()ing you can get at a fixed location. That's not good in a world of 48-bit address spaces.


Practically it's even less than 47 bits. :) But sure, know your limits.


Yes, I know :)


I can't provide you a full solution, but here are the hints:

- gcc5 definition: https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...

- Old cc wrapper with a custom libc option https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/...

- libc in the GCC package itself: https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...

The whole system is very modular and customizable. But yeah, very hackish and undocumented.


