From early green or amber text on black mono displays.
Grey on black DOS text mode.
Light Blue on Dark Blue C-64.
Apple 2's grey/white (I don't recall) on black.
Even GUI wise, Amiga used a dark-blue background as the default Workbench, with user selectable palettes for everything.
It was Microsoft Windows that changed the paradigm to default to a searing white display with black text in most apps, like Notepad, Word, etc., because "it's more like paper". Sure, paper is white, but it's not glowing white. That transition was painful.
I'm glad to see dark modes return, and I agree there needs to be an option, not just forced dark mode. Preferably light-mode options would use a not-as-bright-as-possible white too.
And you shouldn't have your device or monitor set to glowing white -- turn the brightness down so it's the same as a sheet of paper next to it.
And Windows didn't change the paradigm; the Mac was the first widely available consumer device that did. And its built-in CRT wasn't especially glowing either -- it was less bright than paper in traditional office lighting.
Early computers had "dark" color schemes because the resolution was so low, and pixels "bled" brightness so much, that a dark background was easier to read. As technology improved, that problem thankfully went away, and it's easier on the eyes to read dark text on a light background, regardless of print or screen.
There’s a significant base of users that prefer light mode and a significant base that prefer dark mode, so provide both; it’s generally not difficult to do so.
I disagree that apps should tone down light mode. It’s better that all apps use the same brightness and contrast and then users can adjust their monitor to suit their individual preference.
> There’s a significant base of users that prefer light mode and a significant base that prefer dark mode, so provide both; it’s generally not difficult to do so.
There’s a significant base of users that hate with a passion all low-contrast dark gray on light gray (aka light mode) or light gray on dark gray (aka dark mode).
When were the brains of the people promoting this damaged?
Paper, particularly bleached paper, is not "traditional normal" either.
I'm no paleontologist, but originally humans would use substances like ash and fruit to draw/write on rock/leaves/bark, so white/red/colors on grey/green/brown.
Disagree, white should be standardised as #FFFFFF so that it’s consistent between applications. Then users can adjust how they want “white” to appear by adjusting their screen settings.
No, #FFF is white, and it's up to the client to decide what white should look like.
Arguing that we should use, say, #CCC for white is like saying that instead of rating things out of 100, you should rate them out of 70. All you've done is narrow the scale.
For me, the small contrast on pages like HN (in particular with any of the gray text) strains my eyes because it’s more effort for me to see the letters.
But I also read a reasonable amount of PDFs (black on white) which is relatively comfortable on most of my monitors (LCDs with generally low brightness setting to have less light shine into my eyes).
I think what I am saying is, I agree that what is comfortable depends on the user, so websites not moving off the defaults is better, because then users can configure what works for them.
Addendum: The low contrast example on the article is very uncomfortable to read for me.
Given that screens are always adding their own light, it’s impossible for a screen to ever be only as bright as a piece of paper next to it. The screen will always be brighter.
Do what now? An entirely black OLED screen is certainly going to reflect less of the room’s light than a sheet of white paper. An OLED screen displaying white at 10% of its maximum brightness is also likely going to be less bright than a sheet of white paper in most rooms.
The contrast ratio of an old CRT (and amber and green were considered more comfortable than white-on-black) is radically different from a modern LCD/IPS/OLED screen. It's so different that there's no comparison. Dark mode might be ok for more people if there is some brightness to the background instead of being completely black, but then you lose most of the benefits of OLED.
The "true black" OLED displays have their part of the display off where there are black pixels, if I am not wrong. So, wouldn't dark mode suit well for those types of displays?
GP is arguing that exactly because there is no backlight, the contrast between on/off is uncomfortably high on modern screens compared to the CRTs where Windows 2/3 was running.
I agree. Most websites with a dark color scheme use a dark grey background and even off-white text.
"Traditional normal" is not an absolute statement. Sure, DOS/Unix back in the early days of the PC displayed black backgrounds, because the displays at the time worked better that way.
Before that, people shared information on white paper; and the beginning of the internet brought that back with black text on a white background.
Therefore there is no canonical traditional normal; it all depends on when one joined.
Paper and paper-like writing surfaces were non-white for a long time before we got bleached white paper.
We haven't yet had a glowing-white paper.
Traditional-normal for computing was a dark background.
There was likely a technological limit in the use of pure white at the start when "emulating" paper. VGA 16-color mode likely meant that the choice was between bright white and medium grey, which was too dark. Configurability has lagged behind though.
That was only common for a blip in time when NOTHING was normal, because it was all still being figured out and cost constraints, not personal or ergonomic preference, drove computing capabilities.
> Even GUI wise, Amiga used a dark-blue background as the default Workbench
That's because of cost. It was expected that many people would be viewing Amiga video output on a television via composite output and white-on-blue is something that TVs are good at displaying. The 1080 was like 1/3rd the cost of the A1000 and I'm willing to bet that many, MANY A500s were hooked up to TVs for at least a while after being opened on Christmas.
I used practically every word processor ever made for Amiga. Except for WordPerfect they were all black text on white, and with WordPerfect you could change it; they just kept the default blue and white to match DOS.
Dark mode was normal in the early days of CRTs, when most CRTs refreshed at 60Hz or lower. The dark background made the flicker less obvious. Once higher refresh rate CRTs became common (1990s), the flicker became less of a problem and light mode became the default.
...and Lotus 1-2-3 mimicked VisiCalc, and when I used VisiCalc (on an HP85a) it had a dark background with a greyish-white foreground colour, i.e. dark mode by default.
Mac likely did use this scheme, and yes, copied it from Xerox. However neither Macs nor Xerox had mainstream use. I'd only actually seen 3 Macs in the wild before their switch to Intel, over 20 years later.
Windows adopting the "paper"-white background, and the whole world drooling over the arrival of Windows 3.1 and 95, is when it became the standard, I think.
There's no 'likely' about it - the Mac absolutely used white as its background color for document windows and finder folders. It was striking and different when you first encountered one of the early compact Macs to see how white the screen was when you opened MacWrite.
As for the claim that Macs had no 'mainstream use' for 20 years until the Intel switch... your personal Mac-free life is a sad story, but not remotely universal, and while it's certainly true that Macs always had minority market share, it's insane to suggest they weren't influential.
My favorites were actually DOS TUIs, where for some reason blue became a commonly used background color for a lot of things (e.g. Norton Commander, many Borland products, FoxPro...).
Yeah, it wasn't Windows that changed it, they just hopped on the bandwagon.
I remember [SunOS](https://en.wikipedia.org/wiki/SunOS) on a SPARC in 1987 that was black text on white, and Macintosh before that.
> It was Microsoft Windows that changed the paradigm to default to a searing white display with black text in most apps
My early 90s Sun SPARCStation was black on white, right from the boot. The xterm default is black on white too, a default that far predates Windows AFAIK.
I don't really know the full history on all of this, but in my limited knowledge, this seems grossly simplified at best since there seem to have been several popular systems before Windows that used white background colours.
Athena text widgets on X were black on white in the '80s. So were the Lisa, Mac, NeXT, OS X, and SunOS's first GUI. Yes, amber on black was long-running, but since you weren't alive then, let me tell you something: it sucked. Moving from VT100 (VT104) terminals to actual Sun/AIX machines running X was a HUGE improvement on eye strain.
I’m glad those brightness settings work for you but I can’t deal with how dull it makes colors look on traditional backlit displays. The reduced contrast also isn’t very fun with modern UIs which for some reason actively avoid good contrast.
Windows originated very little: plenty of the type-on-page metaphor predated it.
The original was light mode: printer terminals. Yes, green-on-black became normal in the mid seventies, and some amber-on-black. But even early Lisp machines, the Alto, Smalltalk, the W/X/Andrew interfaces, NeXT, etc. were type-on-page, not serial-terminal-ish dark mode.
Besides it not being true for paper, it's also not true for electronic screens.
Before a computer with a CRT, most of us had some simpler LCD screens on calculators or other devices. And those are blackish on a lighter gray or green -- light mode.
I'm not the original poster, but I ran into something similar late in Win 7 (Win 8 was in beta at the time). We had some painting software, and we used OpenMP to work on each scan-line of a brush in parallel.
It worked fine on Mac. On Windows, though, if you let it use as many threads as there were CPUs, it would nearly 100% of the time fail before making it through our test suite. Something in scheduling the work would deadlock. It was more likely to fail if anything was open besides the app. Basically, a brush stroke that should complete in a tenth of a second would stall. If you waited 30-60 minutes (yes, minutes), it would recover and continue.
I vaguely recall we used the Intel compiler implementation of OpenMP, not what comes with MSVC, so the fault wasn't necessarily a Microsoft issue, but could still be a kernel issue.
I left that company later that year, and MS rolled out Windows 8. No idea how long that bug stuck around.
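The pattern was roughly like this -- a minimal sketch, not the original code (the buffer layout and the per-pixel work here are made up):

    // One OpenMP worker per scan line of the brush dab -- just the shape of
    // the parallelism described above. Compile with -fopenmp / /openmp, or
    // the Intel OpenMP runtime as mentioned.
    #include <vector>

    void apply_brush(std::vector<float>& pixels, int width, int height) {
        #pragma omp parallel for
        for (int y = 0; y < height; ++y) {         // each scan line runs on one thread
            for (int x = 0; x < width; ++x) {
                pixels[y * width + x] *= 0.5f;     // placeholder per-pixel work
            }
        }
    }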
To add to this, I'll try to give an idea of how much zoom (or focal length really) you'd need to get a picture with detail.
I took photos of both Jupiter and Saturn w/ a Canon R7 and the RF 100-500mm lens, with a 1.4x extender. The 1.4x extender makes the lens act like 700mm instead of 500mm. The R7 being an APS-C sensor adds another 1.6x factor, making the combo the equivalent of 1120mm. In these photos the planets are still just dots. The camera takes 32.5 megapixel photos. When zoomed in to the pixel level, both planets were still tiny, about 50 pixels wide. It was enough to see Saturn had a ring and some color striping on Jupiter, but that's it.
The iPhone main camera is like 26mm (42x less zoom). The iPhone 13 Pro's telephoto lens is 77mm (14.5x less zoom), and the iPhone 15 Pro Max is 120mm (9.3x less zoom)... so you're unlikely to get much more than what looks like an out-of-focus, few-pixel-wide dot even on the zoomiest of iPhones, but with that wider 26mm lens, you just might be able to capture them all in one shot.
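If you want to redo the arithmetic, here's a quick sketch (the iPhone focal lengths are the figures quoted above; small differences from the rounded ratios are just rounding):

    #include <cstdio>

    int main() {
        double lens = 500.0;     // RF 100-500mm at the long end
        double extender = 1.4;   // 1.4x teleconverter
        double crop = 1.6;       // Canon APS-C crop factor (R7)
        double equiv = lens * extender * crop;   // 500 * 1.4 * 1.6 = 1120mm equivalent

        double iphone[] = {26.0, 77.0, 120.0};   // main, 13 Pro tele, 15 Pro Max tele
        for (double f : iphone)
            std::printf("%.0fmm has roughly %.1fx less reach than %.0fmm\n",
                        f, equiv / f, equiv);
        return 0;
    }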
To me, what's more technically impressive than the fact that I took pictures of the planets with readily available camera gear is that I did it with a 1/125s shutter speed, handheld, standing in my yard. The accuracy of the image stabilization needed to pull that off is what astounded me the most.
> On an LC-3, the address space is exactly 64KiB. There is no concept of missing memory, all addresses are assumed to exist, no memory detection is needed or possible, and memory mapped IO uses fixed addresses.
> There are no memory management capabilities on the LC-3, no MMU, no paging, no segmentation. In turn there are no memory-related exceptions, page faults or protection faults.
Sounds an awful lot like a Commodore 64, where I got my start. There's plenty to learn before needing to worry about paging, protection, virtualization, device discovery, bus faults, etc.
It sounds like it's not teaching the wrong things, like your GTA driving example; it's teaching a valid subset, just not the subset you'd prefer.
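To make the "flat 64KiB, fixed I/O addresses" idea concrete, here's a toy sketch in C++ rather than LC-3 assembly (the KBSR/KBDR addresses and ready bit follow the standard LC-3 memory map; everything else here is made up):

    // Exactly 64Ki 16-bit words: every address "exists", no MMU, no paging,
    // and I/O is just a couple of fixed addresses in that same space.
    #include <array>
    #include <cstdint>
    #include <cstdio>

    constexpr uint16_t KBSR = 0xFE00;   // keyboard status register (bit 15 = ready)
    constexpr uint16_t KBDR = 0xFE02;   // keyboard data register

    std::array<uint16_t, 65536> mem{};  // the whole machine's memory

    uint16_t mem_read(uint16_t addr) {
        return mem[addr];               // a 16-bit address can never be "missing"
    }

    int main() {
        mem[KBSR] = 0x8000;             // pretend a key is waiting
        mem[KBDR] = 'A';
        if (mem_read(KBSR) & 0x8000)
            std::printf("key: %c\n", static_cast<char>(mem_read(KBDR)));
    }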
Your tractor doesn't (I would hope) contain your banking details and all your emails, contacts, browsing history, photos, etc. It deserves to be treated as the tool that it is.
Apple taking your data privacy seriously seems a worthy exception to me. You're free to disagree, and buy an Android.
Apple can take my privacy seriously while also allowing me to fix my hardware. You are promoting a false dichotomy that could be used to excuse almost any form of irrational behavior.
On iPhone, if I take a picture of a plant or animal, it identifies it for me. It's not 100% by any means, but it's useful enough. I've figured out which seedlings were plants I wanted vs. weeds. I've figured out the species of birds I'd taken photos of with my SLR (i.e., the phone takes a picture of Lightroom editing the image and can identify the bird from that... I'd prefer a way that didn't require taking a photo of my monitor, either by doing it "live" and/or by adding the functionality to the Mac). For people and pets it can find other images that contain the same subject.
When my daughter was studying Chinese, I could use the live-video translation app and see the lesson text translated to English, and see her hand-written answers also translated to English. I could see this being more broadly useful when travelling, along with live translation of spoken words.
While the AI focus these days is on LLMs, AFAICT the NPUs and GPU accelerators are just generically fast MUL and MAD machines with varying precisions, which should help any AI, and even non-AI tasks like image filter kernels.
Getting hardware to enable faster AI processing on phones should be a good thing if used for useful tasks, LLM or not.
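As a tiny illustration that an image filter kernel is just a pile of multiply-adds -- plain single-threaded C++, purely for the idea, nothing phone- or NPU-specific:

    #include <array>
    #include <vector>

    // 3x3 box blur over a w*h single-channel image: one MAD per kernel tap.
    std::vector<float> blur3x3(const std::vector<float>& img, int w, int h) {
        const std::array<float, 9> k = {1/9.f, 1/9.f, 1/9.f,
                                        1/9.f, 1/9.f, 1/9.f,
                                        1/9.f, 1/9.f, 1/9.f};
        std::vector<float> out(img.size(), 0.f);
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                float acc = 0.f;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        acc += k[(dy + 1) * 3 + (dx + 1)]
                             * img[(y + dy) * w + (x + dx)];   // multiply-accumulate
                out[y * w + x] = acc;
            }
        return out;
    }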
Direct3D made this a thing. When drawing unscaled 2D elements you often end up with blurry images, as it bilinearly filters with the neighbouring pixels.
This is because of a mess involving where pixels are considered to be located, where texture samples are considered to be located, and which texture coordinates get sampled when rasterizing an included pixel. See details at [0].
If your graphics API was blurring all your images, you'd be passionate about that half-pixel offset too.
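For anyone who hasn't hit it, the usual D3D9-era workaround looks roughly like this (a sketch with made-up pre-transformed-vertex plumbing, not any particular engine's code):

    // Shift a screen-aligned quad by half a pixel so texel centers line up
    // with pixel centers and bilinear filtering stops blurring a 1:1 blit.
    // (D3D10+ and OpenGL define the centers differently and don't need this.)
    struct Vertex2D { float x, y, z, rhw, u, v; };   // XYZRHW + one texcoord

    void make_quad(Vertex2D v[4], float x, float y, float w, float h) {
        const float off = -0.5f;                     // the half-pixel offset
        const float x0 = x + off,     y0 = y + off;
        const float x1 = x + w + off, y1 = y + h + off;
        v[0] = { x0, y0, 0.f, 1.f, 0.f, 0.f };
        v[1] = { x1, y0, 0.f, 1.f, 1.f, 0.f };
        v[2] = { x0, y1, 0.f, 1.f, 0.f, 1.f };
        v[3] = { x1, y1, 0.f, 1.f, 1.f, 1.f };
    }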
I can relate. Spent countless hours on this stuff with computer vision and convnets. The intricacies of align_corners, implementation differences between deep learning frameworks, striding and pooling when numbers aren't neatly divisible, uuh.
I've volunteered to fight a share of fires from people who check things in untested, change infrastructure randomly, etc.
What I've learned is that fixing things for these people (and even having entire teams fixing things for weeks) just leads to a continued lax attitude to testing, and leaving the fallout for others to deal with. To them, it all worked out in the end, and they get kudos for rapidly getting a solution in place.
I'm done fixing their work. I'd rather work on my own tasks than fix all the problems with theirs. I'm strongly considering moving on, as this has become an entrenched pattern.
Not the strongest on C++ myself, but new[] will attempt to run constructors on each element after calling operator new[] to get the RAM, and delete[] will attempt to run destructors for each element before calling operator delete[] to free the RAM.
In order for delete[] to work, C++ must track the allocation size somewhere. This could be co-located with the allocation (at ptr - sizeof(size_t) for example), or it could be in some other structure. Using another structure lowers the odds of it getting trampled if/when something writes to memory beyond an object, but comes with a lookup cost, and code to handle this new structure.
I'm sure proper C++ libraries are doing even more, but you already get the idea, new and delete are not the same as malloc and free.
> In order for delete[] to work, C++ must track the allocation size somewhere.
That is super-interesting, I had never considered this, but you're absolutely right. I am now incredibly curious how the standard library implementations do this. I've heard normal malloc() sometimes colocates data in similar ways; I wonder if C++ then "doubles up" on that metadata. Or maybe the standard library has its own entirely custom allocator that doesn't use malloc() at all? I can't imagine that's true, because you'd want to be able to swap system allocators with e.g. LD_PRELOAD (especially for Valgrind and stuff). They could also just be tracking it "to the side" in some hash table or something, but that seems bad for performance.
new[] and delete[] both know the type of the object. Therefore both know whether a destructor needs to be called.
When a destructor doesn't - e.g., new int[] - operator new[] is called upon to allocate N*sizeof(T) bytes. The code stores off no metadata. The result of operator new[] is the array address.
When a destructor does - e.g., new std::string[] - operator new[] is called upon to allocate sizeof(size_t)+N*sizeof(T) bytes. The code stores off the item count in the size_t, adds sizeof(size_t) to the value returned by operator new[], uses that as the address for the array, and calls T() on each item. And delete[] performs the opposite: fishes out the size_t, calls ~T() on each item, subtracts sizeof(size_t) from the array address, and passes that to operator delete[] to free the buffer.
(There are also some additional things to cater for: null checks, alignment, and so on. Just details.)
Note that operator new[] is not given any information about whether a destructor needs to run, or whether there is any metadata being stored off. It just gets called with a byte count. Exercise caution when using placement operator new[], because a preallocated buffer of N*sizeof(T) may not be large enough.
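One way to see this for yourself is to replace the global operator new[] and log the byte count the compiler asks for. A minimal sketch -- the exact overhead and layout are implementation details:

    #include <cstdio>
    #include <cstdlib>
    #include <new>
    #include <string>

    // Log the size requested for array allocations. Real implementations may
    // add alignment padding, so treat the printed numbers as illustrative.
    void* operator new[](std::size_t n) {
        std::printf("operator new[] asked for %zu bytes\n", n);
        if (void* p = std::malloc(n)) return p;
        throw std::bad_alloc{};
    }
    void operator delete[](void* p) noexcept { std::free(p); }

    int main() {
        int* a = new int[10];                   // no destructor: typically exactly 10 * sizeof(int)
        std::string* b = new std::string[10];   // destructor: typically 10 * sizeof(std::string) + a count cookie
        delete[] b;                             // must know to run exactly 10 ~string() calls
        delete[] a;
    }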
jemalloc and tcmalloc use size classes, so if you allocate 23 bytes the allocator reserves 32 bytes of space on your behalf. Both of them can find the size class of a pointer with simple manipulation of the pointer itself, not with some global hash table. E.g. in tcmalloc the pointer belongs to a "page" and every pointer on that page has the same size.
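Roughly along these lines, with a made-up class table just to show the rounding idea (the real jemalloc/tcmalloc tables and the pointer-to-page metadata lookup are more involved):

    #include <cstddef>
    #include <cstdio>

    // Round a request up to the next size class (toy table, not the real one).
    std::size_t size_class(std::size_t n) {
        static const std::size_t classes[] = {8, 16, 32, 48, 64, 96, 128};
        for (std::size_t c : classes)
            if (n <= c) return c;
        return n;   // large allocations are handled separately in real allocators
    }

    int main() {
        std::printf("23 bytes -> %zu-byte class\n", size_class(23));   // 23 -> 32
    }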
That doesn’t help for C++ if you allocated an array of objects with destructors. It has to know that you allocated 23 objects, so that it can call 23 destructors, not 32, nine of which would run on uninitialized memory.
I believe the question was more around how the program knows how much memory to deallocate. The compiler generates the destructor calls the same way the compiler generates everything else in the program.
Isn't it also possible for other logic to run in a destructor, such as freeing pointers to external resources? Doesn't this cause (at the very least) the possibility for more advanced logic to be run beyond freeing the object's own memory?
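Yes -- a destructor can run arbitrary cleanup, not just hand back memory. A minimal sketch with a hypothetical LogFile type, purely for illustration:

    #include <cstdio>

    struct LogFile {
        std::FILE* f = nullptr;
        explicit LogFile(const char* path) : f(std::fopen(path, "w")) {}
        LogFile(const LogFile&) = delete;            // avoid accidental double-close
        LogFile& operator=(const LogFile&) = delete;
        ~LogFile() { if (f) std::fclose(f); }        // external resource released here
    };

    int main() {
        LogFile* log = new LogFile("example.log");
        delete log;   // runs ~LogFile() (closes the file), then frees the object's memory
    }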