That is a shockingly bad-faith description. You're banned because you serially abused Hacker News for years, and we gave you a great many warnings before we did that.
> There is little difference between a "rate-limited" account and a "shadow-banned" account.
I don't think this is true. I get the "You're posting too fast. Please slow down. Thanks." message if I post too many times in quick succession. If you are rate-limited, the posts you do make show up as normal. However, if you are shadow-banned, your posts show up immediately as dead and people can't see them unless they have showdead enabled. Even to be able to reply to you, I have to first click on your post and then click 'vouch'. I guess if you kept approaching the site admins, that's why you were shadow-banned; I don't see any point in approaching them unless something terrible is going on, since there are too many accounts here and it's not a commercial venture. I don't know what you did, but your comment history seems okay, yet all your comments are showing up dead unless someone clicks 'vouch'. Perhaps if you promised to leave the admins alone? I mean, you have more karma than me! Good luck.
Win 7 is basically Vista Service Pack 1. Several minor things, like Vista's slow-as-hell copy routine, were brought back to almost XP-level speed in Win7. Unfortunately the Advanced Search dialog of Vista was removed in Win7. Most Vista problems were third-party device drivers (blue screens), it being the first (still new) mainstream 64-bit OS with the related 32-bit issues and the lack of 16-bit support, and the vastly increased memory usage because of a wrongheaded vision (that consuming all memory while idle is okay).
To this day Win7 is arguably the best OS (supported until 2020), followed by the aging XP.
Yes, we need a many-core CPU. Then every program can run on its own core, and we can use MPI, Cilk, etc. to run parallel programs. Each CPU could run its own small operating system, or use a microkernel OS.
Writing algorithms in a parallel manner isn't as complicated as people make it out to be; a lot can be rewritten based on cookbooks/patterns.
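To make that concrete, here is a minimal sketch of the classic "parallel map" cookbook pattern in Python; the work() function and its inputs are made-up placeholders, and real code would use MPI/Cilk or whatever fits the platform.

    # Minimal sketch of the "parallel map" pattern: independent chunks of
    # work are spread across cores instead of looped over serially.
    from multiprocessing import Pool

    def work(n):
        # stand-in for any CPU-bound, independent computation
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10_000, 20_000, 30_000, 40_000]
        with Pool() as pool:                  # one worker process per core by default
            results = pool.map(work, inputs)  # same semantics as map(), run in parallel
        print(results)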
Unfortunately they drifted off target (instead of making it the successor of x86-64).
For JavaScript engines, it would help if they offered an optional fallback "JavaScript interpreter" mode instead of the JIT. A JIT is, at the moment, basically insecure, especially when running untrusted code. The same goes for WebAssembly: let the user deactivate execution/support of it. Please allow end users to enable an alternative JS interpreter for high-security things like online banking.
Well, the Meltdown mitigation involves unmapping as much of the kernel as possible when switching modes. It adds so much overhead that I wonder whether microkernel messaging and process context switching would be cheaper and faster.
From a safety point of view, I'd take a 747 or A380 (or a now-defunct trijet) any day over a twin-engine 777 or A350. I'd guess that long hours over open sea (the Atlantic) are safer with the redundancy: the more engines you have, the more can go out and the plane can still land.
The 777 is way too cramped. The washrooms are on the side instead of in the middle, aka mini-washrooms (but with a window). In general it feels like the first computer-designed airframe: they forgot about the size of humans, it's made for short people. I'd take a 747, or even better an A380, any day over the 777. I mean, who seriously prefers to sit in a long narrow can for 12 hours when you can choose a double-decker A380 where you can stretch your legs, walk around and have big washrooms where you can even stand upright. And don't get me started on the entertainment system of the 777 - it needs a serious upgrade.
The 757 will go out of service next; United still has some 757s from the 1980s. Sure, the seats are super comfortable because in the 1980s they were bigger and softer, but the airframe is old and the entertainment system was added as an add-on, meaning a computer box is below every other seat and gives the unlucky guy (who doesn't know about SeatGuru) little legroom.
Why is the very, very odd 737 still going? It's older than the 747; I wonder when they'll finally design a new, smaller airframe.
Airlines decide on their 777 interiors and entertainment centers, so your experience on a Delta 777 will be very different from a Singapore Airlines one. The 777 was a huge advance in composites, which continued with the 787. My girlfriend at the time had a dad who worked on the plane, and he was very proud of it.
The 737 has been updated many times and Boeing hasn't seen the need to introduce a completely new narrow-body. It is still very successful, so I guess they are right. In contrast, the 767 was dying and needed to be replaced by the 787; it couldn't be updated.
The 747 is still in production and is very successful in freight and other applications. It isn’t going away anytime soon.
> and the entertainment system was added as an add-on, meaning a computer box is below every other seat and gives the unlucky guy (who doesn't know about SeatGuru) little legroom.
I've been on Dreamliners with entertainment boxes intruding on the legroom. That's 100% up to the airline's implementation.
Having 2 engines go out is so rare that it almost never happens. And when it does, it's usually due to something that would have affected 3 or 4 engines as well.
ETOPS regulations also ensure (at least for the US) that if one engine does go out somewhere over the ocean, you’ll at least get to land somewhere.
The end of the Cold War made direct two engine flights viable because (a) we could fly over Russia and (b) Russia was nice enough to open up a bunch of airports in the Far East (with money from western airlines) to comply with regulations.
Agreed that the A380 is still the most comfortable for pax (unless you crave lower cabin altitude), but the 777 safety record is stellar (only 6 hull losses with 1500 built: BA 38 with ice in the fuel lines, EgyptAir 667 fire on the ground, Asiana 214 in SFO, MH 370, MH 17 (shot down by a missile), and the Emirates 521 bounce in Dubai). Looks like a pretty sound design to me.
UTF-16 is one of those ill-fated developments that curse some languages & platforms (WinNT incl. Win10, the Win32 API, Java, Flash, JS, Python 3) to this day.
compareTo uses 0x19, which means doing the "equal each" (aka string comparison) operation across 8 unsigned words (thanks UTF-16!) with a negated result. This monster of an instruction takes in 4 registers of input:
Some JavaScript runtimes (Firefox's Spidermonkey for one) have an optimization that stores some strings in single-byte format where possible to mitigate the cost of the awful original choice to use UCS-2 for JS strings. I expect some other runtimes do this too, but I don't know any off-hand.
IIRC this was motivated by Firefox OS (strings eat up a lot of RAM on memory-starved $50 smartphones) but it pays off on desktops too.
Python as of 3.3 uses any of three different internal storage mechanisms for strings: 1-byte (latin-1), 2-byte (UCS-2) or 4-byte (UCS-4) depending on the width of the highest code point in the string. This allows the internal storage to always be fixed-width, while still saving space for strings which contain, say, only code points representable in a single byte.
Prior to 3.3, the internal storage of Unicode was determined by a flag during compilation of the interpreter; a "narrow" compiled interpreter would use 2-byte strings with surrogate pairs for non-BMP code points, and a "wide" compiled interpreter would use 4-byte strings.
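A quick way to see the 3.3+ behaviour for yourself, as a rough sketch (exact byte counts differ between CPython versions and platforms; only the relative growth matters):

    # PEP 393 flexible string storage: the per-character width follows the
    # widest code point in the string.
    import sys

    ascii_s  = "a" * 100           # all code points < 256      -> 1 byte each
    bmp_s    = "\u0416" * 100      # BMP code point (Cyrillic)  -> 2 bytes each
    astral_s = "\U0001F600" * 100  # non-BMP code point (emoji) -> 4 bytes each

    for s in (ascii_s, bmp_s, astral_s):
        print(len(s), sys.getsizeof(s))

    # A single non-BMP character forces the whole string into the 4-byte form:
    print(sys.getsizeof(ascii_s), sys.getsizeof(ascii_s + "\U0001F600"))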
UTF16 is not really a curse for languages that require it. String operations in non-English languages are very fast because of it, and most software these days has to deal with localization.
UTF-16 is the worst of all worlds: it's less efficient than UTF8 for most use cases, requires you to think about endianness, but is still a variable-length encoding. (And the cases that require variable-length encoding are rarer than they are for UTF-8, meaning you're less likely to hit them in testing)
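For illustration, a small Python sketch of the endianness point: the same text has two different UTF-16 byte sequences, and the BOM exists purely to record which one you got.

    # UTF-16 byte order: one string, several different byte sequences.
    s = "A\u20ac"                        # 'A' and the euro sign

    print(s.encode("utf-16-le").hex())   # little-endian code units
    print(s.encode("utf-16-be").hex())   # big-endian code units
    print(s.encode("utf-16").hex())      # BOM prepended to record the byte order
    print(s.encode("utf-8").hex())       # UTF-8: no byte-order question at all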
No, it uses the fixed-width LATIN1 (ISO-8859-1) encoding for compact strings. It wouldn't make much sense to use another variable-width encoding like UTF-8.
While technically UTF-16 is variable-length, in 99.99% of cases a single 16-bit word per character is used. I.e., on modern hardware with branch prediction and speculative execution, these branches don't affect speed. With UTF-8, the CPU mispredicts branches all the time because spaces, punctuation and newlines are single bytes even in non-Latin-1 text.
I think the most common operations are comparisons for equality and copying anyways. UTF-8 is faster for those.
I tried out how fast I could make a UTF-8 strlen, assuming a valid UTF-8 string. The routine ran at 18 GB/s on a single core using SSE.
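The SSE routine itself isn't shown here, but the underlying trick is simple enough to sketch in a few lines of Python (again assuming valid UTF-8); a SIMD version just applies the same mask test to 16 or more bytes per iteration.

    # Code points in valid UTF-8 = bytes that are NOT continuation bytes
    # (continuation bytes look like 10xxxxxx). No per-character branching
    # on sequence length is needed.
    def utf8_codepoint_count(data: bytes) -> int:
        return sum((b & 0xC0) != 0x80 for b in data)

    text = "a\u00e9\u4e2d\U0001F600"   # 1-, 2-, 3- and 4-byte sequences
    assert utf8_codepoint_count(text.encode("utf-8")) == len(text) == 4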
> With UTF8, CPU mispredicts branches all the time because spaces, punctuations and newlines are single bytes
I don't understand this sentence. Why would there be any more mispredictions because of those being single bytes? These days code is so often bandwidth-limited anyway, so smaller data helps.
> Indexing code points in both UTF-8 and UTF-16 requires reading the whole string up to index location. Substrings are the same as well.
Java's String functions don't index by Unicode code points, though. Java strings are encoded in UCS-2, or at least the API needs to pretend that they are.
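The code unit vs. code point distinction is easy to see from Python, which indexes by code point, while Java's String API indexes by 16-bit code unit (the string below is just an illustrative example):

    # One non-BMP character: one code point, but two UTF-16 code units.
    s = "a\U0001D11E b"                      # 'a', U+1D11E (musical symbol), ' ', 'b'

    print(len(s))                            # 4 code points (Python counts code points)
    print(len(s.encode("utf-16-le")) // 2)   # 5 UTF-16 code units (what Java's length() counts)
    print(s[1])                              # the full character; indexing the UTF-16 form
                                             # at unit 1 would give half a surrogate pair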
Even in 2017, not everyone is a Web or Electron developer. I certainly am not.
I don’t advocate using UTF16 for the web, but people still code native desktop apps, mobile apps, embedded software, videogames, store stuff in various databases, etc. For such use, markup is irrelevant.
For some of these things we don't have much choice, because the encoding is part of some lower-level API (file system, OpenGL, CLI), and those APIs usually don't accept arbitrary encodings. They accept only one, and unless you want to waste time converting, you'd better use that exact encoding.
Other stuff like IDs, shaders before GL 4.2, and many text protocols aren’t Unicode at all.
For configs I usually use UTF-8 myself, because I don’t like writing parsers for custom formats and just use XML, and any standard-compliant parser supports all of them.
Represent indexes not as number-of-code-points from the start but as byte offsets, and index/substring doesn't need to decode code points. You lose the ability to easily say "get me the 100th code point in this string", but I'm hard-pressed to think of any case where that is actually valuable.
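A rough sketch of that in Python, working directly on the encoded bytes (the sample string is arbitrary): find/slice take and return byte offsets, and no code points ever get counted.

    # Byte-offset indexing on UTF-8: search and substring work on the raw
    # bytes; offsets are byte positions, not "nth character" positions.
    data = "caf\u00e9 \u2615 break".encode("utf-8")

    needle = "\u2615".encode("utf-8")                     # the coffee-cup character
    pos = data.find(needle)                               # byte offset, found by byte search
    print(pos)
    print(data[pos:pos + len(needle)].decode("utf-8"))    # slice by byte offsets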
> Right, and for 1 billion Chinese speaking people UTF16 is 2 bytes/character, UTF8 is 3 bytes/character.
The information density of a single hanzi character is roughly equivalent to 5 letters in English. A Chinese plaintext document in UTF-8 is still smaller in memory footprint than an equivalent English document in ASCII. Of course, most documents aren't plaintext, and where people use characters for metadata (e.g., email, HTML), there is so much ASCII metadata in those documents that UTF-8 is still smaller than UTF-16 even for East Asian languages.
Of course, it's moot since the people who don't like UTF-8 in China and Japan aren't using UTF-16 either. They're using GB18030 or ISO-2022-JP for their documents.
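For a rough feel of the numbers (the sample sentences below are just illustrative stand-ins, not measured corpora):

    # Byte counts for a short Chinese sentence and a rough English equivalent.
    zh = "今天天气很好"                    # 6 hanzi, all in the BMP
    en = "The weather is very nice today"

    print(len(zh.encode("utf-8")))       # 3 bytes per hanzi  -> 18
    print(len(zh.encode("utf-16-le")))   # 2 bytes per hanzi  -> 12
    print(len(en.encode("ascii")))       # 1 byte per letter  -> 30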
> A Chinese plaintext document in UTF-8 is still smaller in memory footprint than an equivalent English document in ASCII.
When you need to process Chinese text you don’t care how much an equivalent English document would take. You only care about the difference between different encodings of Chinese language. And UTF16 is more compact for East Asian languages.
> most documents aren't plaintext
That’s true for the web, and that’s why UTF8 is the clear winner there. In a desktop software, in a videogame, in a database — not so much.
In non-Latin text, if most characters are 2 bytes but a large minority are 1 byte, the branch prediction in charge of guessing between the different codepoint representation lengths expects 2 bytes and fails very often.
Speculative execution (counting in two or three ways simultaneously) might mitigate the performance hit.
> In non-Latin text, if most characters are 2 bytes but a large minority are 1 byte, the branch prediction in charge of guessing between the different codepoint representation lengths expects 2 bytes and fails very often
You wouldn't want to process a single code point (or unit) at a time anyways, but 16, 32 or 64 code units (or bytes) at once.
That UTF-8 strlen I wrote had no mispredicts, because it was vectored.
Indexing is slow, but the difference to UTF-16 is not significant.
I guess locale-based comparisons or case-insensitive operations could be slow, but then again, they'll need a slow array lookup anyway.
You don't need to check the representation when doing anything specific with spaces or newlines. All 0x0A bytes are newline characters in UTF-8 and all 0x20 bytes are spaces.
The only place you really need to decode UTF-8 characters is when you convert them to another format (which you hopefully won't need to do anymore in the far future) or display them (where the decoding is a minuscule factor in performance).
UTF-8 is self-synchronizing, which means you can treat it as a byte string for most operations, including finding substrings. You don't need to convert UTF-8 to a sequence of codepoints for most tasks (particularly if you drop the insistence on using character boundaries). When you do have to do so, you're usually applying a complex Unicode algorithm like case conversion, and so the branch misprediction overhead of creating characters is likely small in comparison to the actual cost of doing the algorithm.
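A small sketch of that, treating UTF-8 purely as bytes (the sample text is arbitrary): no multi-byte sequence can ever contain 0x0A, 0x20 or any other ASCII byte, so splitting and searching need no decoding.

    # UTF-8 handled as a plain byte string: split on newlines and search
    # for a substring without decoding anything.
    data = "первая строка\nsecond line\n第三行\n".encode("utf-8")

    lines = data.split(b"\n")            # byte-level split, pieces are still valid UTF-8
    print([line.decode("utf-8") for line in lines if line])

    print(b"second" in data)             # substring search on raw bytes -> True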
PEP 393 is a stupid compromise. They couldn't choose between UCS-2 and UCS-4, so they are using both. They are wasting tons of CPU cycles converting between them, and a single character outside the range doubles the size of the string.
I don't fully understand the use case for extracting code points from strings, but they could have just added a Java-like codePoints() and kept returning code units from the old methods. That would be CPU- and memory-efficient and 100% backwards compatible.
I think the problem is that the same could have been done in Python 2 (with UTF-8), and that would have meant fewer reasons for Python 3.
As I understand it, microcode updates get installed by Linux distros and by Windows Update. The microcode is loaded into the CPU at every boot.
The question is, how can one check if the CPU already got the microcode update?
Can an OS running inside a VM (on top of Intel VT, ring -1) patch the CPU's microcode? (e.g. a Linux host where the Windows guest patches the CPU's microcode) Nothing seems impossible anymore.
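On the checking question: one way on Linux (assuming an x86 box, where the kernel exposes the field) is to read the currently loaded microcode revision from /proc/cpuinfo and compare it against the revision your distro's microcode package ships. A rough sketch:

    # Read the loaded microcode revision(s) from /proc/cpuinfo
    # (x86 Linux; the field may be absent on other architectures).
    with open("/proc/cpuinfo") as f:
        revs = {line.split(":")[1].strip()
                for line in f if line.startswith("microcode")}
    print("microcode revision(s):", revs)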
Okay, so how can we deactivate ASM.js and WebAssembly? (in the light of Meltdown and Spectre)
The flag in Chrome is broken: WebAssembly cannot be deactivated anymore with chrome://flags/#enable-webassembly; even after setting it to "deactivated" it's still active.
Well, in general the problem with Flash was that the plugin wasn't well isolated and would often crash the browser (and, before NT-based Windows was common, the entire OS pretty readily, assuming Windows). Not to mention the security track record. Browser isolation, and how well it will likely work with wasm, is quite a bit different.
That said, it will lead to more closed commercial sites, but the JS outputted from webpack+babel+uglify is already unbelievably difficult to wade through without source maps. It's not significantly different imho.
afaik, wasm is not 'binaries' in that it's not an arbitrary blob of machine code fed right into the cpu. it's still running in a sandbox (a la javascript) including similar limitations wrt CORS etc.
> it's still running in a sandbox (a la javascript)
People frequently make this comparison upon hearing the term sandbox, but it is a weak one. Yes, JavaScript executes in a sandbox, but the JavaScript (JIT) sandbox is purely for performance instead of isolation, which is like comparing a pencil sharpener to a bulldozer just because they are both portable machines. A better comparison for the JavaScript (JIT) sandbox is the JVM.
The Yuzu Switch emulator is made by the devs of the 3DS emulator Citra: https://citra-emu.org
The 3DS is unique with its 3D display that doesn't need 3D glasses. The "New 3DS XL" is especially great, as it has a stable 3D effect thanks to infrared eye-tracking, while most gamers got a bad impression from the first-gen 3DS's 3D effect without eye-tracking. I would pay a premium for such a 3D PC monitor. Unfortunately the Nintendo Switch doesn't feature such a 3D display, nor does any other available device.