> A bank will likely never release their app as open source, nor will any of the big authentication vendors.
I suppose you're right and I think it's worrying that precisely these kinds of organizations still seem to rely on security through obscurity (to some extent, not solely).
Not just obscurity, but also requiring you to have a mobile phone, with software made by two specific companies, (for us Europeans) both foreign, and in some cases not even allowing you to have the phone rooted.
> The rumors of future products keep me from buying the current products.
Spot on!
Back in the nineties, Intel managed to push competing RISC architectures (UltraSparc, MIPS, DEC Alpha, PowerPC) out of the market using nothing but promises that Itanium was going to blow them all out of the water.
And apparently Apple is okay with procrastinating and cannibalizing current sales of M1, 2, 3 if it helps prevent some Snapdragon (or Ampere) sales.
>And apparently Apple is okay with procrastinating and cannibalizing current sales of M1, 2, 3 if it helps prevent some Snapdragon (or Ampere) sales.
sales of what
I actually can't think of a single competing product. Admittedly I don't keep up with laptop news, but still, I haven't heard of anything yet that can meaningfully compete with the M1 from four years ago.
Microsoft just announced some lackluster Arm laptops that they claim can compete with M-series chips. The question is which Windows programs are gonna run on them...
Some people have been running Windows 11 for Arm in a VM on Apple Silicon. It has an automatic binary translation layer that translates most x86 code at launch, and it seems to run many apps well. Microsoft claims these new machines have a better translation layer. This might work.
For me at least, the best possible outcome of this is that Windows handheld gaming devices become more power-efficient. That might be an advantage over Linux-based handhelds for a while, unless Valve decide that Proton needs to also be an architecture emulator. The chip efficiency wins must surely be tempting in this form factor.
> SENS systems that are apparently being developed, "will create the 'perfect storm' for the introduction into the market of groundbreaking applications that we cannot even imagine today."
Hmmm.
Perfect storm sounds somewhat ... gloomy.
I suppose we should ask ourselves (in chorus): what could possibly go wrong?!
If your worst case is 8 bytes for 61 bits, that's not 0 overhead, but 3 bits of overhead.
Another detail is that vu128 is double the 64-bit case, i.e. 17 bytes worst case, or 8 bits of overhead. It would presumably be 4 bits in your example, to represent a 124-bit payload.
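The overhead arithmetic here is easy to sanity-check (taking the 17-byte worst case for vu128 as stated above):

```python
# Overhead = serialized bits minus payload bits.
def overhead_bits(serialized_bytes, payload_bits):
    return serialized_bytes * 8 - payload_bits

# 61-bit payload in a fixed 8-byte word: 3 bits of overhead.
print(overhead_bits(8, 61))    # 3
# vu128 worst case: 17 bytes for a 128-bit payload: 8 bits of overhead.
print(overhead_bits(17, 128))  # 8
# 16 bytes for a 124-bit payload: 4 bits of overhead.
print(overhead_bits(16, 124))  # 4
```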
0 overhead in that it takes 8 bytes in memory and up to 8 bytes serialized. You can’t slice a byte partially unless you’re doing packing so I think it’s fair to characterize it as 0 overhead, but sure it’s 3 bits of overhead vs 8. It came up because I had a counter and just arbitrarily cut off the range cause I couldn’t possibly ever have it count that high. But yeah, the same idea could be extended to 128 bits.
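A minimal sketch of that kind of packing (hypothetical field layout; assumes a 3-bit tag sharing one 64-bit word with a 61-bit counter, so the whole thing still serializes to exactly 8 bytes):

```python
PAYLOAD_BITS = 61
PAYLOAD_MASK = (1 << PAYLOAD_BITS) - 1  # low 61 bits

def pack(tag, counter):
    # Counter must fit in 61 bits; the 3-bit tag rides in the top bits.
    assert 0 <= counter <= PAYLOAD_MASK and 0 <= tag < 8
    return ((tag << PAYLOAD_BITS) | counter).to_bytes(8, "little")

def unpack(word):
    v = int.from_bytes(word, "little")
    return v >> PAYLOAD_BITS, v & PAYLOAD_MASK

# Round-trips in exactly 8 bytes, same as the raw 64-bit word.
tag, counter = unpack(pack(5, 123_456_789))
```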
Fair enough. In case this is a counter, best make sure you get the overflow logic right. The normal 64-bit wrapping won't kick in if you're only able to (de)serialize 61 bits.
Or (depending on the context of this counter) you may never want to overflow at all, in which case you're best off with an open-ended variable-length representation (like R or Python arbitrary-precision integers).
It would take your absolute top-of-the-line CPU doing nothing but incrementing the counter more than 12 years to worry about overflowing a 61-bit number. And what this is counting is much more expensive than incrementing a counter, so you can't run this at 6 GHz. Maybe ~500 MHz, which gets you to > 100 years if that's all you're doing, but this counter is a side operation that won't see much traffic even at scale, so you're talking about millennia in practice, I suspect.
So it's just not a practical concern: I just panic if it hits the limit. I think you may be failing to appreciate just how large numbers like this get and how long they take to count through, because overflow is such a very real concern with 32-bit counters. Addition of course can always overflow, but this is a straight-up strictly monotonic counter that counts from 0 to 2^61.
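The back-of-the-envelope arithmetic behind those figures, one increment per cycle, no other work:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_overflow(bits, increments_per_second):
    # How long it takes to count from 0 to 2^bits at the given rate.
    return (1 << bits) / increments_per_second / SECONDS_PER_YEAR

# One increment per cycle at 6 GHz: a bit over 12 years.
print(years_to_overflow(61, 6e9))
# At an effective ~500 MHz increment rate: well over a century.
print(years_to_overflow(61, 500e6))
```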
> And the ad confirms that their fears were warranted.
My point is that they shaped the world like this through short-sightedness, and this same short-sightedness makes them criticize the ad instead of realizing what they have done.
The fear that their beloved company, with its eye for design and perfection, has turned its back on the creatives who once helped it become an icon. The fear that the "think different" company is now enshittified.
The commercial says: yes, the tea leaves you've been reading — that bleak picture — it's exactly how you pictured it and let us adjust the spotlight a little.
I.e. rather than a bunch of tools helping you find undefined behaviour (or scattered improvements to what the behaviour should be), you'd like to be able to make high-level claims about your code and have the compiler validate those guarantees.
With C++ having such a huge legacy, things of course are never easy. So for the time being this is just work in progress.
First of all they have to be designed and voted in.
If WG21 is actually open to having them, ISO C++29 is probably when they might land; then, given the compiler velocity with previous standards, expect at least 5 years to mature for the top three compilers, let alone everyone else.
If I am not dead by then, I will certainly be retired.
How long will the industry be willing to wait for profiles?