That would be colossally inefficient - at that size, electrons would be taking multiple cycles to get from one side of the chip to the other. The solution would be localizing processing into distinct processing units on the one die. At that point you've reinvented multiple cores, and it starts becoming cost-effective to split them into separate chips to improve yields :)
The problem is the increased power usage of the additional caches that are necessary - modern CPUs already need a bunch of physically local caches in addition to the large L1/2/3/n caches because of the time it takes electrons to flow from A to B. At some point the benefit of a larger single die becomes minimal. The moment that happens you benefit from making separate chips because of increased yield.
Most modern chips already use numerous clocks (aside from anything else, propagation delay for the clock signal is already a problem).
The problem is not simply "because clock cycle", it is "if an electron takes X ns to get from one execution unit to the next, then that's X ns of functionally idle time". That at best means additional latency. The more latency involved in computing a result, the more predictive logic you need - for dependent operations the latency matters.
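A back-of-the-envelope sketch of why distance costs cycles (all numbers here are assumptions for illustration, not measurements of any real chip): at 5 GHz a cycle is 0.2 ns, even light only covers about 6 cm in that time, and on-chip wires are considerably slower than that.

```python
# Back-of-the-envelope: how far can a signal get in one clock cycle?
# All numbers are assumptions for illustration, not data for any real chip.
clock_ghz = 5.0                    # assumed clock frequency
speed_of_light_cm_per_ns = 30.0    # ~3e8 m/s expressed in cm/ns
on_chip_slowdown = 0.3             # assumed: on-chip wires are much slower than c

cycle_ns = 1.0 / clock_ghz                          # 0.2 ns per cycle at 5 GHz
best_case_cm = speed_of_light_cm_per_ns * cycle_ns  # upper bound, ~6 cm
realistic_cm = best_case_cm * on_chip_slowdown      # rough on-chip reach

print(f"cycle time:          {cycle_ns:.2f} ns")
print(f"distance at c:       {best_case_cm:.1f} cm per cycle")
print(f"rough on-chip reach: {realistic_cm:.1f} cm per cycle")
```

So on a die several centimetres across, a cross-chip signal can easily burn a cycle or more before any computation happens.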
An asynchronous chip does not avoid the same problems encountered by a multistage pipelined processor; it's purely a different way to manage varying instruction execution times.
But this doesn't answer the killer problem of yield. The larger a single chip is, the more likely any given chip is to have errors, and therefore the fewer chips you get out of a given wafer after the multiple weeks/months that wafer has been trundling through a fab. Modern chips put a lot of redundancy in to maximize the chance that sufficient parts of a given core survive manufacture to allow a complete chip to function, e.g. more fabricated cache and execution units than necessary; at the end of manufacture, any components that have errors are in effect lasered out. If at that point a chip doesn't have enough remaining cache/execution units, or an error occurs somewhere it can't be made redundant, the entire chip is dead.
The larger a given die is the greater the chance that the entire die will be written off.
That massive ML chip a few days ago worked by massively over-provisioning execution units. I suspect that they end up losing much more area of a given wafer than with many small chips, which directly contributes to actual cost.
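To put rough numbers on the yield argument, here is a back-of-the-envelope sketch using the simple Poisson yield model, yield ≈ exp(-defect_density × die_area). The defect density and die sizes are made-up illustrative values, not figures for any real process.

```python
import math

# Simple Poisson yield model: P(die works) = exp(-defect_density * die_area).
# All numbers here are illustrative assumptions, not real process data.
defect_density = 0.1           # fatal defects per cm^2 (assumed)
wafer_area = math.pi * 15**2   # 300 mm wafer ~ 707 cm^2, ignoring edge losses

for die_area in (1.0, 4.0, 8.0):   # candidate die sizes in cm^2
    yield_rate = math.exp(-defect_density * die_area)
    dies_per_wafer = wafer_area / die_area
    good_dies = dies_per_wafer * yield_rate
    print(f"{die_area:4.1f} cm^2 die: yield {yield_rate:5.1%}, "
          f"~{good_dies:6.1f} good dies per wafer")
```

Redundancy changes what counts as a fatal defect, but the trend is the point: good dies per wafer fall off much faster than the simple drop in die count as the die grows.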
Most people's load tests involve taking a system to breaking point, then turning the load off and going to lunch.
People need to gradually reduce the load after the breaking point to check the system recovers.
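A minimal sketch of that ramp-up/ramp-down shape, not tied to any particular load-testing tool; the URL, step rates and durations are placeholders.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"   # placeholder endpoint
# Requests per second per step: up past the breaking point, then back down.
STEPS = [5, 20, 80, 200, 80, 20, 5]
STEP_SECONDS = 60

def fire(url: str) -> None:
    try:
        urllib.request.urlopen(url, timeout=5).read()
    except Exception:
        pass  # a real test would record latency and error rate here

with ThreadPoolExecutor(max_workers=200) as pool:
    for rps in STEPS:
        step_end = time.time() + STEP_SECONDS
        while time.time() < step_end:
            for _ in range(rps):
                pool.submit(fire, TARGET_URL)
            time.sleep(1)   # crude one-second pacing; real tools pace precisely
```

The descending half of STEPS is the part most tests skip: stopping at the peak tells you where the system breaks, not whether it comes back.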
Load tests are also pretty hard to do in distributed systems. If you test the application alone, you probably won't find most of the issues. You'll need to test the application complete with all its dependencies (databases, load balancers, failover mechanisms, external servers, etc.). You'll also probably want to test it with representative user requests, and with all databases filled with representative data. That in turn typically means your loadtest system will need to be as big and expensive as your production system. Have fun explaining to the boss why you need to double the infrastructure costs. If you do it in the cloud, you can just run a loadtest for 10 mins per day and save a bunch of $$$, but you still need tooling to deploy a complete replica of your system, fill it with realistic data and send it realistic requests, all automatically.
Using real user data, logs of user interactions and real user requests is best for loadtesting, but it comes with its own risks. You need to make sure the loadtest system doesn't send out emails to users, or accidentally communicate with the real production systems in any way. It also means you have to secure your loadtest infrastructure as well as your production infrastructure. GDPR data deletion requests need to apply there too, etc.
You need to find, as soon as possible, trusted people who can tell you how bad your idea/gameplay/sound/narrative is, so you can fix it or do something else. Most people will just be polite to you or be trolls.
Very advanced civilizations could learn how to create targeted wormhole-like structures and send information through them, using real-world physics rules that we don't know yet.
It is amazing how each new version of Android has a completely new UX and none of them are actually good. Google is basically a back-end company, incapable of doing good UX (if it's anything more than a search bar).
I have a more negative view. The entire Android development experience is substandard from front to back. The entire framework feels over-engineered and bloated, no matter which system or API it is.
This. I am a React Native developer, and while working on a lot of backend things and bridging native APIs to JS, I must say the developer experience on iOS is absolutely fantastic. It made me switch from a Pixel to an iPhone X. I've had Android since the Google Nexus One, so yeah, longtime Android user.
I would certainly not call the iOS developer experience fantastic. Xcode lacks basic IDE code editing features. Compiling Swift is slow and still somewhat buggy. Interface Builder is next to useless for building complex apps decomposed into reusable components. Provisioning and signing is not as bad as it used to be, but you can still end up in the weeds for an hour fixing issues there.
It might be a better developer experience than Android overall but there is still a lot of room for improvement.
I would agree with you, but after spending the last two years using an Android phone after almost a decade using iOS, I feel that Google has made some fundamentally bad decisions below the UI layer as well. Despite using a phone that is newer and more powerful, my Android phone is slower, less responsive and runs out of battery much faster than any of my iPhones.
I wholeheartedly disagree. I tried using both the iPhone 8 and the iPhone X and neither of them came close to the Pixel 2, UX- and feature-wise. I couldn't download files, couldn't even change my ringtone to a custom melody, couldn't attach files correctly, the integration with their Maps app was awful, the battery life drained faster after one update... I just felt so limited!
The iPhone X seems like a bunch of clever but costly gimmicks to fix a self-inflicted problem. Instead of just moving the fingerprint sensor to the back of the phone, they got rid of it entirely and replaced it with FaceID and this new gesture-based interface. It's an impressive piece of technology but a step backward in UX overall.
I find my Pixel 2 XL more usable and reliable than my iPhone X and Project Fi has saved me countless hours sitting in phone shop offices waiting for SIM cards in every new country I visit. The UI may not be quite as snappy as my iPhone but I don't feel it really holds back my use of the device at all and battery life is actually better and it recharges faster.
I do think this new gesture stuff they added to Android P is kind of dumb though and seems like a cheap and failed effort to copy some buzz from Apple.
It is "easy" to block scraping. Make it very costly to scrape:
- Render your page using canvas and WebAssembly compiled from C, C++, or Rust. Create your own text rendering function.
- Have multiple page layouts
- Have multiple compiled versions of your code (change function names, introduce useless code, use different implementations of the same function) so it is very difficult to reverse engineer, fingerprint and patch.
- Try to prevent debugging by monitoring the time interval between function calls; compare the local interval with the server-observed interval to detect sandboxes.
- Always encrypt data from the server, using a different encryption mechanism every time.
- Hide the decryption key in random locations in your code (generate multiple versions of the code that retrieves the key).
- Create huge objects in memory and consume a lot of CPU (you may mine some crypto coins) for a brief period of time (10s) on the user's first visit. Make it very expensive for the scrapers to run their servers. Save an encrypted cookie to avoid doing it again later. Monitor concurrent requests from the same cookie (a rough sketch of this last idea follows below).
The answer is that it is possible but it will cost you a lot.
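A minimal server-side sketch of that last point, swapping in a proof-of-work challenge as one way to make the first visit expensive before issuing a signed cookie. The difficulty value is an assumption to tune, and all framework/route glue is omitted; only the hashing/HMAC logic is shown.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)   # server-side signing key, never sent to the client
DIFFICULTY = 22           # leading zero bits required; tune to ~10 s of client CPU

def make_challenge() -> str:
    """Random nonce the client must grind on before it gets a session cookie."""
    return os.urandom(16).hex()

def check_solution(challenge: str, solution: str) -> bool:
    """Valid if sha256(challenge + solution) has DIFFICULTY leading zero bits:
    cheap for the server to verify, expensive for the client to produce."""
    digest = hashlib.sha256((challenge + solution).encode()).digest()
    value = int.from_bytes(digest, "big")
    return value >> (256 - DIFFICULTY) == 0

def issue_cookie(client_id: str) -> str:
    """Signed cookie so the work is only paid once per client."""
    payload = f"{client_id}:{int(time.time())}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_cookie(cookie: str) -> bool:
    """Reject forged or tampered cookies; rate limiting keys off verified ones."""
    try:
        client_id, ts, sig = cookie.rsplit(":", 2)
    except ValueError:
        return False
    payload = f"{client_id}:{ts}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The per-cookie monitoring from the list above is then just a request counter keyed on the verified cookie; the thresholds and what you do when they trip are up to you.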
It is not the OCR that is costly. It is the JavaScript execution needed to render the page so you can do the OCR. You can even increase the JavaScript execution cost if the visitor looks suspicious.
You will also have to automate all page variations and the traditional challenges (login, captcha, user behavior fingerprinting, ...)
In the end, the development time, development cost and server cost will kick you out of business if you are too dependent on the information, or you will start to lose money every time you scrape.
Yes. The idea here is to make you dependent on OCR (you also have to find where the information is as the page design changes) and to waste a lot of your server resources, making it very costly to scrape.
I was wondering the same thing earlier. This doesn't feel like a disclosure that's had anywhere near ~6 months put into it.
Did the vendors ignore the disclosure initially and begin to change tactics later in the game? Based on how certain vendors have been characterizing this in their PR, I wouldn't be surprised if they didn't take the problem seriously originally.
The Ubuntu page that was on HN earlier [] claims that they were notified in early November. I have no idea if kernel people (as opposed to distro people) got notified earlier.
Especially microcode updates. Microcode is just a giant obscure binary for everyone outside of Intel. If there was a mitigation possible via a microcode update this could have been published months before disclosure without any meaningful risk.
IIRC Intel employs people to work on the Linux kernel on Intel's behalf. Either Intel fumbled, or it isn't that easy to circumvent the problem plaguing Intel's processors with a software hack.
That’s easy for you to say. You’re not the person having to admit to a billion dollar mistake.
Everybody stalls for time when the stakes are this high. How long can I reasonably spend trying to turn this into a small problem before I have to go public with it?
Saying it’s a bigger problem than it turns out to be is a PR nightmare of its own. If there was a cheap fix then you cried wolf and killed your reputation just as dead.
Exactly this. Apparently, the details of the attack were published in the official paper(s) before the security teams of major OSes could prepare mitigating patches and make them publicly available to users. There is no patch for Debian 8.0 (Jessie), or for Qubes OS, for example.
The chatter is all about how CPU manufacturers screwed up, but there is a much more alarming issue here, I think: the apparent irresponsibility of the people who published the flaws before the security teams and the users could mitigate them. Perhaps there was a reason for accelerated public disclosure, but so far this makes no sense to me.
I suspect a better solution than KPTI would be to evict all user-space pages from the cache when an invalid page access happens, if the fault was caused by reading/writing kernel-space pages. My kernel days were so long ago that I don't know if it is possible.
Massive performance hit, but only for misbehaving software. Well-behaved software would not take the performance hit of KPTI.
The kernel could even switch dynamically to KPTI if there are too many such read/write attempts from user space.
Implementations of meltdown do not need to trigger a page fault (because the instruction which would fault can be made to execute speculatively - in addition to the instruction which leaks information into the cache executing speculatively). Accordingly, there would be nothing for the kernel to observe or respond to.
ADDED: So in the interrupt handler, the kernel could evict all user-space pages from the cache before returning control to user space, so that user space could not run the cache timing attack against the speculative execution of `mov rbx, [rax+someusermodeaddress]`.
It doesn't make sense for speculatively executed code to throw architecturally visible exceptions. The appropriate behavior would be to not perform speculative loads across protection domains (i.e. the behavior of AMD implementations).
It would make sense if it were the only alternative, since the kernel can handle it. The appropriate behavior is to remove all traces of the speculative execution, including the cache lines it brought in.
Is that even possible? The data that would need to be removed from the cache has already evicted other cache lines, and re-fetching those might have observable effects, like the timing.
Concretely, https://twitter.com/corsix/status/948670437432659970 can be used to get both `movzx rax, byte [somekerneladdress]` and `movzx rax, byte [rax+someusermodeaddress]` executed speculatively (the idea behind this is the same as a retpoline - exploit the fact that `ret` is predicted to return to just after the "matching" `call` instruction). If the first load is executed speculatively, it won't cause a page fault.
Even if a fault occurred (others are correct in pointing out that it doesn't necessarily), I think this would be too late. I could already have observed the effects in the cache before the faulting instruction was (non-speculatively) reached and the fault occurred.
Without a central controller, multiple subsystem vendors would have to cooperate, using an agreed DMA communication protocol, to monitor you and send the information back via the wifi/ethernet chip. Possible, but unlikely.
The IOMMU functionality is built into the Platform Controller Hub, which is between the baseboard management controller (the ARM) and the main processor.
Theoretically it would be possible to prevent DMA between the two, but it is highly doubtful Apple would program it that way.