Why did the compiler even choose to fetch only DWORDs in the first place? It's unclear to me why the accumulator (apparently) determines the vectorization width.
LLVM deprioritizes helper scripts such as update_analyze_test_checks.py, and the build infrastructure is far from perfect. Scripts like the ones categorizing PRs are very much unpaid work.
LLVM also deprioritizes general cleanup work, such as getting rid of passes that don't work and are rotting in the tree: off the top of my head, I can think of GVNSink and LoopFusion.
There are additional problems unique to LLVM, as it doesn't have a dictator: there are multiple different dependence analyses in tree, for instance.
Many thousands of companies use LLVM in their products and benefit from the technology.
As it is an open-source project, in an ideal world it would be fair to assume they would somehow contribute back, but instead they just take it for granted, since there is a group of people making it available for free.
(please read to the end before posting your "wHy pUt iT oPeN-sOuRcE tHeN???" bigoted comments)
Don't get me wrong: it is perfectly legal for anyone to do so, as the project is available under a permissive license and nothing mandates that you contribute anything back.
I'm just trying to clarify the point OP made here: infrastructure tooling such as compilers and debuggers is often taken for granted, and even rich companies opt not to contribute back.
I just looked at 1 page of the most recent commits to the LLVM project and every single one of them is from a corporate developer. I can't imagine how to arrive at the conclusion that the industry takes LLVM for granted. The major LLVM subprojects that I follow are not just largely written and maintained by large companies, but were also initially invented and contributed by them.
A debugger sees exceptions before the application does, and it knows which exceptions it should handle by itself (e.g. breakpoints set by the user) vs passing them to the application.
I don't have a windows machine rn to verify, but I'd expect your assumption that it's impossible to debug an INT3 VEH to be incorrect.
Yes, however you can easily detect when a debugger is attached and avoid placing the VEH. There are only two mechanisms for a debugger to operate: by placing its own VEH (or UHE), or by populating one of the four hardware breakpoint registers, and both are very easy to detect.
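Roughly the sort of checks I mean, as an untested C sketch (debugger_suspected is just an illustrative name; IsDebuggerPresent covers the "attached via the OS debugger" case and the Dr0-Dr3 read covers the hardware breakpoint case):

    #include <windows.h>
    #include <stdio.h>

    int debugger_suspected(void)
    {
        /* Attached debugger via the OS debug port. */
        if (IsDebuggerPresent())
            return 1;

        /* Hardware breakpoints: any of Dr0-Dr3 set on this thread.
           Reading the current thread's context is the usual trick here,
           even though MSDN hedges about running threads. */
        CONTEXT ctx = { 0 };
        ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
        if (GetThreadContext(GetCurrentThread(), &ctx) &&
            (ctx.Dr0 || ctx.Dr1 || ctx.Dr2 || ctx.Dr3))
            return 1;

        return 0;
    }

    int main(void)
    {
        printf("debugger suspected: %d\n", debugger_suspected());
        return 0;
    }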
Sure, there's a gazillion ways of detecting a debugger (specifically on windows), but then we're back to detecting debuggers. My point was that VEH (alone) doesn't prevent any sort of debugging, specifically not debugging of the VEH handler itself.
Btw, debuggers (on windows) won't usually install a VEH to support BPs; they'll use the win32 debugger infrastructure, where the OS manages exceptions and delivers them to the attached debugger object (which, again, can be detected in several ways).
They also do not technically need the HW bp registers, although often they will use them. A simple way to implement BPs is to write 0xCC (INT3) into the text section, then restore the original byte when the INT3 fires.
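Something along these lines, as an in-process sketch (untested; the helper names are made up, and a real debugger would do this cross-process with ReadProcessMemory/WriteProcessMemory and restore the byte from its WaitForDebugEvent loop):

    #include <windows.h>
    #include <string.h>

    static BYTE saved_byte;

    /* Plant a 0xCC over the first byte of the target instruction. */
    int set_sw_breakpoint(void *addr)
    {
        DWORD old;
        BYTE int3 = 0xCC;
        if (!VirtualProtect(addr, 1, PAGE_EXECUTE_READWRITE, &old))
            return 0;
        memcpy(&saved_byte, addr, 1);   /* remember the original byte */
        memcpy(addr, &int3, 1);         /* write INT3 */
        VirtualProtect(addr, 1, old, &old);
        FlushInstructionCache(GetCurrentProcess(), addr, 1);
        return 1;
    }

    /* Put the original byte back once the breakpoint has fired. */
    void clear_sw_breakpoint(void *addr)
    {
        DWORD old;
        VirtualProtect(addr, 1, PAGE_EXECUTE_READWRITE, &old);
        memcpy(addr, &saved_byte, 1);
        VirtualProtect(addr, 1, old, &old);
        FlushInstructionCache(GetCurrentProcess(), addr, 1);
    }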
I don't disagree that it's possible; it's just that in my experience the debugger's VEH fired after my application's own VEH, not before it, so the debugger did not step into the VEH. And if you want to hide something from static analysis it's not a bad way, even if it's basically "security through obscurity". The barrier to entry for implementation is pretty low, like ~50 LOC, so it's not going to be on the level of state-sponsored malware or anything like that.
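For context, the skeleton of it is roughly this (x64/MSVC sketch, untested, names made up; the real version dispatches on the faulting address to decide what to run rather than blindly skipping the byte):

    #include <windows.h>
    #include <intrin.h>
    #include <stdio.h>

    static LONG CALLBACK own_veh(PEXCEPTION_POINTERS info)
    {
        if (info->ExceptionRecord->ExceptionCode == EXCEPTION_BREAKPOINT) {
            /* The context points at the 0xCC byte itself; skip the one-byte
               INT3 and resume, so nothing downstream ever sees it. */
            info->ContextRecord->Rip += 1;
            return EXCEPTION_CONTINUE_EXECUTION;
        }
        return EXCEPTION_CONTINUE_SEARCH;   /* not ours, keep searching */
    }

    int main(void)
    {
        AddVectoredExceptionHandler(1, own_veh);  /* 1 = head of the VEH chain */
        __debugbreak();                           /* emits a single 0xCC */
        puts("continued past the INT3 via our own VEH");
        return 0;
    }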
Most SUVs around there aren't based on trucks or minivans.
Over the last decade or so, people have increasingly preferred SUVs.
Most of these are soft-roaders, not trucks or minivans. They aren't very capable off-road. Some of them aren't even 4WD/AWD, and most are monocoque rather than body-on-frame.
Yes, it's rare. Could you name other incidents where this happens? I feel like this is always brought up in these discussions, but it really is just an outlier. There's not much to conclude from it.
Feels a bit like the exception proving the rule. If everyone keeps pointing to the same individual example of a rule not holding, the rule probably holds for almost all cases.
It drives me nuts to read an announcement regarding $some_product that doesn't spend a single word explaining what the hell $some_product actually is.
What the hell Google PR???
Do you have a specific question?
It's not gonna be there for another ~2 years, I think. You can search for GlobalISel on the llvm mailing list to see discussions and patches.
If it is true that the NSA MITMed Google connections, then one could draw the conclusion that the NSA doesn't actually have a direct connection to Google data centers (as claimed by Google).
If they had such a connection, then why would they use MITM attacks against people?
The "direct access" that the NSA has to Google accounts probably requires sending a request for some set of information to Google. It likely needs to be signed off on (even if it's all automated). I'd imagine the NSA would like to hide some activities, especially corporate espionage, even from the watchers at Google--it reduces the risk of anyone at Google growing a spine.
Requests to Google may be audited or logged; Google has an incentive to do this so they can pass the buck when the inevitable evidence of abuse comes out.
The NSA, on the other hand, would prefer there to be no audit trail so there's no evidence of the inevitable abuses.
Hard to know for sure, but it could be something as basic as redundancy. If one method of information capture were eventually disallowed, they'd have an alternative. Or if one method of information capture required more oversight than they wanted, they'd have an alternative.
It would also stand to reason that the court/LEO requests to Google for data are just a CYA/formality with respect to them "legally" getting the authorization to read the data.
They likely have access to all the data they want. They use the legal vectors for requests just to see what the companies would give them, and can compare the provided data against the slurped data.
Since the movie isn't actually out yet, this is more of a promotional piece. Also, the first image is a bit bigger at http://pixartimes.com/wp-content/uploads/2012/08/Monsters-Un... but it's fairly low quality and for some reason is scaled down to stamp size on the page.
Comparing raster graphics from 12 years ago to ray-traced graphics from today isn't a fair comparison anyway. Part of why raster has persisted against ray tracing for so long (despite numerous predictions to the contrary) is that raster graphics techniques are constantly improving.