Nice web design, Google!
Looks like Google's CDN works so well that creating resized versions to save bandwidth is not worth it. :)
Worth it to them, maybe; what about the poor reader?
The page makes 41 requests and pulls down 12.88 MB of data, and it takes 2+ seconds to properly load the content (1.09 s according to Chrome dev tools, but that's just the initial DOM load).
For a simple webpage with a white background and dark text.
But hey, AMP exists because Google cares deeply about fast web pages. No ulterior agenda here, no siree.
Incidentally, I only see a 260K image when I dev-tools that page. I can't reproduce the 4.5MB download.
I think they just get too caught up in their tooling and silly over-engineering and never actually think about the end product.
Although Kubernetes nominally requires 3 nodes to start, this still greatly reduces the need to have a lot of separate machines.
256 threads per machine this year. 512 threads per machine 2 years from now? And then hopefully 1024 threads per machine 4-5 years from now? That would be really fun.
(I will take a laptop in 4 years with just a lowly 64 cores please, leaving the heavy iron for the cloud machines.)
I want Moore's law back but in parallel form. The years since 2004 with x86 machines have been quite boring from a CPU performance increase perspective.
Why would you want to? Your whole environment will be down when the machine, or some component thereof, fails or is rotated out for maintenance.
I think this is more of a benefit for cloud providers in that they can pack more disparate customer workloads onto a single machine.
The advantage of such a machine is that if it starts to die, you fail over everything atomically and then repair/replace the backup. You don't need to think about what happens if one component is on a dead machine and the others aren't (does your load balancer handle that well, given machines often "die" in ways that just make them slow rather than failing outright?)
The big win though is if you get rid of the microservices and run the whole thing in one big process. No complex RPC failures, obscure HTTP/REST attacks like https://portswigger.net/blog/http-desync-attacks-request-smu... and so on.
That might sound mad, but modern JVMs can run lots of languages fast, and they have ultra-low-pause GCs that can handle terabytes of heap. Like less-than-one-millisecond low. Many, many businesses would fit entirely into one of these really high-end machines with a giant JVM.
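Purely as an illustration of the "one big process" idea (not something from the original comment), here's a minimal sketch assuming JDK 15+, where ZGC is production-ready; the heap size, class name, and jar name are made up:

    // Hypothetical launch of one big JVM on a many-core, multi-terabyte box.
    // ZGC (production-ready since JDK 15) keeps pauses well under a millisecond
    // even with very large heaps:
    //
    //   java -XX:+UseZGC -Xmx3072g -jar whole-business.jar
    //
    // (The flag names are real; the heap size and jar name are illustrative.)
    public class WholeBusiness {
        public static void main(String[] args) {
            // In-process "services" are just method calls and queues: no RPC failures,
            // no HTTP parsing, no request-smuggling surface.
            var orders = new java.util.concurrent.ConcurrentLinkedQueue<String>();
            orders.add("order-1");
            System.out.println("Handled " + orders.size() + " order(s) in one process on "
                    + Runtime.getRuntime().availableProcessors() + " cores");
        }
    }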
I've written about the possibility of a swing back to big iron design here:
And a look at modern Java GCs - GC being historically the bottleneck to really large single processes:
It doesn't actually, and if you're running on GKE then the master is a managed service - you can have a single-node cluster (I do this occasionally for testing).
Said startup might want to consider going with smaller machines in separate availability zones, though ;-)
(Disclaimer: I work in Google Cloud)
3 is more realistic though. If you run at ~66% utilization, you can deal with a single node down - the work from the failed node fits on the remaining two.
Just buy more cheap 4-8c servers.
I know plenty of people who know the Intel name and would consider AMD to be some kind of cheap knock-off (non-techs, obviously). And obviously desktop/laptop manufacturers are going to have some sweetheart deals with Intel.
As far as I understand it, AMD doesn't natively support Thunderbolt, and that's an emerging standard that people really like.
Here it is briefly explained: https://www.youtube.com/watch?v=Q0W7fHJMnyg
If they can buy a notebook with fewer processors or one with 16 processors, they will buy the second one, because bigger numbers mean better.
The same thing is true for Intel when they don't have competition. They are the company they are today due to insane gross margin on Xeon and the explosive growth of cloud computing. DCG has been Intel's top performing BU for ages until recently. Expect the same story from AMD as it eats Intel's lunch in that market. High margin semiconductors are money making machines.
If you go to an average big box store and look for laptops, AMD-based systems can start at as little as £250, whereas you can't get a mobile-i3-based system anywhere close to that.
When these pathetic CPUs are teamed with slow disks and a pile of bloatware, it makes them seem cheap and awful.
Thunderbolt has been an "emerging standard that people really like" for almost a decade, with virtually no installed base outside select Apple products.
Most of the ASRock X570 motherboards support Thunderbolt currently, AFAIK.
Now back to your question of why these are needed: parallelized simulations. I do computational fluid dynamics (CFD) for designing, fixing, troubleshooting, and optimizing processes and products. These simulations solve large systems of equations that need many cores for meshing and solving, and then we need high CPU/GPU core counts just to handle and process the data. In my case at least, the average industrial/manufacturing piece of equipment needs about 1000 cores to recreate as a digital twin due to the amount of multiphysics, complicated geometry, etc.
This is a server chip that excels at virtualization.
Also there are plenty of embarrassingly parallel operations like 3D rendering, video editing, password cracking, and so on that would benefit. OK, usually you can use GPUs for that sort of thing, but not always.
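As a toy sketch of why those embarrassingly parallel jobs soak up as many cores as you can give them (the workload here - hashing a pile of independent blocks - is made up, but the shape is the same for rendering tiles or testing password candidates):

    import java.security.MessageDigest;
    import java.util.stream.IntStream;

    // Toy embarrassingly-parallel workload: every unit of work is independent,
    // so throughput scales roughly with core count until memory bandwidth
    // gets in the way.
    public class ParallelHash {
        public static void main(String[] args) {
            long start = System.nanoTime();
            long checksum = IntStream.range(0, 200_000).parallel()  // one task per "tile"/"candidate"
                    .mapToLong(i -> sha256("block-" + i)[0] & 0xFF)
                    .sum();
            System.out.printf("checksum=%d, %d cores, %.2f s%n",
                    checksum, Runtime.getRuntime().availableProcessors(),
                    (System.nanoTime() - start) / 1e9);
        }

        static byte[] sha256(String s) {
            try {
                return MessageDigest.getInstance("SHA-256").digest(s.getBytes());
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }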
>first photo is a 2000x1489px PNG (4.5MB) and it is scaled to just 451x335px.
had on his desk?
Aside from that, let me just point out that pretty much everyone in here is an edge-case user. We're not the norm, so my guess is that your question should be addressed to the general public. In that context I doubt anyone would need such processing power. It's no surprise that Apple is investing heavily in the iPad Pro product line. Most users would be fine with just a tablet.
This is the advantage of a cloud host. Sure, you pay a premium, but you're paying so you don't have to pay for it when you're not using it (and for the networking and power and updates and security, etc.).
(Disclaimer: I work on Google Cloud)
It doesn't take much usage for those lines to cross.
I think I'd save money by renting an RPi for even $20/day, for instance. :)
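To make the "lines crossing" point concrete, here's a back-of-the-envelope rent-vs-buy sketch; every price in it is a placeholder I made up, not a real cloud or hardware quote:

    // Hypothetical rent-vs-buy break-even. All numbers are illustrative placeholders.
    public class BreakEven {
        public static void main(String[] args) {
            double rentPerHour   = 3.0;     // made-up on-demand price for a big instance
            double purchasePrice = 25_000;  // made-up price for a comparable server
            double monthlyRun    = 300;     // made-up power/colo/admin cost of owning it

            double ownPerYear = purchasePrice / 3 + monthlyRun * 12;  // amortize over ~3 years
            for (double hoursPerMonth : new double[] {10, 40, 160, 640, 730}) {
                double rentPerYear = rentPerHour * hoursPerMonth * 12;
                System.out.printf("%4.0f h/month: rent $%,.0f/yr vs own $%,.0f/yr -> %s%n",
                        hoursPerMonth, rentPerYear, ownPerYear,
                        rentPerYear < ownPerYear ? "renting wins" : "owning wins");
            }
        }
    }

With these made-up figures the crossover sits somewhere in the low hundreds of hours per month; plug in your own prices and usage to see where your lines actually cross.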
Don't underestimate a King like Intel that's backed into a corner. Lots of dirty tricks to play and lots of time to come back.
Also how many machines will they have? Will it be a token amount in a single data center or will there be a good chunk of these around in multiple data centers?
>Intel can cut its prices, to be sure. Beyond that, it has limited maneuverability. Ice Lake servers will not arrive for another year. Pricing on these cores is simply amazing, with a top-end Epyc 7742 selling for just $6950, or roughly $108 per core. An Intel Xeon Platinum 8280 has a list price of over $10,000 for a 28-core chip, just to put that in perspective. If you want a 32-core part, the Epyc 7502 packs 32 cores, 64 threads, higher IPC, and an additional 300MHz of frequency (2.5GHz base, versus 2.2GHz) for $2600 as opposed to the old price of $4200 for the 7601. AMD doesn’t segment its products the way Intel does, which means you get the full benefits of buying an Epyc part in terms of PCIe lanes and additional features. AMD also supports up to 4TB of RAM per socket. Intel tops out at 2TB per socket, and slaps a price premium on that level of RAM support.
If anything, they should cut prices across the board. AMD just cut the price of an x86 core in half.
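Just to run the per-core numbers from the quote above (list prices as quoted; the Xeon figure is the "over $10,000" rounded down, and the rounding is mine):

    // Per-core list price using the figures quoted above.
    public class PerCorePrice {
        public static void main(String[] args) {
            System.out.printf("Epyc 7742:             $%,d / %d cores = ~$%.0f per core%n", 6950, 64, 6950.0 / 64);
            System.out.printf("Epyc 7502:             $%,d / %d cores = ~$%.0f per core%n", 2600, 32, 2600.0 / 32);
            System.out.printf("Epyc 7601 (old price): $%,d / %d cores = ~$%.0f per core%n", 4200, 32, 4200.0 / 32);
            System.out.printf("Xeon 8280:             $%,d / %d cores = ~$%.0f per core%n", 10000, 28, 10000.0 / 28);
        }
    }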
I'm glad AMD is making such progress that these companies can't ignore them anymore. Having one CPU maker would be horrible.
Another case of scalability: we have also tested ClickHouse on an AArch64 server (Cavium ThunderX2) with 224 logical cores, and despite the fact that each core is 3..7 times slower than an Intel E5-2650 and the code is not as optimized as for x86_64, it was on par in throughput on heavy queries.
There are also tests of ClickHouse on POWER9 if you're interested...
A proper solution of course would be to have the CPU-intensive algorithms run on different nodes, but it's an integrated solution we pay for, so we don't control that.
Is the problem that the CPU max scales with the number of GPUs, so you can't get 1 GPU with 96 vCPUs?
And how did that work out? Upfront costs, but there should be significant savings overall, right?
The big question for us is whether we're going to be able to afford to do it locally; there's both the upfront cost and the cost of system administration. These 3 I could still manage with my dev team, amateur style, but when that becomes 30, we'll probably need a technician who has experience managing compute clusters.