Core Ultra? Seriously? Next you’ll be telling me that the i5 will be called Core Max and the i3 will be called Core Pro.
Regardless of the actual basis for the name, you’d think that after spending two decades with the lower case “i” prefix which became culturally ascendant with the iMac, they’d be more sensitive to giving off the appearance of copying Apple’s homework again.
How can you say that Intel copied the initial lowercase 'i' from Apple when every Intel product from the 1960s forward was an i123 of some kind? The 3101 SRAM had the little 'i' etched on the top of the IC. Lowercase letter 'i' is their long-standing brand.
AMD has etched/printed their arrowhead logo on practically every product they've produced since the AMD2901, but the brand logo for Ryzen is the ensō.
Intel did market some products as the "i386" and "i486", but consumer-facing identification was typically "Intel 80x86". The lowercase-i IC marking practically disappeared from Intel CPUs starting with the Pentium.
The "i" logo appears on early P5 models (like the P166 from 1996 I keep on my desk), and then disappears, relegated to the bottom of the CPU.
Pentium Pros, PIIs, PIIIs, P4s, and Core2 products are all practically devoid of the marking.
I would argue that the resurrection of the "i" branding in 2008 with the i7-920 was eased along by iProducts, especially since it practically disappeared from consumer/retail branding for over 15 years.
>How can you say that Intel copied the initial lowercase 'i' from Apple when every Intel product from the 1960s forward was an i123 of some kind?
For these people, the tech world has existed only since the launch of the iPhone. Nothing that came before or after it matters, and Apple is the sun around which all tech revolves.
Plenty of other companies have put an 'i' prefix in front of product names, even cars and dishwashers.
That's not true, they also pretend that tech which was created after the iPhone but only later incorporated into the Apple platform is a Jobs invention.
Correct, but that's not my point. I'm not saying Intel copied Apple, I'm saying that it surprised me that of the thousands of English words (and indeed millions of inventable words) Intel chose to pick one recently used by a quasi-competitor. They're entitled to, of course. But if I were Intel I would be cautious about avoiding a perceived pattern because I wouldn't want to unintentionally give off follower vibes.
It's like Ford naming their electric car division "Ford Model e" which can be justified as a callback to the Ford Model T, but is also weird considering the product names of Tesla vehicles, and because it's a weird name for a corporate division. It gives off follower vibes.
Throwing the Core i brand away seems like a very bad move, even with Ryzen R being a 1:1 copy. Intel has had a consistent, reliable naming and tier scheme in the desktop market for about 15 years. Things like "i5 x600K" are virtually fixed points.
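To illustrate how regular that scheme has been, here is a rough sketch of parsing a modern "Core iN" model number. The pattern and field names are my own labels for illustration, not official Intel terminology, and it only covers the generation-prefixed SKUs (2nd gen onward), not early ones like "i7-920":

```python
import re

# Hypothetical breakdown of the "iN-GGMMMS" pattern the comment alludes to.
SKU = re.compile(r"i(?P<tier>[3579])-(?P<gen>\d{1,2})(?P<model>\d{3})(?P<suffix>[A-Z]*)")

def parse_sku(name: str) -> dict:
    """Split a Core iN model number into tier, generation, model, and suffix."""
    m = SKU.fullmatch(name)
    if m is None:
        raise ValueError(f"not a generation-prefixed Core iN SKU: {name}")
    return m.groupdict()

print(parse_sku("i5-13600K"))
# tier '5', gen '13', model '600', suffix 'K' (unlocked multiplier)
```

The fact that a two-line regex captures fifteen years of product names is exactly why "i5 x600K" reads as a fixed point.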
It feels like unified memory for the CPU and GPU is going to be a big trend going forward for laptops and desktops, which is why investments in Arc that help drive the Meteor Lake platform are all the more important. Console ports are already becoming a big problem for current GPU card memory, and Apple Silicon is clearly showing the benefits on the laptop and desktop side.
Do I understand correctly that three architectures of "AI-enabled hardware" (I couldn't come up with a better term) are the following?
1) separate cpu and gpu,
2) cpu and neural cores,
3) gpu-like cpu (like in this post).
In the long term, is any of these architectures potentially preferable for a) training, b) inference?
(I am guessing cpu + gpu is not ideal for consumer-level inference because of gpu prices and their space requirements, I don't know much about hardware.)
I wonder if making the whole CPU package larger is another solution. Just like the best LED lighting is spread out (disc, square, linear) instead of being in the shape of a traditional light bulb. Maybe in the future we'll have A4 sized CPUs.
It seems comical to have a CPU more powerful than a space heater people use to heat their rooms in the winter. I also wonder if it's even possible to use a 110V circuit in most houses for this, considering it would create a computer pulling at least 2500-3000 watts, something most single outlets in a standard home can't handle.
Would be funny to start seeing people run 220v to their offices for their computer.
Some 2kW power supplies come with weird cables to handle 110V. Slightly annoying for a 230V domestic setup but 3kW from a single socket is fine in Europe. Not totally encouraged of course.
You're talking about 15-amp setups in both cases, which is what I was talking about, even including the point that Americans might have to start wiring their offices with 220V just like we currently wire our kitchens and laundry rooms.
Almost sure that is for servers. They've been pushing density for a while now, and as four sockets per U became the new norm, data center operators had a slight Pikachu face when they saw 40-60 kW per rack.
With Intel's recent innovation in die interconnects and silicon photonics, this seems a straightforward direction to push. They're going to increase density, it's what the semi industry does.
In return, you get a 5 GHz base clock, but the "BMC" is two x86-based servers running Linux that, among other things, make sure cooling is working properly before applying power to the CPU/memory modules (and to shut everything down as soon as it goes off-nominal).
And, definitely, none of that gear is designed for home use.
A household circuit in the US maxes out at around 1700W. I hope for a future where we don't need to wire an L2 charging port into a home office just to run a faster workstation.
He means the standard residential circuit (like you would plug a PC or TV into) is typically 15 amps. Larger appliances, like an AC, are on dedicated circuits, usually at a higher amperage. It's not uncommon for an AC to be on 30A or 60A.
That is right, but it's per breaker, so your air conditioner will likely be on its own breaker.
(Assuming 15A fuse, 15A * 110V = 1650W)
Here in Sweden we have 240V from the wall but our fuses are usually 10A, so 2400W per breaker (so we can run Intel CPUs without special installations ;)
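The per-circuit numbers quoted in this thread are just volts times amps; a quick back-of-the-envelope check (breaker ratings taken from the comments above, the 80% derating factor is a common rule of thumb for continuous loads, included here as an assumption):

```python
def circuit_watts(volts: float, amps: float, derate: float = 1.0) -> float:
    """Maximum power a single circuit can deliver; pass derate=0.8 for
    the common continuous-load rule of thumb."""
    return volts * amps * derate

print(circuit_watts(110, 15))        # 1650 W, the typical US outlet above
print(circuit_watts(240, 10))        # 2400 W, the Swedish example
print(circuit_watts(120, 15, 0.8))   # 1440 W continuous after 80% derating
```

Which is why a sustained 2500-3000W workstation simply doesn't fit on one standard US circuit, but is fine on a European one.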
I recently got a new PC with an i7-13700K, paired with a very good (but enormous) air cooler. When there is high CPU usage I can feel hot air coming from the computer! Yes, that air cooler is almost silent (be quiet! Dark Rock 4), but when the CPU uses 200+W, that heat has to end up somewhere. Hopefully, for development you rarely need 100% CPU. I wouldn't want anything more power hungry for my desktop.
> What’s so special about having 128 gpu cores when nvidia has had thousands of cores in their gpus for over a decade.
It's a CPU, not a GPU; that's what is special. (Also, counting shading units, which are often called cores on Nvidia GPUs, this has 1024, which still wouldn't be a lot for a GPU, but 128 vs. the usual way cores are counted for Nvidia GPUs is the wrong comparison.)
The CPU has 18 cores according to the link. So probably 9 or 18 x64 things. Wouldn't like to guess how an Intel GPU unit (EU?) compares to a SM or CU from the other two.
edit: misc stuff online suggests an EU corresponds to a SIMD unit. It's sufficiently annoying to find an approximate ratio between that and the two main GPU architectures that I'm giving up. Anyone know if 128 is a lot or trivial in this context? My outdated laptop has 8 CUs and the better workstation cards are on the order of 100.
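One way to frame the counting confusion above: vendors report at different granularities, so you have to multiply by lanes per unit before comparing. The lanes-per-unit figures below are illustrative assumptions (e.g. treating each of the 128 units as an 8-wide SIMD, as a sibling comment suggests), not datasheet values:

```python
def lane_count(vendor_units: int, lanes_per_unit: int) -> int:
    """Convert a vendor's 'core'/'unit' count to raw SIMD lanes,
    which is the closest thing to an apples-to-apples number."""
    return vendor_units * lanes_per_unit

# 128 units x 8-wide SIMD (assumed) = 1024 lanes, matching the
# "1024 shading units" figure quoted in the thread.
print(lane_count(128, 8))   # 1024

# Nvidia's "CUDA cores" are already counted at lane granularity,
# so a card advertising 1024 cores contributes a factor of 1:
print(lane_count(1024, 1))  # 1024
```

On that (assumed) accounting, the two 1024s would be roughly comparable, which is the point the parent is making about 128 being the wrong number to compare.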
For the CPU cores, M and x86 are converging. The M1 had a very impressive lead on 11th and 12th gen Intel because of multiple hiccups in Intel's product pipeline, but that is not going to hold for long. As for the GPU cores, I know very little about how they compare, and GPU cores tend to be wildly different between manufacturers.
Because, love it or lump it, Intel's manufacturing process is ludicrously advanced. Power hungry, particularly when they're locked on older nodes, but extremely capable when their interests align. Pat Gelsinger has even stated that it's his goal to get Apple back as an Intel customer. Their foundry services started working with ARM to get RISC cores manufactured, and if they stick to their roadmap, it's possible for them to leapfrog TSMC's density limitations.
We'll have to see where things go, but I do agree with the parent. Apple's lead was their investment in the TSMC 5nm node, which as of the M2 is clearly difficult to iterate on. Apple needs advancements in density to make a faster chip; Intel does not.
> Apple's lead was their investment on the TSMC 5nm node
That's only part of the deal. Apple also made some really cool architectural innovations, as well as creating workload-oriented ISA extensions that proved extremely effective.
38 GPU cores. The neural network inference cores don't do floating point and therefore aren't used for training or games. Assuming these GPU cores are comparable, if >3x isn't a reasonable target, what would you suggest is?