
I don't think so; speculative execution is the cornerstone of modern CPU performance. Even 15-year-old 32-bit ARM CPUs do it. The only phone/PC-grade processors without it are the first generation of Intel Atom, and I recall that early Atom processors sacrificed a ton of performance to keep power consumption low. I doubt this will change since mitigations are "good enough" to patch over major issues.

Maybe the boomers were right and we made computers way too complex? This might be a bit of hyperbole, but it seems like there will always be a security hole (even if it's mostly hard to exploit). But I also guess we can't get much faster without it either. So maybe we should reduce complexity, at least for safety-critical systems.

Now wait until the zoomers come along and take the lead on these products. They grew up with iPads and no file system. It’s going to be chaos!

Boomers grew up without a filesystem too and things seem to have worked out fine.

There is the extremely popular Cortex-A53, which is an in-order core.

Yes and it's very slow as a result. In-order cores without speculative execution can't be fast. Not unless you have no memory and only operate out of something equivalent to L1 cache.

Memory is slow. Insanely slow (compared to the CPU). You can process stupid fast if your entire working set can fit in a 2KB L1 cache, but the second you touch memory you're hosed. You can't hide memory latency without out-of-order execution and/or SMT. You fundamentally need to be parallel to hide latency. CPUs do it with out-of-order and speculative execution. GPUs do it by being stupidly parallel and running something like 32-64 way SMT (huge simplification). Many high-performance CPUs do all of these things.

Instruction level parallelism is simply not optional with the DRAM latency we have.
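
To make that concrete, here's a rough C++ sketch (illustrative only, not from the article): the pointer-chase loop is one long dependency chain, so every DRAM round trip is fully exposed, while the independent loads in the array sum are exactly what an out-of-order core can keep in flight to hide latency.

    #include <cstddef>
    #include <vector>

    struct Node { Node* next; long value; };

    // One long dependency chain: each load's address depends on the previous
    // load, so no amount of out-of-order machinery can overlap the misses.
    long chase(const Node* head) {
        long total = 0;
        for (const Node* n = head; n != nullptr; n = n->next)
            total += n->value;            // serialized on each cache/DRAM access
        return total;
    }

    // Independent loads: a wide OoO core (or SMT, or a GPU's many threads)
    // can have many of these misses outstanding at once.
    long sum(const std::vector<long>& xs) {
        long total = 0;
        for (std::size_t i = 0; i < xs.size(); ++i)
            total += xs[i];
        return total;
    }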


The Cortex-A53 may be slow, but it's fast enough for very many tasks. Once you design your data structures to fit the L1/L2 caches, it actually is pretty damn fast. The best part of cache-aware data structure design is that it also makes code run faster on out-of-order CPUs. The A53 is of course slow if you use modern layer-upon-layer-ware as your architecture.
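
As a rough illustration of what cache-aware design can mean (hypothetical ParticleAoS/ParticlesSoA types, just a sketch): a structure-of-arrays layout keeps the one hot field contiguous, so every byte fetched is useful, which helps an in-order A53 and an out-of-order core alike.

    #include <vector>

    // Array-of-structs: summing masses drags the whole 48-byte struct
    // through the cache even though only one field is read.
    struct ParticleAoS { double pos[3]; double vel[2]; double mass; };

    double total_mass_aos(const std::vector<ParticleAoS>& ps) {
        double m = 0;
        for (const auto& p : ps) m += p.mass;
        return m;
    }

    // Struct-of-arrays: the masses are contiguous, so the loop streams
    // through memory and prefetches/vectorizes well.
    struct ParticlesSoA {
        std::vector<double> pos_x, pos_y, pos_z, vel_x, vel_y, mass;
    };

    double total_mass_soa(const ParticlesSoA& ps) {
        double m = 0;
        for (double x : ps.mass) m += x;
        return m;
    }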

But I was really just trying to point out that in-order CPUs are still around; they did not disappear with the in-order Atom.


FYI, the whitelist is enforced differently with MVNOs. I was able to prove a UK Xperia 5m2 was compatible and get it allowed on a prepaid MVNO.


Easier to just avoid AT&T's network. I used to buy import Xperia models for years, and one day their system started to sweep for IMEIs not in their db. It's not a hassle I miss dealing with.


The exchange could be limited to only existing, paid plans. New plans would require a device compatible with the new specification. I don't think it's too hard since American phone carriers were able to offer free LTE devices to users with activated 3g devices.


Correct. You need an existing plan, and clearly the carrier can see what handset you are/were using. It's also the right thing to do because they are imposing a financial burden on people. People with older handsets are (I assume) likely the ones that can't afford newer ones.


I think that's a very naïve way of looking at game development. There are many reasons why games are exploitable besides lack of reasonable dev effort.

- Almost all games are going to use a licensed or shared game engine. That means the software architecture is already known to skilled cheat developers with reverse engineering skills.

- Obfuscating the game will only go so far, as demonstrated by the mixed success of Denuvo DRM.

- The game will not be the most privileged process on the machine, while cheaters are glad to allow root/kernel access to cheats. More advanced cheaters can use PCIe devices to read game memory, defeating that mitigation.

- TPMs cannot be trusted to secure games, as they are exploitable.

- Implementing any of these mitigations will break the game on certain devices, leading to user frustration, reputation damage, and lost revenue base.

- And most damning, AI-enabled cheats no longer need any internal access at all. They can simply monitor display output and inject user input to automate actions like perfect aim and perfect movement.


A couple of thoughts, but I largely agree with you.

> Obfuscating the game will only go so far, as demonstrated by the mixed success of Denuvo DRM.

Denuvo is for the most part DRM, rather than anticheat. Its goal is to stop people pirating the game during the launch window.

> The game will not be the most privileged process on the machine, while cheaters are glad to allow root/kernel access to cheats.

This ship has sailed. Modern Anticheat platforms are kernel level.

> TPMs cannot be trusted to secure games, as they are exploitable.

Disagree here - for the most part (XIMs being the notable exception) cheating is not a problem on console platforms.

> AI enabled cheats no longer need any internal access at all. They can simply monitor display output and automate user input to automate certain actions like perfect aim and perfect movement.

I don't think these are rampant, or even widespread yet. People joyfully claim that because cheats can be installed in hardware devices there's no point in anticheat, but the reality is that the barrier to entry of these hyper-advanced cheats _right now_ means that the mitigations currently in place are necessary and (somewhat) sufficient.


It's not AI enabled cheats that are the issue, it's DMA through things like PCIe devices disguised as regular hardware. Sophisticated cheats no longer run on the same computer as you're playing on. Google "pcie dma cheat" for a fun rabbit hole.


Right, but the barrier to entry for those cheats is huge - the sp605 board is $700, for example. There are cheaper ones, but you're not going to have rampant cheating across games when you add hundreds of dollars in hardware to the requirements.

Anticheats work in layers and are a game of cat and mouse. They can detect these things sometimes, and will ban them (and do hardware bans). The cheaters will rotate and move on, and the cycle continues. The goal of an effective anticheat isn't to stop cheating, it's to be enough of a burden that your game isn't ruined by cheaters, and not enough of a target to be fun for the cheat writers.


If you look on popular cheat forums, you'll find a newbie guide that links to recommended hardware, typically priced around $250 from memory, certainly not $700.

Also, spending hundreds on hardware is standard for anyone playing competitive games. For example, Escape from Tarkov's "unheard edition" costs $250 for just a single game, and people still buy it. When you factor in the cost of gaming mice, hall-effect sensor keyboards, 480Hz displays, and high-end systems, the total investment adds up quickly for improvements that will never match the capabilities of a cheat. That's also how a lot of them like to justify their cheating: it's simply the most cost-effective way to dominate in a game, especially if your livelihood depends on it.

I don't disagree with the second half of your statement.


> This ship has sailed. Modern Anticheat platforms are kernel level.

so you use a kernel level anti-anti-cheat


Yes, but we don't know when the next technology shift will happen. Amazon might be able to abuse their position for decades if a disruption doesn't come.

As for e-commerce, it can have a larger inventory than physical retail. You're not going to find many solar charge controllers or mechanical keyboard parts at Walmart, but Amazon will have tons of options deliverable within 48 hours. Few sites can have comparable shipping cost/speed and you have to research each one, whereas Amazon enjoys the position of being the default.

A decade ago, I helped a small Amazon seller with his inventory, and it was eye opening to see all the fees and risks compared to eBay. But he couldn't sell on eBay without losing a massive portion of his customer base, despite their better shopping/buying UX in my experience.


Isn't that basically the point of WinRT and Windows 10 S Mode? The problem is getting developers to adopt the new more secure APIs.


I don't think hyperthreading was the bulk of the attack surface. It definitely presented opportunities for processes to get out of bounds, but I think preemptive scheduling is the bulk of the issue. That genie is not going back in the bottle; it's another way to significantly improve processor performance for the same number of instructions.


I think the real problem is cache sharing, and hyperthreading kind of depends on it, so it was only ever secure to run two threads from the same security domain in the same core.


Newbie question: if the cores share an L3 cache, does that factor into the branch prediction vulnerabilities? Or does the data affected by the vulnerability stay in caches closer to the individual core? I assume so, otherwise all cores would be impacted, but I'm unclear where it actually sits.


It's interesting to see that modern processor optimization still revolves around balancing hardware for specific tasks. In this case, the vector scheduler has been separated from the integer scheduler, and the integer pipeline has been made much wider. I'm sure it made sense for this revision, but I wonder if things will change in a few generations and the pendulum will swing back to simplifying and integrating more parts of the arithmetic scheduler(s) and ALUs.

It's also interesting to see that FPGA integration hasn't gone far, and good vector performance is still important (if less important than integer). I wonder what percentage of consumer and professional workloads make significant use of vector operations, and how much GPU and FPGA offload would alleviate the need for good vector performance. I only know of vector operations in the context of multimedia processing, which is also suited for GPU acceleration.


> good vector performance is still important (if less important than integer)

This is in part (major part IMHO) because few languages support vector operations as first class operators. We are still trapped in the tyranny that assumes a C abstract machine.

And so because so few languages support vectors, the instruction mix doesn't emphasize it, therefore there's less incentive to work on new language paradigms, and we remain trapped in a suboptimal loop.

I’m not claiming there are any villains here, we’re just stuck in a hill-climbing failure.


It’s not obvious that that’s what’s happened here. Eg vector scheduling is separated but there are more units for actually doing certain vector operations. It may be that lots of vector workloads are more limited by memory bandwidth than ILP so adding another port to the scheduler mightn’t add much. Being able to run other parts of the cpu faster when vectorised instructions aren’t being used could be worth a lot.


That matches with recent material I've read on vectorized workloads: memory bandwidth can become the limiting factor.


Always nice to see people rediscovering the roofline model.


But isn’t that why we have things like CUDA? Who exactly is “we” here, people who only have access to CPU’s? :)


I’m not saying that you cannot write vector code, but that it’s typically a special case. CUDA APIs and annotations are bolted onto existing languages rather than reflecting languages with vector operations as natural first class operations.

C or Java have no concept of `a + b` being a vector operation the way a language like, say, APL does. You can come closer in C++, but in the end the memory model of C and C++ hobbles you. FORTRAN is better in this regard.
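
For a concrete (if limited) example of how close standard C++ gets: std::valarray does give you whole-array `a + b`, while plain C leaves you writing the loop yourself; whether either actually turns into SIMD is entirely up to the compiler and library. Just a sketch:

    #include <valarray>

    // C-style: `a + b` over arrays has to be an explicit loop.
    void add_c_style(const float* a, const float* b, float* out, int n) {
        for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];  // may auto-vectorize
    }

    // std::valarray is about as close as the C++ standard library gets to
    // APL-style element-wise expressions.
    std::valarray<float> add_valarray(const std::valarray<float>& a,
                                      const std::valarray<float>& b) {
        return a + b;  // whole-array addition
    }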


I see two options from this perspective.

It is always possible to inline assembler in C, and present vector operators as functions in a library.
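
A minimal sketch of that "vector operators as library functions" idea, using SSE intrinsics rather than inline asm (the function name and the 4-wide SSE width are just for illustration): callers see a plain C-style dot product, and the SIMD width stays an implementation detail.

    #include <immintrin.h>  // x86 SSE intrinsics; ARM would need a NEON variant

    float vec_dot_f32(const float* a, const float* b, int n) {
        __m128 acc = _mm_setzero_ps();
        int i = 0;
        for (; i + 4 <= n; i += 4)                      // 4 floats per iteration
            acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                             _mm_loadu_ps(b + i)));
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];
        for (; i < n; ++i) total += a[i] * b[i];        // scalar tail
        return total;
    }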

Otherwise, R does perceive vectors, so another language that performs well might be a better choice. Julia comes to mind, but I have little familiarity with it.

With Java, calling native code through JNI would be an (ugly) option.


Makes sense. I guess that’s why some python libs use it under the hood


What about Rust?


When the data is generated on the CPU, shoveling it to the GPU to do possibly a single vector operation (or a few) and then shoveling it back to the CPU to continue is most likely going to be more expensive than the time saved.

And CUDA is Nvidia specific.


Doesn’t CUDA also let you execute on the CPU? I wonder how efficiently.


No - a CUDA program consists of parts that run on the CPU as well as on the GPU, but the CPU (aka host) code is just orchestrating the process - allocating memory, copying data to/from the GPU, and queuing CUDA kernels to run on the GPU. All the work (i.e. running kernels) is done on the GPU.

There are other libraries (e.g. OpenMP, Intel's oneAPI) and languages (e.g. SYCL) that do let the same code be run on either CPU or GPU.


When you use a GPU, you are using a different processor with a different ISA, running its own barebones OS, with which you communicate mostly by pushing large blocks of memory through the PCIe bus. It’s a very different feel from, say, adding AVX512 instructions to your program flow.


The CPU vector performance is important for throughput-oriented processing of data e.g. databases. A powerful vector implementation gives you most of the benefits of an FPGA for a tiny fraction of the effort but has fewer limitations than a GPU. This hits a price-performance sweet spot for a lot of workloads and the CPU companies have been increasingly making this a first-class "every day" feature of their processors.


AMD tried that with HSA in the past; it doesn't really work. Unless your CPU can magically offload vector processing to the GPU or another sub-processor, you are still reliant on new code to get this working, which means you break backward compatibility with previously compiled code.

The best case scenario here is if you can have the compiler do all the heavy lifting but more realistically you’ll end up having to make developers switch to a whole new programming paradigm.


I understand that you can't convince developers to rewrite/recompile their applications for a processor that breaks compatibility. I'm wondering how many existing applications would be negatively impacted by cutting down vector throughput. With some searching, I see that some applications make mild use of it, like Firefox. However, there are applications that would be negatively affected, such as noise suppression in Microsoft Teams, and crypto acceleration in libssl and the Linux kernel. Acceleration of crypto functions seems essential enough to warrant not touching vector throughput, so it seems vector operations are here to stay in CPUs.


Modern hash table implementations use vector instructions for lookups:

- Folly: https://github.com/facebook/folly/blob/main/folly/container/...

- Abseil: https://abseil.io/about/design/swisstables
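
For anyone curious, a heavily simplified sketch of the SwissTable/F14 trick (not the real implementation): one 16-byte SSE2 compare checks a whole group of 1-byte hash tags at once and yields a bitmask of candidate slots, so full key comparisons only happen on the few slots whose bits are set.

    #include <emmintrin.h>  // SSE2
    #include <cstdint>

    // ctrl points at 16 control bytes (one per slot); tag is derived from the hash.
    uint32_t match_group(const uint8_t* ctrl, uint8_t tag) {
        __m128i group  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(ctrl));
        __m128i needle = _mm_set1_epi8(static_cast<char>(tag));
        __m128i eq     = _mm_cmpeq_epi8(group, needle);
        return static_cast<uint32_t>(_mm_movemask_epi8(eq));  // bit i set => slot i matches
    }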


Sure; but it’s hard to do and very few programs get optimised to this point. Before reaching for vector instructions, I’ll:

- Benchmark, and verify that the code is hot.

- Rewrite from Python, Ruby, JS into a systems language (if necessary). Honorary mention for C# / Go / Java, which are often fast enough.

- Change to better data structures. Bad data structure choices are still so common.

- Reduce heap allocations. They’re more expensive than you think, especially when you take into account the effect on the cpu cache

Do those things well, and you can often get 3 or more orders of magnitude improved performance. At that point, is it worth reaching for SIMD intrinsics? Maybe. But I just haven’t written many programs where fast code written in a fast language (c, rust, etc) still wasn’t fast enough.

I think it would be different if languages like Rust had a high-level wrapper around SIMD that gave you similar performance to hand-written SIMD. But right now, SIMD is horrible to use and debug. And you usually need to write it per-architecture. Even Intel and AMD need different code paths because Intel has dropped AVX-512 on its consumer parts.
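
To illustrate the per-architecture pain (a toy element-wise max, not anyone's production code, with the usual preprocessor split plus a scalar fallback):

    #include <cstddef>
    #if defined(__AVX2__)
      #include <immintrin.h>
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>
    #endif

    void max_f32(const float* a, const float* b, float* out, std::size_t n) {
        std::size_t i = 0;
    #if defined(__AVX2__)
        for (; i + 8 <= n; i += 8)                       // 8 floats per AVX register
            _mm256_storeu_ps(out + i, _mm256_max_ps(_mm256_loadu_ps(a + i),
                                                    _mm256_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
        for (; i + 4 <= n; i += 4)                       // 4 floats per NEON register
            vst1q_f32(out + i, vmaxq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #endif
        for (; i < n; ++i) out[i] = a[i] > b[i] ? a[i] : b[i];  // scalar tail/fallback
    }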

Outside generic tools like Unicode validation, JSON parsing and video decoding, I doubt modern SIMD gets much use. LLVM does what it can but ….


Indeed, people really fixate on "slow languages", but for all but the most demanding of applications, the right algorithm and data structures make the lion's share of the difference.


Reaching for SIMD intrinsics or an abstraction has been historically quite painful in C and C++. But cross-platform SIMD abstractions in C#, Swift and Mojo are changing the picture. You can write a vectorized algorithm in C# and practically not lose performance versus hand-intrinsified C, and CoreLib heavily relies on that.


Newer SoCs come with co-processors such as NPUs so it’s just a question of how long it would take for those workloads to move there.

And this would highly depend on how ubiquitous they’ll become and how standardized the APIs will be so you won’t have to target IHV specific hardware through their own libraries all the time.

Basically we need a DirectX equivalent for general purpose accelerated compute.


It’s a lot more work to push data to a GPU or NPU than to just do a couple of vector ops. Crypto is important enough that many architectures have hardware accelerators just for that.


For servers no, but we’re talking about endpoints here. Also this isn’t only about reducing the existing vector bandwidth but also about not increasing it outside of dedicated co-processors.


I think the answer here is dedicated cores of different types on the same die.

Some cores will be high-performance, OoO CPU cores.

Now you make another core with the same ISA, but built for a different workload. It should be in-order. It should have a narrow ALU with fairly basic branch prediction. Most of the core will be occupied with two 1024-bit SIMD units and an 8-16x SMT implementation to hide the latency of the threads.

If your CPU and/or OS detects that a thread is packed with SIMD instructions, it will move the thread over to the wide, slow core with latency hiding. Normal threads with low SIMD instruction counts will be put through the high-performance CPU core.


Different vector widths for different cores isn't currently feasible, even with SVE. So all cores would need to support 1024-bit SIMD.

I think it's reasonable for the non-SIMD focused cores to do so via splitting into multiple micro-ops or double/quadruple/whatever pumping.

I do think that would be an interesting design to experiment with.


I actually think the CPU and GPU meeting at the idea of SIMT would be very apropos. AVX-512/AVX10 has mask registers which work just like CUDA lanes in the sense of allowing lockstep iteration while masking off lanes where it “doesn’t happen” to preserve the illusion of thread individuality. With a mask register, an AVX lane is now a CUDA thread.
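
A tiny sketch of what that looks like with AVX-512 intrinsics (assumes AVX-512F; the function is made up for illustration): the mask register plays the role of the warp's active-thread mask, so "inactive" lanes simply keep their old values.

    #include <immintrin.h>  // AVX-512F

    // Lanes where x > 0 compute x + x; all other lanes pass x through
    // unchanged -- the same effect as masking off divergent CUDA threads.
    __m512 double_positive_lanes(__m512 x) {
        __mmask16 active = _mm512_cmp_ps_mask(x, _mm512_setzero_ps(), _CMP_GT_OQ);
        return _mm512_mask_add_ps(x, active, x, x);  // src = x for inactive lanes
    }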

Obviously there are compromises in terms of bandwidth but it’s also a lot easier to mix into a broader program if you don’t have to send data across the bus, which also gives it other potential use-cases.

But, if you take the CUDA lane idea one step further and add Independent Thread Scheduling, you can also generalize the idea of these lanes having their own “independent” instruction pointer and flow, which means you’re free to reorder and speculate across the whole 1024b window, independently of your warp/execution width.

The optimization problem you solve is now to move all instruction pointers until they hit a threadfence, with the optimized/lowest-total-cost execution. And technically you may not know where that fence is specifically going to be! Things like self-modifying code etc. are another headache not allowed in GPGPU either - there certainly will be some idioms that don’t translate well, but I think that stuff is at least thankfully rare in AVX code.


This is what is happening now with NPUs and other co-processors. It's just not fully OS-managed/directed yet, but Microsoft is most likely working on that part at least.

The key part is that now there are far more use cases than there were in the early dozer days and that the current main CPU design does not compromise on vector performance like the original AMD design did (outside of extreme cases of very wide vector instructions).

And they are also targeting new use cases such as edge compute AI rather than trying to push the industry to move traditional applications towards GPU compute with HSA.


I've had thoughts along the same lines, but this would require big changes in kernel schedulers, ELF to provide the information, and probably other things.


+1: Heterogeneous/non-uniform core configurations always require a lot of very complex adjustment to the kernel schedulers and core-binding policies. Even now, after almost a decade of big.LITTLE (from ARM) configurations and/or chiplet designs (from AMD), the (Linux) kernel scheduling still requires a lot of tuning for things like games etc... Adding cores with very different performance characteristics would probably require the thread scheduling to be delegated to the CPU itself, with only hints from the kernel scheduler.


There are a couple methods that could be used.

Static analysis would probably work in this case because the in-order core would be very GPU-like while the other core would not.

In cases where performance characteristics are closer, the OS could switch cores, monitor the runtimes, and add metadata about which core worked best (potentially even about which core worked best at which times).


Persuading people to write their C++ as a graph for heterogeneous execution hasn't gone well. The machinery works though, and it's the right thing for heterogeneous compute, so should see adoption from XLA / pytorch etc.


As CPU cores get larger and larger it makes sense to always keep looking for opportunities to decouple things. AMD went with separate schedulers in the Athlon three architectural overhauls ago and hasn't reversed their decision.


> It's interesting to see that modern processor optimization still revolves around balancing hardware for specific tasks

Asking sincerely: what’s specifically so interesting about that? That is what I would naively expect.


It's also important to note that in modern hardware the processor core proper is just one piece in a very large system.

Hardware designers are adding a lot of speciality hardware, they're just not putting it into the core, which also makes a lot of sense.

https://www.researchgate.net/figure/Architectural-specializa...


I'm very interested to see independent testing of cores without SMT/hyperthreading. Of course it's one less function for the hardware and thread scheduler to worry about. But hyperthreading was a useful way to share resources between multiple threads that had light-to-intermediate workloads. Synthetic benchmarks might show an improvement, but I'm interested to see how everyday workloads, like web browsing while streaming a video, will react.


I was surprised that disabling SMT improved the Geekbench 6 multi-threaded results by a few percent on a Zen 3 (5900X) CPU.

While there are also other tasks where SMT does not bring advantages, for the compilation of a big software project SMT does bring an obvious performance improvement, of about 20% for the same Zen 3 CPU.

In any case, Intel has said that they have designed 2 versions of the Lion Cove core, one without SMT for laptop/desktop hybrid CPUs and one with SMT for server CPUs with P cores (i.e. for the successor of Granite Rapids, which will be launched later this year, using P-cores similar to those of Meteor Lake).


Probably because the benchmark is not using all cores so the cores hit the cache more often.


Since side-channel attacks became a common thing, there is hardly a reason to keep hyperthreading around.

It was a product of its time, a way to get cheap multi-cores when getting real cores was too expensive for regular consumer products.

Besides the security issues, for high performance workloads they have always been an issue, stealing resources across shared CPU units.


> there is hardly a reason to keep hyperthreading around.

Performance is still a reason. Anecdote: I have a pet project that involves searching for chess puzzles, and hyperthreading improves throughput 22%. Not massive, but definitely not nothing.


You mean 4 cores 8 threads give 22% more throughput than 8 cores 8 threads or 4 cores 4 threads?


Remember core to core coordination takes longer than between threads of the same core.


4c/8t gives more throughput than 4c/4t.


There are definitely workloads where turning off SMT improves performance.

SMT is a crutch. If your frontend is advanced enough to take advantage of the majority of your execution ports, SMT adds no value. SMT only adds value when your frontend can't use your execution ports, but at that point, maybe you're better off with two more simple cores anyway.

With Intel having small e-cores, it starts to become cheaper to add a couple e-cores that guarantee improvement than to make the p-core larger.


My experience with high performance computing is that the shared execution units and smaller caches are worse than dedicated cores.


As always, the answer is “it depends”. If you are getting too many cache misses and are memory bound, adding more threads will not help you much. If you have idling processor backends, with FP, integer, or memory units sitting there doing nothing, adding more threads might extract more performance from the part.


For what it's worth, for security reasons, OpenBSD disables hyperthreading by default.


Generally HT/SMT has never been favored for high utilization needs or low wattage needs.

On the high utilization end, stuff like offline rendering or even some realtime games, would have significant performance degradation when HT/SMT are enabled. It was incredibly noticeable when I worked in film.

And on the low wattage end, it ends up causing more overhead versus just dumping the jobs on an E core.


> And on the low wattage end, it ends up causing more overhead versus just dumping the jobs on an E core.

For most of HT's existence there weren't any E cores, which conflicts with your "never" in the first sentence.


It doesn’t because a lot of low wattage silicon doesn’t support HT/SMT anyway.

The difference is that now low wattage doesn’t have to mean low performance, and getting back that performance is better suited to E cores than introducing HT.


> It doesn’t

Saying "no" doesn't magically remove your contradiction. E cores didn't exist in laptop/PC/server CPUs before 2022 and using HT was a decent way to increase capacity to handle many (e.g. IO) threads without expensive context switches. I'm not saying E cores are a bad solution, but somehow you're trying to erase historical context of HT (or more likely just sloppy writing which you don't want to admit).


I’ve explained what I meant. You’ve interjected your own interpretation of my comment and then gotten huffy about it.

We could politely discuss it or you can continue being rude by making accusations of sloppy writing and denials.


No, you haven't explained the contradiction, you just talk over it. Before E cores were a thing, HT was a decent approach to cheaply support more low utilization threads.


Backend-bound workloads that amount to hours of endless multiplication are not that common. For workloads that are just grab-bags of hundreds of unrelated tasks on a machine, which describes the entire "cloud" thing and most internal crud at every company, HT significantly increases the apparent capacity of the machine.


The need for hyperthreading has diminished with increasing core counts and shrinking power headroom. You can just run those tasks on E cores now and save energy.


I know somebody on the project. The cancellation makes sense, they were years from release and every new VP pivoted the project and lost progress. If they had committed to their original project (a bus) or the first revision (a very high-end car) they could have released on a timely schedule. But they're far too late to the game.


> But they're far too late to the game.

Not so sure about that. You need a battery and four electric motors driving the wheels.

The idea is much simpler than a regular combustion-engine car. Fewer parts that wear out.

The idea is actually so simple that all the manufacturers compete on putting as much nonsense into cars as possible, instead of making an easily replaceable battery and a car which would last 50 years and accelerate like a Ferrari.


If it's so simple why don't we see regional manufacturers popping everywhere?


We do. Rivian (Illinois), Fisker (Los Angeles), VinFast (Vietnam), 20 Chinese brands, Polestar (China), Lucid (Saudi Arabia), Canoo, Rimac (Balkans).

When was the last time a new car company was started, pre-EV?


Saturn.

https://en.wikipedia.org/wiki/Saturn_Corporation

And it wasn't entirely independent. Before that it was DeLorean? But he was kind of set up in a trap that put him in prison.


During the heyday, we saw new companies come up with new cars all the time!!! And then they slowly went belly up. Part of me thinks that the endless number of regulations prohibited new car companies from entering.

You can import a car from China in a crate that fits on the back of a pickup. But it won’t be legal on roads until it gets things like a DOT-approved windshield.


Apple-bus (iBus?). Now that sounds glorious.


Now demanding 30% of revenue from the businesses they take you to!


I think Apple's power has always been to make unpopular things suddenly cool. They could have aimed a bit lower and made some kind of urban transport à la e-bikes or Segways, only more Apple. It's a market niche that was available and close to their strengths.


The original VW Bus reimagined in Apple's design language would totally make me consider a bus.


that's kind of what we're getting soon: https://www.vw.com/en/models/id-buzz.html

It's no longer a concept.


Soon? I thought it was for sale. I'm pretty sure I saw a model _with_ a price in a Dubai showroom.


There are many on the roads in Norway. They've sold 5k of the cargo version so far here.


There are a few on the road in Switzerland. I see one once a week or so.


This is a bit sad though; a car is something more palpable than AI, and I prefer Apple's hardware to its software. I was curious about what they would bring in this space. I can't imagine anything interesting they could bring in the AI space.


> original project (a bus)

This is actually insane to me. Like bonkers, even. The MBA types are surely the source of that idea. From a brand perspective, the only car that ever truly made sense for Apple to make was something at least resembling a supercar. They could have made do with a Tesla Model S kind of car perhaps, but I'm shocked that a brand-conscious company like Apple thought a bus was the best bet as an initial product.

First impressions are vitally important. In my opinion, a car brand can go from making high performance cars to more "practical" vehicles once they've established their brand, but not the other way around. Slapping an Apple badge on a Corolla isn't going to work. Steve Jobs said it best, paraphrasing "We want to build computers that customers would want to lick.". If Apple wants to be a legitimate car company that enthusiasts like, they'd have to build a car those people would lick. Not a bus...


For Apple to make a bus would have been extremely poetic, given that Steve Jobs sold his VW Bus to get the funds to start Apple in the first place: https://finance.yahoo.com/news/steve-jobs-sold-volkswagen-bu...


Everything Apple makes is ultimately still meant to be practical. I feel like the Apple car would be more like "Corolla, but very high end, and 3x the price" rather than a supercar.


Why a supercar? The Apple Watch (except the v1 "Edition") was never going to compete with luxury watches. AirPods Max are in the higher end of consumer headphones, but a downright bargain compared to "luxury" headphones. Apple doing a lux-lite iteration of a common consumer good makes way more sense.


I agree with your thoughts about the bus, but I started thinking about how many car brands started out selling practical cars, and now have, if not supercars, very high end sports cars - it's a lot of them. It's also kind of how Apple has done stuff in the past. They've marketed their "lesser" products by also making sexy, "pro" products. They could have easily done the same here. Release a practical, functional, Apple product, stuffed with Apple's attention to detail, followed up with the supercar. The keynote would look just like any Macbook lineup update, with the $20k model saved for the end.

But a bus? Yeah, that's just weird.


My long held feeling: A fleet of autonomous busses traveling predetermined inter/intra-city routes. Think, greyhound replacement. They were never going to be operated by humans.


A bus achieves, with some morality, the recurring profit other manufacturers seek via in-car purchases. GM, Rivian, and Tesla don't want CarPlay for that reason.

A bus network would provide recurring revenue for the actual thing a vehicle is for, instead of DLC headlight patterns.


Apple’s take on the VW Bus would be very on-brand imho


That's not a bus though.

