Samsung Confirms AMD RDNA GPU in Next Exynos Flagship (anandtech.com)
182 points by ckastner 4 days ago | 124 comments

Seems like a great starting point to get a GPU driver in the kernel and be off to the races with an ARM laptop for Linux. Maybe vendors like System76 are watching and waiting for opportunities to test the market with a product like this.

Oh, Microsoft will also be interested in this one (especially since Qualcomm is... well, a little bit like Intel).

Intel has a super good open source reputation, AFAIK.

I'm obviously biased from dealing with their crap drivers, but Intel's reputation always felt to me to be based entirely on "they open-sourced and upstreamed the code, so we won't talk about how bad it is".

They don't even provide drivers for older devices. Really, can't spare 30MB of storage space to support a perfectly fine product? Thank God for third party websites.

I have a Centrino 6235 Wifi N/Bluetooth adapter, it works just fine except it needs Intel drivers for the Bluetooth part. Their answer seems to be "just upgrade it, lol", suggesting some newer Wifi AC adapters. Higher speeds, they say.

Well, first of all, I won't get anywhere near those speeds for various reasons, and second, why do I have to throw away a perfectly functioning piece of hardware? Ridiculous.

>> why do I have to throw away a perfectly functioning piece of hardware?

Lol. A hardware manufacturer doing things that result in consumers buying more hardware? That's like a software company discontinuing sales of an old OS in order to sell its latest abomination. Intel and MS deserve each other. Toss Apple on that pile too, imho. Synergy between AMD and Linux/FOSS is a good thing, consumer-friendly behavior that should be encouraged.

Ah, good point. Though this is an adapter that Intel sells to OEMs, in this case HP... And HP does not let you install just any card, it needs to be one that's on their whitelist (embedded in the BIOS)... yeah, all of these companies are pretty bad.

I love BIOS mods. Replaced a crappy N chip in my Lenovo Yoga with an AC chip.

Thankfully Linux never removes support for old hardware, oh wait.

And they released the documentation for the hardware so you can feel free to join in and make the drivers better any time you like.

Which is about all we can ask of companies doing open source work. And it's still vastly more than most are doing.

They had a series of blunders in the last year, but the drivers (GPU, Ethernet) seem to have stabilized again.

I've been burned by them, so I experienced the blunders first hand.

Yeah, forgot about that (obviously I meant how innovative they are in the current market, or the lack thereof).

And a bad reputation among game developers regarding their GPUs quality.

'Quality' referring to things like stability? Or performance? If the latter, I don't think it's fair to compare intel GPUs with other manufacturers'. Until very recently, intel didn't really compete with AMD/NVIDIA. And if I had to guess, their discrete GPU probably still doesn't have leading performance.

Everything that is relevant for game developers.

> GPU driver in the kernel and be off to the races with an ARM laptop for Linux.

Not sure I am reading this correctly.

AMD already has RDNA GPU drivers in the Linux kernel, so it doesn't need Samsung to kick-start this.

The partnership between Samsung ARM and the AMD RDNA GPU SoC only covers phones and tablets, or any market that AMD silicon does not currently operate in.

> AMD already has RDNA GPU drivers in the Linux kernel, so it doesn't need Samsung to kick-start this.

I think that was the GP's point, i.e. Samsung wouldn't need to invest in writing new drivers, but should be able to make relatively easy adaptations to existing ones to get things off the ground.

Back when AMD was jumping into the Arm server fray, I started fantasizing about an AMD Arm APU that was targeted at low power, embedded and mobile use. I say fantasy because at the time, aside from computer geeks, who would care about an Arm APU? Now Apple is doing just that.

AMD has an interesting portfolio of CPUs, GPUs, and now FPGAs. So here's another fantasy: an AMD RISC-V APU. AMD hasn't had a totally unique architecture since the Am29000 (29k). So it would be interesting to see them build their own CPU arch using an open instruction set together with their GPU. Start with a nice quad-core that can handle a laptop or a mini desktop a la NUC. Another fantasy would be an AMD Zynq.

I asked this before, but why doesn't Samsung build video game consoles? They could easily integrate this and spin it off as a console device. It has the brand-name appeal, the capability, and the capital.

Or did the door close in 1999? I'm ready for Samsung Gamestation

Consoles don't actually make money (unless you're Nintendo).

The games make money. The online service subscription makes money. Samsung probably can make good hardware, but I don't see them getting into the software / online platform business.

The Switch is nothing as a console, though. It's Nintendo and Nintendo software that make it sell like hotcakes. So anyone can do a Switch (and I would say the Razer controllers that attach to phones are pretty good already), but Nintendo games are what sell the Switch.

Well, the Switch is also the only real choice for portable console that gets full-fledged games.

A smartphone hardly competes with it with its spammy mobile game stores, lack of dedicated controller, and lack of single hardware target that has developers actually making games for it.

Smartphone + controller is so uncompelling that I've literally never seen someone playing that way in the flesh, and I bought a Switch with zero interest in Nintendo games. And since developers can't assume you have a controller, mobile games are stuck in this very superficial built-for-touch limbo that limits what they can be.

You're missing a lot if you think a Pixel + Razer controller competes with the Switch even after removing all Nintendo games. That is to suggest that mobile tap-interface gaming competes with Switch/PS/Xbox games.

Just consider the difference between Skyrim on Switch and Blades on iOS/Android. That's the chasm I'm talking about.

I was talking more about things like xCloud with a Razer controller.

I would also argue that the Switch has Skyrim because of the Switch sales driven by BotW and Odyssey.

Don't get me wrong, I have only my Switch for gaming, but I only mean that I wouldn't have bought it if it wasn't a Nintendo.

I don't think they meant that Switch was a technical marvel, but probably that the cost is fairly high for the hardware you get, so they probably make money on the console itself.

That they can charge what they do hinges on the quality of their games, as you say.

> So anyone can do a Switch

Can anyone do an iPhone? What kind of logic is this? Samsung is good at what they do and Nintendo is very good at what they do. Sega failed with their console, and they had been in the gaming industry for a long time. Sony almost failed with their Cell-CPU console.

Anyone can do a Switch? Let's start with you...

Consoles are a platform play to get attached to people's TVs. But Samsung is often there already with the TV itself.

They could contract studios to make games for the TVs directly.

The 3D thing didn't take off. Companies are constantly trying to find ways to make people ditch their old TV and buy a new one. This could be one.

Cell phones don’t make money either. You make money with the App Store.

Samsung has let Google make the money.

Cell phones totally make money, at least for Samsung. Samsung just released 2020 3Q earnings. Mobile division revenue 30B and profit 4B, >10% margin.

Samsung at its peak, circa 2013, made $9.6B operating profit on $55B revenue. Samsung's gross sales revenue, profit (and margin) in mobile has been steadily declining since.

The Mobile division's 2Q 2020 profit margin was less than 10% ($1.95B OP on $20B revenue). Then Trump's sanctions on Huawei happened, after which Samsung's sales grew by 50% QoQ (3Q), but I don't expect Samsung's luck to last forever; their margin will likely decline to mid-single digits again.

Yeah, Samsung doesn't restrict itself to lucrative industries. So many of its ventures are in markets where marginal cost ≈ marginal revenue, and they're big enough and competent enough to get rich doing so.

The smartphone business is maybe not the best example though, since the brand reputation that having the second biggest name in smartphones confers surely pays dividends across their consumer product lines.

While an app store is more or less required for a phone to sell, those devices still command quite nice margins. There's no third-party game store you could use on a console (apart from Steam Big Picture).

Samsung has an absolutely abysmal app store experience (remember, they do have their own, both on mobile and on their wearables).

Game makers want to make games for the most popular consoles, and gamers want to buy the consoles with the best game selection. So there is a natural convergence on one or two platforms, with Nintendo occupying a special cultural niche. Even with the enormous resources of Samsung it would be difficult to displace Sony or Microsoft.

The Nintendo strategy is quite interesting, because they're basically content being the "second console" if it means being the second console in everybody's homes, and leaving MS and Sony to fight for top billing.

That's true and works well for them but on top of that they're also selling to a different market which the other consoles effectively don't serve - there is much more switch content targeted at/suitable for younger players than the other consoles.

Portability aspect is also unique selling point. Nintendo has the history with Gameboy, 3DS.

That's also a fair point, but they've had successful non-portable consoles before like the Wii.

That's a good strategy, to be fair.

It sort of guarantees you will outsell both of the top consoles while low-key dominating people's actual downtime. I can't hop over to my Xbox for 30 minutes. I need a few hours at least.

Game consoles are about building a functioning content platform. The hardware is only a small part of it. You can see that with Nvidia's half-hearted attempts with Shield.

NVidia Shield did the job.

It got NVidia a contract with Nintendo, who turned the Shield into a Switch. Yeah, Nintendo jazzed it up a lot, but the internals of the Switch and Shield are surprisingly close.

Sure, that is fair. In that sense, Samsung could plausibly build a console and software tools around it as a reference design to encourage a gaming platform company to adopt Exynos.

I dunno, as far as streaming games goes, nVidia is probably one of the better if not the best on the market. Their problem is that literally nobody knows about what it's capable of. Being able to buy a game on steam, and then play it on your TV is AMAZING.

Of course, game publishers are playing hardball because everyone knows you should have to buy a copy of the game for every place you want to play it.

Just a guess but: because it's a horribly difficult market to get into, and Samsung (in my experience) is pretty bad at writing software. Plus they'd be making Google angry (competing with Stadia).

If Sega couldn't manage, I don't know how Samsung would - they've obviously got a lot more money to throw at the problem but I'm not entirely sure there's enough market for a 4th player. Nintendo has the "cheap and fun" market cornered, and Sony and Microsoft own the high end. What development house could Samsung even acquire at this point to get exclusive titles?

I bet they could get a good deal on CD Projekt Red right now

There was a rumor years ago that Samsung was the #1 potential buyer of Microsoft's XBOX division. Obviously that didn't happen. At this point building a brand and a first party game library that can compete in what already looks like a crowded market is probably not worth it. Technical differentiation is hard and as sibling comment said consoles sell at a loss

The failure of the Samsung Saturn probably scared them off :)

If you are asking this with respect to the AMD GPU:

That is because the AMD IP deal with Samsung only covers the phone and tablet market.

The KFC Console launched just a few months ago, running Windows. Once games move to ARM, I think we are going to see a lot more consoles based on Windows. All manufacturers need to do is plonk in some good gaming hardware, add Windows, and they are done.

I think the ship has sailed for launching an entirely new console platform. It's super hard to claw market share away from PlayStation, Xbox, and Nintendo. Not to mention getting game developers and publishers on board. It's a chicken-and-egg problem where users won't come until you have games, and games won't come until you have users. Making a Windows-based console solves the games problem.

KFC here is the actual Kentucky Fried Chicken, not another company with the same initials.

The console has a slot to warm up the chicken.

I could not believe this to be true.


It is true! Everyone assumed it was a hoax, but it's being built by Cooler Master.

I don't see how Windows adds any value to a game console, as they're traditionally understood.

Maybe if you consider it as a Windows gaming PC, yes, but not otherwise.

I wonder if this could end up with a GPU driver in the mainline kernel and a faster PinePhone.

Yeah, the question is: "Is this the first ARM SOC with an open GPU driver?"

Too bad this will never ship in a US-bound product. Samsung is so terrified of Qualcomm's patent library that their US destined products all use Qualcomm SoCs instead of Exynos.

>Samsung is so terrified of Qualcomm's patent library

Samsung reached a deal with Qualcomm and settled all ongoing patent issues in 2018. So I would not be surprised if Exynos comes to the US in 2021 or 2022.

But right now Samsung has shown nothing competitive with Qualcomm's mmWave offering. And interestingly enough, the US is the only market where one carrier has implemented mmWave, while the other two are "looking at it" closely. As far as I know, NO other market currently has plans for mmWave. Which means that if mmWave is mandatory for the US market, you will likely continue to see Qualcomm in Samsung smartphones.

Korea and some other countries operate 5G mmWave.


Japan has also deployed mmWave commercially on Docomo.

I thought it was more to do with Qualcomm's SoCs having integrated radios for the atypical US bands, something they'd have to add on otherwise?

Historically, that was trueish thanks to CDMA in the usa, but it's significantly less true now that the CDMA networks are dying.

For instance, the only band an S20 is missing that's commonly found in the US is 66.


These days it's purely a patent licensing agreement. Samsung doesn't sell Exynos modems (and thus Exynos SoCs) in the US or China (historically markets with lots of CDMA), and Qualcomm doesn't hassle them about the rest of the world.

And they're super locked down, too. Found out the hard way, expecting to flash LineageOS onto my wife's Sprint/US Snapdragon-based S8 like the bootloader-unlocked European Exynos model :(

I told her to buy the Sprint one because it had better coverage in the US band spectrum.

Yeah, I don't understand that. Is it because of the carriers or Qualcomm? I mean, there's plenty of Snapdragon powered phones that have unlocked bootloaders.

you're probably right.

(typed from lineageos on unlocked snapdragon-based sony xz2c)

AFAIK that's more to do with cellular patents than processor patents.

Totally. At the same time, Samsung has made it clear they aren't really interested in producing SoCs without built in modems any more.

That's not totally true... They've sold Chromebooks in US markets with Exynos (though I'm not sure how recently they refreshed that lineup). I'm not sure about the mobile market though.

I think these all used the older ARMv7 Exynos chips, before modems became standard in the SoC. Everything they've shipped with an ARMv8 has an onboard modem.

Maybe Samsung will manage an Apple M1 competitor.

Interestingly, M1 doesn't have that much secret sauce in it - it's extremely wide and benefits immensely from avoiding the x86 tax on the decoder, with a few tricks up its sleeve. Apple have shown it's possible, so - although starting now isn't ideal - if Samsung have the wherewithal to do it they absolutely could.

More difficult, however, would probably be selling it - I would imagine some HPC clusters would love it, for example, but for consumer products Apple can charge through the roof because their customers are used to paying pretty high prices for (sometimes better, sometimes worse) hardware. Apple's vertical integration also means they basically don't have to bother building an acceptable ecosystem around their new chipset - there's basically no documentation or vTune-style performance tools. It's also partly a question of priorities, but AMD still lag behind Intel even after decades in the game.

No need to wait for Apple or Samsung, if you really want to go the ARM route for HPC, Fujitsu will happily sell you the A64FX: https://www.fujitsu.com/global/products/computing/servers/su...

I'm aware.

The documentation is depressingly more detailed than what Apple blesses you with. I always get the feeling that Apple creates for the same reason a pretentious chef cooks: you're merely proof of my greatness.

Agreed, nothing magic, no tricks, just solid engineering. Larger caches, lower-latency caches, wider issue, a larger re-order buffer, lower memory latency (30ns without TLB misses), etc. The impressive part is that all that engineering resulted in a low-power chip that Apple can afford to put into $700 desktops and $1,000 laptops, one that competes on per-core performance (and wins on perf/watt) against Intel and AMD.

Yes, but keep in mind that if you spec those laptops out with 16GB and decent storage, they're suddenly double the price. If the M1 is anything, it almost definitely isn't cheap.

What's interesting is that even when emulating x86 they outperform AMD/Intel. That means the decoder improvement of doing a bunch of decodes in parallel (which x86 struggles with due to being CISC) isn't the whole story. It could be that Apple just translates x86 code into fixed-width instructions to work around this, or it could be other architectural improvements.

When emulating x86, Apple translates the x86 code into ARM code ahead of time, so it later executes at almost native speed.

Except for programs that depend heavily on just-in-time compilation (Java, JavaScript, etc.); for those Apple must fall back to interpreting the x86 instructions, which is much slower.

Also, the outperforming of AMD/Intel has been somewhat exaggerated. The Apple M1 is faster in single-thread than any older Intel or AMD CPU, but it is slower than any new Zen 3 CPU, and it will be slower than the top models of Intel Tiger Lake H and Rocket Lake. In multi-thread, the M1 is easily beaten by many processors.

Over Christmas I upgraded the CPU in one of my computers to a Ryzen 9 5900X, so I could verify that in the single-thread benchmarks I could run, the values matched those published elsewhere for Zen 3, and they also exceeded the highest values published for the Apple M1: by 3-4% in Geekbench 5, and by up to 24% in gmpbench.

I agree that the advantage has been exaggerated, but to be fair, the M1 is a laptop-class processor performing only slightly below a gaming CPU. Your CPU cooler alone is probably close to half the size of a Mac mini, and the CPU alone probably draws more power than an entire Mac mini.

When the M2 comes out with a big heatsink and fan, it is going to be extremely competitive. Although AMD might be on 5nm by then.

The per-core power consumption of an M1 and a (edit:)5990X is actually almost identical.

The TDP of the 5900X is 105W with 12 cores, so let's say 8.75W/core. A 5950X has slightly higher per-core wattage.

The M1 has a maximum power consumption of 15.1W (13.8W + 1.3W), so for the high-powered cores that is 3.45W/core, and a minuscule amount for the low-power cores. A Mac mini tops out at around 20W during 8-core CPU benchmarks, while a 5950X PC will draw around 96W at the wall when idle.

I know a Ryzen desktop CPU won't necessarily draw its entire TDP, but it is not in the same league as a low-powered CPU such as the M1 or the latest Ryzen mobile CPUs in terms of power consumption.
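The per-core arithmetic above can be sanity-checked in a few lines. These are the thread's quoted figures, not measurements, and the split of the 13.8W across 4 performance cores is an assumption about which cores draw that power:

```python
# Per-core power arithmetic from the figures quoted in this thread.
ryzen_5900x_tdp_w = 105.0      # AMD's rated TDP, as quoted above
ryzen_5900x_cores = 12
m1_core_cluster_w = 13.8       # 13.8 W cores + 1.3 W uncore, as quoted
m1_perf_cores = 4              # assumption: the 4 big cores draw the 13.8 W

ryzen_w_per_core = ryzen_5900x_tdp_w / ryzen_5900x_cores
m1_w_per_perf_core = m1_core_cluster_w / m1_perf_cores

print(f"5900X: {ryzen_w_per_core:.2f} W/core")    # 8.75
print(f"M1:    {m1_w_per_perf_core:.2f} W/core")  # 3.45
```

So the ~8.75 and ~3.45 W/core numbers above do follow from the quoted TDPs, under those assumptions.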

Sorry - I meant 5990X. The 5950X is solidly within the "diminishing returns" frequency/voltage range.

The 5990X is at 4.37W/core at its maximum power consumption: 280W / 64 cores. That's at full TDP, which it very rarely touches, and when it does, it's crunching way more numbers than most benchmarks use.

This is completely irrelevant. The multicore TDP of a Ryzen can be as low as 5W per core and as high as 20W per core if you boost it a lot. It's purely a matter of what you set the frequency to.

The M1 core is not that far away. A single core can boost up to 15W or more. The reason laptops catch up to desktops in single-core benchmarks is that their single-core power budget is almost exactly the same, but every time, people act as if there is still a 4x energy-efficiency gain left to be exploited when there isn't.

Idle power is a matter of how integrated your system is. The M1 is highly integrated so it will consume less power just like any other integrated SoC. When people buy desktops they want to get as far away from an integrated system as possible.

Yes, but the per core performance favors the M1.

Not in the benchmarks I've seen.

Intel / AMD have a 4-wide decoder, which can act as a 6-wide decoder when executing out of the uOp cache.

Apple just went with an 8-wide decoder, surprising a bunch of people. There's not much difference between 4-wide and 8-wide, aside from Apple deciding that such a wide single-core unit was worthwhile.

> Intel / AMD have a 4-wide decoder, which can act as a 6-wide decoder when executing out of the uOp cache.

uOps are already decoded by definition

That depends on how many execution units it has to play with, although I guess the bottleneck at that point could end up being the length of a predictable flow.

Not that Apple will tell us how many units it has


It's actually well known how wide the M1 is.

It's still guesswork how many ports it actually has.

You can measure the ROB pretty accurately but the execution units have to be used to be measured.

You can only measure accurately if you actually know what the details look like.

The vast majority of instructions executed by a computer are executed in loops; if you perform the translation once, it's an O(1) overhead in the ideal case.

Intel and AMD both perform decodes in parallel, though not as wide as the M1, and having such a large and complex decoder costs silicon and power. Taking RISC-V as an example, a fixed-width frontend is a project for a student, while a modern x86 decoder probably costs millions to write and verify.

I bet both Intel and AMD aren't exactly in love with x86 at the moment

> I bet both Intel and AMD aren't exactly in love with x86 at the moment

This has been the case for, eh, decades now? To me it's almost a joke how many times Intel has tried to replace ia32 with a modernized replacement and miserably failed to make any headway.

I suppose Intel doesn't want to try competing with the legion of architecturally similar RISC cores and would rather "go big" with ideas like Itanium/i860/i960/iAPX. It's funny to imagine, but maybe one day they'd release a RISC-V implementing processor. Can't imagine them licensing the rights to ARM, and going with Power or MIPS also seems out of character.

> a modern x86 decoder is probably millions to write and verify

I must be missing something, because it looks to me like a pretty simple segmented-sum problem.

A good example of parallel segmented sum is bishop / rook (aka sliding pieces) movement.


This segmented sum / prefix sum / Kogge-Stone trick is taught in carry-lookahead adder classes at the undergrad level. Sure, it's non-obvious that it applies to parallel decoding, but I'd expect this sort of thing to be a student exercise at the master's level.

Is it simple? Variable-length encoding doesn't have to be overly difficult, but x86 is a weird ISA. I'm not entirely familiar with what you cite so I can't really comment, but x86's semi-unbounded prefixes and sheer volume of extensions make things difficult.

A master's project might be to write a decoder, and possibly to verify it (prove it - everything in a CPU has to be formally verified or generated from some other formally verified tool).

As evidence, consider that many disassemblers - which run as software and are therefore much easier to write - still disagree and struggle with x86.

Kogge-Stone proved that ANY associative function can be parallelized with a prefix-sum arrangement. Associative meaning f(f(x, y), z) == f(x, f(y, z)), or more commonly (A+B) + C == A + (B+C), where + is any associative operator.


In particular, this arrangement: https://en.wikipedia.org/wiki/Prefix_sum#/media/File:Hillis-...

Now "add" (or +) is associative. But it also works for *, min, max, and even weird stuff like "can the Bishop move here" or "can the Rook move here". So the goal is to find an associative operator that you can apply byte-by-byte.


Okay, a brief detour. It seems obvious to me that a finite-state machine can decode x86 byte-by-byte. An FSM is the "obvious" sequential algorithm that determines whether a byte is the start of an instruction or the middle of one, as well as which instruction it is by the end.

Remember: we're just trying to make an FSM decode ONE instruction right now. It's pretty obvious how to do that. (Alternatively, imagine a regex that can parse an instruction from the byte stream when given the start of the instruction. All regular expressions have a finite-state machine representation.)

Hillis / Steele proved that finite-state machines are associative operators (!!!), and can therefore be used in parallel prefix-sum arrangements. (http://uenics.evansville.edu/~mr56/ece757/DataParallelAlgori...)

The page number is 1176:

> Since this composition operation is associative, we may compute the automaton state after every character in a string as follows:

> 1. Replace every character in the string with the array representation of its state-to-state function.

> 2. Perform a parallel-prefix operation. The combining function is the composition of arrays as described above. The net effect is that, after this step, every character c of the original string has been replaced by an array representing the state- to-state function for that prefix of the original string that ends at (and includes) c.

> 3. Use the initial automaton state (N in our example) to index into all these arrays. Now every character has been replaced by the state the automaton would have after that character.

In short: computing the "state" of a sequential FSM applied across its inputs can be EASILY performed in parallel through the prefix-sum model.

Any finite state machine can be converted into parallel prefix form through this mechanism.

The "work-efficient" parallel prefix arrangement is O(log(n)) depth and O(n) total elements, which means that parallel decoding to any width (i.e. an 8-way, 16-way, or even 1024-way decoder) is LINEAR in power consumption and O(log(n)) in time.

The layout for work-efficient Parallel Prefix is: https://en.wikipedia.org/wiki/Prefix_sum#/media/File:Prefix_...
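To make the Hillis/Steele construction above concrete, here's a toy sketch in Python: replace each byte with its state-to-state table, compose tables associatively, and run an inclusive scan to get the FSM state after every prefix. The 2-state "instruction boundary" machine and its byte rule are invented for illustration; this is nowhere near a real x86 decoder:

```python
# Toy demo of the Hillis/Steele idea: an FSM run is a prefix scan over
# associative composition of state-to-state tables. NOT a real x86 decoder.

N_STATES = 2  # 0 = at an instruction boundary, 1 = mid-instruction

def table_for_byte(b):
    # Hypothetical rule: bytes >= 0x80 start a 2-byte instruction,
    # everything else is a 1-byte instruction (or the 2nd byte of one).
    if b >= 0x80:
        return (1, 0)   # state 0 -> 1 (long insn starts), state 1 -> 0
    return (0, 0)       # always back to a boundary

def compose(f, g):
    # "f then g" on every state; associative by construction
    return tuple(g[f[s]] for s in range(N_STATES))

def prefix_scan(tables):
    # Hillis-Steele inclusive scan: O(log n) depth if the inner loop
    # were run in parallel (here it's sequential, for clarity)
    tables = list(tables)
    step = 1
    while step < len(tables):
        nxt = list(tables)
        for i in range(step, len(tables)):
            nxt[i] = compose(tables[i - step], tables[i])
        tables = nxt
        step *= 2
    return tables

data = bytes([0x10, 0x90, 0x22, 0x33])
scanned = prefix_scan([table_for_byte(b) for b in data])
states = [t[0] for t in scanned]  # apply the initial state (0) to each prefix
print(states)  # [0, 1, 0, 0] -> boundaries after bytes 0, 2, and 3
```

Because composition is associative but not commutative, the scan must keep the earlier-range table on the "first applied" side, which is exactly what `compose(tables[i - step], tables[i])` does.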

I'll have to take your word for parts of this because I'm not familiar with this proof.

I'm not sure x86 decoding is oh-so-simple, because if you bring in memory, alignment is not guaranteed, so you don't even know where an instruction starts, let alone where it ends. An x86 instruction can theoretically have an unbounded number of prefixes, meaningful only after decoding what follows them - maybe you can do it with an FSM, but an enormous one.

All in all, this doesn't sound like a master's project in the slightest, because remember, I said design and verify, not just build a never-used-again toy.

There have been probably hundreds of millions of dollars spent on this over the years, and they're basically stuck on the current width and multiple pipeline stages.

> An x86 instruction can theoretically have an unbounded number of prefixes

All x86 instructions are strictly bounded to 15 bytes in length; otherwise the processor throws an exception.

I just searched StackOverflow: https://stackoverflow.com/questions/23788236/get-size-of-ass...

Just as I expected: a finite-state machine is used to decode. Now write an FSM compiler (an undergrad-level project) to automatically parallelize the implementation.

The parallelization step is probably master's level, but a very advanced undergrad could probably accomplish it, since all the individual pieces are undergrad projects (Kogge-Stone applied to associative operators, finite-state machine compilers / regular expressions).

The overall process is also documented by Intel in their Opcode map: https://www.intel.com/content/dam/www/public/us/en/documents...

As an FSM, verification is simple. Just generate all x86 instructions (there is a finite number of them, after all) and ensure your FSM properly steps through all of them.

That state machine looks ahead more than one symbol and has a lot of memory even if you consider it one big state.

You have to verify that it decodes invalid instructions as a fault, which means testing the entire search space.

Hypothetically you could use a bounded model checker, but you need to test roughly 10^36 combinations, which is still thousands of years even if you can do a thousand billion billion per second. You need a formal proof of the operation too.

You can formally verify a finite state machine by simply testing all state-transitions.

You don't need an exhaustive check of all 2^(8*15) 15-byte combinations. You just check all state transitions of the state machine.

Or to put it another way: you don't need to test all 2^(8*15) byte combinations. You just need to check all invalid instructions that have ONE invalid byte to prove the properties of the finite-state machine.
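A minimal sketch of that verification style, with an invented 3-state machine standing in for a real decoder: instead of enumerating 15-byte strings, walk every (state, byte) pair exactly once:

```python
# Verify an FSM by covering every state transition, not every input string.
# Toy 3-state machine with a made-up rule; real decoders just have more states.

STATES = range(3)  # 0 = boundary, 1 = expect operand byte, 2 = fault

def transition(state, byte):
    # Hypothetical spec: 0x0F at a boundary needs one operand byte;
    # a second 0x0F mid-instruction is treated as invalid here.
    if state == 2:
        return 2                        # fault is absorbing
    if state == 0:
        return 1 if byte == 0x0F else 0
    return 2 if byte == 0x0F else 0     # state 1: consume the operand

# Exhaustive transition coverage: 3 states x 256 bytes = 768 checks,
# versus 2**120 possible 15-byte strings.
checked = 0
for s in STATES:
    for b in range(256):
        nxt = transition(s, b)
        assert nxt in STATES            # closure: never leaves the state set
        checked += 1
print(checked)  # 768
```

Once every edge of the transition graph is checked against the spec, any property preserved by single transitions (like "invalid byte implies fault") holds for arbitrary byte strings by induction; no 2^120 enumeration needed.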

Now more than ever they won't, because they shuttered their CPU microarchitecture team. Mongoose is dead for good, and they're using ARM's reference cores, like the Cortex-X1 and A78.

Up until now, they haven't even managed to build a competitor to the Snapdragon CPUs.

But let's see what they are up to.

More than the hardware, I want a macOS competitor that doesn't require me to grow a neck beard or have a voice assistant built in.

> macOS competitor that doesn't require me to grow a neck beard or have a voice assistant built in.

What is genuinely so bad about Linux these days? Although I like playing with low-level stuff, I derive zero pleasure from OS-fettling and I just stuck Mint on my laptop - it's been basically perfect apart from a problem with the dual-boot setup which I think is an SSD problem waiting to explode.

Half the time I read these complaints, I either see people who actually already have neck beards in denial, or people who are expecting Linux or Windows to be identical to macOS.

Every time I install Linux, there is always something frustrating. Like the other day, I couldn't watch a NYTimes video in fullscreen mode without Chrome crashing. OK, so I installed Firefox. Fullscreen video goes to my vertical monitor. No way to change this behavior.

Desktop Linux is developed by people who love the CLI. It's not built by people trying to address the problems of common, mundane people off the street.

I'd rather use Windows than Linux as a daily driver. At least the UI will not freak out like in Ubuntu.

Half the time I read people saying "have you tried desktop Linux", I either see people who are tasteless and love being nerdy with a tmux environment, or people who are expecting others to be like themselves.

As Linus Torvalds would plainly put it: "Desktop Linux sucks. It is the worst piece of shit attempt at a desktop OS." :-)

Are you using the Nouveau graphics driver? Chrome reliably crashes my machine with Nouveau (though nothing else seems to). After I install NVidia's driver, it's rock solid.

I get far more UI freakouts in Windows.

I love Linux, and I use it every day, but I sincerely believe it does not have a future on the desktop.

Google Chrome, Linux, and Nvidia are the best-bet trifecta. Fullscreen video works, cool new WebGL pages work well, games work well (Steam has quite a few that work with Proton), it's stable (can stay logged in for months), and Netflix, Amazon Prime, YouTube, etc. just work, fullscreen or in a window.

Most of the glitchy stuff I've seen like you describe is either the Nvidia+Nouveau combination or AMD's GPUs.

Aka ChromeOS has won the Web.

Desktop Linux is developed by people who develop Linux.

Our operating system is an outlet for so many cultures: the privacy-minded, those seeking freedom or free software, people who hack and patch, etc., etc. And your first instinct is to take it away, to make it another Windows/macOS for you, who don't even run Linux.

I feel sorry for the people who tried to help you; it's you who expects others to be like yourself.

Linux gives me the system I want: a functional, understood, minimalist, retro style [1]. And I am not alone [2]. These are not tasteless; that's my home. Get off my lawn!

[1] http://sergeykish.com/side-by-side-no-decorations.png

[2] https://www.reddit.com/r/unixporn/

A pity that so many such "Linux developers" would rather live in macOS and cross-develop for GNU/Linux, instead of sponsoring Linux hardware OEMs.

Then we need to fund the developers working on this. Money makes things happen. Simple as that.

This is the truth, but not the whole truth. Besides money, you need somebody with the right vision at the helm.

Look at the direction modern GNOME is going. They have money from Red Hat and Canonical. BTW, systemd too was developed by salaried employees of Red Hat. Money makes things happen, but not always the nicest things.

And there simply is no helm. Who do you donate to? Not that an individual donation would make a difference, anyway.

There's now, what, 3 forks of RHEL? People are free to do what they want, of course, but this doesn't help Linux at all.

> BTW systemd too was developed by salaried employees of Red Hat.

For all its faults, systemd was a major step forward for a system to "just work".

I disagree. Even with money, there's too much fragmentation in the Linux ecosystem, and while choice is nice, the parts often don't play well with each other. What Linux needs is more big companies taking over and forcing their direction. Ubuntu and Fedora did a lot for desktop Linux, because they forced some controversial decisions upon the community, instead of being stuck in an endless battle of supporting every legacy toolkit and every fork that comes up whenever a controversial decision occurs.

Definitely. I like the idea of open source quality Desktop OS, but not acknowledging issues with it is not helping.

I sold my Hackintosh this year to upgrade. Since I didn't want to deal with setting up a Hackintosh again, I decided to go Linux for my development work after finding WSL2 unsatisfactory.

I shopped around for Linux distributions and finally settled on Pop!_OS. I loved it. No fiddling needed, and the user experience is the closest to macOS I have gotten.

Using RISC-V.

There's never been a better time for Intel to build a phone and match it with open source OS software. If all the hardware vendors are abandoning you, then it's time to build the hardware yourself.

We're talking about a company whose LTE modems were demonstrably inferior, and which then sold that division to Apple.


I'd argue that Apple+Intel wasn't that far behind not-Apple+Qualcomm. And the gap was almost entirely closed from 2016 to 2018: https://www.pcmag.com/news/iphone-xs-crushes-x-in-lte-speeds...

> Between the three 4x4 MIMO phones, you can see that in good signal conditions, the Qualcomm-powered Galaxy Note 9 and Google Pixel 2 still do a bit better than the iPhone XS Max. But as signal gets weaker, the XS Max really competes, showing that it's well tuned.

> Let's zoom in on very weak signal results. At signal levels below -120dBm (where all phones flicker between zero and one bar of reception) the XS Max is competitive with the Qualcomm phones and far superior to the iPhone X, although the Qualcomm phones can eke out a little bit more from very weak signals.

I wouldn't say the gap is closed. It's not as bad as previous Intel modems, but it's not Qualcomm quality. I had a Pixel 2 and went to an iPhone XR (my current phone). In between I also had other Android phones (S9, G7, and Pixel 3) for testing purposes. Compared to the Qualcomm modems, the iPhone XR had slower speeds. Also, if your carrier supports carrier aggregation, the difference in speed is noticeable. I had to do live streams with tethering this summer, and Intel's modems are not as fast as Qualcomm's.

The XR was deliberately weakened in cellular speeds: it had only 2x2 MIMO instead of the 4x4 on the XS. That might explain what you saw.

Let's be honest here: most of it would be outsourced to Chinese manufacturers anyway. It's just the chips they would build.

> Theres never been a better time for Intel to build a phone and match it with open source OS software.

Before they killed off their Atom-for-phones product line might have been a better time.

When they were working with Nokia to merge Moblin and Maemo might have been a better time.

Intel has bigger fish to fry, including getting their act together.
