I have a Centrino 6235 WiFi N/Bluetooth adapter; it works just fine except that it needs Intel drivers for the Bluetooth part. Their answer seems to be "just upgrade it, lol", suggesting some newer WiFi AC adapters. Higher speeds, they say.
Well, first of all, I won't get anywhere near those speeds for various reasons, and second, why do I have to throw away a perfectly functioning piece of hardware? Ridiculous.
Lol. A hardware manufacturer doing things that result in consumers buying more hardware? That's like a software company discontinuing sales of an old OS in order to sell its latest abomination. Intel and MS deserve each other. Toss Apple on that pile too imho. The synergy between AMD and Linux/FOSS is a good thing, consumer-friendly behavior that should be encouraged.
Which is about all we can ask of companies doing open source work. And it's still vastly more than most are doing.
I've been burned by them, so I experienced the blunders first hand.
Not sure I am reading this correctly.
AMD already has RDNA GPU drivers in the Linux kernel, so it doesn't need Samsung to kick-start this.
The Samsung ARM + AMD RDNA GPU SoC partnership only covers phones and tablets, or any market that AMD silicon does not currently operate in.
I think that was the GP's point, ie Samsung wouldn't need to invest in writing new drivers but should rather be able to do relatively easy adaptations to existing ones to get things off the ground.
AMD has an interesting portfolio of processors, GPUs and now FPGAs. So here's another fantasy: an AMD RISC-V APU. AMD hasn't had a totally unique architecture since the Am29000 (29k). So it would be interesting to see them build their own CPU arch using an open instruction set alongside their GPU. Start with a nice quad core that can handle a laptop or mini desktop a la NUC. Another fantasy would be an AMD Zynq.
Or did the door close in 1999? I'm ready for Samsung Gamestation
The games make money. The online service subscription makes money. Samsung probably can make good hardware, but I don't see them getting into the software / online platform business.
A smartphone hardly competes with it, given its spammy mobile game stores, lack of a dedicated controller, and lack of a single hardware target that developers actually make games for.
Smartphone + controller is so uncompelling that I've literally never seen someone playing that way in the flesh, and I bought a Switch with zero interest in Nintendo games. And since developers can't assume you have a controller, mobile games are stuck in this very superficial built-for-touch limbo that limits what they can be.
You're missing a lot if you think a Pixel + Razer controller competes with a Switch even after removing all Nintendo games. That would be to suggest that mobile tap-interface gaming competes with Switch/PS/Xbox games.
Just consider the difference between Skyrim on Switch and Blades on iOS/Android. That's the chasm I'm talking about.
I would also argue that the Switch has Skyrim thanks to the Switch sales driven by BotW and Odyssey.
Don't get me wrong, I have only my Switch for gaming, but I only mean that I wouldn't have bought it if it wasn't a Nintendo.
That they can charge what they do hinges on the quality of their games, as you say.
Can anyone do an iPhone? What kind of logic is this? Samsung is good at what they do and Nintendo is very good at what they do. Sega failed with their console, and they'd been in the gaming industry for a long time. Sony almost failed with their Cell CPU console.
Anyone can do a Switch? Let's start with you...
The 3D thing didn't take off. Companies are constantly trying to find ways to make people ditch their old TV and buy a new one. This could be one.
Samsung has let Google make the money.
The Mobile division's 2Q 2020 operating margin was less than 10% ($1.95B OP on $20B revenue). Then Trump's sanctions on Huawei happened, after which Samsung's sales grew by 50% QoQ (3Q), but we don't expect Samsung's luck to last forever, and their margin will likely decline back to the mid single digits.
The smartphone business is maybe not the best example though, since the brand reputation that having the second biggest name in smartphones confers surely pays dividends across their consumer product lines.
It sorta guarantees you will outsell both of the top consoles while low-key dominating people's actual downtime. I can't hop over to my Xbox for 30 mins. I need a few hours at least.
It got NVidia a contract with Nintendo, who turned the Shield into a Switch. Yeah, Nintendo jazzed it up a lot, but the internals of the Switch and Shield are surprisingly close.
Of course, game publishers are playing hardball because everyone knows you should have to buy a copy of the game for every place you want to play it.
If Sega couldn't manage, I don't know how Samsung would - they've obviously got a lot more money to throw at the problem but I'm not entirely sure there's enough market for a 4th player. Nintendo has the "cheap and fun" market cornered, and Sony and Microsoft own the high end. What development house could Samsung even acquire at this point to get exclusive titles?
That is because the AMD IP deal with Samsung only allows for the phone and tablet market.
I think the ship has sailed for launching an entirely new console platform. It's super hard for someone to claw market share away from PS, Xbox and Nintendo, not to mention getting game developers and publishers on board. It's a chicken-and-egg problem where users won't come until you have games, and games won't come until you have users. Making a Windows-based console solves the games problem.
The console has a slot to warm up the chicken.
I could not believe this to be true.
Maybe if you consider it as a Windows gaming PC, yes, but not otherwise.
Samsung reached a deal with Qualcomm in 2018 and settled all ongoing patent issues, so I would not be surprised if Exynos comes to the US in 2021 or 2022.
But right now nothing Samsung has shown is competitive with Qualcomm's mmWave offering. And interestingly enough, the US is the only market where one carrier has actually implemented mmWave, with the other two "looking at it" closely. As far as I know, NO other market currently has plans for mmWave. Which means that if mmWave is mandatory for the US market, you will likely continue to see Qualcomm in Samsung smartphones.
For instance, the only band an S20 is missing that's commonly found in the US is 66.
These days it's purely a patent licensing agreement. Samsung doesn't sell Exynos modems (and thus Exynos SoCs) in the US or China (historically markets with lots of CDMA), and Qualcomm doesn't hassle them about the rest of the world.
i told her to buy the sprint one cause it had better coverage in the us band spectrum.
(typed from lineageos on unlocked snapdragon-based sony xz2c)
More difficult, however, would probably be selling it - I would imagine some HPC clusters would love it, for example, but for consumer products Apple can charge through the roof because their customers are used to paying pretty high prices for (sometimes better, sometimes worse) hardware. Apple's vertical integration also means they don't have to bother building an acceptable ecosystem around their new chipset - there's basically no documentation and no VTune-style performance tools. It's also partly a question of priorities, but AMD still lags behind Intel here even after decades in the game.
The documentation is depressingly more detailed than what Apple blesses you with. I always get the feeling that Apple creates for the same reason a pretentious chef cooks: you're merely proof of my greatness.
Also, how much it outperforms AMD/Intel has been somewhat exaggerated. The Apple M1 is faster in single-thread than any older Intel or AMD CPU, but it is slower than every new Zen 3 CPU, and it will be slower than the top models of Intel Tiger Lake H and Rocket Lake. In multi-thread the M1 is easily beaten by many processors.
At Christmas I upgraded the CPU in one of my computers to a Ryzen 9 5900X, so I could verify that in the single-thread benchmarks I was able to run, the values matched those published elsewhere for Zen 3 and also exceeded the highest values published for the Apple M1 - by anywhere from 3-4% in Geekbench 5 up to 24% in gmpbench.
When the M2 comes out with a big heatsink and fan, it is going to be extremely competitive. Although AMD might be on 5nm by then.
The M1 has a maximum power consumption of 15.1 W (13.8 W + 1.3 W), so for the high-power cores that is 3.45 W/core, and a minuscule amount for the low-power cores. A Mac mini tops out at around 20 W during 8-core CPU benchmarks, while a 5950X PC will draw around 96 W at the wall when idle.
I know a Ryzen desktop CPU probably won't draw its entire TDP, but in terms of power consumption it is not in the same league as a low-power CPU such as the M1 or the latest Ryzen mobile CPUs.
The 5990X is at 4.37 W/core at its maximum power consumption: 280 W / 64 cores. That's at full TDP, which, yes, it very rarely touches, and when it does it's crunching way more numbers than most benchmarks use.
The M1 core is not that far away. A single core can boost up to 15 W or more. The reason laptops catch up to desktops in single-core benchmarks is that their single-core power budgets are almost exactly the same, yet people keep acting as if there were still a 4x energy-efficiency gain left to be exploited when there isn't.
Idle power is a matter of how integrated your system is. The M1 is highly integrated so it will consume less power just like any other integrated SoC. When people buy desktops they want to get as far away from an integrated system as possible.
Apple just went with an 8-wide decoder, surprising a bunch of people. There's not much difference between 4-wide and 8-wide, aside from Apple deciding that such a wide single-core unit was worthwhile.
uOps are already decoded by definition
Not that Apple will tell us how many units it has
It's actually well known how wide the M1 is.
You can measure the ROB pretty accurately but the execution units have to be used to be measured.
Intel and AMD both perform decodes in parallel, just not as wide as the M1, because having such a large and complex decoder costs silicon and power. Taking RISC-V as an example, a fixed-width frontend is a project for a student; a modern x86 decoder probably costs millions to write and verify.
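To make the "project for a student" contrast concrete, here's a rough sketch (mine, not from the thread) of why a fixed-width frontend is so simple: with RISC-V every instruction slot is 4 bytes, so lane i of the decoder just slices fixed bit fields out of the word at offset 4*i. Field positions follow the RV32I base encoding (shown as for the R-type layout); everything else here is illustrative.

```python
# Minimal sketch: a fixed-width (RISC-V-style) frontend.  Each lane knows its
# instruction starts at byte offset 4*i, so "going wider" is just more copies.

def decode_rv32(word: int) -> dict:
    return {
        "opcode": word & 0x7F,           # bits 6:0
        "rd":     (word >> 7)  & 0x1F,   # bits 11:7
        "funct3": (word >> 12) & 0x07,   # bits 14:12
        "rs1":    (word >> 15) & 0x1F,   # bits 19:15
        "rs2":    (word >> 20) & 0x1F,   # bits 24:20
        "funct7": (word >> 25) & 0x7F,   # bits 31:25
    }

def decode_block(code: bytes) -> list:
    # An 8-wide frontend is just 8 of these running at known, fixed offsets;
    # x86 can't do this because each offset depends on decoding what came before.
    return [decode_rv32(int.from_bytes(code[i:i + 4], "little"))
            for i in range(0, len(code), 4)]
```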
I bet both Intel and AMD aren't exactly in love with x86 at the moment
This has been the case for, eh, decades now? To me it's almost a joke how many times Intel has tried to replace IA-32 with something modernized and miserably failed to make any headway.
I suppose Intel doesn't want to try competing with the legion of architecturally similar RISC cores and would rather "go big" with ideas like Itanium/i860/i960/iAPX. It's funny to imagine, but maybe one day they'd release a RISC-V implementing processor. Can't imagine them licensing the rights to ARM, and going with Power or MIPS also seems out of character.
I must be missing something, because it looks pretty simple as a segmented sum problem to me.
A good example of parallel segmented sum is bishop / rook (aka sliding pieces) movement.
This segmented sum / prefix sum / Kogge-Stone stuff is taught alongside carry-lookahead adders at the undergrad level. Sure, it's non-obvious that it applies to parallel decoding, but I'd expect this sort of thing to be a student exercise at the master's level.
A master's project might be to write a decoder, and possibly to verify it (prove it - everything in a CPU has to be formally verified or generated by some other formally verified tool).
As evidence I'd point out that many disassemblers, which run as software and are therefore much easier to write, still disagree with each other and struggle with x86.
In particular, this arrangement: https://en.wikipedia.org/wiki/Prefix_sum#/media/File:Hillis-...
Now "add" (or +) is associative. But it also works for *, min, max, and even weird stuff like "Can the Bishop move here" or "Can the Rook move here". So the goal is to find an associative operator that you apply byte-per-byte.
Okay, a brief detour. It seems obvious to me that a finite-state machine can decode x86 byte by byte. An FSM is the "obvious" sequential algorithm that determines whether a byte is the start of an instruction or the middle of one, as well as which instruction it is by the end.
Remember: we're just trying to make an FSM decode ONE instruction right now. It's pretty obvious how to do that. (Alternatively, imagine a regex that can parse an instruction from the byte stream when given the start of the instruction. All regular expressions have a finite-state-machine representation.)
Hillis / Steele proved that finite-state machines are associative operators (!!!), and therefore can be used in parallel-prefix sum arrangements. (http://uenics.evansville.edu/~mr56/ece757/DataParallelAlgori...)
The page number is 1176:
> Since this composition operation is associative, we may compute the automaton state after every character in a string as follows:
> 1. Replace every character in the string with the array representation of its state-to-state function.
> 2. Perform a parallel-prefix operation. The combining function is the composition of arrays as described above. The net effect is that, after this step, every character c of the original string has been replaced by an array representing the state-to-state function for that prefix of the original string that ends at (and includes) c.
> 3. Use the initial automaton state (N in our example) to index into all these arrays. Now every character has been replaced by the state the automaton would have after that character.
In short: computing the "state" of a sequential FSM applied across its inputs can be EASILY performed in parallel through the prefix-sum model.
Any finite state machine can be converted into parallel prefix form through this mechanism.
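Here's a runnable toy of that construction (my sketch - the two-state "ISA" is hypothetical, not x86): each byte is replaced by its state-to-state function, the functions are prefix-composed (composition is associative, so the parallel scan above applies), and only at the end do you plug in the initial state.

```python
# Hillis/Steele applied to an FSM: replace each byte with its state-to-state
# function, prefix-compose the functions, then index with the initial state.
# Toy, hypothetical ISA: byte 0x0F is a prefix making the instruction 2 bytes
# long; any other byte is a 1-byte instruction.  Real x86 needs a bigger FSM,
# not a different trick.

START, CONT = 0, 1                          # at an instruction boundary / inside one
STATES = (START, CONT)

def byte_to_fn(b):
    # the byte's state-to-state function, as a tuple indexed by the input state
    return (CONT if b == 0x0F else START,   # from START
            START)                          # from CONT: second byte always ends it

def compose(f, g):
    # "f then g" -- this is the associative combining operator for the scan
    return tuple(g[f[s]] for s in STATES)

def prefix_compose(fns):
    # sequential stand-in for the parallel scan shown earlier; same operator
    out, acc = [], None
    for f in fns:
        acc = f if acc is None else compose(acc, f)
        out.append(acc)
    return out

code = bytes([0x90, 0x0F, 0x05, 0x90, 0x0F, 0x0F])
after = [f[START] for f in prefix_compose([byte_to_fn(b) for b in code])]
print(after)                                               # FSM state after each byte
print([i + 1 for i, s in enumerate(after) if s == START])  # candidate instruction starts
```

Swapping the sequential loop in prefix_compose for the parallel scan is exactly the Hillis/Steele result: the answer can't change, because compose is associative.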
"Work Efficient" Parallel Prefix arrangement is O(log(n)) depth and O(n) total elements, which means that parallel decoding to any width (ie: 8-way decoder, 16-way decoder, or even 1024-way decoder) is LINEAR in terms of power-consumption and O(log(n)) with respect to time.
The layout for work-efficient Parallel Prefix is: https://en.wikipedia.org/wiki/Prefix_sum#/media/File:Prefix_...
I'm not sure x86 decoding is oh-so-simple, because if you bring in memory the alignment is not guaranteed, so you don't even know where an instruction starts, let alone where it ends. An x86 instruction can theoretically have an unbounded number of prefixes, meaningful only after decoding something after them - maybe you can do it with an FSM, but an enormous one.
All in all, this doesn't sound like a master's project in the slightest - remember, I said design and verify, not just build a never-used-again toy.
There have been probably hundreds of millions of dollars spent on this over the years, and they're basically stuck on the current width and multiple pipeline stages.
All x86 instructions are strictly bounded to 15 bytes in length; beyond that, processors throw an exception.
I just searched StackOverflow: https://stackoverflow.com/questions/23788236/get-size-of-ass...
Just as I expected: a finite-state machine used to decode. Now write an FSM compiler (which is an undergrad-level project) to automatically parallelize the implementation.
The parallelization step is probably master's level, but a very advanced undergrad could probably accomplish it, since all the individual elements are undergrad projects (Kogge-Stone applied to associative operators, a finite-state-machine compiler / regular expressions).
The overall process is also documented by Intel in their Opcode map: https://www.intel.com/content/dam/www/public/us/en/documents...
As an FSM, verification is simple: just generate all x86 instructions (there is a finite number of them, after all) and ensure your FSM steps through all of them properly.
You have to verify that it decodes invalid instructions as a fault, which means testing the entire search space.
Hypothetically you could use a bounded model check, but you need to test roughly 10^36 combinations, which would still take tens of millions of years even at a thousand billion billion checks per second. You need a formal proof of the operation too.
You don't need to do an exhaustive check of all 2^(8*15) 15-byte combinations. You just check all state transitions of the state machine.
Or to put it another way: you don't need to test all 2^(8*15) byte combinations. You just need to check all invalid instructions that have ONE invalid byte, to prove the properties of the finite-state machine.
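In code, using the same hypothetical toy FSM from above (not the real x86 machine), the argument looks like this: add a sticky FAULT state, then check every (state, byte) pair against the spec - |states| x 256 transitions instead of 2^(8*15) byte strings.

```python
# Verifying the toy FSM transition-by-transition instead of string-by-string.
# Hypothetical ISA again: 0x0F is a prefix byte, opcodes below 0x80 are legal,
# everything else must fault, and faults are sticky.  In real life `step`
# would be the decoder under test; here it's written inline for brevity.

START, CONT, FAULT = 0, 1, 2
PREFIX = 0x0F

def step(state, b):
    if state == FAULT:
        return FAULT
    if state == START and b == PREFIX:
        return CONT
    return START if b < 0x80 else FAULT

def verify():
    checked = 0
    for state in (START, CONT, FAULT):
        for b in range(256):
            nxt = step(state, b)
            if state == FAULT:
                assert nxt == FAULT                  # faults are sticky
            elif state == START and b == PREFIX:
                assert nxt == CONT                   # prefix opens a 2-byte instruction
            elif b < 0x80:
                assert nxt == START                  # legal opcode ends the instruction
            else:
                assert nxt == FAULT                  # anything else must fault
            checked += 1
    print(checked, "transitions checked: 3 states x 256 bytes, not 2**(8*15) strings")

verify()
```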
But let's see what they are up to.
What is genuinely so bad about Linux these days? Although I like playing with low-level stuff, I derive zero pleasure from OS-fettling and I just stuck Mint on my laptop - it's been basically perfect apart from a problem with the dual-boot setup which I think is an SSD problem waiting to explode.
Half the time I read these complaints, I either see people who actually already have neck beards in denial, or people who are expecting Linux or Windows to be identical to macOS.
Desktop Linux is developed by people who love the CLI. It's not built by people trying to address the problems of ordinary, non-technical people off the street.
I'd rather use Windows than Linux as a daily driver. At least the UI will not freak out like it does in Ubuntu.
Half the time I read people saying "have you tried Linux Desktop?", I either see people who are tasteless and love being nerdy with their tmux environment, or people who are expecting others to be like themselves.
As Linus Torvalds would plainly put it, "Desktop Linux sucks. It is the worst piece of shit attempt at a Desktop OS." :-)
I get far more UI freakouts in Windows.
Most of the glitchy stuff I've seen like you describe is either the nvidia + nouveau driver, or AMD's GPUs.
Our operating system is an outlet for so many cultures - the privacy-minded, those seeking freedom or free software, people who hack and patch, etc, etc. And your first instinct is to take it away, to make it another Windows/macOS, for you who don't even run Linux.
I feel sorry for the people who tried to help you; it's you who expects others to be like yourself.
Linux gives me the system I want - functional, understood, minimalist, retro in style. And I am not alone, and these people are not tasteless; that's my home. Get off my lawn!
Look at the direction modern Gnome goes. They have the money from Red Hat and Canonical. BTW systemd too was developed by salaried employees of Red Hat. Money makes things happen, but not always the nicest things.
There's now, what, 3 forks of RHEL? People are free to do what they want, of course, but this doesn't help Linux at all.
For all its faults, systemd was a major step forward for a system to "just work".
I shopped around for Linux distributions and finally settled on Pop!_OS. I loved it. No fiddling needed and the user experience is the closest to MacOS I have gotten.
> Between the three 4x4 MIMO phones, you can see that in good signal conditions, the Qualcomm-powered Galaxy Note 9 and Google Pixel 2 still do a bit better than the iPhone XS Max. But as signal gets weaker, the XS Max really competes, showing that it's well tuned.
> Let's zoom in on very weak signal results. At signal levels below -120dBm (where all phones flicker between zero and one bar of reception) the XS Max is competitive with the Qualcomm phones and far superior to the iPhone X, although the Qualcomm phones can eke out a little bit more from very weak signals.
Before they killed off their Atom for phones product line might have been a better time.
When they were working with Nokia to merge Moblin and Maemo might have been a better time.