I'm very pleased, my Ryzen is fast and stable, I'm glad the company is reaping the rewards.
The price-to-performance is insanely good. I'm on an X370 desktop with 2x RX 550 and an RX 560. I'm using it as a virt/dev host. I have 16 Docker instances running on the host OS (Arch Linux).
Then 3 VMs with PCI passthrough. One is for Windows gaming/development. Until about three weeks back I was getting a solid 60fps @ 1080p on high across most games, but recently it's been down below 10fps. The next VM is a Linux developer desktop. Lastly, an emulator box connected to a projector away from my desk.
To switch between them I just have dongles, 1x HDMI and 1x USB, behind my keyboard, and alternate the plugs. I did this because I couldn't find a good HDMI KVM that did 4K above 60Hz.
This year has really been amazing for enthusiasts.
I run an Arch host with passthrough myself, but it was a really long project to get everything set up just right, especially for gaming.
Really looking forward to buying a Ryzen or Threadripper to go further and virtualize my working environment etc.
The CPU is barely breaking a sweat. I wouldn't mind a Threadripper so all VMs can get a quad core, but right now it's plenty powerful.
The input was the biggest pain in the butt, especially on Windows... I've seen the evdev passthrough and hot-swap approach but didn't have the best luck with it. While hacky, assigning a USB controller to each VM has worked best.
I agree it's a long setup, and it still requires some care. It's not straight up-and-running like a Linux/Windows desktop. I've had Windows just mark the USB controller as inactive, and then I need to reboot the entire host. That's something I'm still digging into. But there is much better documentation out there now. Things like the Looking Glass project are also impressive.
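For reference, both approaches end up as just a couple of lines of QEMU configuration. This is illustrative only: the device paths and PCI address below are placeholders for whatever your hardware actually exposes.

    # evdev passthrough: QEMU grabs the keyboard/mouse directly from the host
    -object input-linux,id=kbd1,evdev=/dev/input/by-id/YOUR-KEYBOARD-event-kbd,grab_all=on,repeat=on
    -object input-linux,id=mouse1,evdev=/dev/input/by-id/YOUR-MOUSE-event-mouse

    # whole-USB-controller passthrough: bind the controller to vfio-pci on the
    # host, then hand the PCI device to the guest
    -device vfio-pci,host=0000:03:00.0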
I mean, graphics card prices have been pretty horrific, but I agree on the CPU side.
If not, AMD should write and promote one.
Very curious how often it happens for normal home/office usage.
I used to work for a silicon company that took an embedded network switch system with ECC logic to a nuclear lab for testing, to verify/showcase the ECC functionality.
You will see correctable ECC errors on systems. How frequent honestly seems to depend on the workload and the system itself. My suspicion is that they are often caused by poor PCB layout and ECC saves you. I spent literally weeks (nights, weekends) chasing down an issue I thought was a software bug but turned out to be a board-layout issue on an embedded system. If the system had had ECC, the error would have either been corrected or we would have gotten the uncorrectable-ECC-error trap. Since then, every workstation/server/desktop I spec is ECC. I wish more laptops had it.
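If you want to see whether a Linux box with ECC is actually logging corrected errors, the counters that edac-util reads live in sysfs and are easy to poll yourself. A rough sketch (it assumes an EDAC driver is loaded and the memory controller shows up as mc0):

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        // Corrected (ce) and uncorrected (ue) error counts for memory controller 0.
        const std::string base = "/sys/devices/system/edac/mc/mc0/";
        const char* names[] = {"ce_count", "ue_count"};
        for (const char* name : names) {
            std::ifstream f(base + name);
            long count = -1;
            if (f >> count)
                std::cout << name << ": " << count << "\n";
            else
                std::cout << name << ": not readable (no EDAC driver loaded?)\n";
        }
        return 0;
    }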
I tried edac-util on 5 of our 30 or so servers.
"Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz" with 128G RAM each.
edac-util: No errors to report.
Plus, if you're scrubbing your storage the last thing you want is a memory error killing your data.
If you reboot your PC at least once every week it's not going to be a problem.
I'd put a wash routine in the background process, where it would string-move a block of memory to nowhere in a round-robin way. It's not a terrible hit on the cache; we're idle when we're in the background task, so it doesn't impact the most-used code. There is some latency cost from interrupts and the like.
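In modern terms it would look something like this. This is just a user-space sketch of the idea; on the real system it was firmware walking physical memory, and the block size and sleep interval here are arbitrary placeholders:

    // Walk a region in fixed-size blocks, copying each block to a scratch
    // buffer so every word gets read (letting ECC correct/report any
    // single-bit error), then wrap around and start over.
    #include <cstring>
    #include <vector>
    #include <thread>
    #include <chrono>

    int main() {
        constexpr std::size_t kTotal = 256 * 1024 * 1024;  // region to wash (placeholder)
        constexpr std::size_t kBlock = 64 * 1024;           // one block per pass
        std::vector<unsigned char> region(kTotal, 0);
        std::vector<unsigned char> sink(kBlock);

        for (std::size_t offset = 0;; offset = (offset + kBlock) % kTotal) {
            // The copy forces a read of every byte in the block.
            std::memcpy(sink.data(), region.data() + offset, kBlock);
            // Sleep so the wash stays a low-priority background task.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }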
Would you prefer frame-perfect rendering to increased performance?
I'm pretty confident the situation wouldn't be that acceptable if all 3 weren't American corporations.
I don't see Intel listed at http://www.nasdaq.com/symbol/amd/institutional-holdings
Where do you see they have an ownership stake? How much is it?
"Early 1980s--IBM chooses Intel's so-called x86 chip architecture and the DOS software operating system built by Microsoft. To avoid overdependence on Intel as its sole source of chips, IBM demands that Intel finds it a second supplier.
1982--Intel and AMD sign a technology exchange agreement making AMD a second supplier. The deal gives AMD access to Intel's so-called second-generation "286" chip technology."
I'm simply not interested in Intel's kind of mentality when it comes to the hardware that runs all my computing. Nah.
Were they dumping all that money into Zen core R&D? With the Meltdown craziness, will this be the year we start to see AMD return to the data centre?
The PS3, Xbox 360, and Nintendo Wii all used IBM chips. They tried a bunch of new architectures, but for the most part the developer ergonomics were horrible.
So for the PS4 and Xbox One they decided they needed regular computer parts. AMD and Intel probably both put in bids, but AMD would have been the one extremely motivated to make sales to Microsoft and Sony. As time goes on you can make higher and higher margins since the chips don't change (though on the other hand we are now seeing console revisions within the traditional generation). But I think the margins will keep on increasing.
Intel graphics solutions were (and remain) sub-par, and Nvidia has no x86 solutions (and their ARM chips would have been underpowered).
A lot of that has to do with it being a handheld hybrid, as you point out, but these compromises would have been a lot harder to stomach if it had been a dedicated set-top console like its direct competitors.
If the only concern is resolution and not quality or gameplay, then the Switch doesn't stack up, but I think the sales numbers point to the Switch being a smash hit with consumers so those compromises seem stomachable from my perspective.
(Anecdotally: I wish the graphics in BotW were better, at least in terms of draw distance, but then I take the Switch out of the dock and take it to work to play Mario Kart with my coworkers and it seems like a fine tradeoff to me.)
And while this time the Nintendo gimmick seems aimed more at the core gamer crowd (take your console game anywhere!), I'm still not convinced this won't turn out the same way the Wii did.
Btw: If you want BotW with better graphics you can play it using a Wii U emulator, all the 4k BotW your eyes can handle.
That said, I love the switch, and BotW has quickly become one of my all time favorite games. My only problem is resisting the urge to kick the kids off the switch so I can play. ;)
Original Doom was already running on the SNES like 25 years ago.
Getting something to run is not really that big of a hurdle as you can always scale down resolution/graphical fidelity/frames per second. The question after that rather being: Who would want to play an obviously inferior version of the game? Because that's exactly what these Switch versions of Skyrim and Doom are, inferior.
It's commonplace for consoles to go through a number of hardware revisions over the lifetime of the console. These revisions are rarely made for performance reasons; instead they are made to improve reliability, security and manufacturing cost. Setting aside security (which would be done as a reaction to a hack that can't be patched in software), the whole idea is to increase the margins for the manufacturer. Note that the brand owner may, in some cases, sell consoles at a loss, but the manufacturer is very unlikely to be working to the same business model.
As your chip goes from cutting-edge process technology ($$$) to mid-tier ($$) and commodity ($), they usually pass those savings along; otherwise they would incentivize the customer to go for a mid-cycle refresh, and suddenly a large revenue stream for your chip dries up as the product moves to a (potentially) different vendor.
The only other manufacturer of x64 CPUs is Intel, and the only other manufacturer of (competitive) GPUs is Nvidia. Why would Microsoft or Sony incur the high NRE cost of switching to two different suppliers mid-cycle just to shave a small amount off their unit cost?
That's not to say AMD wouldn't pass on some of the savings to Microsoft and Sony, I'm sure they would, but it's not likely to be a one-sided deal.
You also always have multiple vendors bidding so if one of them offers a sliding scale you're going to use that as leverage against the other vendors.
It doesn't matter if the potential savings are sunk by the NRE costs of shifting to new vendors before the projected end of the product. You have to look at cost holistically.
When you're talking 15M+ units being able to drop $1-2 from BOM is a huge motivator for things like this. When you get into the millions of dollars worth of savings you can fund a lot of engineering time.
They also may never go through with the switch but use it as leverage for a better deal. Back when I did SoC evals we'd take products right up to the brink of production just to apply pressure. It was almost like a game of chicken between 2-3 vendors to see who blinked first and gave us a better BoM/deal on the core chip. These weren't easy bring-ups but usually were vastly different GPUs + CPU combos. It's very much a thing that happens, at least back when I was involved with stuff like that ~6 years ago.
If you look at the average lifetime of a (successful) console, you're looking at between 5 to 10 years. Within that time, multiple revisions take place, often to replace peripheral components to reduce costs, but in the case of the longest living consoles they will often get a design refresh, including a redesign of the CPU / GPU. I can think of only one example in recent memory where a console switched manufacturers for these core components during its lifetime (that example being the GPU for the Xbox 360, which switched from AMD to IBM). In the case of the Xbox 360, IBM were already manufacturing the CPU, and the redesign combined the CPU and GPU into a single SoC. Aside from that one example, there haven't (to my knowledge) been any other examples of companies switching manufacturers for their core components mid cycle. Contrast this with the PC business, where OEMs will frequently swap between motherboard/CPU/GPU manufacturers, and you'd have to wonder what makes the dynamics in the console market different.
I spent a while in the industry (and shipped some god-awful titles), so I tend to trust the people I know (and don't plan on outing them), but I can understand if you don't want to take the word of a random person on the internet.
I know you were probably talking about traditional consoles (Xbox/PS/etc.), but when you start including portables (with the Switch just smashing the 1-year mark at 15M units) the field opens up significantly. You've got at least 5-6 different GPU vendors and a whole host of companies providing SoCs. Embedded GPUs have been making huge strides and I wouldn't be surprised to see the top end of that space start nipping at NVidia/AMD soon. Lots of the people in that domain came from desktop (Adreno is an anagram of Radeon, for instance ;)).
If you know more, which you may well do, the way to comment here is to share some of what you know in a way the rest of us can learn from.
If you disagree with something I've said, by all means explain why.
> "It's obvious from your posts that you don't have any particular knowledge of AMD's console dealings, or even console dealings in general."
I never claimed to.
> "You only speak to generalities which are well known among even casual observers of the industry. None of what you've said specifically refutes the original claim."
It's precisely because these market generalities do not match with what was proposed that I called it into question.
Developing new silicon is a risky endeavour. Even going from one process node to a smaller process node without changing the design architecturally is fraught with problems, resulting in lower yield until the manufacturing kinks are worked out. Why would a chip manufacturer agree to a deal where they're expected to take on increased risk for a lower reward? It makes no business sense. That's what I was calling into question.
> "I know you don't have any grounds to speak with any authority on the matter, and you know it too if you're honest with yourself"
I may not personally know the same people vvanders knows, but I can put myself in the shoes of a businessman running a chip manufacturer. I enjoyed debating with vvanders, and aside from leaving out the dig at the people they know in the industry, I'd do it again.
You can see that in 2016, their console (+ other stuff) segment was making an operating income (basically revenue minus cost of business, not including R&D) roughly the same size as the losses in their main CPU & GPU business. That's before the roughly $400M of operating losses in other categories. I think R&D gets rolled into that? I'm not sure.
I'm guessing razor-thin margins to get the contracts.
The article says the PS3 cost of goods was $800 per unit at launch, but it is worth remembering that the PS3 was an investment in two strategic initiatives for Sony. One was the PS3 standard, that they would earn future software royalties on. The other, and at the time maybe more important, was the BluRay standard.
The changes game developers had to make to handle Cell made it much easier for them to support multicore CPUs and GPGPU later on.
Despite spinning off, GloFo and AMD remained tightly aligned businesswise, and AMD signed agreements such as minimum silicon wafer usage.
In order to make the fab seem like something someone would want to buy, AMD tied their futures to it by entering into a wafer supply agreement under which they will buy certain amounts of GloFo's production whether they need it or not.
This agreement was a millstone around AMD's neck during the worst years. It means that AMD absolutely wants to sell at least a certain amount of silicon, even at negative margin if necessary.
But by 2013, AMD had sold its headquarters in a "reverse mortgage" to remain solvent. https://arstechnica.com/information-technology/2013/03/amd-s...
AMD had almost a full decade without making money.
Part of me wonders if the long term effects of the mining craze will be negative for the makers of graphics cards and in return PC component makers as a whole. Many people might be turned off of pc gaming by the prohibitive prices of buying cards at 2.5x MSRP and turn to consoles instead. Or in my case I intend to wait and pick up cheap hardware when crypto tanks hard (i.e. Tether fraud collapses), or GPU mining becomes unprofitable, or altcoins switch to proof of stake.
How is it possible that AMD seems to be at the limit of production for their GPUs but barely beat estimated earnings this quarter?
At least in my case, it’s because I want to play games in the meantime between now and when mining finally dies.
Better yet, I am really hopeful about Epyc - it still doesn't seem to be shipping in huge numbers, but as someone really burned by Meltdown, it seems like perfect timing for AMD to compete.
I needed to fire up ~ 30% additional nodes after they did their migration.
I didn't measure as carefully as I could have, but the existing nodes couldn't keep up with demand after the upgrade.
In fact the technical problems are so severe that it's questionable if the switchover will happen at all. All current PoS coins use centralized control due to these technical issues.
NVIDIA GPUs are actually typically more efficient for mining most coins, AMD cards are only preferred because they have quicker ROI. If mining profits continue to decline, that efficiency comes more into play. And, NVIDIA may be about to introduce a new generation which further improves efficiency.
If they do, god only knows what the prices are going to be. They're already nuts on Pascal, let alone if miners are snapping them up for the efficiency. Availability is going to be shit too.
In what way are NVIDIA cards more efficient? Vegas are in some cases much faster, and at worst equal to a GTX 1080.
The usual way, i.e. "efficiency=performance/power"? Here, I did the math for you. Numbers from whattomine.com:
As you can see, apart from Vega being a very efficient Cryptonight miner, and Polaris being an acceptably efficient Ethash (Ethereum) miner, AMD cards have absolutely garbage efficiency, they use just tons of power for their performance. And they are one-trick ponies: there are only one or two decent algos for AMD cards.
The 1080 and 1080 Ti specifically have problems with a few algos due to their GDDR5X (which results in half of the memory bandwidth being wasted) but on the whole NVIDIA cards are extremely efficient miners across a variety of algorithms, and usually keep up with AMD on efficiency even on Ethash.
The 1070 Ti, in particular, is the reigning champion of efficiency. It's basically a 1080 with GDDR5 instead of 5X, which makes it the most efficient card in most algos, and its worst-case is "only" being as efficient as a 580 at Ethash.
Again: people prefer AMD cards because they ROI quicker during the booms. The half of this that efficiency obscures is power: AMD cards have higher TDPs and are capable of pumping out a lot more watts per dollar (RIP Planet Earth). But when profits are down and efficiency matters (or for those with expensive electricity), NVIDIA is better, and you also have that flexibility to move across algorithms if there is a hot new currency.
There is also a pretty substantial cargo-cult with regards to mining: people use AMD because that's what they see other people using, and they don't bother to do the math. But NVIDIA has the same efficiency advantage in compute as they do in gaming: 1.5-2x the performance-per-watt in most situations.
Again: AMD is way, way behind on efficiency and has been since Maxwell. There are a few algos they do well on, but on the whole they are roughly a generation to a generation-and-a-half behind NVIDIA (Vega 56 is roughly as fast and efficient as a 980 Ti). You buy them because they're cheap for the hashrate, not because they're efficient. Think cheap-and-nasty here.
The thing with AMD power measurements that you have to be real careful of is that AMD's sensor is only reading the core power, not any power spent on memory or any losses in the VRMs, and they do it very inaccurately (and don't account for efficiency/etc). So you should be taking people's chatter on Reddit with a big grain of salt - unless you are measuring the power at the wall, or using a digital PSU, you are probably getting figures that are ~40W low, and potentially have 20% variation or so from the actual figure. Anyone using GPU-Z to measure their card is doing it wrong.
Right now that makes things hard for everyone not a miner, but if AMD ends up being the only option for gamers while Nvidia still sells to miners, they might actually gain from the mess.
Off-topic FYI: GPUs that are in use 24/7 are probably in better shape than those that go through more thermal (on/off) cycles. There's obviously more wear and tear on the moving parts, but fans are easier to replace than mechanically stressed silicon/PCBs.
b) Have you actually tried replacing the monstrous coolers on modern cards? Or even buying spares? They are rare, expensive, hard to disassemble and hard to fit back. Some cards even use glue stickers on the memory thermal interface; you can potentially tear chips off.
I actually think AMD's timing on increasing production is really questionable here. Barring another major runup in Bitcoin prices (altcoin profitability follows Bitcoin prices), which seems unlikely at the moment, network difficulty will continue to increase and mining profits are going to continue to decline. That means miners are going to taper off purchases, and some may even be dumping their rigs.
So AMD is essentially increasing production at the exact moment when they may already have a problem with the market being flooded. The time when they needed to increase production was 4 months ago, it's too late now.
On top of that, NVIDIA may be pushing out their new GPUs within the next few months, implying another step in efficiency which puts AMD's performance/efficiency even further behind. They are already about a generation behind (roughly competitive with Maxwell), despite being on similar nodes. I'd be prepared for NVIDIA to make a minimum of a half-gen step (30%), if not a full-gen step (60%).
AMD's timing seems exquisitely poor here. They didn't want to bet on crypto, and now it's too late. Fortune favors the bold.
However, there are tons of threads due to skyrocketing memory and GPU prices stating that it doesn't make sense to build right now and you can get better deals on pre-built machines.
My 1080 Ti is only reasonably happy to run 1920x1200x2 (60Hz) + 3440x1440 (95Hz); it won't fully idle the clocks, resulting in higher temps.
What a piece of shit. Sold it on eBay in 30 seconds for $40 more than I paid for it, and bought an Nvidia 1060; works flawlessly. The guy I sold it to said the RX-580 is working fine for mining. He got a card for $300 and I got rid of a headache.
How many years ago was this? The latest AMD Adrenalin drivers are at 18.1.1. Oh and on Linux the open-source AMDGPU drivers (made by AMD themselves) are now part of the kernel since not too long ago, so any recent cards will work out of the box on a distro with a recent kernel.
I haven't figured out the last number. But the 18.1.1 release basically means "2018 January". Similarly, their 17.7 release meant "2017 July".
5.18 seems to predate AMD's current naming scheme. So it was probably something from a long time ago...
With that being said: the 17.12 update, aka Adrenalin December 2017, broke DirectX 9 on Windows and also broke a few OpenGL games. AMD suffered a major PR hit over the winter break when a few forum moderators said things they shouldn't have said about this issue (which was eventually fixed when the executives returned to work after the New Year and performed PR damage control). So there's a lot of latent anger in the gamer community around AMD drivers right now.
With that said: it isn't too difficult to roll back to an earlier update in these cases, but AMD did release a DirectX9 fix by the next month (January 2018). The OpenGL issues seem to still be a known issue, and that's why a number of people are remaining on 17.7.
But Ryzen's "split" L3 cache seems to be great for process-level parallelism (think compile times), and seems to scale to more cores for a cheaper price. They have an Achilles heel though: 128-bit wide SIMD means much slower AVX2 code, and no support for AVX512.
But for most general-purpose code, Ryzen looks downright amazing. Even their slower AVX2 is mitigated by having way more cores than the competition. AMD sort of brute-forces a solution.
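To make that concrete: the penalty shows up in loops built around 256-bit math, like the purely illustrative FMA loop below, where Zen cracks each 256-bit instruction into two 128-bit micro-ops and roughly halves peak per-core SIMD throughput.

    #include <immintrin.h>
    #include <cstddef>

    // Multiply-accumulate over float arrays using 256-bit AVX2/FMA intrinsics.
    // On Zen 1, each 256-bit op executes as two 128-bit micro-ops, so loops
    // like this are where the "slower AVX2" shows up.
    void fma_loop(const float* a, const float* b, float* acc, std::size_t n) {
        for (std::size_t i = 0; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            __m256 vc = _mm256_loadu_ps(acc + i);
            _mm256_storeu_ps(acc + i, _mm256_fmadd_ps(va, vb, vc));
        }
    }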
AMD's GPGPU solution looks inferior to NVidia's, but for my purposes, I basically just need a working compiler. I don't plan on doing any deep learning stuff (so cuDNN is not an advantage in my case), but I'd like to play with high-performance SIMD / SIMT coding. So AMD's ROCm initiative looks pretty nice. The main benefit of ROCm is AMD's attempt to mainline it. They're not YET successful at integrating ROCm into the Linux mainline, but their repeated patch attempts give strong promise for the future of Linux / AMD compatibility.
The effort has borne real fruit too: Linux accepted a number of AMD drivers for 4.15 mainline.
NVidia's CUDA is definitely more mainstream though. I can get AWS instances with HUGE NVidia P100s to perform heavy-duty compute. There's absolutely no comparable AMD card in any of the major cloud providers (AWS / Google / Azure). I may end up upgrading to NVidia as a GPGPU solution for CUDA instead.
OpenCL unfortunately, doesn't seem like a good solution unless I buy a Xeon Phi (which is way more expensive than consumer stuff). AMD's ROCm / HCC or CUDA are the only things that I'm optimistic for in the near future.
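Part of why ROCm looks workable for my purposes: HIP, its CUDA-like C++ dialect, reads almost exactly like CUDA. A minimal sketch, assuming a ROCm install and a supported card (names and sizes are just placeholders):

    #include <hip/hip_runtime.h>
    #include <vector>

    // Trivial SAXPY-style kernel: one thread per element.
    __global__ void saxpy(float a, const float* x, float* y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy;
        hipMalloc(&dx, n * sizeof(float));
        hipMalloc(&dy, n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);
        // Launch with 256 threads per block, enough blocks to cover n elements.
        hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0,
                           2.0f, dx, dy, n);
        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        hipFree(dx);
        hipFree(dy);
        return 0;
    }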
You probably want to buy that DDR4 as soon as you can - memory prices (DDR3, DDR4) have been consistently going up - not down. It's insane. The price-fixing fines the memory manufacturers paid were clearly not punitive enough, we need more anti-trust action in this area.
I recently did a build on a budget and had to snipe online specials on DDR4; waiting for the prices to come down is not a winning strategy.
Pretty blatant market manipulation here by the DRAM cartel - and China has enough downstream manufacturing at stake here that they're willing to go to the mat over it.
This commonly touted meme seems to ignore market realities:
Data-centers buy TBs (not GB, but Terabytes) per machine. And then they buy ~20 machines per rack, and then fill an entire room full of racks.
When data centers decide it's time to upgrade from old 1.5V DDR3 machines (or 1.35V "low voltage" DDR3L) to 1.2V DDR4 at 2666 MT/s, as they have been doing from late 2017 through (an estimated) 1H 2018, it is only natural to expect DDR4 prices to rise.
The power-savings from switching from 1.35V "low-voltage" DDR3 to 1.2V DDR4 are big enough that there are lots of upgrades incoming from the biggest purchasers of RAM in the world.
> waiting for the prices to come down is not a winning strategy.
2H 2018 seems to be the current market predictions. This also works in line with waiting for Zen+ (aka Ryzen 2). If DDR4 prices are still high at that point, I'll reconsider then.
I plan on recycling my old R9 290x until GPU prices come down... but waiting till August 2018 may be long enough for GPU prices to normalize as well.
So, which data-centers are upgrading from DDR2->DDR3 then? I ask because DDR3 prices are also trending upwards.
LPDDR3 has one slight advantage: an advanced sleep state which uses less than 1/10th the power than normal sleep. Laptops, tablets, and cell phones use this sleep state regularly to remain in "suspended mode" for extended periods of time.
DDR4 cannot do this. As such, LPDDR3 still has some demand in the mobile marketplace, despite fewer fabs making it. So I'd expect DDR3 prices to go up.
LPDDR4 has the advanced sleep state, but is unsupported by Intel / AMD processors.
Sapphire Nitro version requires 3 (yes, 3!) 8-pin power connectors. That's why I'm staying with RX 480 for the time being. It works very well with Mesa on Linux.
The reference design (including the LC version) and some other designs, like Asus Strix, require only 2 8-pin power connectors.
And I prefer to avoid reference design which is usually too noisy.
The same voltage-vs-frequency curve applies to both products.
There is wide variation in the stability of hardware across different tasks. An overclock/undervolt that is stable in one task is not necessarily stable in others, as anyone who's overclocked can attest. For example, Just Cause 3 needed several hundred MHz less than I could get in TF2 or The Witcher 3.
The voltage is set where it needs to be to ensure that there's no instability in any program, on any sample in a batch. People look at one task and one sample and assume that their OC must be stable on everything, on every card in the batch. In reality it's not, or not to the degree that the manufacturer requires.
Yes, you can get extra performance on any given sample by eating up your safety margin and pushing closer to the limit of that specific sample's frequency/voltage curve.
But as the economists say: if there were free money lying on the sidewalk, AMD would have picked it up already. They're not stupid; they ship the voltages they need.
That's my point then. Vega just needs too much by default. Hopefully next iteration will be less power hungry.
But this time they're actually making a play for the discrete graphics market. They've hired Raja Koduri and everything. Not the first time they've done that either (see: Larrabee) but they do look to be making a serious attempt.
Having a discrete graphics die, and especially having access to HBM2, makes a huge difference in performance. There isn't much you can do with 30 GB/s of bandwidth to share between CPU and GPU. Having Crystal Well L4 cache is a huge boost but it's still a halfassed fix compared to having proper VRAM available.
Of course, it's also a vastly more expensive part as well. Just the CPU+GPU package is more expensive than some of the 2500U laptops.
Presumably Intel is aiming for something more like Vega M GH with Jupiter Sound/Arctic Sound - it makes little sense to design a low-end discrete part with no room for future performance growth.
As far as I could find, the interesting thing about Nvidia is that while they only make GPUs, they've learned how to cross-market them across domains (data centers, AI, gaming, automotive) and to charge premiums according to each niche. So I think that while the products themselves have subtle differentiations, you more or less get access to each vertical from doing the same R&D for each chip generation. This may be too much of a generalization, but it looks as though Nvidia has figured out how to create wildly different products out of the same GPUs and multiplies its revenue accordingly.
Comparing apples to apples, AMD sells their Vega 64 flagship at a MSRP of $499, NVIDIA sells a product with an equivalent die size at $1200. AMD sells their cutdown at $399, NVIDIA sells theirs at $799. And that's before you figure that NVIDIA is using commodity GDDR5/5X while AMD is using expensive HBM2 on consumer products - NVIDIA charges between $3k and $15k for their HBM2 products. So half the MSRP, with a more expensive BOM.
AMD's margins on Vega are trainwreck bad. Some experts actually think they are losing money on every unit sold at MSRP, hence the "$100 free games bundle" on launch, and the de-facto price increases above MSRP during the fall. They're banking heavily on HBM2 costs coming down, and probably also on NVIDIA not being aggressive with the launch of gaming Volta (aka Ampere). Apart from Vega FE, they really don't see any of the extra revenue from the inflated prices during the mining boom either. That's all going to AIB partners and retailers. All AMD gets out of it is volume, and up until now they've been reluctant to increase production.
In contrast Ryzen is actually dirt cheap to manufacture due to its MCM layout. Their margins there are probably better than Intel, even with prices significantly below the launch prices.
Today I don't know who would buy them. Intel seems to be ok with being mediocre because they're the choice for people who don't give a shit anyway.
There was an outside chance a car manufacturer might have wanted to buy them up as a strategic move against their competitors if nVidia's autonomous car GPU thing panned out, but I think they were too late to the ball for that one. It's hard to see the value proposition for most companies.
I have a lot of complaints about Intel and their mediocrity, but the Intel HD graphics have gotten good in the last 10 years. The Intel GMA was awful and could barely run Quake 3. Intel HD will power through quite a lot. Not up to snuff with a real gamer's card, sure, but good enough that a casual gamer won't have a problem with it. And you get it for free, so bonus there.
I play relatively recent games on my Macbook and it's a lot better than you might think.
No argument that they have improved (the many years and millions of dollars I mentioned were spent), so much so that the low-end discrete GPU market is completely dead. But there is still a lot of space between the best Intel graphics and a mid-range nVidia or AMD card like a GeForce 1070.
That's exactly what you don't do with an integrated card.
I can play Cities: Skylines and Guild Wars 2 on my 2015 13" Macbook Pro. I think that's amazing. I could barely get 5FPS on the last generation Intel GMA with the original Guild Wars all the way turned down.
In particular, I think there's buy-in from the framework maintainers; they're not going to go out of their way to port, but they also aren't averse to merging in code written by AMD engineers.
I don't think people in research have any particular loyalty to NVIDIA, and everybody's MacBook Pro now has an AMD GPU, so there are also personal incentives to get this stuff working properly.
That said, I'm really rooting for AMD here. It's very nice to have this kind of competition and customers will benefit a lot from this.
"great performance at a fraction of the cost of Intel"
Only two points need to be true for that statement to hold up:
* Performance levels that GP is happy with.
* Cheaper than Intel.
Which point(s) do you disagree with?
However, at that time, on the desktop, AMD was 80-90% of the performance for 50-60% of the price. Fine with me, I was using one.
It doesn't seem so out of line for AMD to have 1.4B in debt to 13B market cap.
whereas AMD seems to still be losing cash, currently at a 217 million loss:
They weren't as impacted by the recent vulnerabilities, and Intel is likely going to lose quite a bit of market share (Intel currently has ~95%+ of the server market). If AMD can break into the server market with even a 10% showing, then they're golden.
(Image and prediction(s) from: https://projectpiglet.com)
From the site...
"Experts' Opinion Score, is a score representing how experts feel about a given stock, product, or topic."
With all due respect, I don't think that's a very sensible metric to base investment decisions on.
There are other methods, but it typically will pick out people who work at a given place, are holding a large amount of an asset, or have some expertise in the field (say CPU design). As they are literally the best sources of info there is a high correlation with movements in price within roughly 45 days, aka at predicting quarterly results.
I have no problem with "experts" in general, but I do think we have a misunderstanding with regards to experts in the field in question.
What incentives are in play for the experts behind that website to be honest? Let's conduct a role play. Consider that I'm the expert in question, and I decide I want to short the stock for a company (i.e. bet that the stock price will fall rather than rise). In this situation, what would be the best advice I could give on this website to make sure my bet works out? I would mark down my "confidence score" (which is effectively what the website publishes) in order to maximise my chances of making money. The financial health of the company is secondary. The financial health will have some influence on how likely the bet I placed is likely to succeed, but it's not the only factor at play.
Benjamin Graham (Warren Buffett's mentor) summed up the most obvious path for a successful investor to take by stating "The intelligent investor is a realist who sells to optimists and buys from pessimists.". Taking this a step further, without knowing the financial position of the "expert" you're getting advice from, how do you know if the "expert" is in the market to buy, to sell, or is neutral? I would suggest to you that you don't know that, and with that in mind any advice you get from an anonymous investor is advice to be taken with a large dose of salt.
Just in case you think I'm just describing a hypothetical situation, there have been high profile cases where investment firms were caught betting against the advice they gave their clients. For example:
Typically, I just run the numbers and it usually works out. I see your point though, any suggestions?
I'm not an expert, but I believe Buffett's advice of 'invest in what you know' is sound:
In other words, it pays off doing your own research into the financial health of a company, to determine whether a company is currently undervalued or overvalued. Should be noted that this approach works best by taking a long term approach when buying stocks, as you may have to weather some short term market irrationality.
Source: my opinion... it's all pretty hand-wavy to me.
The charts I didn't share are the uptrends in promoter scores compared to Intel, i.e. more people are promoting AMD over Intel. This will lead (in time) to the system predicting a buy, as soon as the price drops (it's been steadily going up).
Does that make sense?
How would they use their marketshare?
34% is better than the 30% of last year, but it's still really bad.
Not new territory at all. P/E is not a good indicator of future earnings, because it artificially penalizes capital investments. Cost of revenue is a better metric.
The mining meta has changed. In 2013 it was about squeezing max perf out of cards at any cost. Now the focus is on efficiency. Most miners will undervolt and underclock cards.
In this scenario, cards used for mining see less heat cycle stress than the cold/hot workloads a typical gaming PC sees. The primary stress is on the fans. However, even those are rated for 5-10 years MTBF under constant use.
I might buy the idea that miners don't stress their cards like the other person said, but I'm not convinced by your statement that running an electronic component hard for long periods of time won't increase the risk of failure. Especially in something as high tech and compact as a GPU.
If you are looking to buy those used GPUs for cheap you should avoid any public language that tries to downplay the impact of 24/7 mining.
PS: how does one delete old HN comments? Seems like a major privacy issue to me.
There's no option in the interface, the best option right now is asking nicely per email.
Of course, you can just wait until May and then use the rights the GDPR grants you to force them to delete your comments.
It will/would apply to my posts - or my relationship with ycombinator, for example.
And then it would depend if/how yc would comply; by refusing access from the EU, or aiming for compliance.
I'm not sure how that would work.
Any compliant service is likely to allow self-service (eg: a button to delete a comment; a link to list out all data; an edit function to correct wrong data).
If you're storing personal information and don't comply with the law, you risk a fine. Just as you risk a fine for mismanaging health data, or risk prosecution for storing data that is illegal, like child pornography.
"Information provided under Articles 13 and 14 and any communication and any actions taken under Articles 15 to 22 and 34 shall be provided free of charge. Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:
charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or
refuse to act on the request.
The controller shall bear the burden of demonstrating the manifestly unfounded or excessive character of the request."
This is grossly oversimplified to the point of almost being wrong, but is the general idea of what applies here.
There is countless precedent of the US using these tactics to enforce IP law, and EU countries using them to enforce consumer laws.
It is expected that the EU would do exactly this.
For example, if YCombinator refuses to adhere to the law, but holds shares in an EU startup (and they do in several), then those shares could be seized and auctioned off to enforce the law.
Can AMD or Intel release another generation of chips with Spectre vulnerabilities?
Do we think they're going to have a solution to Spectre within a year?
Can x86 survive without out of order processing? Can any architectures perform at modern levels without it?
Is x86 relevant in the server space if you use Linux/BSD and can recompile your deployables?
Without Spectre I'd be very bullish on AMD. With Spectre, I'm bearish for the entire sector.