AMD has been really solid. I’ve built a number of PCs and I’ve never run into an issue with the CPUs. The R5 2600, 3600, and R7 5800 and 5800X are all surprisingly efficient chips out of the box, but I played around with each and found even crazier undervolt settings. My server PC draws practically nothing unless something is using the (NVIDIA) GPU heavily (and even then it’s like, oh no, is it almost 75 watts? Better call the fire brigade! lmao).
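For what it’s worth, checking that GPU draw is easy to script. A minimal sketch, assuming an NVIDIA card where `nvidia-smi` is available (the 75 W threshold is just the number from my joke, nothing official); it runs against a canned sample reading here so it works even without a GPU present:

```shell
# Real query (needs an NVIDIA GPU + driver installed):
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader
# Canned sample of what that prints, so the comparison below runs anywhere:
sample="73.21 W"

# Strip the unit, then compare against a 75 W threshold.
watts=$(echo "$sample" | awk '{print $1}')
verdict=$(awk -v w="$watts" 'BEGIN { if (w > 75) print "above 75W"; else print "under 75W" }')
echo "GPU draw: $watts W ($verdict)"
```

Swap the canned `sample` for the real `nvidia-smi` call and drop it in a cron job or a watch loop if you want to log idle draw over time.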
And obviously the R7 5800X is just a monster. I’ve consistently seen that it runs hot, but I air cool mine and it never really goes above 85°C under full load on stock settings, and if you play with undervolting at all it’s pretty easy to keep the exact same performance while lowering the total power delivered. Although I’ve found that it still goes up to 85°C – the chip just runs faster…
I mean, I don’t hate Intel – I’ve used their systems exclusively for, I dunno, maybe 25 years. And as Steve Burke says in the video, it’s not as if AMD has never had hardware problems on their CPUs. But this is a pretty insane screw-up on Intel’s part. Like, even if I’m generous and say “Intel had a testing regimen that these passed, because failures didn’t show up initially”, Intel should also have had CPUs that they kept running. Maybe they didn’t know the cause. Maybe they didn’t have a fix worked out, or know whether a software fix was even possible. But they should have known partway through the production run that they had a serious problem, and once they knew, my take is that they shouldn’t have kept selling the things. I would not have picked up the second processor, the 14900KF, if I’d known that they knew two processor generations were affected and didn’t have a fix yet. Sure, companies make mistakes, you can’t completely eliminate that, but they should have handled this a whole lot better than they did.
Like, they could have just said “buy 12th gen instead, we can’t fix the 13th and 14th gen processors, and we’ll restart 12th gen production”, and I would have been okay – irritated, but it’s not like the performance difference here is that large. Instead, for an extended period of time, they sold product that they knew or should have known was seriously flawed.
Plus, it’s not even just the $1500 or whatever in hardware that went into the wastebasket over this – I also blew a ton of time on diagnosing and troubleshooting. All Intel needed to do was say “we know there’s a problem, we haven’t fixed it, these are the parts we know are affected, these are the parts we think are likely affected”, and I wouldn’t have needed to waste my time troubleshooting or go out and buy hardware other than CPUs trying to resolve the issue. Intel had a bunch of bad CPUs. I can live with that. But I expect them to do whatever they can to mitigate the impact on their customers at the earliest opportunity when they’re at fault, and they very much didn’t.
And obviously the R7 5800x is just a monster
I don’t think this is cooling, and the video addresses that too. I initially suspected that cooling (or power) might somehow be a factor, given that one of the use cases I could eventually get to reliably trigger problems was starting Stable Diffusion, so I was inclined to blame voltage or possibly heat. But the video says no: they logged thermal data and their test servers run very conservatively. And the second time around, I kept an eye on the temperatures from the get-go.
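On that note, “keeping an eye on the temperatures” is easy to automate. A minimal sketch, assuming a Linux box with lm-sensors installed (“Tctl” is the usual Ryzen label and “Package id 0” the usual Intel one, but labels vary by board); it prints a placeholder line if no sensor data is available:

```shell
# Take a few temperature samples while the suspect workload spins up.
# Assumes lm-sensors ("Tctl" on Ryzen, "Package id 0" on Intel; labels vary).
samples=$(
    for i in 1 2 3 4 5; do
        reading=$(sensors 2>/dev/null | grep -E -m1 'Tctl|Package id 0')
        echo "$(date +%T) ${reading:-no sensor data}"
        sleep 1
    done
)
echo "$samples"
```

Bump the sample count and redirect to a file if you want a log spanning a whole run rather than a quick spot check.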
It looks like the 5800x has a TDP of 105W.
I switched to a 7950X3D, which has a TDP of 120W, but on both the Intel processors and the AMD one I was using one of these water coolers (definitely overkill on the AMD CPU). I’d never used water-cooling before this system – it was never something I considered necessary until the extreme TDPs of recent Intel processors – but it definitely does keep the processor cool. I probably wouldn’t have bothered getting it had I just been using an AMD CPU, but since I had the thing already… shrugs