They're not just tacking more cores on because the process has improved enough to fit more on the same silicon.
The chips are more complex than the previous models, with three dies under the hood instead of one. The high-end chips are closer to Threadripper than to the models they're replacing.
I think $750 is still a ridiculously good price, and Intel's feet are being held to the fire.
The 1950X cost $1,000 two years ago, and it looks like you can pick one up for $500 now. Both are 16-core parts built from two chiplets.
1950X advantages:
- 4 memory channels vs 2
- 64 PCIe lanes vs 24 (16 GPU + 4 NVMe + 4 chipset)
3950X advantages:
- 64MB of L3 cache vs 32MB
- DDR4 3200 vs DDR4 2666
- The PCIe lanes are 4.0 vs 3.0
- 3.5GHz base clock vs 3.4
- 4.7GHz boost clock vs 4.0
- 15% better instructions per clock
- Full 256-bit AVX2 units instead of emulating them with two 128-bit units
- 105W vs 180W TDP
The 3950X sounds like quite the improvement! Definitely a great deal IMO.
The biggest difference between the two, IMO, is that TR4 boards have 8 DIMM slots whereas Ryzen maxes out at 4 even with X570, AFAIK.
Ryzen 3000 specs say it can support 128GB of RAM, but it's hard to find 32GB DIMMs on the market.
So if you're trying to build a workstation with lots of RAM and more than one GPU, the Ryzen boards are too limited even if you're willing to buy a nice one.
Most high-end AM4 motherboards have sufficient clearance to fit a TR cooler mounted via an adapter plate, so buying one now for a later upgrade might be possible.
Note: I have a TR cooler running on my AM4 board (custom loop though so not completely comparable) and there is more than sufficient space to place it.
You can't use an AM4 motherboard[1] with the 1950X - you have to use an X399-chipset/TR4 motherboard, which cost more than AM4 boards (and likely have adequate room for TR coolers)
This was a response to the idea of using a Ryzen as an alternative to a 1950X and solving possible thermal issues if they occurred. I never mentioned using a TR on an AM4 board.
It appears you may have misunderstood the comment you were replying to upthread - the original debate was whether buying a 1950X at $499 would be cheaper/better than a $750 Ryzen. @lhoff pointed out that even when the 1950X is cheaper, you'd still need to buy a relatively expensive cooler and mobo (for TR), meaning you won't be saving (much) on older tech. Thermal issues weren't the subject (except as an explanation of why TR4 coolers are expensive).
In turn, I misunderstood your reply to @lhoff because, in that context, I read it as a rebuttal of the idea that TR parts are expensive, suggesting an AM4 mobo + TR4 cooler as substitutes in a 1950X system.
The X570 boards won't be cheap and are probably comparable in price. Signal-quality requirements for PCIe 4.0 are pretty significant. Depending on your needs, an older Threadripper might be a better move. I'm not that convinced and will probably go with a 3950X in September (unless the next generation of TR is compelling enough to be worth the longer wait).
You can also buy into TR4 with a Threadripper 1950X for ~$500 [0]. I'm using one to write this post. It's a great system, and you can likely buy used for less than $500. I'm also assuming this price will fall further once they announce another TR4 chip.
I think the whole-system price basically ends up a wash, with TR4 motherboards being at least ~$300 and needing a ~$100 cooler; I'm assuming these consumer chips will continue to ship with bundled coolers. The 3950X also draws 75W less than the 1950X, so you can probably save a few bucks on the power supply and, of course, on your electric bill over time.
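For a rough sanity check, here's the ballpark math; the 1950X/board/cooler prices are from this thread, while the X570 board price is my own guess:

```python
# Ballpark whole-system comparison; the AM4 board price is an assumption.
tr_build = 500 + 300 + 100   # 1950X + cheapest TR4 board + TR4 cooler
am4_build = 750 + 250 + 0    # 3950X + X570 board (guess) + bundled cooler
print(tr_build, am4_build)   # 900 vs 1000 -- close enough to call it a wash
```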
The performance comparison will be interesting though. The 3950X should be quite a bit faster than the 1950X when it's not bottlenecked by memory bandwidth, but of course the 1950X still has twice the memory channels. Slightly offset by the Zen2 memory controller supporting higher frequency RAM. So which one is better will depend heavily on workload. I suspect that for a developer workstation the 3950X would be the better performer, most compilation workloads are not very sensitive to bandwidth.
Yea, the platform cost is higher. I ended up making a build you could put together for ~$1,500 now; it was ~$1,600 when I made it. The biggest feature I'm interested in is the availability of PCIe lanes. I want this for adding a 10G NIC later and two GPUs at some point as well (host & guest).
If you don't need those features, you're completely correct about the 3950X.
My biggest problem with virtualization is USB. I have a libvirt setup with GPU passthrough that works great, but I've been unable to get a USB controller of any sort to pass through; it always winds up in a group with a bunch of other PCIe devices. And ordinary forwarding with SPICE or something isn't really sufficient for what I'd like to set up...
It's technically a security risk, but take a look at the ACS override patch that's out there. It forcefully splits up the IOMMU groups so the hypervisor can do the passthrough, but it does mean that devices that were in the same group before can technically see each other's DMA and other traffic on the bus. For anything other than a shared host it's pretty much fine, though.
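If you want to see how your controllers are grouped before going down that road, something like this works on any Linux host with the IOMMU enabled - it's just my own quick sysfs walk, not anything from libvirt; the lspci call is only there for readable names:

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each, via sysfs.

Handy for checking whether a USB controller shares a group with
other devices before attempting VFIO passthrough.
"""
import os
import subprocess

GROUPS = "/sys/kernel/iommu_groups"

for group in sorted(os.listdir(GROUPS), key=int):
    devices = os.listdir(os.path.join(GROUPS, group, "devices"))
    print(f"IOMMU group {group}:")
    for dev in sorted(devices):
        # lspci -nn -s prints e.g. "03:00.3 USB controller [0c03]: ..."
        desc = subprocess.run(
            ["lspci", "-nns", dev], capture_output=True, text=True
        ).stdout.strip()
        print(f"  {desc or dev}")
```

With the ACS override patch applied, IIRC the boot parameter is pcie_acs_override=downstream,multifunction (or whichever subset you need) - rerun the script after rebooting and the groups should be split up.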
This should be doable on desktop Ryzen. I currently have one GPU (x16) + NVMe SSD (x4) + 10G NIC (x4 from the chipset). The x16 can be split x8/x8 for dual GPU.
My single port Intel X520 achieves line rate through x4 chipset lanes just fine.
GPUs generally don't come close to saturating x8 3.0 lanes, unless you have a very specific workload (like the new 3dmark bandwidth benchmark AMD used to demo PCIe 4.0).
Games don't do nearly enough asset streaming to use a lot of bandwidth, since the amount of assets used at the same time is limited by VRAM size, and most stuff is kept around for quite some time. Offline 3D renderers like Blender Cycles IIRC just upload the whole scene at once and then path tracing happens in VRAM without much I/O. For buttcoin mining, people literally use boards with tons of x1 slots + risers. No idea how neural nets behave, but would make sense that they also just keep updating the weights in VRAM.
Except this is PCIe 4.0 vs PCIe 3.0, so it's double the bandwidth per lane - no, x8 PCIe 4.0 will not bottleneck anything. Unfortunately GPUs don't support PCIe 4.0 yet, but it's not a problem for integrated 10GbE.
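Back-of-the-envelope numbers, using per-lane throughput after encoding overhead:

```python
# Per-direction PCIe throughput in GB/s per lane, after encoding overhead
# (8b/10b for gen1/gen2, 128b/130b for gen3/gen4).
lane_gbs = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def link_gbs(gen, lanes):
    return lane_gbs[gen] * lanes

print(f"x4 gen3: {link_gbs(3, 4):.1f} GB/s")  # ~3.9 -- plenty for a 10G NIC
print(f"x8 gen3: {link_gbs(3, 8):.1f} GB/s")  # ~7.9
print(f"x8 gen4: {link_gbs(4, 8):.1f} GB/s")  # ~15.8 -- same as x16 gen3
print(f"10GbE is only {10 / 8:.2f} GB/s")     # 1.25, ~3x headroom on x4 gen3
```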
Yes, the Navi cards of course support gen4, and even Vega 20 did too - at least in the original Instinct variant (most places on the internet say that gen4 was cut on the consumer Radeon VII card).
I do wonder if they'll continue with Threadripper. If it exists in the next generation, it might simply be as rebadged and slightly nerfed EPYC chips rather than something custom.
AMD has already stated that they're going to continue with Threadripper.
It would leave a fairly big gap in the lineup with nothing to compete against Intel's X299 platform. AM4 is lacking in memory channels and PCIe lanes. Epyc has much lower clockspeeds, much more expensive CPUs, and more expensive motherboards than Threadripper.
> it might simply be as rebadged and slightly nerfed EPYC chips
Well, that is what first-gen Threadripper was. Same socket and all, but with half the memory channels connected and a pin telling the motherboard it's not EPYC.
The first-gen Threadripper only had two dies, and even the WX series was a bit weird internally, with two of the four dies not being able to perform IO.
I know it's not a big difference, but given the changes to IO and the 16-core consumer part, I don't see why there would be any internal difference from EPYC this time around (which this article claims will have a variable number of chiplets).
I hope so as well; moreover, I hope they release a 64-core TR next year that could last a decade (even if it costs $2,500+). There are rumors Zen 3 will bring 4-way SMT, i.e. 4 threads/core instead of 2, which might lead to ridiculous thread counts in normal systems.
As Lisa said, TRs were distinct from EPYCs; I guess using UDIMMs vs RDIMMs and the much higher base clocks (except for the high-frequency EPYC 7371) led to a few changes.
It almost makes me think the TDP of either the 3800X or 3700X is a publishing mistake. 3.6/4.4 to 3.9/4.5 on the same hardware can't possibly require such a dramatic voltage increase to go from 65W to 105W.
If it does, it means the process is struggling to produce chips at that speed, so the headroom is incredibly low and you can forget about overclocking.
It doesn't take much of a voltage increase at all to send power usage skyrocketing.
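Dynamic power goes roughly as C·V²·f, so a modest voltage bump compounds with the clock bump. With made-up but plausible voltages for the 3700X/3800X base clocks:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f; the capacitance C
# cancels out when comparing the same die at two operating points.
def relative_power(v0, f0, v1, f1):
    return (v1 / v0) ** 2 * (f1 / f0)

# Hypothetical voltages; 3.6 and 3.9 are the 3700X/3800X base clocks.
print(relative_power(v0=1.20, f0=3.6, v1=1.38, f1=3.9))  # ~1.43, i.e. +43%
```

And that's before leakage, which also climbs with voltage, so 65W to 105W isn't crazy.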
That said, the part you're missing is binning. The 3800X definitely gets the worst-binned chiplets, as evidenced by the 3900X and 3950X having the same TDP.
TDP is just a number these days. You're not getting 4 more cores for free, you're getting a cap put on your all-core power consumption at a lower clock rate. More cores, lower all-core boost clocks (the advertised clocks are single-core boost).
Even then, both AMD and Intel CPUs will pull significantly above their rated TDP when boosting. It's not quite a base-clock measurement (e.g. the 9900K runs more like 4.3-4.4GHz when 95W-limited), but it's definitely not a boost power measurement either.
Again, pretty much just a marketing number these days.
> You're not getting 4 more cores for free, you're getting a cap put on your all-core power consumption at a lower clock rate.
There's a mere 100MHz difference in base clock, which is what TDP is based on. Nowhere close to enough of a reduction to fully explain +50% cores at the same TDP.
> Again, pretty much just a marketing number these days.
Not really, no. You just need to understand that it represents the thermal design target at the all-core base frequency, not maximum power draw.
It's still based in reality, though. It's not some random made up number.
And binning is an extremely real thing with very significant impact. Not sure why you seem to be trying to outright dismiss it.
Same as with Threadripper, which eats 300W or so at 4GHz with the applicable overvolting. (Rated 180W; you can overvolt it hard, even to 1.55V, if you can dissipate 450W, to get a humongous 4.2GHz all-core.)
To be fair, 8-core/16-thread Ryzen chips have been available for as little as ~$200 for years now. With this release the respectable 2700X will probably fall pretty low in price until stocks run out.
Which can easily pull 200W+ when I test out my R7 1700 clocked up to 4.1GHz at 1.375V. Of course, I know TDP isn't at all representative of actual power draw once you stop running things at stock settings. It only cost me $159 on sale and it's easy enough to keep cool, so no real complaints here.
Considering that Intel charges almost $600 for their top 8-core, yes, that would be ridiculous. There were some leaked/rumored prices for Ryzen 3xxx and Navi that were way too low, and I didn't understand why AMD would almost give away such good products; the simple truth is that they won't.
The Ryzen 7 1800X, the Zen flagship, was $499.
The Ryzen 7 2700X, the Zen+ flagship, was $329.
That was a 34% decrease in price for comparable models after one generation. If that trend holds, the comparable Zen 2+/Zen 3 model will only be around $500. So hopefully you just need to wait a year.
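Quick check of that projection, with the ~$750 3950X standing in for the new flagship:

```python
drop = 1 - 329 / 499     # ~0.34: the 1800X -> 2700X price cut
print(750 * (1 - drop))  # ~494: projected next-gen 16-core flagship price
```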
The 2700X price cut occurred in the context of Intel releasing a 6C12T processor for $350 that kept up with AMD's $500 processor. i.e. very close in multithreaded performance and easily beating it in single-thread.
That's not going to happen this time around. Intel doesn't really have a response to 16C consumer processors. The best thing they can do is release the 10C chip they're working on... probably at $500 again. And it will still be behind the 12C part AMD already has at $500.
The only similarly aggressive move that Intel could even make would be to drop 10C to the $350 segment (perhaps with Hyperthreading disabled), which would be a massive blow to their margins.
I bet there will be such chips. They'll just have a few cores disabled. It's not like they make a separate die for every core configuration - that wouldn't make sense.