While it was a mistake, should it be an amazing price?
Sure, if you want 10x the memory bandwidth of a smaller card, that should be expensive.
But 80GB of GDDR6 would currently cost something like... $300. If you had looked 1-3 years ago, it would have been more like $1000.
GDDR6 is already designed so that you can attach a chip with only 8 data lines. A high-end GPU with a 384-bit memory bus could attach 48 chips that way and, with 16Gb (2GB) chips, have 96GB. Or exactly 80GB on a 320-bit bus (40 chips).
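The arithmetic works out like this (a quick sketch; the 16Gb-per-chip density is my assumption, since it's what the stated totals imply):

```python
# Capacity math for a GDDR6 bus with 8 data lines per chip,
# assuming 16 Gb (2 GB) dies -- the density implied by the totals above.
CHIP_GB = 2          # one 16 Gb GDDR6 die
LINES_PER_CHIP = 8   # data lines per chip in this configuration

def capacity_gb(bus_width_bits: int) -> int:
    chips = bus_width_bits // LINES_PER_CHIP
    return chips * CHIP_GB

print(capacity_gb(384))  # 48 chips -> 96 GB
print(capacity_gb(320))  # 40 chips -> 80 GB
```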
A $1000-1500 retail baseline for the GPU, $250-800 for the extra RAM, $1000+ for the extra design hassle... I think you'd be able to buy that for $3500 if we had better competition.
For a lot of use cases, you want a balance between compute and memory. For big AI models outside of a datacenter, memory is far more important. It's worth putting half the budget into RAM chips if that means your model can fit, even if you "only" get 100 teraflops at FP16.
Tom's Hardware has many other articles on this, mostly older. The newest one (from two weeks ago) is just a small piece revealing that these cards
> can barely render graphics [as in, real time 3D rendering] as they do not have enough special-purpose hardware [...] GH100 only has 24 raster operating (ROPs) units and does not have display engines or display outputs [...] One H100 board scores 2681 points in 3DMark Time Spy, which is even slower than performance of AMD's integrated Radeon 680M, which scores 2710
If Tom's Hardware reviewed cars, I wouldn't be surprised to learn they'd written the following: "Despite being called a 'Ford', the Taurus scored exceptionally poorly in our river crossing tests, doing only roughly as well as other mechanical bulls."
According to Tom's Hardware, one could find 80GB boards for around $3500.