

I get the analogy, but it’s more like you have an old version of the software that can’t defend you against the newer, fancier stuff. A continuous cat-and-mouse game.
Not economical. Storage is already made on far larger fab nodes than CPUs and other components; this is a case where higher density actually can be cheaper. "Mature" nodes are most likely cheaper than "ancient" ones simply due to age and efficiency. (See also the disaster in the auto industry during COVID: carmakers stopped ordering parts made on ancient process nodes, so those lines were shut down permanently due to cost. After COVID, fun times for the automakers that had to modernise.)
Go compare prices: a new NVMe M.2 drive will most likely be cheaper per TB than a 2.5" SATA one. The extra plastic shell, extra shipping volume and the SATA controller make up the difference; 3.5" would make it even worse. In the datacenter, we are moving towards "rulers", with 61TB available now and probably 120TB soon. Those are expensive, but the cost per TB is actually not that horrible compared to consumer drives.
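
If you want to check that claim yourself, it’s just price divided by capacity. A throwaway sketch; every number below is an invented placeholder, so plug in current street prices:

#include <cstdio>

int main() {
    // Name, capacity in TB, price in USD - all invented placeholders.
    struct Drive { const char* name; double tb; double usd; };
    const Drive drives[] = {
        {"NVMe M.2 (consumer)",      2.00,  110.0},
        {"SATA 2.5\" (consumer)",    2.00,  130.0},
        {"61.44TB ruler (DC)",      61.44, 5500.0},
    };
    for (const Drive& d : drives)
        printf("%-24s $%7.2f/TB\n", d.name, d.usd / d.tb);
}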
Tape will survive, SSDs will survive. Spinning rust will die.
Nope. Larger chips mean lower yields in the fab, which means more expensive. This is why we have chiplets in our CPUs nowadays: production cost of a chip is superlinear in its size.
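
To make "superlinear" concrete: under the usual Poisson yield model, yield drops exponentially with die area, so the cost of a good die grows faster than linearly. A back-of-the-envelope sketch; the defect density and wafer cost are invented round numbers:

#include <cmath>
#include <cstdio>

int main() {
    // Invented round numbers, for illustration only.
    const double defects_per_cm2 = 0.1;  // defect density D0
    const double wafer_cost = 10000.0;   // USD per processed 300mm wafer
    const double wafer_area = 70000.0;   // usable mm^2, ignoring edge loss

    const double sizes_mm2[] = {100.0, 200.0, 400.0, 800.0};
    for (double die : sizes_mm2) {
        // Poisson yield model: Y = exp(-area * D0)
        double yield = std::exp(-(die / 100.0) * defects_per_cm2);
        double dies_per_wafer = wafer_area / die;
        double cost_per_good_die = wafer_cost / (dies_per_wafer * yield);
        printf("%4.0f mm^2: yield %5.1f%%, ~$%6.1f per good die\n",
               die, 100.0 * yield, cost_per_good_die);
    }
}

At these numbers, one 800 mm^2 die costs roughly twice as much as the eight 100 mm^2 chiplets you could build it from, before packaging and interconnect overhead claw some of that back.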
It’s not the packaging that costs money or limits us, it’s the chips themselves. If we crammed a 3.5" form factor full of flash storage, it would be far outside the budgets of mortals.
Why? We can cram 61TB into a slightly overgrown 2.5" already, and something like half a PB per rack unit.
# echo "SELINUX=enforcing" > /etc/selinux/config
# echo "SELINUXTYPE=mls" >> /etc/selinux/config
# reboot
Come on, it will be fun!
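
And when (if) the box comes back up, you can admire your new reality with the standard tools:

# getenforce
# sestatus

(getenforce should now say Enforcing, and sestatus will show the mls policy - assuming the MLS policy package is actually installed, otherwise it may not come back up at all.)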
#define yeet throw
#define let const auto
#define mut &
#define skibidi exit(1)
The future is now!
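
For the uninitiated, a hypothetical snippet showing what those macros expand to (assumes the four #defines above are in scope):

#include <cstdlib>
#include <stdexcept>

int grow(int mut x) {                          // expands to: int grow(int & x)
    if (x < 0)
        yeet std::invalid_argument("nope");    // expands to: throw std::invalid_argument("nope");
    let doubled = x * 2;                       // expands to: const auto doubled = x * 2;
    return doubled;
}

int main() {
    int v = 21;
    if (grow(v) != 42)
        skibidi;                               // expands to: exit(1);
}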
I agree with you, mostly. Margins in the datacenter are thin for some players. Not Nvidia: they are at something like 60% pure profit per chip, even after software and R&D. That will have an effect on how we design stuff over the next few years.
I think we’ll need both "GPUs" and traditional CPUs for the foreseeable future: GPU-style for bandwidth- or compute-constrained workloads, and CPU-style for latency-sensitive workloads or pointer chasing. Now, I do think we’ll slap them both on top of the same memory, APU-style à la the MI300A.
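
The split is easy to demonstrate on any machine: a streaming sum is bandwidth-bound, while a dependent pointer chase is bound by memory latency no matter how wide the core is. A minimal sketch; the sizes are arbitrary, just keep them bigger than your last-level cache (compile with -O2):

#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

int main() {
    const size_t n = 1 << 24;  // ~16M elements, well past the last-level cache

    // Bandwidth-bound: contiguous streaming sum.
    std::vector<int> data(n, 1);
    auto t0 = std::chrono::steady_clock::now();
    long sum = std::accumulate(data.begin(), data.end(), 0L);
    auto t1 = std::chrono::steady_clock::now();

    // Latency-bound: chase a single random cycle. Sattolo's shuffle
    // guarantees one big cycle, so every load depends on the previous one.
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng(42);
    for (size_t i = n - 1; i > 0; --i) {
        size_t j = rng() % i;  // j in [0, i): Sattolo, not Fisher-Yates
        std::swap(next[i], next[j]);
    }
    size_t p = 0;
    auto t2 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < n; ++i) p = next[p];  // dependent loads
    auto t3 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    printf("streaming sum: %ld in %.1f ms\n", sum, ms(t1 - t0).count());
    printf("pointer chase: ended at %zu in %.1f ms\n", p, ms(t3 - t2).count());
}

On typical hardware, the chase is dramatically slower per element, and that gap is the whole argument for keeping a latency-optimised core next to the throughput engine.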
That is, as long as x86 has the single-threaded advantage, RISC-V won’t take over that market, and as long as GPUs have higher bandwidth, RISC-V won’t take over that market either.
Finally, I doubt we’ll see a performant RISC-V chip from China in the next decade - they simply lack the EUV fabs. From outside China, maybe, but the demand isn’t nearly as large.