It’s using graphene, so we’ll see this ship right alongside the hundreds of other promised graphene innovations... who knows when?
Still about 100 picoseconds too slow for my taste.
400 too slow for my use case; we’re trying to violate causality.
The human eye can’t even perceive faster than 1000 picoseconds, so…
Really? I would have guessed the eye was 6 orders of magnitude slower than that.
What, you can’t measure the size of a room by timing the bounces of light hitting the walls?
No! I didn’t know that’s how you guys were doing it. I feel silly for using perspective and the slight differences from my right and left eyes to judge distance this whole time!
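For anyone curious, the time-of-flight ranging the joke leans on is just one multiplication: fire a pulse, time the echo, and the distance is half the round trip at light speed. A quick sketch (the 33.36 ns round-trip figure is my own illustrative number, not from the thread):

```python
# Time-of-flight ranging: distance = c * round_trip / 2,
# since the light travels to the wall and back.
c = 299_792_458  # speed of light, m/s

def distance_m(round_trip_s):
    """Distance to a reflector given the round-trip echo time."""
    return c * round_trip_s / 2

# A wall ~5 m away returns an echo in roughly 33 nanoseconds,
# so you need timing resolution well below that to "see" the room.
print(distance_m(33.36e-9))  # about 5.0 (meters)
```

Which is why picosecond-scale timing actually matters for this trick: 400 ps of light travel is only about 6 cm each way.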
For those, like me, who wondered how much data was written in 400 picoseconds, the answer is a single bit.
If I’m doing the math correctly, that’s write speeds in the 10s-100s GBps range.
If it’s sustainable.
1 bit / 400 picoseconds is 2.5 Gbit/s, or roughly 10x slower per pin than GDDR7 (the 5090 runs 28 Gbit/s per pin across a 512-bit bus).
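For anyone who wants to sanity-check that arithmetic, here it is spelled out (assuming a single cell written serially, since that’s all the result demonstrates):

```python
# Back-of-the-envelope: one bit every 400 ps, written serially.
write_time_s = 400e-12              # 400 picoseconds per bit
bits_per_s = 1 / write_time_s
gbit_per_s = bits_per_s / 1e9
print(gbit_per_s)                   # ~2.5 Gbit/s, i.e. ~0.3 GB/s

# Compare: GDDR7 on an RTX 5090 runs 28 Gbit/s per pin, 512-bit bus.
gddr7_pin_gbit = 28
print(gddr7_pin_gbit / gbit_per_s)  # ~11x faster per pin
print(gddr7_pin_gbit * 512 / 8)     # 1792.0 GB/s aggregate bus bandwidth
```

So the earlier “10s-100s GBps” estimate only holds if you can write hundreds of cells in parallel, which the result doesn’t show.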
To be fair, this is non-volatile memory, so the closest real comparison might be Intel Optane. The speeds actually seem somewhat comparable to DDR5, though even DDR5 is starting to run into physical distance and timing issues. The real questions will be around density, cost, and reliability.
Other than just making everything generally faster, what would be a use-case that really benefits the most from something like this? My first thought is something like high-speed cameras; some Phantom cameras can capture hundreds, even thousands of gigabytes of data per second, so I think this tech could probably find some great applications there.
There are servers using SSDs as a direct extension of RAM. SSDs don’t currently have the write endurance or the latency to fully replace RAM; this solves one of those.
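For context, the usual mechanism for "SSD as RAM extension" is file-backed memory mapping: the OS pages data between storage and RAM on demand, so a file on the SSD looks like ordinary memory. A minimal sketch in Python (using a throwaway temp file as the backing store):

```python
import mmap
import os
import tempfile

# Create one page of backing storage on disk (stands in for the SSD).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    path = f.name

# Map the file into memory: writes look like plain memory stores,
# and the OS pages them back to storage.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[:5] = b"hello"  # ordinary slice assignment into "RAM"
        m.flush()         # force the dirty page back to the file

with open(path, "rb") as f:
    print(f.read(5))      # b'hello' -- survived the round trip to disk

os.remove(path)
```

The latency gap is exactly the page-fault path in the middle; memory this fast and non-volatile would make that path largely disappear.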
Imagine, though, if we could unify RAM and mass storage. That’s a major assumption in the memory hierarchy that goes away.
So… How many cycles can it withstand?
At least 1