The smaller DGX Spark features the GB10 Grace Blackwell Superchip with Blackwell GPU and fifth-generation Tensor Cores, delivering up to 1,000 trillion operations per second for AI.
…
Since the systems will be manufactured by different companies, Nvidia did not mention pricing for the units. However, in January, Nvidia mentioned that the base-level configuration for a DGX Spark-like computer would retail for around $3,000.
I am surprised it’s only $3,000 even for a base-level configuration. I would have assumed Nvidia would want to make the most of its monopoly-like position.
AI inference (which is generally hardware agnostic, unlike AI training, which is heavily CUDA-based) favors VRAM. Digits has to compete against both Apple’s M-series chips at max RAM capacity and the recently announced AMD Strix Halo devices, which come with up to 128 GB of shared RAM.
A fully kitted out Strix Halo device with 128 GB of RAM is $2,000; a fully kitted out Mac Studio with 128 GB of RAM is just under $5,000. Nvidia claims its machine starts at $3,000, so the price of a 128 GB configuration is still up in the air.
These desktop systems, first previewed as “Project DIGITS” in January, aim to bring AI capabilities to developers, researchers, and data scientists who need to prototype, fine-tune, and run large AI models locally. DGX systems can serve as standalone desktop AI labs or “bridge systems” that allow AI developers to move their models from desktops to DGX Cloud or any AI cloud infrastructure with few code changes.
If the aim is to get them into the hands of researchers, and those researchers manage to develop useful software that runs on parallel compute hardware, that increases the value of Nvidia’s products.
I’d guess that desktop computers will only ever account for a small portion of Nvidia’s parallel compute hardware sales, and they’re not what the parties with deep pockets are mostly interested in.