In US terms it's very fast. The US lags behind other developed countries in rail, but I hope it can improve. And if it improves with electric propulsion, even better.
China's emergence was inevitable - they have the numbers. The last figure I heard was 200 million people in STEM careers alone. That's more than the entire US workforce.
I expect technological development to explode and my advice is for anyone interested in it to learn Mandarin. Including myself.
There are many cluster boards with an onboard switch that accept pluggable compute modules. Such an arrangement would provide a much denser system. Making a new one, however, requires a lot of work. I'm not even sure how you do Ethernet over PCB traces.
One project I keep telling myself I'll eventually do is to make a cluster board with 32 Octavo SoMs (each with two Ethernet ports, CPU, GPU, RAM, and some flash), and a network switch (or two). And 32 activity LEDs on the side, so a set of 16 boards will look like a Connection Machine module.
I think the idea is to allow developers to write a single implementation and have a portable binary that can run on any kind of hardware.
We do that all the time - there is lots of code that chooses optimal code paths depending on the runtime environment or on which ISA extensions are available.
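A minimal sketch of that pattern in Rust, using the standard library's `is_x86_feature_detected!` macro (the `dot` function and its dispatch structure are illustrative, not from any real crate):

```rust
// Runtime dispatch sketch: probe the CPU once, then pick a code path.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("fma") {
            // A specialized FMA/AVX kernel would be dispatched here;
            // this sketch falls through to the portable path either way.
        }
    }
    // Portable fallback, correct on any ISA.
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    println!("{}", dot(&[1.0, 2.0], &[3.0, 4.0])); // prints 11
}
```

Real implementations (e.g. in codecs and BLAS libraries) typically cache the probe result in a function pointer so the check happens only once.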
Performance purists don't use CUDA either, though (that's why DeepSeek used PTX directly).
Everything is an abstraction, and choosing the right level of abstraction for your use case is a tradeoff between your engineering capacity and your performance needs.
During the build, build.rs uses rustc_codegen_nvvm to compile the GPU kernel to PTX.
The resulting PTX is embedded into the CPU binary as static data.
The host code is compiled normally.
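Those steps could look something like the following; the `cuda_builder` crate and its `CudaBuilder` API come from the Rust-CUDA project (which drives rustc_codegen_nvvm), but the crate paths and output locations here are assumptions for illustration:

```rust
// build.rs -- sketch of the build steps above, assuming the Rust-CUDA
// project's `cuda_builder` crate. Paths are illustrative.
use cuda_builder::CudaBuilder;

fn main() {
    CudaBuilder::new("kernels")            // the GPU kernel crate
        .copy_to("resources/kernels.ptx")  // emit compiled PTX for the host build
        .build()
        .unwrap();
}

// Elsewhere, in the normally-compiled host crate, the PTX would be
// embedded as static data and handed to the driver API at runtime, e.g.:
// static KERNELS_PTX: &str = include_str!("../resources/kernels.ptx");
```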
The issue in my mind is that this doesn't seem to include any of the critical library functionality specific to, e.g., NVIDIA cards: think reduction operations across threads in a warp, and similar. Some of those don't exist on all hardware architectures. We may get to a point where everything can be written in one language, but actually leveraging the hardware correctly will still require a bunch of different implementations, one for each target architecture.
The fact that different hardware has different features is a good thing.
> Though this demo doesn't do so, multiple backends could be compiled into a single binary and platform-specific code paths could then be selected at runtime.
That’s kind of the goal, I’d assume: writing generic code and having it run on anything.
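A toy sketch of that runtime selection, with stub probe functions standing in for real driver/loader queries (all names here are hypothetical):

```rust
// Hypothetical multi-backend selection: several backends compiled into one
// binary, with the best available one picked at startup.
#[derive(Debug, PartialEq)]
enum Backend { Cuda, Vulkan, Cpu }

fn cuda_available() -> bool { false }   // stub: would query the CUDA driver
fn vulkan_available() -> bool { false } // stub: would query the Vulkan loader

fn pick_backend() -> Backend {
    if cuda_available() {
        Backend::Cuda
    } else if vulkan_available() {
        Backend::Vulkan
    } else {
        Backend::Cpu // portable fallback that always exists
    }
}

fn main() {
    println!("selected backend: {:?}", pick_backend());
}
```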
Servers sitting idle is a strange concept. Ideally those resources should be powered down and workloads should be consolidated until the machines reach an optimal level of utilization.
> you can't power down servers when they're not being utilized.
You can’t boot a full OS in seconds, but you can boot a thin hypervisor and have compute resources available almost immediately. The same applies to hard disk drives, which can be spun down, or flash devices, which can be unpowered when not needed.