
What the other commenter said is absolutely true, but what's even more important is that the 3080 doesn't seem to support SLI, so you're "stuck" with one 3090 at that budget (the 3090 seems to be the only card supporting SLI this generation).


Not an expert in ML, but I don’t think CUDA uses SLI at all.

SLI is specific to rendering. Depending on the workload, it sometimes makes perfect sense to split GPGPU jobs across multiple GPUs. An extreme example of that approach is cryptocurrency miners, who sometimes put a dozen GPUs in a single computer.
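
To illustrate the point: in CUDA you address each GPU directly with cudaSetDevice, no SLI involved. Rough sketch below (the kernel and sizes are made up for the example, error checks omitted):

    // Split independent work across every visible GPU. Kernel launches are
    // asynchronous with respect to the host, so the devices run concurrently.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);

        const int chunk = 1 << 20;   // elements per GPU (arbitrary for the demo)
        float *bufs[64] = {0};       // assumes at most 64 GPUs

        for (int dev = 0; dev < count && dev < 64; ++dev) {
            cudaSetDevice(dev);      // subsequent calls target this GPU
            cudaMalloc(&bufs[dev], chunk * sizeof(float));
            cudaMemset(bufs[dev], 0, chunk * sizeof(float));
            // Each GPU processes its own slice of the job.
            scale<<<(chunk + 255) / 256, 256>>>(bufs[dev], chunk, 2.0f);
        }
        for (int dev = 0; dev < count && dev < 64; ++dev) {
            cudaSetDevice(dev);
            cudaDeviceSynchronize(); // wait for this GPU to finish
            cudaFree(bufs[dev]);
        }
        printf("ran on %d GPU(s)\n", count);
        return 0;
    }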

The only limitation is that the working set used by each GPU needs to fit in that GPU's VRAM; otherwise the GPUs will bottleneck on I/O rather than compute and be very slow. For ML, this means a setup of two 3080s would be limited to 10GB model sizes.
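
You can check that constraint up front with cudaMemGetInfo. Quick sketch (the 10GB working set here is hypothetical, matching the 3080's VRAM; error checks omitted):

    // Report per-GPU free memory and whether a given working set would fit.
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);
        size_t working_set = 10ull << 30;  // hypothetical 10 GiB model

        for (int dev = 0; dev < count; ++dev) {
            cudaSetDevice(dev);
            size_t free_b = 0, total_b = 0;
            cudaMemGetInfo(&free_b, &total_b);
            printf("GPU %d: %.1f GiB free of %.1f GiB -> %s\n", dev,
                   free_b / (double)(1ull << 30),
                   total_b / (double)(1ull << 30),
                   free_b >= working_set ? "fits" : "would spill over PCIe");
        }
        return 0;
    }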



