There are sequential dependencies, so you can't just arbitrarily increase speed by parallelizing over more GPUs: every token depends on all previous tokens, and within a forward pass every layer depends on the layer before it. You can, however, make a model arbitrarily slow by using fewer, slower GPUs (or none at all).
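For what it's worth, here's a toy sketch of that dependency chain (pure Python; `toy_next_token` is a hypothetical stand-in for a full forward pass, not any real model). The point is structural: each step's input includes the previous step's output, so the steps can't run concurrently no matter how many GPUs you have.

```python
def toy_next_token(tokens):
    # Hypothetical "model": the next token is a function of the entire prefix.
    return (sum(tokens) * 31 + len(tokens)) % 50257

def generate(prompt, n_new):
    tokens = list(prompt)
    for _ in range(n_new):
        # This call needs every token produced so far -- a strict
        # sequential chain of length n_new that parallelism can't break.
        tokens.append(toy_next_token(tokens))
    return tokens

out = generate([1, 2, 3], 5)  # 3 prompt tokens + 5 generated
```

Adding hardware can make each `toy_next_token` call faster (parallelism *within* a step), but it can't overlap step N with step N+1.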
Yes, because speculation has NEVER bitten us in the ass before, right? Coughs in Spectre
Speculative decoding is just spending extra compute to get the same prediction faster: a small draft model guesses several tokens ahead, the big model verifies them in one batched pass, and any rejected guesses are thrown away. Essentially, setting more money on fire — though if you're being billed per token, it's the provider's money burning.
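A toy sketch of the greedy variant, to make the "wasted compute" concrete (both "models" here are hypothetical stand-ins; a real setup would be a small and a large network, with step 2 as one batched forward pass):

```python
K = 4  # how many tokens the draft model guesses ahead

def target_next(tokens):   # expensive model (stand-in)
    return (sum(tokens) * 31 + len(tokens)) % 100

def draft_next(tokens):    # cheap model that agrees most of the time
    t = target_next(tokens)
    return t if len(tokens) % 5 else (t + 1) % 100  # occasional miss

def speculative_step(tokens):
    # 1. Draft K tokens sequentially with the cheap model.
    draft = list(tokens)
    for _ in range(K):
        draft.append(draft_next(draft))
    # 2. Verify all K positions at once with the target model.
    checks = [target_next(draft[:len(tokens) + i]) for i in range(K)]
    # 3. Keep the longest agreeing prefix, plus one corrected token.
    accepted = list(tokens)
    for i, t in enumerate(checks):
        accepted.append(t)
        if t != draft[len(tokens) + i]:
            break  # everything after this point is discarded work
    return accepted

seq = [1, 2, 3]
while len(seq) < 20:
    seq = speculative_step(seq)
```

The output is token-for-token identical to running the big model alone; the speedup comes from verifying K positions in one pass, and the cost is every drafted-then-rejected token.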