One consideration for me, regardless of the exact burn rate on inference, is the assumed increase in revenues via higher fees. One of the bull cases I often see is that the hockey-stick revenue growth continues longer/higher than the hockey-stick cost growth. Then it all prints money because people are spending 10x/100x/1000x what they are today.
In the real world...
Where I work, AI is used heavily, and we are already tipping into cost-management mode at the firm level. Users are being aggressively steered to cheaper models, usage is being throttled, and cost-attribution reports are being sent out. This is happening at under $1k/mo per user, so there are already some indications of revenue per user leveling out.
Meanwhile, everyone I know who works anywhere near a computer has had AI shoved down their throat: training, usage KPIs, annual goal setting, and mandated engagement. So we are already pretty saturated; it's not like there are giant new frontiers of new users.