Hacker News

I think there is plenty of room to make AI inference much more energy efficient. For example, several companies are developing custom silicon built specifically for running models. Once that technology matures and we have some "good enough" models for everyday use, inference cost for non-bleeding-edge models can come way down.

I don't expect bleeding-edge models to become any cheaper, but previous-generation models can potentially become really cheap.
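To make the intuition concrete, here's a back-of-envelope sketch of electricity cost per million tokens. All the numbers (wattage, throughput, $/kWh) are made-up illustrative figures, not measurements of any real chip:

```python
# Back-of-envelope inference cost sketch. All figures are hypothetical.
def cost_per_million_tokens(power_watts, tokens_per_second, usd_per_kwh=0.10):
    """Electricity cost (USD) to generate one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kwh = power_watts * seconds / 3_600_000  # watt-seconds -> kWh
    return kwh * usd_per_kwh

# Hypothetical comparison: general-purpose GPU vs. custom inference silicon.
gpu = cost_per_million_tokens(power_watts=700, tokens_per_second=50)
asic = cost_per_million_tokens(power_watts=300, tokens_per_second=500)
```

With those assumed numbers the custom chip is roughly 20x cheaper per token, which is the kind of gap that could make older models nearly free to serve even if frontier models stay expensive.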




