
IIRC he got millions of dollars in funding.

What does he need more funding for? How would he "natively train" models to write Bend? And why would that approach fare better than just scaling, given the bitter lesson others have brought up?



Fair point, maybe not more funding then. But I wonder why the big labs hesitate to collaborate or partner with him. He has such an interesting niche with enormous potential that a partnership with someone like NVIDIA or HuggingFace could be a win-win. I wonder what gives.


How would they partner with him? Even with RLVR I don't understand what "native training" is or how it would work, and apparently neither do the people in the big labs (who, unlike any of us, are LLM experts).

I agree he has lots of potential and that what he's demonstrated deserves funding, but I don't see why he needs more, or even what he'd do with it.



