And polling has its own issues. You have no real control over how often clients poll, so you need a caching layer or an anti-abuse mechanism that tracks API keys and returns 429 when a client polls too frequently (while still permitting clients that poll at a more respectful, slower rate). That caching or anti-abuse layer has its own engineering cost and comes with its own trade-offs.
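A minimal sketch of that anti-abuse layer, assuming a simple per-API-key minimum poll interval (the threshold, storage, and function names here are all illustrative, not any particular framework's API):

```python
# Hypothetical sketch of the anti-abuse layer described above: reject polls
# that arrive sooner than MIN_POLL_INTERVAL after the key's last allowed poll.
MIN_POLL_INTERVAL = 60.0  # seconds a "respectful" client waits between polls

last_poll: dict[str, float] = {}  # api_key -> timestamp of last allowed poll

def check_poll(api_key: str, now: float) -> int:
    """Return an HTTP status: 200 if the poll is allowed, 429 if too frequent."""
    previous = last_poll.get(api_key)
    if previous is not None and now - previous < MIN_POLL_INTERVAL:
        return 429  # polled too soon; client should back off
    last_poll[api_key] = now
    return 200
```

A real implementation would keep this state somewhere shared (Redis, say) rather than in process memory, and would likely use a token bucket so occasional bursts aren't punished.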
You have no control over how often clients poll even if you have webhooks. But it's easy enough to cache /events in cheap ephemeral storage, possibly even at the HTTP level. Use Cloudflare if you want cheap. There shouldn't be much engineering cost here.
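Caching at the HTTP level can be as little as one response header. A hedged sketch, assuming a generic handler shape (the data and 30-second window are illustrative):

```python
import json

# Illustrative in-memory event feed; in practice this would come from a store.
EVENTS = [{"id": 1, "type": "created"}, {"id": 2, "type": "updated"}]

def events_handler() -> tuple[int, dict, str]:
    """Return (status, headers, body) for GET /events."""
    headers = {
        "Content-Type": "application/json",
        # Any cache between the origin and the client may serve this response
        # for 30s, so N clients polling every second cost roughly one origin
        # hit per 30 seconds instead of N per second.
        "Cache-Control": "public, max-age=30",
    }
    return 200, headers, json.dumps(EVENTS)
```

With `public, max-age=30` set, a CDN in front of the origin absorbs the polling load with no application-level rate limiting at all.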
Except that webhooks are often client-private. Consider, for example, GitHub repository webhooks, particularly for private repositories. Storing /events in a global CDN cache is a privacy nightmare. The GitHub API has strict usage quotas, and I'm sure the engineering effort for doing that at GitHub's scale is non-trivial.
So you make the endpoint /clients/CLIENTID/events. What's the issue?
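A sketch of that per-client variant, assuming the response is marked `Cache-Control: private` so a shared CDN cache never hands one client's events to another (handler shape, client IDs, and the 30-second window are illustrative):

```python
import json

# Illustrative per-client event store.
EVENTS_BY_CLIENT = {"c123": [{"id": 1, "type": "push"}]}

def client_events_handler(client_id: str) -> tuple[int, dict, str]:
    """Return (status, headers, body) for GET /clients/{client_id}/events."""
    events = EVENTS_BY_CLIENT.get(client_id)
    if events is None:
        return 404, {}, ""
    headers = {
        "Content-Type": "application/json",
        # "private" forbids shared caches (CDNs, proxies) from storing this
        # response; only the client's own cache may reuse it, for up to 30s.
        "Cache-Control": "private, max-age=30",
    }
    return 200, headers, json.dumps(events)
```

Note the trade-off this exposes: once responses are private, the shared-CDN cheapness from the earlier comment mostly evaporates, and the origin is back to serving every poll.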
Webhooks have engineering issues all their own, including job queues and failure notifications. Stashing events in a table and truncating it every now and then is relatively straightforward.
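The "stash events in a table and truncate it every now and then" approach really is small. A sketch using the standard library's sqlite3; the schema and retention window are assumptions for illustration:

```python
import sqlite3
import time

# In-memory database for the sketch; a real feed would use a persistent file
# or a server-backed database.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, created_at REAL, payload TEXT)"
)

def record_event(payload: str, now: float) -> None:
    """Append one event to the feed."""
    con.execute(
        "INSERT INTO events (created_at, payload) VALUES (?, ?)", (now, payload)
    )

def truncate_old(retention_seconds: float, now: float) -> int:
    """Delete events older than the retention window; return rows removed.

    Run this from a cron job or a periodic task; that's the whole
    'truncating it every now and then' part.
    """
    cur = con.execute(
        "DELETE FROM events WHERE created_at < ?", (now - retention_seconds,)
    )
    return cur.rowcount
```

Compare that to a webhook pipeline: no delivery queue, no retry policy, no per-endpoint failure notifications; clients that miss a window simply read the table on their next poll.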