The notch itself is probably considered temporary internally. If you code a rule around the notch, you'll have to consider which hardware macOS is running on in order to determine whether a notch is even present for your "notch width calculation."
Well, there's effectively no space on the left-hand side of the notch. You have to assume that side will be completely consumed by actual menu items.
Side note: if you want to check which icons might be buried by the notch, you can Cmd + drag any icon in the menu bar to rearrange it. If you drag an icon through the notch, any hidden items will pop into view.
It didn't use to be like this, though, and it feels bad to be a rat in a cage with YT. It's an unwinnable situation: choosing between excessive ads and paying the racketeer to be safe from the racket. It really exemplifies their de facto monopoly on internet video, and it makes me feel bad.
You can do this manually on the free plan. Mouse over the playhead and you'll see a graph. You can click the highest point in the graph, which is where most people clicked through to.
It's crazy the lengths people will go to to avoid paying a few dollars a month. I don't believe people commenting on this website are that squeezed for money. Kind of bizarre.
It's just repackaged Google results masquerading as an "answer." PageRank pulled results and displayed the first 10 relevant links; the LLM pulls tokens and displays the first tokens relevant to the query.
1. LLMs can translate text far better than any previous machine translation system. They can even do so for relatively small languages that typically had poor translation support. We all remember how funny text would get when you ran it English -> Japanese -> English. With LLMs you can do that (even using a different LLM for the second step) and the text remains very close to the original.
2. Audio-input capable LLMs can transcribe audio far better than any previous system I've used. They understand my speech without problems. YouTube's old closed captioning system wasn't anywhere close to as good, and Microsoft's was unusable for me. LLMs have no such problems (which makes me wonder if my speech patterns are in the training data, since I've made a lot of YouTube videos, and that's why they work so well for me).
3. You can feed LLMs local files (and run the LLM locally). Even if it is "just" pagerank, it's local pagerank now.
4. I can ask an LLM questions and then clarify what I wanted in natural language. You can't really refine a Google search that way; adding more detail to a Google search usually doesn't help.
5. Iye mkx kcu kx VVW dy nomszrob dohd. Qyyqvo nyocx'd ny drkd pyb iye. - Google won't tell you what this means without you knowing what it is.
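For reference, the ciphertext in point 5 is a plain letter-shift (Caesar) cipher, which you can brute-force in a few lines. This is my own illustration, not something from the thread; `caesar_shift` is a hypothetical helper name:

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift every ASCII letter by `shift` positions, preserving case."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return ''.join(out)

ciphertext = "Iye mkx kcu kx VVW dy nomszrob dohd."
# Only 25 non-trivial shifts exist, so print them all and eyeball the output.
for shift in range(1, 26):
    print(f"{shift:2d}: {caesar_shift(ciphertext, shift)}")
```

One of the 25 candidate shifts comes out as readable English, which is exactly the kind of "you have to already know what it is" step an LLM skips.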
LLMs aren't magic, but I think they can do a whole bunch of things we couldn't really do before, or at least couldn't have a machine do well.
> The ads aren’t going into your paid plans (except maybe a highly discounted tier, depending on the market). The ads are a play to offer a free version. Having an ad-supported free tier isn’t new.
This doesn't contradict the original statement: ads are going into GPT, which Sam called a last resort.
> The discussion about being unprofitable also repeats the reductionist view that these companies are losing money and therefore the business model doesn’t work. This happens with every VC cycle where writers don’t understand that funded companies are supposed to lose money while they grow. That’s what the investment money is for.
Propped-up companies usually don't survive long term once the VC subsidy runs out. There's a difference between taking VC money to buy rocket parts and taking VC money so you can charge $7 for something that really needs to cost $10. The latter problem never goes away.