This is true: we all have tens or hundreds of priceless saved links. But I'd claim 90% of them are forgotten within a day or two. Maybe that's something small language models could actually fix?
Spot on — the "90% forgotten" problem is real. I think the fix isn't really about the model size, though; it's about surfacing the right thing at the right moment. If the system can detect what you're working on and push relevant saved knowledge to you proactively, you don't need to remember what you saved in the first place. The hard part is getting the context matching precise enough to be helpful without being noisy.
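To make the "context matching precise enough to be helpful without being noisy" point concrete, here's a toy sketch of the idea, not a real implementation: it scores saved links against a description of the current task and only surfaces those above a similarity threshold. All names (`surface`, `embed`, the example URLs) are hypothetical, and the bag-of-words cosine stands in for whatever embedding model a real system would use; the `threshold` parameter is exactly the precision-vs-noise dial being discussed.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a small embedding model.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def surface(current_context, saved_links, threshold=0.2):
    # Return saved links similar enough to the current working context.
    # Raising the threshold trades recall for precision: fewer, more
    # relevant links surface; lowering it lets more noise through.
    ctx = embed(current_context)
    scored = ((cosine(ctx, embed(note)), url) for url, note in saved_links.items())
    return [url for score, url in sorted(scored, reverse=True) if score >= threshold]

# Hypothetical saved links, each with the note you attached when saving.
saved = {
    "https://example.com/rust-borrowck": "notes on the Rust borrow checker and lifetimes",
    "https://example.com/sourdough": "sourdough starter hydration schedule",
}
print(surface("debugging a borrow checker lifetime error in Rust", saved))
```

The interesting design question is where the `current_context` string comes from in practice — the open editor buffer, recent terminal commands, the active browser tab — since that signal quality determines whether proactive surfacing feels helpful or noisy.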