I am struggling with finding a good model for desktop apps. The subscription model always seems to yield the most money, but I too dislike subscriptions.
The one-shot option seems attractive, but the desktop (macOS at least) app market is actually so niche that the SAM is somewhere in the low thousands. So if I offered a one-time $100 app, I'd have $100k before taxes. And for that revenue, there's development, marketing, plus support and maintenance. So to match a dev's salary, I'd need to ship 2-3 successful apps a year, which I'd also have to maintain for a long time.
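Rough back-of-the-envelope in Python; every number below (market size, store cut, salary) is an illustrative assumption, not data:

```python
# Back-of-the-envelope for the one-shot model; all numbers are
# illustrative assumptions, not market data.
sam_customers = 1_000          # "low thousands" serviceable market
price_usd = 100                # one-time purchase price
gross = sam_customers * price_usd          # ~$100k before taxes
store_cut = 0.30                           # assumed App Store commission
net = gross * (1 - store_cut)

target_salary = 150_000        # assumed dev salary, USD/year
apps_per_year = target_salary / net

print(f"gross ${gross:,.0f}, net ${net:,.0f}, "
      f"apps/year to match salary: {apps_per_year:.1f}")
```

With those assumed numbers, that works out to roughly two successful apps a year just to break even on salary, before counting long-term maintenance.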
I think maybe there's a middle ground: buy forever, with 1 year of updates, so people get the product they paid for, and if they want updates or to support the development they can re-buy. However, I've yet to hear opinions on this model.
> I think maybe there's a middle ground: buy forever, with 1 year of updates, so people get the product they paid for, and if they want updates or to support the development they can re-buy. However, I've yet to hear opinions on this model.
As far as desktop software is concerned, I think this is a commonly accepted approach. Sublime Text is probably the most notable example.
Isn't that just how most software used to be sold? If you buy Photoshop CS5 or MS Office 2021 you get the product as it's released and maybe a year of bugfix releases (but no new features). If you want the new features, buy Photoshop CS6 or MS Office 2024.
Personally I like the model, as long as old versions stay truly static and don't get enshittification updates. It aligns incentives on feature development far better than subscription models: if you make genuine improvements you get recurring sales; if you don't, existing users will just stay on the old version. And existing users are protected from features or UI changes they disagree with.
I wouldn't single out the concern about new diseases if the population is small. Most diseases co-evolve intra-population. The lethal ones are the ones that suffer a mutation and are suddenly able to jump to a different 'species'. So, if they already survived on a 'knife's edge', immune variety is of comparatively low concern (but still existential) on the list of things that can end your species (climate change, competition, demographics: 2-3 infertile females in a group of 20 and you can say bye-bye to the tribe).
Isolated populations are all going to create isolated variants of any disease that does well enough to stick around.
And the impact of any event where individuals die to a new variant is amplified for a small population. The risks of highly correlated vulnerabilities are on top of that.
Variants of the flu continue to quietly emerge and kill people today, despite all our regular exposure to their constant churn and our weather-shielded environments.
But you are certainly right that the cross-overs are incomparably worse. And diversity becomes species extinction protection at that level.
> The risks of highly correlated vulnerabilities are on top of that.
For small groups, it doesn't matter too much whether it's correlated or not. A 'small' hit doesn't exist, so 20% or 80% is a wipe-out either way. You don't have big-population dynamics; you can't take even a 20% hit to your population as a small group. Even if you had the genetic diversity of modern humans, your population would still be doomed (my 2-3 females gone example; it's an extinction vortex [0]).
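To make that demographic point concrete, here's a toy Monte Carlo sketch in Python (every parameter is made up for illustration, not Neanderthal data): with near-zero average growth, a band that starts with only a handful of breeding females drifts toward extinction most of the time, and losing 2-3 of them up front makes it noticeably worse.

```python
import random

def simulate(start_females=10, females_lost=0, years=200,
             birth_rate=0.22, death_rate=0.22, cap=15):
    """Toy model tracking only breeding females in a band of ~20.
    Returns True if the lineage is still alive after `years`."""
    females = start_females - females_lost
    for _ in range(years):
        daughters = sum(random.random() < birth_rate for _ in range(females))
        deaths = sum(random.random() < death_rate for _ in range(females))
        females = min(cap, females + daughters - deaths)
        if females <= 1:          # no viable breeding population left
            return False
    return True

runs = 2_000
for lost in (0, 3):
    survived = sum(simulate(females_lost=lost) for _ in range(runs))
    print(f"females lost up front: {lost}, survival rate: {survived / runs:.0%}")
```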
> Variants of the flu continue to quietly emerge and kill people today, despite all our regular exposure to their constant churn and our weather-shielded environments.
The flu is specifically adapted to exactly what you point out. Check out virulence indoors vs. outdoors for influenza. Also, it's precisely regular exposure that allows influenza to persist, as it has a rapid mutation rate and benefits from as much exposure to humans as it can get. There is no evidence for the flu before the Neolithic, precisely because the flu is adapted to constant exposure to an interconnected population, requiring a critical community size in the hundreds of thousands.
I think we are more or less agreeing, but emphasizing different valid sides.
After all, as much as cross-over illnesses are orders of magnitude more dangerous, they are also orders of magnitude reduced in threat distribution for any low-population species.
The integrated surface-area-times-time for transfer just isn't there. Lots of low-population species last millions of years, or tens of millions, and even then die off for other reasons, despite their extreme and persistent low-diversity vulnerability to any successful cross-over parasite/illness. Neanderthals are not exceptional on that count.
You can buy historical data, at least from some reputable vendors, although you're still responsible for understanding their collection process (sampling frequency, timestamp conventions, corporate action adjustments, etc.), as even 'obvious' things like how daily stock levels are reconstructed from intraday data can mess up your analytics really badly.
I don't think it generalizes to real-time data.
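To illustrate the daily-reconstruction point above, a minimal pandas sketch (hypothetical one-minute data; the column names and the "drop the closing auction" convention are assumptions) showing how two reasonable resampling conventions give two different "daily closes":

```python
import numpy as np
import pandas as pd

# Hypothetical 1-minute bars for a single session; a real vendor feed
# comes with its own timestamp and corporate-action conventions.
idx = pd.date_range("2024-01-02 09:30", "2024-01-02 16:00",
                    freq="1min", tz="America/New_York")
intraday = pd.DataFrame({"price": 100 + np.arange(len(idx)) * 0.001,
                         "volume": 500}, index=idx)

# Convention A: daily close = last print of the session.
close_a = intraday["price"].resample("1D").last()

# Convention B: daily close = last print before 16:00, e.g. a vendor
# that drops the closing-auction trade.
close_b = intraday.between_time("09:30", "15:59")["price"].resample("1D").last()

# The two series disagree (and neither is the official auction close);
# every return or volatility number downstream inherits that choice.
print(pd.concat({"session_last": close_a, "pre_auction": close_b}, axis=1).dropna())
```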
A panel is an arbitrary UX delimiter, so fundamentally no, unless you're really pedantic and define (upfront) a panel as a semantically meaningful unit across apps.
Nonsense; when he has one panel per screen pixel, he'll be able to see over 8 million Fibonacci retracements, 40 heatmaps, and real-time market sentiment headlines at once on a 4K monitor. Then you'll see.
Instead of anti-fragility, I'd point you to the law of requisite variety.
You'll notice that all AI improvements are insanely good for a week or two after launch. Then you'll see people stating that 'models got worse'. What in fact happened is that people adapted to the tool, but the tool stopped adapting. We're using AI as variety-resistant, adaptable tools, but we miss the fact that most deployments nowadays do not adapt back to you as fast.
New models literally do get worse after launch, due to optimization. If you charted performance over time, it'd look like a sawtooth, with a regular performance drop during each optimization period.
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
> If you charted performance over time, it'd look like a sawtooth
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".
Is this insider info? The 'charted performance' caught my eye instantly.
A couple of things I find odd though: why a sawtooth? It would more likely be square waves, as I'd imagine they roll out the cost-saving version quite fast per cohort. Also, aren't they unprofitable either way? Why would they do it for 'profitability'?
It's rumors based on vibes. There are attempts to track and quantify this with repeated model evaluations multiple times per day, but no sawtooth pattern has emerged as far as I know.
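For what it's worth, the tracking itself is simple to sketch. A minimal hypothetical harness (the `query_model` call is a placeholder for whatever vendor API you use, and the probe set and scoring are stand-ins, not any real tracker's methodology):

```python
import csv
import datetime

# Fixed probe set; a real tracker needs far more prompts and a scoring
# scheme that's harder to game or memorize.
PROBES = [
    ("How many times does the letter r appear in 'strawberry'?", "3"),
    ("What is 17 * 23?", "391"),
]

def query_model(prompt: str) -> str:
    """Placeholder for a call to the vendor's API."""
    raise NotImplementedError

def run_once(model_name: str, path: str = "scores.csv") -> None:
    score = sum(answer in query_model(prompt) for prompt, answer in PROBES) / len(PROBES)
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.datetime.now(datetime.timezone.utc).isoformat(),
                                model_name, score])

# Run on a schedule (cron etc.) and plot score vs. time; a genuine
# post-launch nerf should show up as a step or sawtooth in that curve.
```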
I don't want to go too far down the conspiracy rabbit hole, but the vendors know everyone's prompts so it would be trivial for them to track the trackers and spoof the results. We already know that they substitute different models as a cost-saving measure, so substituting models to fool the repeated evaluations would be trivial.
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
It's not insider info, it's common knowledge in the industry (Google model optimization). I think they are unprofitable either way, but unoptimized models burn runway a lot faster than optimized ones.
The reason it's not a square wave is that new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand-new model declines rapidly after release, people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
Coding with AI is kind of like obesity in modernity: having tons of resources is the goal, but once you get there, you end up in a system you're not really adapted to.
Personally, I don't care that much about org incentives (even though they obviously matter for what OP posted); I care more about what it does to my thinking. For me, actually writing code is what slows my brain down, helps me understand the problem, and helps me generate new ideas. As soon as I hand off implementation to an LLM (even if I first write a spec or model it in TLA+), my understanding drops off pretty quickly.