>>I'm always confused as hell how little insight we have into memory consumption.
>>I look at memory profiles of normal apps and often think "what is burning that memory".
Because companies starting with Microsoft approach it as an infinite resource, and have done so literally for generations of programmers — it is now ancient tradition.
Back in the x86 days, when both memory and memory handles were constrained (64k of them, iirc), I went to a MS developer conference. One problem starting to plague everyone was users' computers running out of memory when actual memory in use was less than half: it wasn't that memory was exhausted, but that all available handles had been consumed.
I randomly ended up talking to the (at the time) lead of the Excel team, so I thought I'd ask him about good practices: "Does it make sense to have the software look at the task, estimate the full amount of RAM required, allocate it off one handle, and track our usage ourselves within that block?" I was speechless when he answered: "Sure, if you wanted to optimize the snot out of it — we just allocate another handle."
That two-line answer just blew my mind and instantly explained so much about problems I saw at the time, and since.
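For concreteness, the approach I was asking him about is essentially an arena (bump) allocator. Here's a minimal sketch in plain C, with malloc standing in for the single handle of the era; the names, the alignment choice, and the up-front estimate are all illustrative, not how Excel actually did anything:

    /* One big allocation up front, then hand out pieces of it ourselves. */
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        unsigned char *base;  /* the single big block ("one handle") */
        size_t cap;           /* total bytes reserved up front */
        size_t used;          /* bytes handed out so far */
    } Arena;

    int arena_init(Arena *a, size_t estimated_total) {
        a->base = malloc(estimated_total);  /* the only real allocation */
        a->cap = a->base ? estimated_total : 0;
        a->used = 0;
        return a->base != NULL;
    }

    void *arena_alloc(Arena *a, size_t n) {
        size_t aligned = (n + 15u) & ~(size_t)15u;  /* keep pointers 16-byte aligned */
        if (aligned < n || aligned > a->cap - a->used)
            return NULL;  /* our estimate was too small */
        void *p = a->base + a->used;
        a->used += aligned;
        return p;
    }

    void arena_free_all(Arena *a) {
        free(a->base);  /* the whole task's memory goes back in one call */
        a->base = NULL;
        a->cap = a->used = 0;
    }

The point back then wasn't elegance: the whole task would consume one handle instead of thousands, and freeing everything was a single call.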
That answer also made sense in the context of another talk they gave at a previous conference, where the message was that they anticipate the increased power of the next generation of hardware and write their new version for that hardware, not the then-current hardware. It makes sense, but in this new light it seems almost like a cousin of planned obsolescence — "How can we squander all the new power Intel is giving us?" And the result is that, decades after word processors and spreadsheets had usable performance on 640K DOS machines, new machines with orders of magnitude more power and RAM actually run slower from the user's perspective.
I'm hoping this memory crunch (having postponed a memory upgrade for my daily driver, I now notice it is 10x the price) will at least have the benefit of driving developers to recover some of the craft of designing with optimization in mind.
Software engineers seem more and more abstracted from the hardware they run on. Back in the day you also had to worry about things like IRQ conflicts and shaving off tiny amounts of latency; these days you rarely do.
Personally I am fine with programmers not spending tons of time optimising every last byte, because we do have so much more RAM and compute relative to the old days. My bigger issue is that things are a laggy mess even when there are plenty of resources available. I understand these things go hand in hand, but I would much rather see optimisations for the things users will actually notice than ones that just chase metrics. A nice combo of the two would be ideal.
That being said, what's probably most appalling is how often some modern programs hard-crash even when they have plenty of resources.