Hacker News | blovescoffee's comments

There are definitely a few reasons, but one of them is that you're asking the GPU to do ~60x less work when you render 60x fewer frames.

PSR (panel self-refresh) lets you send a single frame from software and tell the display to keep using that.

You don’t need to render the same frame 60 times in software just to keep it visible on screen.


How often is that used? Is there a way to check?

With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave, I imagine there's nearly always some animation somewhere, almost but not quite eased to a stop, that's making subtle color changes across some chunk of the screen - not enough to notice, enough to change some pixel values several times per second.
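A toy sketch of that "almost stopped" effect (illustrative numbers, not any real toolkit's easing code): a classic exponential ease-out approaches its target but keeps nudging the displayed value for dozens of frames after it looks settled.

```python
# Exponential ease-out toward a target pixel value. The rounded (displayed)
# value keeps changing for many frames while the residual shrinks below
# perceptibility -- subtle churn that defeats panel self-refresh.
value, target = 0.0, 200.0
frames_until_stable = 0
while round(value) != round(target):
    value += (target - value) * 0.1  # move 10% of the remaining distance per frame
    frames_until_stable += 1
print(frames_until_stable)  # prints 57
```

Nearly a full second of redraws at 60 Hz, just to settle one eased value.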

I wonder what existing mitigations are at play to prevent redisplay churn? It probably wouldn't matter on Windows today, but will matter with those low-refresh-rate screens.


Android has a debug tool that flashes colors when any composed layer changes. It's probably an easy optimization for them to not re-render when nothing changes.

I never thought about it, but you've made me realise that a lot of people in our industry have been so enthusiastically working on random "creative" things that, at best, no one even asked for, and that turn out to hurt end users in ways no one even knows.

I used to be a front-end dev and I always hated that animation was coded per element. There should just be a global graphics API that does all the morphing and magic moves, which the user can turn off at the OS level.


Normally, your posts are very coherent, but this one goes off the rails. (Half joking: did someone hack your account!?) I don't understand your rant here:

    > With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapp with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating point numbers behave
I use KDE/GNU/Linux, and I don't see a lot of unnecessary animations. Even at work where I use Win11, it seems fine. "[M]ost applications being webapp": This is a pretty wild claim. Again, I don't think any apps that I use on Linux are webapps, and most at work (on Win11) are not.

Seriously? What is _this_ comment? TeMPOraL makes perfect sense.

LLMs learned that users have post histories? /s

Why? Surely copying the same pixels out sixty times doesn't take that much power?

The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.

Simply put, this means each pixel can hold its state longer between refreshes, so the panel can safely drop its refresh rate to 1 Hz on static content without losing the image.

Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while making heat fighting reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
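Some back-of-the-envelope arithmetic on the point above, assuming a hypothetical 4K panel and ignoring blanking intervals and link encoding overhead:

```python
# Raw pixel traffic the display link must carry per second.
w, h, bits_per_pixel = 3840, 2160, 24            # hypothetical 4K, 8 bits/channel
bits_per_frame = w * h * bits_per_pixel          # ~199 Mbit per frame
gbps_at_60hz = bits_per_frame * 60 / 1e9         # ~11.9 Gbit/s on the wire
gbps_at_1hz = bits_per_frame * 1 / 1e9           # ~0.2 Gbit/s
print(round(gbps_at_60hz, 1), round(gbps_at_1hz, 2))  # prints 11.9 0.2
```

Every one of those bits is clocked, latched, and driven across real conductors, so a 60x cut in link traffic translates directly into less switching power.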

[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...


So it's a Sharp MIP scaled up? https://sharpdevices.com/memory-lcd/

Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.

So, no, there is a meaningful difference in the nature of the circuits.


Thanks. Great explanation.

Copying: Draw() is called 60 times a second.

It isn't for any reasonable UI stack. For instance, the xdamage X11 extension for this was released over 20 years ago. I doubt it was the first.

Xdamage isn’t a thing if you’re using a compositor for what it’s worth. It’s more expensive to try to incrementally render than to just render the entire scene (for a GPU anyway).

And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.


That's not true. I wrote a compositor based on xcompmgr, and there damage was widely used. It's true that it's basically pointless to do damage tracking for the final pass in GL, but damage was still useful for figuring out which windows required new blurs and updated glows.
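The damage-tracking idea can be sketched as a dirty-rectangle accumulator (a toy model of the concept, not the actual XDamage API):

```python
# Toy dirty-region tracker in the spirit of XDamage: clients report damaged
# rectangles; the compositor repaints only their bounding union each frame
# and skips work entirely when nothing is dirty.
class DamageTracker:
    def __init__(self):
        self.dirty = None  # (x1, y1, x2, y2) bounding box, or None if clean

    def add_damage(self, x, y, w, h):
        box = (x, y, x + w, y + h)
        if self.dirty is None:
            self.dirty = box
        else:
            a, b = self.dirty, box
            self.dirty = (min(a[0], b[0]), min(a[1], b[1]),
                          max(a[2], b[2]), max(a[3], b[3]))

    def flush(self):
        """Return the region to repaint and mark the screen clean."""
        region, self.dirty = self.dirty, None
        return region

t = DamageTracker()
t.add_damage(10, 10, 20, 20)   # a blinking cursor
t.add_damage(100, 50, 30, 10)  # a glow that needs updating
assert t.flush() == (10, 10, 130, 60)
assert t.flush() is None       # nothing dirty -> nothing to redraw
```

Real compositors track finer-grained regions than a single bounding box, but the principle is the same: damage in, minimal repaint out.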

At the software level yes, but it seems nobody has taken the time to do this at the hardware level as well. This is LG's stab at it.

Apple has been doing this since they started having 'always-on' displays.

So has Samsung, but we're talking mobile devices with OLED displays, which is an entirely different universe both hardware and software-wise.

What’s your mental model of what happens when a dirty region is updated and we now need to get that buffer onto the display?

It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.

The frame buffer (at least the portion of the GPU responsible for reading the frame buffer and shipping its contents out over the port), the communications cable to the display, and the display screen itself were still reading, transmitting, and refreshing every pixel of the display at 60 Hz (or more).

This LG display tech claims to be able to turn that last portion down to a 1 Hz rate from whatever it usually runs at.


You forget that all modern UI toolkits brag about who has the highest frame rate, instead of updating only what's changed and only when it changes.

Do you really recommend people travel internationally in 2026 without a cellphone? I’m kind of bewildered by this suggestion. As someone who has to go between LATAM and US frequently, I have no choice but to bring my cellphone.

I went to Thailand for three weeks in November. I didn’t bring a phone or a laptop. I printed my maps, reservations, and emergency numbers. It was awesome. Don’t lock yourself into imaginary prisons.

I don't bring phones either. I go straight to the nearest mall/bazaar/market and buy one. Anyplace developed enough that you need a phone has them for sale. Anyplace where you can't find a phone has enough other people without them you can still get around. The phone gets trashed before I go through the next international border.

IMHO it is difficult to do anything without a smartphone today. I hate this state of affairs, but it is just so. Everything needs an app to work. Public services in some countries require it. Paying, etc.

And when traveling it's almost essential: GPS to navigate, searching for hotels and places to eat, taking photos… yes, you could carry many separate devices… but seriously?! Ah, btw… what about being in touch with family?


Do you remember a time people did all those things without a phone?

I’ve driven across multiple continents and many dozens of countries without a phone or gps.

Talking to locals to ask directions is half the fun, especially when I don’t speak the language. I’ve been invited to parties, weddings and more because of it.


> Talking to locals to ask directions is half the fun, especially when I don’t speak the language.

I can absolutely understand that you and many people love that. But maybe you can understand that other people prefer to never feel lost, to be able to translate signs, find places to eat easily, discover “must see”[1] things, take photos, and be in contact with family, all in a device that weighs 200 g in my pocket. Even having it, I can occasionally set it aside and talk with locals… but when I want to go back to the hotel, it's nice to know exactly how.[2]

I am old enough that I did travel without a cell phone:

[1] It has happened to me many times (at least 3 off the top of my head) that locals had no idea where a museum is, or the house of X, or other things that tourists may find interesting but locals don’t give a shit about.

[2] If you have been to places like Turkey or South America, you may know that taking a taxi is an interesting exercise. Sometimes they charge you wrong; sometimes they take you on a 20 km ride. Having (a) GPS, (b) a means to call the police, (c) a way to check online what the trip should cost, and (d) a translator in your pocket seems very convenient to me.

Or in other words: do you understand that, having the phone, you can still do everything as you used to (asking for directions without understanding, talking with people, all of it), but now, when you want, you have a super tool? The best part: it's smaller than a camera from those days, can take 100,000 more photos, and has 20 more functions!

People used to live without electricity, fridges, email… so? Why should I not use what is available today?


By no means was I suggesting that everyone should live without a phone. Merely that I prefer life without one. Yes, it's inconvenient sometimes, but I decided a long time ago that convenience was not the goal of my life, experiences are.

To have the experiences I want, I need more time. To have more time, I need to work less. So spending less money means I get to spend time with my daughter, go snowboarding, and have adventures around the globe.

It turns out not having a phone is another great way to save money and go to work less.


I'm working on this. It works pretty well. The main issue I'm working out right now (which has proven very difficult) is the auto-placing and auto-routing on a multi-layer pcb.


Okay, read anything about David Ogilvy.


customer acquisition


Not quite: compression enables you to simulate / represent / encode x data with less than x memory.


Only for those inputs that are compressible.

If a compressor can compress every input of length N bits into fewer than N bits, then at least 2 of the 2^N possible inputs have the same output. Thus there cannot exist a universal compressor.

Modify as desired for fractional bits. The essential argument is the same.
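The counting argument above can be checked directly (N = 8 chosen just for illustration):

```python
# Pigeonhole check: there are 2**N inputs of exactly N bits, but only
# 2**N - 1 bit strings strictly shorter than N bits (including the empty
# string), so an "always shrinks" compressor must map two inputs to the
# same output -- and then it can't be losslessly inverted.
N = 8
inputs = 2 ** N                                  # 256 distinct 8-bit inputs
shorter_outputs = sum(2 ** k for k in range(N))  # 1 + 2 + 4 + ... + 128 = 255
assert shorter_outputs == inputs - 1
print(inputs, shorter_outputs)  # prints 256 255
```

One fewer output than inputs, at any N, is all the collision you need.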


Would the compressibility of the state of the universe be useful to prove whether we are in a simulation already? (i.e. it is hard to compress data that is already compressed)


Source?


Like "someone" whose knowledge cutoff is from a while back...


What makes you think/say they’ve skipped safety standards?


Do you actually feel like this is better (and not just at par or worse)?


Almost anything is better than writing JavaScript


I find it much better for my use cases to use this than a JavaScript framework that necessitates the use of a JavaScript server.

