
And just sticking to counting: a not exceptionally well-trained ear could already count how many letters you typed and tell whether you pressed backspace (at least with the double-width backspace key, the sound is definitely different).

Yeah, I recall an attack researchers demonstrated years back that used recordings of typing, fed to an AI model, to predict the typed text with some accuracy. Something to do with the timings of letter pairings, among other things.

93%-95% accuracy, and it wasn't even a good-quality recording:

> When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model. When trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium.

https://arxiv.org/abs/2308.01074


Notably, I believe this has to be tuned to each specific environment. The acoustics of your keyboard are going to be different from mine. Which is not much of a barrier, given a long enough session where you can presumably record them typing non password-y things.
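The rough pipeline behind such attacks can be sketched: isolate individual keystroke sounds, extract a spectral fingerprint per keystroke, and classify. A toy illustration with fully synthetic audio (the per-key resonance frequencies and the nearest-centroid classifier are my own stand-ins for illustration; the paper above uses deep models on spectrograms):

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 16_000
KEY_FREQS = {"a": 900.0, "s": 1400.0, "d": 2100.0}  # made-up per-key resonances

def record_keystroke(key, noise=0.3):
    """Synthesize a short 'click' for a key: a decaying tone plus noise."""
    t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * KEY_FREQS[key] * t) * np.exp(-t * 60)
    return tone + noise * rng.standard_normal(t.size)

def features(signal):
    """Normalized FFT magnitude: a crude spectral fingerprint of the click."""
    mag = np.abs(np.fft.rfft(signal))
    return mag / np.linalg.norm(mag)

# "Train": average fingerprint per key from a handful of recordings,
# which is also why the attack must be tuned per keyboard/environment.
centroids = {
    k: np.mean([features(record_keystroke(k)) for _ in range(20)], axis=0)
    for k in KEY_FREQS
}

def classify(signal):
    """Nearest-centroid classification in feature space."""
    f = features(signal)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

typed = [classify(record_keystroke(k)) for k in "sad"]
print("".join(typed))
```

Even this crude sketch works on clean synthetic clicks; the hard parts in practice are segmenting real audio and coping with a keyboard whose acoustics you never trained on.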

The fact that there is not a single root cause but several makes me instinctively think this is a good report, because it's not what the "bosses" (and even less the politicians) like to hear.

Yes, a lot of modern engineering is good enough that single-cause failures are very rare indeed. That means that failures themselves are rare, but when they do happen, they're most likely to have multiple causes.

How to explain that to non-engineers is another problem.


I think a better way of explaining it to people is that we've made critical systems so reliable that, in order for them to fail, the failures have to be quite complex.

This is almost universal in aviation. They always talk about the "accident chain." Essentially everything that can kill you with one mistake is illegal through training and operational requirements and engineering and maintenance regulations.

We are not as complicated as the national grid, but I have been here for nearly 10 years now, and our outages have gone from a single cause, to two causes, and now it's nearly always 3 things that need to go wrong at the same time.

Frequently, when you see these massive failures, the root cause is an alignment of small weaknesses that all come together on a specific day. See, for instance, the space shuttle O-ring incident, Three Mile Island, Fukushima, etc. These are complex systems with lots of moving parts and lots of (sometimes independent) people managing them. In a sense, the complexity is the common root cause.

This is the same thing that happened with the 35W bridge collapse in Minneapolis. The gusset plates were examined after the disaster and found to be only 1/2" thick when the original design called for them to be 1" thick. The bridge was a ticking time bomb from the day it was built in 1967.

As the years went on, the bridge's weight capacity was slowly eroded by subsequent construction projects: thicker concrete deck overlays, concrete median barriers, additional guard rails, and other safety improvements. This was the second issue, lining up with the first issue of the thinner gusset plates.

The third issue lined up with the other two on the day of the bridge's failure: approximately 300 tons of construction materials and heavy machinery were parked on two adjacent closed lanes. Add in the weight of cars during rush hour, when traffic moved slowest because the bridge was part of a bottleneck coming out of the city, and that was the last straw: the gusset plates finally gave way, producing a near-instantaneous collapse.


It's like the Swiss cheese model, where every system has several layers of defense, each with "holes" (vulnerabilities), and a major incident only occurs when the holes align through all the layers.

https://en.wikipedia.org/wiki/Swiss_cheese_model
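The model's arithmetic is what makes it intuitive: with independent layers, the chance of all the holes aligning is the product of the individual hole probabilities, so even mediocre extra layers drive incident rates down fast. A toy Monte Carlo (the per-layer probabilities are made up for illustration):

```python
import random

random.seed(42)

LAYER_HOLE_PROB = [0.05, 0.10, 0.08, 0.02]  # hypothetical per-layer weaknesses
TRIALS = 1_000_000

incidents = 0
for _ in range(TRIALS):
    # An incident happens only if every layer's hole lines up today.
    if all(random.random() < p for p in LAYER_HOLE_PROB):
        incidents += 1

observed = incidents / TRIALS
expected = 1.0
for p in LAYER_HOLE_PROB:
    expected *= p  # independent layers: P(incident) = product of hole probs

print(f"observed ~{observed:.2e}, expected {expected:.2e}")
```

Here four fairly leaky layers (2%-10% each) already push the incident rate below one in a hundred thousand, which also explains the observation above: when an incident finally does occur, it necessarily involves multiple simultaneous failures.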


I use this model all the time. It's very helpful for explaining the multifactorial genesis of catastrophes to ordinary people.

Also perhaps worth a read:

https://devblogs.microsoft.com/oldnewthing/20080416-00/?p=22...

"You’ve all experienced the Fundamental Failure-Mode Theorem: You’re investigating a problem and along the way you find some function that never worked. A cache has a bug that results in cache misses when there should be hits. A request for an object that should be there somehow always fails. And yet the system still worked in spite of these errors. Eventually you trace the problem to a recent change that exposed all of the other bugs. Those bugs were always there, but the system kept on working because there was enough redundancy that one component was able to compensate for the failure of another component. Sometimes this chain of errors and compensation continues for several cycles, until finally the last protective layer fails and the underlying errors are exposed."
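A contrived sketch of that failure mode (the cache and fallback here are my own illustration, not Chen's example): a cache whose lookups never worked, masked by a recompute-on-miss fallback, until a "recent change" disables the fallback and exposes the latent bug.

```python
class BrokenCache:
    """A cache with a latent bug: lookups always miss."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return None  # bug: never returns the stored value

def expensive_compute(key):
    return key * 2  # stand-in for the real work

def lookup(key, cache, fallback_enabled=True):
    value = cache.get(key)
    if value is None and fallback_enabled:
        # Redundancy masks the cache bug: every "hit" is silently recomputed.
        value = expensive_compute(key)
        cache.put(key, value)
    return value

cache = BrokenCache()
print(lookup(21, cache))                          # works: fallback compensates
print(lookup(21, cache, fallback_enabled=False))  # the change exposes the bug
```

The system "worked" the whole time, just slower than intended, and nothing surfaced the cache bug until the protective layer was removed.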


I've had that multiple times. As well as the closely related 'that can't possibly have ever worked' and sure enough it never did. Forensics in old codebases with modern tools is always fun.

> As well as the closely related 'that can't possibly have ever worked' and sure enough it never did.

I had one of those: the customer was adamant the latest version broke some function. I check the related code and it hasn't been touched for 7 years, and as written couldn't possibly work. I try it and, indeed, it doesn't work. Yet the customer persisted.

Long story short, an unrelated bug in a different module caused the old, non-functioning code to do something entirely different if you had that other module open as well, and the user had discovered this and started relying on the emergent functionality.

I had made a change to that other module in the new release and in the process returned the first module to its non-functioning state.

The reason they interacted was of course some global variables. Good times...


By the way, a corollary I encountered, I think with one of the recent AWS meltdowns, is that a paradoxical consequence of designing for "reliability" is that it guarantees that when something does happen, it's going to be bad, because the reliability engineering has done a good job of masking all the smaller faults.

Which means (1) anything that gets through, almost by definition, is going to be bad enough to escape the safeguards, and (2) when things do get bad enough to escape the safeguards, it will likely expose the avalanche of things that were already in a failure state but were being mitigated.

The takeaway, which I'm not really sure how to practically make use of, was that if a system isn't observably failing occasionally in small ways, one day it's going to fail in a big way instead.

I don't think that's necessarily something rigorously proven, but I do think of it sometimes in the face of some mess.


That's a fairly common pattern. As frequency of incidents goes down the severity of the average incident goes up. There has to be some underlying mechanism for this (maybe the one you describe but I'm not so sure that's the whole story).

Global variables... the original sin if you ask me. Forget that apple.

> See, for instance, the space shuttle O-ring incident

That wasn't really a result of an alignment of small weaknesses though. One of the reasons that whole thing was of particular interest was Feynman's withering appendix to the report where he pointed out that the management team wasn't listening to the engineering assessments of the safety of the venture and were making judgement calls like claiming that a component that had failed in testing was safe.

If a situation is being managed by people who can't assess technical risk, the failures aren't the result of many small weaknesses aligning. It wasn't an alignment of small failures as much as that a component that was well understood to be a likely point of failure had probably failed. Driven by poor management.

> Fukushima

This one too. Wasn't the reactor hit by a wave that was outside design tolerance? My memory was that they were hit by an earthquake that was outside design spec, then a tsunami that was outside design spec. That isn't a number of small weaknesses coming together. If you hit something with forces outside design spec then it might break. Not much of a mystery there. From a similar perspective if you design something for a 1:500 year storm then 1/500th of them might easily fail every year to storms. No small alignment of circumstances needed.


In reality the "swiss cheese" holes for major accidents often turn out to be large holes that were thought to be small at the time.

> [Fukushima] No small alignment of circumstances needed.

The tsunami is what initiated the accident, but the consequences were so severe precisely because of decades of bad decisions, many of which would have been assumed to be minor decisions at the time they were made. E.g.

- The design earthquake and tsunami threat

- Not reassessing the design earthquake and tsunami threat in light of experience

- At a national level, not identifying that different plants were being built to different design tsunami threats (an otherwise similar plant avoided damage by virtue of its taller seawall)

- At a national level, having too much trust in nuclear power industry companies, and not reconsidering that confidence after a number of serious incidents

- Design locations of emergency equipment in the plant complex (e.g. putting pumps and generators needed for emergency cooling in areas that would flood)

- Not reassessing the locations and types of emergency equipment in the plant (i.e. identifying that a flood of the complex could disable emergency cooling systems)

- At a company and national level, not having emergency plans to provide backup power and cooling flow to a damaged power plant

- At a company and national level, not having a clear hierarchy of control and objective during serious emergencies (e.g. not making/being able to make the prompt decision to start emergency cooling with sea water)

Many or all of these failures were necessary in combination for the accident to become the disaster it was. Remove just a few of those failures and the accident is prevented entirely (e.g. a taller seawall is built or retrofitted) or greatly reduced (e.g. the plant is still rendered inoperable but without multiple meltdowns and with minimal radioactive release).


To be blunt; that isn't an appropriate application of the swiss cheese model to Fukushima. It isn't a swiss cheese failure if it was hit by an out-of-design-spec event. Risk models won't help there. Every engineered system has design tolerances. And that system will eventually be hit by a situation outside the tolerances and fail. Risk models aren't to overcome that reality - they are one of a number of tools for making sure that systems can tolerate situations that they were designed for.

If Japan gets traumatised and changes their risk tolerance in response then sure, that is something they could do. But from an engineering perspective it isn't a series of small circumstances leading to a failure - it is a single event that the design was never built to tolerate leading to a failure. There is a lot to learn, but there isn't a chain of small defence failures leading to an unexpected outcome. By choice, they never built defences against this so the defences aren't there to fail.

> Many or all of these failures were necessary in combination for the accident to become the disaster it was.

Most of those items on your list aren't even mistakes. Japan could reasonably re-do everything the same way, just as they could simply rebuild all the other buildings that were destroyed much as they were the first time. They probably won't, but it is a perfectly reasonable option.

Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.


> It isn't a swiss cheese failure if it was hit by an out-of-design-spec event

First of all, the design basis accident is a design choice by the developers of the plant and the regulators. The decision process that produced that DBA was clearly faulty: the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.

> Again I'm going from memory with the numbers but doubling the cost of a rare disaster in a way that injures ... pretty much nobody ... is a great trade for cheap secure energy. It isn't a clear case that anything needs to change or even went wrong in the design process. Massive earthquakes and tsunamis aren't easy to deal with.

This is absolute nonsense. For the cost of maybe tens of millions at most in additional concrete to build the seawall a few meters higher, the entire disaster would have been avoided entirely (i.e. the plant restored to operation). With backup cooling that could have survived the tsunami (a lower expense than building a higher seawall), all that would have happened at Fukushima Daiichi is what happened at its neighbor Fukushima Daini (plant rendered inoperable, no meltdown, no significant radioactive release). Instead, we are talking about a disaster that will cost an estimated (current) $180 billion USD to clean up (and there is no way this estimate is realistic, when the methods required to perform the cleanup barely exist yet).


> The decision process that produced that DBA was clearly faulty: the economic and social costs of the disaster have so clearly exceeded those of building to a more serious DBA.

That isn't clear at all. We're effectively sampling from the entire globe and we've had 2-3 bad nuclear disasters since the 70s. Our safety standards appear to be overcautious given the relatively small amount of damage done vs. ... pretty much every alternative. The designs seem to be fine. I'm still waiting to see the justification for the evacuations from Fukushima; they seemed excessive. People died.

> For the cost of maybe tens of millions at most...

You haven't thought for long enough before typing that. For this particular disaster, sure. But hardening against all the possible disasters is what has to happen when you become less risk-tolerant. It is the millions of dollars to prevent this disaster multiplied by the number of potential disasters that you have to consider. Safety is expensive.

The numbers aren't small; safety of that magnitude might not even be economically feasible, to say nothing of whether it is actually sensible. And once you get into one-in-500- or thousand-year events, some really catastrophic stuff starts happening that just can't reasonably be defended against. San Francisco and its fault springs to mind; I forget what sort of event that is, but it is probably once a millennium or more often.


Fukushima was designed to be constructed on a hill 30-35 meters above the ocean, but someone decided it would be cheaper to build it at sea level to reduce water-pumping costs, and others approved this. Much later, in the decade before the disaster, when all the reactor operators in Japan were asked to reinforce their safety measures, those in charge of Fukushima decided to ignore the request, pushing for extensions year after year until it all blew up. Decades of bad decisions with a strong smell of corruption.

https://warp.da.ndl.go.jp/info:ndljp/pid/3856371/naiic.go.jp...

https://warp.da.ndl.go.jp/info:ndljp/pid/3856371/naiic.go.jp...

https://web.archive.org/web/20210314022059/https://carnegiee...


I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering. Eventually everything fails; time is generally against a design engineer.

Caveating that I'm not really sure it was even an out-of-design event, but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure. If you hit a design with things it wasn't designed to handle then it may reasonably fail because of that.

[0] https://en.wikipedia.org/wiki/Megatsunami homework for the interested, it is cool stuff. Japan has seen some quite large waves, 57 meters seems to be the record in recent history.


In Japan they have the "Tsunami Stones" [0] across the coast, memorials to remind future generations of the highest point the water reached.

It was negligent to construct a nuclear plant at sea level; it was just a plant waiting to be flooded. And for such a case they had ten years to design protections after being asked to reinforce measures (along with the other Japanese plants), but I can imagine the ones who were supposed to put up the money were not very collaborative (I even doubt those responsible learnt the lesson).

[0] https://www.smithsonianmag.com/smart-news/century-old-warnin...

Whether it was a cheese model or not I won't get into (note that the parent of the parent and I are different users); their negligence breaks all the possible logic we could apply without introducing the variable of corruption behind those decades of bad decisions.


> It was negligent to construct a nuclear plant at sea level; it was just a plant waiting to be flooded.

So why did they build it there? It isn't a gentleman in a clown hat hitting himself on the head with a rubber mallet, they had a reason. These things are always trade-offs.

Maybe if they'd built it up on the hill there'd have been an earthquake, then a landslide, then the plant slides into the sea and gets waterlogged. I dunno. If we're talking about things without clearly defined bounds of risk tolerance, that is the sort of scenario that can be brought up. You're talking about negligence, but you aren't saying what tolerances this plant was built with, what you want it to be built to, or what trade-offs you want made. Once you start getting into those details it becomes a lot less obvious that Fukushima is even a bad thing (it probably is; the tech is pretty old and my understanding is we wouldn't build a plant that way any more). It isn't possible to just demand that engineers prevent all bad outcomes; reality is too messy. The theoretical point I'm bringing up is that it isn't negligent if there are reasonable design constraints and then something outside the design considerations happens and causes a failure. It is just bad luck.

The whole affair seems pretty responsible from where I sit a long way away. Fukushima is possibly the gentlest engineering disaster to ever enter the canon. It is much better than a major dam or bridge failure, for example, and, again assuming the event that caused the whole thing was unexpected, not even evidence of bad management. Most engineering failures involve a chain of horrific choices that leave the reader with tears in their eyes, not just a fairly mild "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb". And bear in mind we're scouring the world for the worst nuclear disaster of the 21st century.

And besides, they did build it above sea level.


> "well we were hit with a wild tsunami and doubled the nominal price tag of the cleanup with no obvious loss of life or limb"

This is a bit of a wild understatement. (1) The tsunami was by no means wild, as multiple posts here have referenced, and (2) the incident resulted in a number of significant injuries, not counting the deaths involved in the evacuation. And those deaths very much count - you can't hand-wave away the consequences of the evacuation on the basis of hindsight that the evacuation was larger than the final outcome necessitated.


> And those deaths very much count - you can't hand-wave away the consequences

I don't. If it is what it looks like, the government officials that ordered/organised the evacuations should be harshly censured and the next time evacuation orders should be more risk-based and executed in a safer way. What little I've gleaned suggests an appalling situation where a bunch of presumably old people were forced from their homes to their deaths. The main thing keeping me quiet on the topic is I don't speak Japanese and I don't really know what happened in detail there.


Did you read the report I put? The pdf:

    << The Fukushima Daiichi Nuclear Power Plant construction was based on the seismological knowledge of more than 40 years ago. As research continued over the years, researchers repeatedly pointed out the high possibility of tsunami levels reaching beyond the assumptions made at the time of construction, as well as the possibility of reactor core damage in the case of such a tsunami. However, TEPCO downplayed this danger. Their countermeasures were insufficient, with no safety margin.>>

    << By 2006, NISA and TEPCO shared information on the possibility of a station blackout occurring at the Fukushima Daiichi plant should tsunami levels reach the site. They also shared an awareness of the risk of potential reactor core damage from a breakdown of sea water pumps if the magnitude of a tsunami striking the plant turned out to be greater than the assessment made by the Japan Society of Civil Engineers.>>
Even leaving aside that they ignored the original placement in order to reduce costs, using biased seismological reports of their convenience, TEPCO knew the plant was at risk; they were warned repeatedly that it was at risk. And the supposed regulator, NISA [0], conveniently closed its eyes (conveniently for some).

    << TEPCO was clearly aware of the danger of an accident. It was pointed out to them many times since 2002 that there was a high possibility that a tsunami would be larger than had been postulated, and that such a tsunami would easily cause core damage.>>
From the other url I put (I updated it with a cached url; I hadn't noticed the article was deleted),

    << there appear to have been deficiencies in tsunami modeling procedures, resulting in an insufficient margin of safety at Fukushima Daiichi. A nuclear power plant built on a slope by the sea must be designed so that it is not damaged as a tsunami runs up the slope.>>
[0] https://en.wikipedia.org/wiki/Nuclear_and_Industrial_Safety_...

> the gentlest engineering disaster

EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima; that was not the gentlest gesture toward Europeans. Japanese citizens also received their dose, and at the time the more vulnerable ones were recruited by the Yakuza to clean up the zone.


> Did you read the report I put?

No, I'm just trusting that you'll be honest about what it is saying. I don't need to read a report to persuade myself that a 40 year old plant was designed based on the best available knowledge of 40 years ago. That seems like something of a given. I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.

You're not saying what tolerances you want them to design to. We both agree that there are scenarios that can and might happen. Obviously it is possible for a tsunami to take out buildings built near the shore in Japan, so it doesn't surprise me that people raised it as a risk. A lot of buildings got taken out that day. That doesn't obviously suggest negligence to me; obviously a lot of people were happy living with the risk.

> EU raised the maximum permitted levels of radioactive contamination for imported food following Fukushima

Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.


> I'm just not sure where you are going with that, it doesn't obviously suggest negligence to me.

You didn't read the report or search for information about the matter, but I have no problem repeating it for you:

The General Electric design originally called for placement 30-35 meters above the ocean. Instead, TEPCO modified the design and constructed the plant at (almost) sea level, which was cheaper, resorting to studies convenient to their purpose, and this in one of the most tsunami-prone countries, with a history of waves reaching 20-30 meters. When those conveniently chosen studies were no longer justifiable, as deeper studies finally refuted them, they decided to just keep ignoring all the warnings and requests to reinforce safety. They knew the nuclear plant was in danger; they always knew it. General Electric didn't design for 30-35 meters above the ocean by coincidence. And all this happened with a supposed regulator conveniently closing its eyes across those years, ignoring even pipes with fissures.

Well, this obviously suggests negligence to me. Decades of bad decisions with a strong smell of corruption.

> You're not saying what tolerances you want them to design to.

What about tolerance to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami: exactly what happened after ignoring the warnings and requests to reinforce safety.

> Oh well then. I had no idea. I thought the consequences were minor and now I have learned ... there you go, I suppose. I'm not really sure what to do with this new information.

Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.


> What about tolerance to avoid a meltdown of the core, especially under two events, an earthquake and a tsunami: exactly what happened after ignoring the warnings and requests to reinforce safety.

I wasn't going to reply but that seems like it moves the conversation forward; so why not?

It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal. The design is still fundamentally vulnerable. Moving the reactor up 35m still leaves it vulnerable to a large enough tsunami and a big enough earthquake.

If your solution is moving the site uphill, then your design goal should be talking in terms of a 1 in X year event. If you want the risk completely mitigated then in this case it isn't relevant where the site is since the obvious way to achieve that design goal is just build something that doesn't fail when flooded. Coincidentally that seems to be the approach that the newer generation designs use - change how the cooling works so that it can't melt down in any reasonable circumstances, tsunami or otherwise.

I will note that there is a reading of your comment where you want the design to be able to tolerate this specific event. I'm ignoring that reading as unreasonable since it requires hindsight, but in the unlikely event that is what you meant then just pretend I didn't reply.

> Keep the sarcasm for other places, if you don't mind. It is not a mere gentlest engineering disaster when it reached the whole planet, which ate TEPCO's cesium-137, especially the Japanese. And it is not a mere gentlest engineering disaster when you have to force vulnerable people to go to ground zero to move contaminated land and water.

Which one do you think was gentler and a story of similar popularity as Fukushima? It is pretty usual to have multiple people actually die and it be the engineer's responsibility once something becomes international news. Even something as basic as a port explosion usually has a number of missing people in addition to a chunk of city being taken out. To anchor this in reality, Fukushima at a class 7 meltdown might have done less damage than a coal plant in normal operation. Coal plants aren't pretty places and air pollution is nasty, nasty stuff.


> It seems to me your design goal is fundamentally incompatible with a lot of the specific complaints of negligence. If you want a design that doesn't melt down when there is an earthquake and a tsunami, then moving the reactor to higher ground isn't helpful because it won't achieve the design goal.

My goal? My solution? My design!? You must be kidding now:

- GE original design 30-35 meters above the sea.

- Warnings to reinforce safety over the course of a decade.

- Tsunami at Fukushima's nuclear plant, 15 meters above the sea.

> I wasn't going to reply but that seems like it moves the conversation forward; so why not?

Forward to... nothing, it seems. You just replied with hypotheticals as if the event hadn't happened, and as if such an event would have been impossible to avoid, with some kind of dissociative reflections that surpass cynicism. I'm the one who is not going to reply.


> Caveating that I'm not really sure it was even an out-of-design event but if it was then it is case closed and the swiss cheese model is an inappropriate choice of model to understand the failure.

This is not how safe systems are designed and operated. Safety is not a one-time item, it is a process. All safety-critical systems receive attention throughout their operating lives to identify and mitigate potential safety risks. Throughout history, many safety-critical systems have received significant changes during their operating lives as a result of newly-discovered threats or recognition that threats identified during the initial design were not adequately addressed. Many (if not most) commercial aircraft have required significant modifications to address problems that were not understood at the time they were initially built and certified. Likewise, nuclear power plants in many countries have received major modifications over the years to address potential safety issues that were not understood or properly modeled at the time of their design. Sometimes, this process determines that there is no safe way to continue operation - usually that there is no economically viable way to mitigate the potential failure mode - and the system is simply shut down. This has happened to a few aircraft over the years, as well as several nuclear power plants (in many cases justified, in others not so much).

Fukushima existed in just such a system, and that the disaster occurred was the result of failures throughout the system, not a one-off failure at the design stage.

> I mean, ok. So say they build the plant 35m higher up, then get hit by a tsunami that is 36 meters higher [0] than the one that caused the Fukushima disaster? If we're going to start worrying about events outside the design spec we may as well talk about that one. If they're designing to tolerate an event, we can pretty reliably imagine a much worse event that will happen sooner or later and take the plant out. That is the nature of engineering.

I think you are missing the point. Obviously it is possible that a tsunami higher than any possible design threshold could occur (it is, after all, possible that an asteroid will strike the Pacific and kick up a wave of debris that wipes everything off the home islands). However, the tsunami that struck Fukushima Daiichi was no higher than a number of tsunamis recorded in Japan within the last century. The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon. This was not a case of "a bigger wave is always possible"; it was a case where the design, operation, and supervision were wrong, and known (by some) to be so prior to the accident.


> The choice of DBA tsunami height was clearly an underestimate, and underestimates were identified for Fukushima and other plants prior to the accident but not acted upon.

Not much of a swiss cheese failure then though. The failure is just that they committed hard to an assumption that was wrong.

My point is that unless it is actually an example of multiple failures lining up then this is a bad example of a swiss-cheese model. Seems to be an example of a tsunami hitting a plant that wasn't designed to cope with it. And a plant with owners who were committed to not designing against that tsunami despite being told that it could happen. It is a one-hole cheese if the plant was performing as it was designed to. The stance was that if a certain scenario eventuated then the plant was expected to fail and that is what happened.

Swiss cheese failures are when there are supposed to be a number of independent or semi-independent controls in different systems that all fail, leading to an outcome. This is just that they explicitly chose not to prepare for a certain outcome. Not a lot of systems failing; it even seems like a pretty reasonable place to draw the line for failure if we look at the outcomes. Expensive, unlikely, not much actual harm done to people, and likely to be forgotten in a few decades.
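The independence point can be made numerically: if the layers really are independent controls, their joint failure probability multiplies, whereas one shared wrong assumption (like the tsunami height) collapses them into a single layer. A rough sketch, with made-up probabilities purely for illustration:

```python
def joint_failure_prob(layer_probs):
    """Probability that every defensive layer fails at once,
    assuming the layers are statistically independent - the
    assumption the swiss-cheese model relies on.
    Input probabilities here are illustrative, not real data."""
    p = 1.0
    for prob in layer_probs:
        p *= prob
    return p

# Three independent 1% layers: roughly one-in-a-million joint failure.
# One shared wrong assumption behaves like a single 1% layer instead.
```

The gap between 1e-6 and 1e-2 is exactly the argument above: a "one-hole cheese" is not a swiss-cheese failure at all.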


There was a strong corporate cultural component to Fukushima as well. Tepco had spent decades telling the Japanese public that nuclear power was completely safe. A tall order in Japan obviously, but by and large it worked.

During the operation of Fukushima Daiichi, various studies had been done that recommended upgraded safety features like enlarging the seawall, moving the emergency generators above ground so they couldn't be flooded, etc.

In every case, management rejected the recommendations because:

1. They would cost money.

2. Upgrading safety would be tantamount to admitting the reactors were less than safe before, and we can't have that.

3. See 1.


I’m not sure why you think those are not a confluence of smaller events or that something outside the design spec isn’t one of those factors. By “small,” I don’t mean trivial. I mean an event that by itself wouldn’t necessarily result in disaster. Perhaps I should have said “smaller” rather than “small.” With the O-rings, the cold and the pressure to launch on that particular day all created the confluence. With Fukushima, the earthquake knocked out main power for primary cooling. That would have been manageable except then the backup generators got destroyed by the tsunami. It was not a case of just a big earthquake, whether outside or inside the design spec, making the reactor building fall down and then radiation being released.

If Fukushima gets hit by a disaster that is outside the design spec then the engineering root cause of the failure is established. There isn't some detailed process needed to figure out how a design should tolerate out-of-design events. And there isn't a confluence of smaller events, it is a very cut-and-dried situation (well, unstable and wet situation I suppose). There was one event that caused the failure. An event on a biblical scale that was hard to miss.

If you want Fukushima to tolerate things it wasn't designed to tolerate or fail in ways it wasn't designed to fail in then the swiss cheese model isn't going to be much help. You're going to need to convince politicians and corporate entities that their risk tolerance is too high. Which in a rational world would be a debate because it isn't obvious that the risk tolerances were inappropriate.


The design-spec tsunami resistance is aimed at getting away with just a couple of days of downtime, plus whatever the grid requires.

A much higher, much rarer case is what happened, and they didn't have a plan ready on hand for it.

Even if you treat the box as the special being they were...


It usually starts with a broken coffee machine.

When that happens, get ready.

Complexity breeds disaster, but it's usually worse with overconfidence and bad incentives;

nobody loves a safety review until the lights go out.


They need more battery storage for grid health, both colocated at solar PV generators (to buffer voltage and frequency anomalies) and spread throughout the grid. This replaces inertia and other grid services provided by spinning thermal generators. There was no market mechanism to encourage the deployment of this technology in concert with Spain’s rapid deployment of solar and wind.

There are non-battery buffers available too--I recently got rooftop residential solar installed, and learned that my area is covered by a grid profile requiring that the solar system stay online through something like 60 +/- 2Hz before shutting down completely, and ramping down production linearly beyond a 1Hz deviation or so. The point is to avoid cascading shutdowns by riding through over/undersupply situations, whereas an older standard for my area would have all the solar systems cut off the moment frequency exceeded 60.5Hz (which would indicate oversupply, as power-plant generators spin faster under lighter load).

In my system's case, switching to this grid profile was just a software toggle.
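That ride-through profile is simple enough to sketch. The thresholds below (full output within ±1 Hz of nominal, linear ramp-down out to ±2 Hz, then disconnect) are my reading of the description above, not the actual grid-profile standard:

```python
def ride_through_output(freq_hz, rated_kw, nominal=60.0,
                        ramp_start=1.0, cutoff=2.0):
    """Output a rooftop inverter keeps producing at a given grid
    frequency under the ride-through profile described above.
    Thresholds are illustrative assumptions, not the real standard."""
    dev = abs(freq_hz - nominal)
    if dev >= cutoff:          # beyond +/-2 Hz: disconnect entirely
        return 0.0
    if dev <= ramp_start:      # within +/-1 Hz: full output
        return rated_kw
    # linear ramp-down between 1 Hz and 2 Hz of deviation
    return rated_kw * (cutoff - dev) / (cutoff - ramp_start)
```

The older profile is the degenerate case: `ramp_start == cutoff == 0.5`, i.e. a hard cliff at 60.5 Hz, which is exactly what makes cascading shutdowns possible.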


This is grid following, very effective for small scale generation. It does not work for large scale generation though, when the grid is relying on that voltage and frequency from the utility scale renewable generation ("grid forming"). When those large generators exceed their ride-through tolerance, batteries step in to hold voltage and frequency up until the transient event ends or dispatchable generators spin up when called upon (currently fossil gas primarily, but also nuclear if there is headroom to increase output). Thermal generators can take minutes to provide this support (they are called upon, fuel intake is increased, the spinning metal spins faster); batteries respond within 250-500ms.

Tesla’s Megapack system at the Hornsdale Power Reserve in Australia was the first example of this being proven out at scale in prod. Batteries everywhere, as quickly as possible.


One problem that happened here is the _voltage_ spikes as the synchronous generation went away. Voltage _spikes_ on generation going away seem insane, but it's a real phenomenon.

The problem is that the line itself is a giant capacitor. It's charged to the maximum voltage on each cycle. Normally the grid loads immediately pull that voltage down, and rotating loads are especially useful because they "resist" the rising (or falling) voltage.

So when the rotating loads went away, nothing was preventing the voltage from rising. And it looks like the sections of the grid started working as good old boost converters on a very large scale.


Nope, they need more inertial storage to smooth things out and buy time / absorb inevitable failure bursts/cascades from inverter-based production or safety disconnection events.

Battery storage provides this grid service, as mentioned in my other comments.

In this very specific case, battery storage would not have helped (in fact, it would have worsened the problem). One of the issues in the failure is renewables, but not because of intermittence. It's because of their ~infinite ramp and them being DC.

Anything that's not a spinning slug of steel produces AC through an inverter: electronics that take some DC, pass it through MOSFETs and coils, and spit out a mathematically pure sine wave on the output. They are perfectly controllable and have no inertia: tell them to output a set power and they happily will.

However, this has a few specific issues:

- infinite ramps produce sudden influx of energy or sudden drops in energy, which can trigger oscillations and trip safety of other plants

- the sine wave being electronically generated, physics won't help you to keep it in phase with the network, and more crucially, keep it lagging/ahead of the network

The last point is the most important one, and one that is actually discussed in the report. AC works well because physics is on our side, so spinning slugs of steel will self-correct depending on the power requirements of the grid, and this includes their phase compared to the grid. How out-of-phase you are is what's commonly called the power factor. Spinning slugs have a natural power factor, but inverters don't: you can make any power factor you want.

Here in the Spanish blackout, there was an excess of reactive power (that is, a phase shift happening). Spinning slugs will fight this shift of phase to realign with the correct phase. An inverter will happily follow the sine wave measured and contribute to the excess of reactive power. The report outlines this: there was no "market incentive" for inverters to actively correct the grid's power factor (translation: there are no fines).
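To put a number on "excess of reactive power": for the same voltage and current, shifting the current out of phase moves power from the active to the reactive column. A toy sketch (the figures are illustrative, nothing here is from the report):

```python
import math

def power_split(v_rms, i_rms, phase_deg):
    """Split apparent power into active (P) and reactive (Q) parts
    for a sinusoidal voltage/current pair shifted by phase_deg.
    Numbers are purely illustrative, not from the report."""
    phi = math.radians(phase_deg)
    s = v_rms * i_rms            # apparent power (VA)
    p = s * math.cos(phi)        # active power (W); cos(phi) is the power factor
    q = s * math.sin(phi)        # reactive power (var)
    return p, q
```

At zero phase shift all 2300 VA of a 230 V / 10 A pair is active power; at 90 degrees it is all reactive. A spinning slug naturally pulls phi back toward zero; a grid-following inverter just tracks whatever phi it measures.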

So really, more storage would not have helped. They would have tripped just like the other generators, and being inverter-based, they would have contributed to the issue. Not because "muh renewable" or "muh battery", but because of an inherent characteristic of how they're connected to the grid.

Can this be fixed? Of course. We've had the technology for years for inverters to better mimic spinning slugs of steel. Will it be? Of course. Spain's TSO will make it a requirement to fix this and energy producers will comply.

A few closing notes:

- this is not an anti-renewables writeup, but an explanation of the tech, and the fact that renewables are part of the issue is a coincidence on the underlying technical details

- inverters are not the reason the grid failed. but they're a part of why it had a runaway behavior

- yes, wind also runs on inverters despite being spinning things. with the wind being so variable, it's much more efficient to have all turbines be not synchronized, convert their AC to DC, aggregate the DC, and convert back to AC when injecting into the grid


I agree with your detailed assessment, but importantly, I argue more battery storage would've allowed the grid to fail gracefully through rapid fault isolation and recovery (assuming intelligent orchestration of transmission-level fault isolation). There are parallels to the black-start capabilities provided by battery storage in Texas (via Tesla's Gambit Energy subsidiary). When faults are detected, the faster you can isolate and contain the fault, the faster you can recover before it spreads through the grid system.

The storage gives you operational and resiliency strength you cannot obtain with generators alone, because of how nimble storage is (advanced power controls), both for energy and grid services.

> Can this be fixed? Of course. We've had the technology for years for inverters to better mimic spinning slugs of steel. Will it be? Of course. Spain's TSO will make it a requirement to fix this and energy producers will comply.

This is synthetic inertia, and is a software capability on the latest battery storage systems. "There was no market mechanism to encourage the deployment of this technology in concert with Spain’s rapid deployment of solar and wind." from my top comment. This should be a hard requirement for all future battery storage systems imho.
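A common formulation of synthetic inertia is to inject active power proportional to the rate of change of frequency, mimicking a synchronous machine with inertia constant H: P = -2·H·S·(df/dt)/f_nominal. A minimal sketch, with parameter values that are my assumptions (not taken from the cited paper):

```python
def synthetic_inertia_power(df_dt, rated_mva, h_seconds=4.0, f_nominal=50.0):
    """Active power (MW) a battery injects to emulate the inertial
    response of a synchronous machine with inertia constant H.
    P = -2 * H * S * (df/dt) / f_nominal.
    H=4 s and 50 Hz are illustrative assumptions."""
    return -2.0 * h_seconds * rated_mva * df_dt / f_nominal
```

For example, a falling frequency (df/dt = -0.5 Hz/s) on a 6 MVA system like the one in the case study yields +0.48 MW injected; the whole point of the measured 175-715 ms response times is how quickly the battery can start delivering that correction.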

Potential analysis of current battery storage systems for providing fast grid services like synthetic inertia – Case study on a 6 MW system - https://www.sciencedirect.com/science/article/abs/pii/S23521... | https://doi.org/10.1016/j.est.2022.106190 - Journal of Energy Storage Volume 57, January 2023, 106190

> Large-scale battery energy storage systems (BESS) already play a major role in ancillary service markets worldwide. Batteries are especially suitable for fast response times and thus focus on applications with relatively short reaction times. While existing markets mostly require reaction times of a couple of seconds, this will most likely change in the future. During the energy transition, many conventional power plants will fade out of the energy system. Thereby, the amount of rotating masses connected to the power grid will decrease, which means removing a component with quasi-instantaneous power supply to balance out frequency deviations the millisecond they occur. In general, batteries are capable of providing power just as fast but the real-world overall system response time of current BESS for future grid services has only little been studied so far. Thus, the response time of individual components such as the inverter and the interaction of the inverter and control components in the context of a BESS are not yet known. We address this issue by measurements of a 6 MW BESS's inverters for mode changes, inverter power gradients and measurements of the runtime of signals of the control system. The measurements have shown that in the analyzed BESS response times of 175 ms to 325 ms without the measurement feedback loop and 450 ms to 715 ms for the round trip with feedback measurements are possible with hardware that is about five years old. The results prove that even this older components can exceed the requirements from current standards. For even faster future grid services like synthetic inertia, hardware upgrades at the measurement device and the inverters may be necessary.


Yep, sounds like "This was bound to happen at some point"

Which on some level is exactly "what the bosses and politicians want to hear"

When it's everybody's fault it's nobody's fault.


In some ways, yes, but that's what reality is. There was probably some last factor kicking in that triggered the cascade, but there were probably many non-happy-paths not properly covered by working backup/fallback strategies. So a report could totally still say "it's X's fault", pointing the finger there. The government would blame the owner of X, some public statement about fixing X would be made, and then the ones working in the field would internally push to improve/fix their own (reduced) scope.

I don't know what will come of this report in the next months/years, I will keep an eye on it though, since I live in Spain :)


Exactly.

But EU's liberalized energy market gives us resiliency and low prices for electricity! /s

But not across the Pyrenees :_)

There are ways to aggregate these into a single resilience score for policy makers with only moderate loss of detail but it's unpopular.

It is very carefully worded, but variable renewables are holding the smoking gun here. This is why Spain now requests a better connection to French nuclear. This reckless overbuild of variable generation is a valuable negative example; wind and solar without adequate hydro or nuclear is dead.

Your statement is wrong.

The report describes that there was no mechanism to dispatch the reactive power of renewables separately from the active power.

In page 452, item numbered 1 states "RES power plants follow fixed power factor" (RES = Renewable Energy Sources). The source of this finding is in section 4.2.1.

In page 208, footnote 35, the reference is given to Royal Decree 413/2014 of 6 June, which mandates this fixed power factor. The Article 7, section e), states that renewable energy sources must follow the instructions given by the operator to set power factor, and only if the distribution lines support it.

And footnote 36 describes how this worked in practice on the date of the outage: renewables were told, by email on the previous day, which fixed power factor correction to use the following day.

--

This lack of dynamic dispatch of reactive power was a known problem, already reported in 2022 [1]

[1] https://www.eldiario.es/economia/competencia-reconocio-julio...


It's lack of experience managing variability, not variability itself.

Wind and solar are very far from dead, but they do need some adjustments - as the report makes clear.


Spaniard here, I didn't hear about that.

I don't like the editorialized title either but I would say that the actual post title

"The FSF doesn't usually sue for copyright infringement, but when we do, we settle for freedom"

and this sentence at the end

" We are a small organization with limited resources and we have to pick our battles, but if the FSF were to participate in a lawsuit such as Bartz v. Anthropic and find our copyright and license violated, we would certainly request user freedom as compensation."

could be seen as "threatening".


And they can ban your account if they think you are doing that. I think someone even commented here on HN they were banned by Anthropic for this.

Why would they have that feature in the claude code cli if it goes against the ToS? You can use Claude Code programmatically. This is not the issue. The issue is that Anthropic wants to lock you in within their dev ecosystem (like Apple does). Simple as that.

allowed shell pipes doesn't necessarily mean they want loops running them.

One of the economic tuning features of an LLM is to nudge the LLM into reaching conclusions and spending the tokens you want it to spend for the question.

presumably everyone running a form of ralph loop against every single workload is a doomsday situation for LLM providers.


> allowed shell pipes doesn't necessarily mean they want loops running them.

insane that people apologize for this at all. we went from FOSS software being standard to a proprietary cli/tui using proprietary models behind a subscription. how quickly we give our freedom away.


Anthropic itself advertised their own implementation of agentic loop (Ralph plugin). Sure, it worked via their official plugin, but the end result for Anthropic would be the same.

There's nothing in TOS that prevents you from running agentic loops.


I don't know why this is downvoted, see my nephew (?) comment [0] for a longer version, but this is not at all clear IMHO. I'm not sure if a "claude -p" on a cron is allowed or not with my subscription, if I run it on another server is it? Can I parse the output of claude (JSON) and have another "claude -p" instance work on the response? It's only a hop, skip, and a jump over to OpenClaw it seems, which is _not_ allowed. But at what point did we cross the line?

It feels like the only safe thing to do is use Claude Code, which, thankfully, I find tolerable, but unfortunate.

[0] https://news.ycombinator.com/item?id=47446703


is this against their tos or something? what did they expect programmers to do, not automate the automated code-writer?

They have now successfully turned the temperature knob from 2 to 5. I wonder what 7 will be.

Non-playstore applications will have restricted access(sms/telephony), and bit by bit the screws will be tightened.

"Only 0.0004% of the userbase installs after the initial 24 period, greater than x% take 48 hours or more so the 24hr window is now 72hr", and repeat until its all nice and locked down for them.

"Your google play account will now need ID to prevent children accessing adult software" will come along not long after. For the children.

-.-


1. Company A develops Project One as GPLv3

2. BigCo buys Company A

3a. usually here BigCo should continue to develop Project One as GPLv3, or stop working on it, and the community would fork it and continue working on it as GPLv3

3b. BigCo does a "clean-room" reimplementation of Project One and releases it under proprietary licence. Community can still fork the older version and work on it, but BigCo can continue to develop and sell their "original" version.


As a real world example, Redis was both Company A and BigCo. Project One is now ValKey.

2. BigCo owns Project One now

3a. BigCo is now free to release version N+1 as closed source only.

3b. Community can still fork the older version and work on it, but BigCo can continue to develop and sell their original version.

"Firefox’s free VPN won’t be using Mullvad’s infra though; it’s hosted on Mozilla servers around the world (if beta testing of the feature done in late 2025 tracks)."

From OMG Ubuntu


What makes me not want to use it is I assume Mozilla has a legal presence all around the world.

Two huge reasons people use VPNs is piracy and saying things/accessing content that's not legal in their country. If that company has a legal presence in your country, then they'll hand over that data to the police should you criticize the wrong person or download a movie without permission. At which point a VPN becomes kind of pointless.


The only time I use a VPN is when I'm traveling and I don't trust "free coffee shop wifi".

I probably won't use Mozilla's offering, because I want any VPN to cover the whole system, not just the browser (correct me if I've made a bad assumption here)


It depends on whether the company keeps records or not. They can't turn over records they don't have.

how is that any different from any other VPN provider?

Oh, this is really good. Even just for the money part. Thanks!

Are you sure? We have many SaaS and final products which are just stitching together more SaaS. We have a very vocal part of the HN community always reminding you to buy a SaaS solution and connect it to your business instead of maintaining an in-house bespoke solution.

I think the core idea is that if a baby with "caveman genetics" so to speak were to be born today, they could achieve similar intellectual results to the (average?) rest of the population. At least that's how I interpret it.
