No, they stop-hunt their way down to depressed prices, where they then buy anticipating the recovery while you were closed out of your “safe” retirement positions at -15%.
You use a trailing stop loss, so you get closed out 15% down from the peak, not 15% down from your purchase price. The alternative in a 24-hour market is worse: the news of a real event hits, and by the time you wake up and respond you’re down 50% or more and the stock isn’t coming back.
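The distinction above can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not any broker's actual order logic: the stop tracks the highest price seen so far and fires once the price falls a fixed percentage below that peak, rather than below the purchase price.

```python
def trailing_stop(prices, trail_pct=0.15):
    """Return the index where a 15% trailing stop fires, or None if it never does."""
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)              # ratchet the peak up, never down
        if price <= peak * (1 - trail_pct):  # stop level trails the peak, not the entry
            return i
    return None

# Hypothetical path: bought at 100, rallied to 140, then fell.
# The stop level is 140 * 0.85 = 119, so it fires on the drop to 118,
# locking in a gain relative to the 100 entry rather than a -15% loss.
prices = [100, 110, 125, 140, 130, 118, 90]
fired_at = trailing_stop(prices)  # index 5, price 118
```

A plain (non-trailing) stop at 85 would have ridden the same path all the way down before triggering, which is the scenario the comment is contrasting against.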
This policy change exists to extract profit from a safety mechanism used by retail traders.
It should yield a lot of profit for 24-hour trading systems during a downturn.
That is a reasonable position; however, the assumption that it is the administration gaming them, rather than other motivated parties, is open for discussion.
It is in fact not at all reasonable. They are saying that the BLS stats can't be trusted because they totally misunderstand the survey methodology. That isn't a reason!
I’d counter that if we were doing a good job gathering data, these structural biases could be compensated for with more conservative initial numbers.
At some point, a persistent refusal to take compensating action becomes faking the numbers.
> if we were doing a good job gathering data, these structural biases could be compensated for with more conservative initial numbers
There is no “more conservative.” The data will bias in the direction of the trend. The point of the data is, in part, to measure that trend. Fucking with it to make it politically correct to the statistically illiterate is precisely the sort of degradation of data we’re worried about.
(They’re also useless as a time series if the methodology changes quarter to quarter. That’s the job of analysis. Not the data.)
What you wrote suggests the data will bias predictably, which matches my understanding.
Reporting biased data as the default, on the grounds that the audience already compensates for the bias, seems like a weak argument against improving it.
They can preserve data visibility/granularity by releasing the prior numbers as previously calculated, while at the same time changing the calculation of the headline number to be better compensated.
The simpler argument is that changing it at all will result in a negative step change in the reporting that no one wants to take accountability for.
> What you wrote suggests the data will bias predictably
Ex post facto. Before the fact, we don’t know.
Imagine you know tomorrow’s weather will bring a strong gust, but not its direction. Averaging the models will produce a central estimate. But you know the outcome will be biased away from the center. You just don’t know, until it happens, in which direction.
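The gust analogy can be made concrete with a toy simulation (hypothetical numbers, not a real forecasting model): the true gust is +10 or -10 with unknown sign, individual forecasts scatter around zero, so the ensemble mean is a sensible central estimate that is nonetheless guaranteed to miss by roughly the full gust magnitude.

```python
import random

random.seed(0)

true_gust = random.choice([-10.0, 10.0])              # realized wind; sign unknown in advance
models = [random.gauss(0.0, 2.0) for _ in range(50)]  # forecasts centered on zero
ensemble_mean = sum(models) / len(models)             # central estimate, near 0

error = abs(true_gust - ensemble_mean)
# ensemble_mean is close to 0, but the realized error is close to 10.
# No "more conservative" shift of the estimate fixes this, because
# shifting it helps in one direction and hurts equally in the other.
```

This is the sense in which the bias is knowable ex post but not correctable ex ante: you know the magnitude of the miss before the fact, but not its sign.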
> They can provide for the continuation of data visibility/granularity by releasing the prior numbers as previously calculated and at the same time changing the calculation of the headline number to be better compensated
They do. These data are all recalculated with each methodological change. They’re just deprecated indices the media don’t report on because they’re of academic, not broad, concern.
> simpler argument is that changing it at all will result in a negative step change in the reporting
Simpler but wrong. Those data would be useless for the same reason we don’t let CEOs smooth revenues.
I’m confused by this discussion. It seems like you said the biases were structural, since we know who reports early and that is why the early numbers are always revised down. Structural implies known in advance.
It also seems like you said they shouldn’t revise the numbers but now you are saying they already do.
The performance gap between Apple’s flash and a typical aftermarket NVMe drive in a Windows laptop is more attributable to controller design and integration than to trace length.
Apple can get away with less RAM because their flash storage is fast enough to make swapping barely noticeable. In contrast, most Windows machines incur a significant performance penalty when swapping.
I mean, maybe it's not; honestly, it was the first FAANG that popped into my head as I typed the comment. But most software engineers working at a place like that, as boring or unsexy as it may have become in SF, won't ever even get close.
First you take a 50-person org. Then (for scale) you hire highly motivated performers who, because they came up in big orgs, are used to spending 50 people over three years on a project six people can do in three to six months. Then you create incentives that make them compete for standing, and standing also depends on their personal scope (i.e., headcount).
I want some useful memory, but it seems hardcoded to shoehorn personal details or tidbits from past conversations into responses, even when I specifically ask it not to in my personalization prompt.
It is interesting that both "customer support" comments here suggesting "you're just using it wrong" are from very recent accounts with very little karma.