So you think the answer is replacing a requirement for a 6-digit 2FA code that can be typed into the npm publishing CLI with a requirement for a device that has a camera that can scan a QR code and then... what? What does the QR code do on the device? How does the npm CLI present the QR code?
Simply supporting passkeys gives people domain-locked login via QR/phone, or any FIDO2 USB device. No more keyboard entry required for login other than the username, which means phishing is off the table. Standards are great if we can get anyone to use them.
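The phishing resistance comes from origin binding: the browser writes the page's origin into the signed clientDataJSON, so an assertion minted on a lookalike domain fails server-side verification. A toy sketch of just that origin check (the expected origin is an assumption here, and a real verifier also checks the challenge, RP ID hash, and signature):

```python
import base64
import json

EXPECTED_ORIGIN = "https://registry.npmjs.org"  # hypothetical relying-party origin

def origin_ok(client_data_json_b64: str) -> bool:
    # clientDataJSON is produced by the browser/authenticator, not typed by
    # the user, so a phishing page on another origin cannot forge this field.
    padded = client_data_json_b64 + "=" * (-len(client_data_json_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return client_data.get("origin") == EXPECTED_ORIGIN
```

This is the piece a 6-digit TOTP code fundamentally lacks: the code itself carries no information about where it was typed.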
None of your solutions seem useful in this case, especially a $150 hold. Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.
You can't block 100% of these attempts, but you can block a large class of them by checking basic info on the attempted card changes, e.g. whether they all have different names and ZIP codes. Combine that with other (useful) mitigations, like getting an alert when, over the past few hours or even days, 90% of card-change attempts have failed for a cluster of users.
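A minimal sketch of that kind of heuristic, assuming a sliding window over card-change attempts; the thresholds and field names are illustrative, not from any real fraud system:

```python
import time
from collections import deque

WINDOW_SECONDS = 3600      # look at the last hour of attempts
MIN_ATTEMPTS = 20          # don't alert on tiny samples
FAILURE_RATE_ALERT = 0.9   # "90% of card change attempts have failed"
DISTINCT_IDENTITY_RATE = 0.8  # nearly every attempt has a new name/ZIP

class CardChangeMonitor:
    def __init__(self):
        self.events = deque()  # (timestamp, succeeded, name, zip_code)

    def record(self, succeeded, name, zip_code, now=None):
        now = time.time() if now is None else now
        self.events.append((now, succeeded, name, zip_code))
        # Drop attempts that have aged out of the window.
        while self.events and self.events[0][0] < now - WINDOW_SECONDS:
            self.events.popleft()

    def should_alert(self):
        if len(self.events) < MIN_ATTEMPTS:
            return False
        failures = sum(1 for _, ok, _, _ in self.events if not ok)
        distinct = len({(n, z) for _, _, n, z in self.events})
        # A burst of mostly-failing attempts, each with a fresh cardholder
        # identity, looks like someone cycling through stolen cards.
        return (failures / len(self.events) >= FAILURE_RATE_ALERT
                and distinct / len(self.events) >= DISTINCT_IDENTITY_RATE)
```

The point is that the signal is the *pattern* across attempts, not any single attempt, which is why no per-request check catches it.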
>None of your solutions seem useful in this case, especially a $150 hold.
Attackers are going after small charges. That's the reason they're going after these guys in the first place.
>Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.
And then you give a solution that is 10x as complicated, high maintenance, and easy to mess up.
>You can't block 100% of these attempts, but you can block a large class of them by checking basic info for the attempted card changes like they all have different names and zip codes.
This is essentially a much more complex superset of rate limiting.
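For comparison, the "too complicated" baseline is a few lines, a per-key sliding-window limiter (limits and key choice are assumptions for illustration):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` payment attempts per key within `window` seconds."""

    def __init__(self, limit=5, window=3600):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of allowed attempts

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        q = self.hits[key]
        # Evict attempts older than the window, then check capacity.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Anything that also compares names and ZIP codes across attempts sits on top of exactly this bookkeeping, hence "superset".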
The point isn't whether every user actually notices it; it's that enough of them do that attackers are specifically looking for the ability to make small charges. If you remove that capability, they will look elsewhere.
Yeah… no it wouldn’t. I’ve watched users have their bank accounts emptied (by accident) because they kept refreshing. A measly £150 isn’t going to register until it’s too late anyway.
Stop posting AI slop, especially slop pull requests like the one you made to OpenClaw. Learn the first thing about a project before you try to monetize it with fake contributions. For example, OpenClaw is overwhelmed with slop PRs, and the author has talked about this a lot.
In theory, the same way people are making those claims about "stolen" art, such as models that reproduced watermarks from Getty Images or Shutterstock. Similar "watermarks" have existed in some LLM output.
Trying to set up a prompt injection attack for someone accessing a repo with a coding agent is juvenile and pointless. And it doesn't deal with the training part.
It's been around for almost 15 years, and stable enough that several providers have been running it in production for the past 10 years (GCP and Azure since 2017).
AWS is just late to the game because they've rolled so much of their own stack instead of adapting open source solutions and contributing back to them.
> AWS is just late to the game because they've rolled so much of their own stack instead of adapting open source solutions and contributing back to them.
This is emphatically not true. Contributing to KVM and the kernel (which AWS does anyway) would not have accelerated the availability.
EC2 is not just a data center with commodity equipment. They have customer demands for security and performance that far exceed what one can build with a pile of OSS, to the extent that they build their own compute and networking hardware. They even have CPU and other hardware SKUs not available to the general public.
If my sources are correct, GCP did not launch on dedicated hardware like EC2 did, which raised customer concerns about isolation guarantees. (Not sure if that’s still the case.) And Azure didn’t have hardware-assisted I/O virtualization ("Azure Boost") until just a few years ago and it's not as mature as Nitro.
Even today, Azure doesn’t support nested virtualization the way one might ordinarily expect them to. It's only supported with Hyper-V on the guest, i.e., Windows.
> While nested virtualization is technically possible while using runners, it is not officially supported. Any use of nested VMs is experimental and done at your own risk, we offer no guarantees regarding stability, performance, or compatibility.
We operate a Postgres service on Firecracker. You can create as many databases as you want; we memory-snapshot them after 5 seconds of inactivity and spin them up again in 50 ms when a query arrives.
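The control loop for that pattern can be sketched roughly as below. This is a hypothetical outline, not their implementation; `snapshot_vm` and `resume_vm` stand in for whatever calls drive Firecracker's snapshot/restore API:

```python
import time

class IdleSnapshotter:
    """Snapshot a microVM after a fixed idle period; restore it on demand."""

    def __init__(self, idle_seconds=5.0, snapshot_vm=None, resume_vm=None):
        self.idle_seconds = idle_seconds
        self.snapshot_vm = snapshot_vm or (lambda: None)  # pause + dump memory
        self.resume_vm = resume_vm or (lambda: None)      # restore from snapshot
        self.last_activity = time.monotonic()
        self.snapshotted = False

    def on_query(self, now=None):
        # Called when a query arrives: restore first if the VM is parked.
        now = time.monotonic() if now is None else now
        if self.snapshotted:
            self.resume_vm()  # snapshot restore is what makes ~50 ms possible
            self.snapshotted = False
        self.last_activity = now

    def tick(self, now=None):
        # Called periodically: park the VM once it has been idle long enough.
        now = time.monotonic() if now is None else now
        if not self.snapshotted and now - self.last_activity >= self.idle_seconds:
            self.snapshot_vm()
            self.snapshotted = True
```

The design choice worth noting: snapshotting after idle trades a tiny first-query latency for near-zero cost on databases that are mostly asleep.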
Not sure what you mean. It IS the same model, just a smaller version of it. And gpt-5.3-codex is a smaller version of gpt-5.3 trained more on code and agentic tasks.
Their naming has been pretty consistent since gpt-5. For example, gpt-5.1-codex-max > gpt-5.1-codex > gpt-5.1-codex-mini.
What do you mean by "the same model, just a smaller version"? Codex should be a finetune of the "normal" version; where did you get that it's smaller? It's not as simple as taking some weights from the model and creating a new model. Normally the mini or flash models are trained separately, based on data from the larger model.