>Roughly 50% of indoor dust is composed of microplastics, so it's not like it's uncommon.
I highly doubt that. Soil, skin and pollen are usually the big ones. Hairs too, depending on how you count dust, but eliminating hair-like fibres would also eliminate most of the sources of plastic, unless you allow really large particle sizes.
[edit] Checking research. The highest claim I found was 39% of fibres (in household dust, Japan), but that seemed to be per particle, not by volume.
Synthetic fibers from clothes are microplastics, and clothes shed lots of fibers. Not to mention all the upholstered furniture, carpet, rugs, drapes, bags, etc.
And indeed there is not currently conclusive proof that WiFi is a significant risk to human health. However, this is the same line the tobacco industry used for decades, even though they knew differently.
Because it’s an inverted claim of falsification, it works for literally anything ("I cannot prove that X will absolutely not hurt you"), but you get pilloried if you put something in the blank that the herd happens to support.
We’ve reached the absurd point where all sides of the political spectrum have sacred cows, and an exceedingly poor understanding of scientific reasoning, and all sides also try to dunk on the others by claiming scientific authority.
I'm not sure if they have established a threat. I thought it was mostly hypothesised or very locally specific harms.
On the other hand I suspect much of the real science on environmental plastic might avoid the term microplastic since it seems to have a meaning that flows to whatever can make the scariest headline today. I have seen the size range to qualify run from microscopic up to a couple of millimetres. Volumes, quantities, or location stated without regard to individual particle size. I'm relatively certain that they have not discovered 1mm particles inside red blood cells.
Even what counts as a plastic seems to be an easy way of adding vagueness. I saw one table that seemed to count cellulose as a plastic, which makes sense if you are thinking about the properties of the material, but definitions that loose are unsurprisingly easy to come across, to the point that it's not really worth going looking for them.
Compute, bytes of RAM used, bytes in the model, bytes accessed per iteration, bytes of data used for training.
You can trade the balance if you can find another way to do things; extreme quantisation is but one direction to try. KANs were aiming for more compute and fewer parameters. The recent optimisation projects have been pushing at these various properties. Sometimes gains in one come at the cost of another, but that needn't always be the case.
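To make the "bytes in the model" axis concrete, here is a minimal sketch (my own illustration, not from any particular paper) of how quantisation shrinks weight storage while leaving the parameter count fixed; the 7B figure is a hypothetical example, and real deployments add KV-cache, activations, and quantisation overhead (scales, zero-points) not counted here:

```python
# Rough memory footprint of a model's weights at different precisions.
# Illustrative only: ignores activations, KV-cache, and the per-group
# scale/zero-point overhead that real quantisation schemes carry.

def weight_bytes(n_params: int, bits_per_param: int) -> int:
    """Bytes needed to store the weights alone."""
    return n_params * bits_per_param // 8

n = 7_000_000_000  # a hypothetical 7B-parameter model

for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = weight_bytes(n, bits) / 2**30
    print(f"{label}: {gib:.1f} GiB")
```

The same parameter count spans roughly an 8x range in weight memory, which is why quantisation trades the RAM axis against (usually) some accuracy, while something like KANs trades along the compute-versus-parameters axis instead.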
The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised way more than they do today.
Time constraints have caused an increase in fast food consumption and a reduction in exercise.
Both issues then seem to be addressed by coercion to change behaviour when what is needed is a systemic change to the environment to provide preferable options.
"the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target."
This article is the first I have seen that mentions Claude in relation to this specific incident. There's been plenty of talk about AI use in warfare in general, but in the case of this school, most of the coverage I have seen suggested outdated information and procedures not properly followed.
Amodei looks absolutely prescient for taking a stand against use of Claude in the kill chain. Not to mention how utterly foolish the DoD looks declaring Claude to be a national security threat while simultaneously using it to choose targets. No wonder they got humiliated in court.
Well, to people who don't believe in precognition, it sounds like Anthropic had quality control engineers dedicated to their military clients' usage: basically running through the prompts, inspecting the answers, and digging deeper into how their chatbots gave those answers. Somebody must have pressed the high-alert button, resulting in Anthropic taking a stance.
Certainly possible, but I'd assume the DoD expressly forbade anyone looking at their usage, and Anthropic had to accept that to win the contract. They may have gotten wind of what was being done somehow.
You, today, can use Claude in Amazon Bedrock, and the way that works is, if you want it to be this way: the code, the model weights, and whatever other artifacts are involved run on Bedrock. Bedrock is not a facade over Claude's token-based-billing RESTful API, where Anthropic runs its own stuff. In the strictest sense, Bedrock can be used as a facade over lower-level Amazon services that obey non-engineering, real-world concerns: geographic and physical boundaries, which physical data center hardware is connected by what and where, jurisdictional boundaries, whatever. It's multi-tenancy in the sense that Amazon has multiple customers, but not in the usual sense: because you are willing to pay for these requirements, Amazon has sorted out how to run the Claude model weights, as though it were an open-weights model you downloaded off Hugging Face, without actually giving you the weights, while letting you satisfy all these other IP, jurisdictional, and non-technical requirements, in a way that Anthropic has also agreed to.
This is what the dispute with the Pentagon is about, and what people mean when they say Claude is used in government (it is used in Elsa for the FDA, for example, too). Anthropic doesn't have telemetry, like the prompts, under this arrangement, so they have a contract that says what you can and cannot use the model for, but they cannot prove how you use the model, which of course they could if you used their RESTful API service. They can't "just" paraphrase your user data and train on it, like they do on the RESTful API service. There are reasons people want this arrangement ($$$).
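For what the Bedrock path looks like mechanically, here is a hedged sketch of building the request body. The field names and `anthropic_version` string follow AWS's published Bedrock examples for Anthropic models, and the model ID is illustrative; treat the specifics as assumptions about the public API, not a statement about how any government deployment is actually wired up:

```python
import json

# Sketch of a Claude-on-Bedrock request body (Anthropic "messages" format,
# as documented in AWS's Bedrock examples). Illustrative, not authoritative.

def build_claude_bedrock_body(prompt: str, max_tokens: int = 256) -> str:
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# Actually invoking it goes through boto3's bedrock-runtime client,
# i.e. an AWS endpoint rather than Anthropic's own API, which is the
# whole point of the arrangement described above:
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
#       body=build_claude_bedrock_body("Hello"),
#   )

print(build_claude_bedrock_body("Hello"))
```

The request and response never touch Anthropic's servers in this mode, which is why Anthropic can set contractual usage terms but has no telemetry to verify them.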
The vendor (Palantir) can use whatever model it wants, right? It chose Claude via "Bedrock." I don't know if they use Claude via Bedrock. Ask them. But that's what they are essentially saying, that's what this is about. Palantir could use Qwen3 and run it on datacenter hardware. Do you understand? It matters, but it also doesn't matter.
It's a bunch of red herrings in my opinion, and this sort of stuff being a red herring is what the article is mostly about.
As I understood it, Anthropic was prime on their own contract which the DoD infamously unsuccessfully tried to renegotiate mid-term. Are you saying that Palantir had some subcontracted use of Claude independent of Anthropic's existing contract?