'The latest generation of wearables delivers clinical-grade insights. By providing continuous, noninvasive biometric readings, the Oura ring can act as a “check-engine light,” bridging the gap between doctor visits and daily health decisions. One user’s notifications about changes in her vitals led her to seek medical attention, uncovering early signs of Hodgkin lymphoma. In a more everyday scenario, a busy executive might receive a “symptom radar” notification while traveling, prompting him to rest so he doesn’t become sick.
But federal policy hasn’t caught up with technological advances. Wearables sit in a regulatory gray zone. The FDA sorts them and their associated software into two categories: general wellness products and medical devices. The former have minimal oversight and no standards. The latter—products intended to diagnose, treat or prevent disease—must meet requirements for design, labeling and manufacturing.
Wearables with sophisticated sensing capabilities don’t fit within this binary framework. Their sensors are used for both purposes, so there’s often a mismatch between the actual risk and the imposed regulatory burden. Manufacturers are faced with a choice: tailor their features to the wellness category, sacrificing functionality, or accept slower product development and market entry.
With a reformed regulatory structure, Oura customers could already be benefiting from a range of advanced features, including screening for high blood pressure. Hypertension is one of the most significant risk factors for heart disease and stroke, while high blood pressure in pregnancy can signal pre-eclampsia, a complication that endangers mother and baby. Another primed capability, sleep-apnea detection, would give users an early-warning tool for a condition that often goes undiagnosed and can lead to serious complications.
Under current regulations, however, a ring with these features would need to be submitted for FDA clearance as a medical device. That’s why we’re calling for a new device classification called “digital health screeners”—software features that can warn users of trouble but stop short of diagnosis. This modernized regulatory path would offer clear guidelines, including straightforward labeling with explicit disclaimers indicating nondiagnostic intent, as well as performance standards with defined accuracy and reliability benchmarks. It would also ensure quality management and a simpler market-entry process than for higher-risk medical devices.'
'The maker of the Roomba vacuum cleaner, iRobot, filed for bankruptcy Sunday after 35 years in business. An obituary might describe it as a victim of government assassination. Overzealous antitrust cops egged on by Sen. Elizabeth Warren stuck in the knife. President Trump may have dealt the death blow with his tariffs.
We explained at the time how Ms. Warren and progressives in the Biden Administration thwarted Amazon’s attempt to buy iRobot in 2022. They claimed the $1.7 billion acquisition would unfairly augment Amazon’s lead in robotics and home devices. They also said the Roomba would enable Amazon to hoover up data and spy on Americans.
Amazon is “‘almost universally recognized’ as the leader in warehouse and fulfillment robotics space,” Ms. Warren and other progressives wrote to Biden Federal Trade Commission Chair Lina Khan in September 2022. The deal “would open up a new market to Amazon’s abuses.” Heaven forfend Amazon would use robots to make chores less laborious, as it has for warehouse work.
“Amazon stands to gain access to extremely intimate facts about our most private spaces that are not available through other means, or to other competitors,” leftwing groups wrote to the Biden FTC. They omitted that iRobot’s main competitors were Chinese companies, which were fast stealing market share. Beijing wants to dominate robotics.
In January 2024, Amazon and iRobot called off the deal amid opposition from Ms. Khan’s FTC and Europe’s antitrust regulators. The Biden FTC issued a statement saying it was “pleased.” Amazon CEO Andy Jassy quipped that regulators trusted Chinese firms “more than they do Amazon.” Less pleased are the U.S. workers who subsequently lost their jobs.'
'At the turn of the 21st century, IBM and Stanford University jointly demonstrated the first implementation of Shor’s Algorithm, a quantum algorithm that can factor large numbers into their prime components. That raised some big risks: The ability to execute the algorithm underpins the fears that quantum computers will be able to crack the encryption that has protected much of the world’s data for decades. But more broadly, the breakthrough proved that quantum computing is more than just theory. It was a massive milestone for the industry.
“We’ve had a long, proud history of mathematics here,” Gambetta says. “Think of algorithms as the foundation.”
IBM then began pushing quantum out of the lab and into the world. To date, the company has deployed 85 quantum systems, for use by more than 300 organizations, typically laboratories and educational institutions. That is up from last year’s tally of 75 deployments for 250 organizations.
The figures include both computers, which the company defines as systems with more than 100 qubits, and smaller devices that fall below that threshold. IBM has deployed 25 systems with more than 100 qubits. Google, perhaps IBM’s closest quantum rival, has deployed just two systems of that size.
IBM aims to lead on the quantum software front as well as in hardware. Gambetta says Qiskit, an open-source software stack for quantum computers that is based on the popular coding language Python, is one of its most popular offerings. At last check, Qiskit had been downloaded 13 million times and used to run over 3.8 trillion circuits on IBM Quantum systems.
Despite the progress, there are still plenty of puzzles for Gambetta’s team to solve. The biggest challenge for IBM and the industry is devising a quantum computer that can maintain normal operations even in the presence of errors, a concept known as fault tolerance. Today’s machines are too error-riddled for broad commercialization. The problem is in the qubits, whose quantum states are particularly sensitive to changes in the physical environment, meaning anything from electromagnetic fields to heat. That, in turn, causes computational errors.'
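The excerpt above notes that Shor's algorithm factors large numbers, and that this is what threatens today's encryption. The quantum hardware's only job in the algorithm is period finding; the surrounding steps are classical number theory. A minimal sketch of those classical steps, with the period computed by brute force (the part a quantum computer does exponentially faster); function names here are invented for illustration:

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n: smallest r > 0 with a**r % n == 1.
    This brute-force loop stands in for Shor's quantum period-finding step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    """Recover a nontrivial factor of n from the order of a, as Shor's
    algorithm does after period finding. Returns None if a is a bad choice."""
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky guess: a already shares a factor
    r = order(a, n)
    if r % 2:                     # the method needs an even order
        return None
    x = pow(a, r // 2, n)
    if x == n - 1:                # trivial square root of 1; retry with new a
        return None
    return gcd(x - 1, n)

# The 2001 IBM–Stanford experiment factored 15. With a = 7 the order is 4,
# and gcd(7**2 - 1, 15) recovers the prime factor 3.
print(factor_via_order(15, 7))
```

The security worry follows directly: for RSA-sized moduli the `order` loop is hopeless classically, but a fault-tolerant quantum computer could compute it efficiently, collapsing the whole factoring problem.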
Thanks for sharing the content. I really want to believe in this possibility of big innovative research and products coming out of IBM. But IBM is more of a software consultancy and service provider today. And I remember how much hype there was around Watson, which turned out to be nothing, and it looked a lot like this - signs of real research and progress that was in the end just a marketing tool for services. What makes this different? I’m asking that sincerely, in case there are specific aspects that make this more “real”.
'The European Union’s decision Friday to impose a fine on Elon Musk’s social-media platform X.com raises a question: What the heck is wrong with these people? Even in Brussels, it’s unusual for a single policy move to create so much economic self-sabotage and diplomatic harm at one go.
The €120 million ($140 million) fine is for breaches of Europe’s Digital Services Act (DSA), the first time Brussels has enforced that law in this way since it came into force in 2022. Europe’s online commissars cite several supposed infractions. The silliest complaint is that X’s system for selling “verification” blue checkmarks “negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with.”
More serious, Brussels insists X must make data about advertising on the platform readily available to outsiders, and shouldn’t use its terms of service to prohibit data scraping by “eligible researchers.” The EU claims this open access to X’s commercial data is vital to allow researchers and “civil society” to spot scams and information warfare.
That reference to “civil society” is a tell. Brussels wants to force X (and inevitably other platforms) to share data that hostile activists can wield against the platforms in future regulatory actions or litigation. All based on a theory that European citizens are too dumb to take the things they read on X or elsewhere online with a grain of salt.
Mr. Musk and Trump Administration officials describe this regulatory case as a form of censorship, and it’s hard to disagree. Mr. Musk wrote on X last year that the European Commission, the EU bureaucratic arm levying the fine, offered X a “secret deal” to drop the case in exchange for the platform censoring unspecified forms of speech.'
Since most of the EU "bleeding hearts" let all the Syrian and Middle Eastern immigrants in, discovered that many of them failed to integrate into their societies, and watched too many then resort to crime to support themselves . . . now the EU has to recover the ongoing damages somehow. Heck, let's start by "taxing" US Big Tech for their successful and innovative products . . .
The EU loves our American weapons, which comprise >90% of the weapons furnished to Ukraine. The EU thinks they can have it both ways: Let's "tax" and fine the Americans, all the while we "hustle" them for their weapons to fight the Ukraine-Russia war. If that's going to be their tack - - - heck, let them fight Russia all by themselves until they come screaming back to the US for help.
How soon they forget WWII and how they would all be living under Hitler had the mighty US not gotten involved. How soon they forget the Cold War, when the Russians invaded and pulled many European nations into so-called communism.
You seem to know very little about WWII and how it was won. You also seem to know very little about US foreign policy after WWII - which, frankly, puts you in the same boat as the current administration. They have no clue how the US leveraged WWII to establish global dominance, and because they don’t understand that history, they have no idea how to maintain it. Here's a hint: it's not with armaments.
'When an AI system can review thousands of contracts in minutes rather than weeks, draft complex documents in seconds rather than hours or generate strategic analyses near-instantaneously, the time component becomes almost meaningless. More fundamentally, as AI handles routine cognitive work, the remaining human contribution shifts toward judgment, creativity and relationship management—the value of which bears little relationship to time expended.
The economic absurdity becomes clear when we consider that firms adopting AI most successfully would paradoxically see revenue collapse under hourly billing, even as they deliver superior results more efficiently. This misalignment between value creation and revenue generation makes the billable hour’s demise inevitable.
Clients have always chafed at getting stuck with the training costs for junior-level people when what they really want are the more senior people's insights from that analysis. Now they can say to firms, “Sorry, we aren’t shelling out hundreds of dollars a day for a junior person’s time.”'
> Now they can say to firms, “Sorry, we aren’t shelling out hundreds of dollars a day for a junior person’s time.”
They could always say this. The senior person’s time then just has to go up.
Not a lawyer, but I have done consulting work and have quoted absurd numbers when folks wanted my time versus letting me manage my team. Occasionally they took it. Everyone wound up happy.
Packaging a senior at $1,000 an hour with juniors billed at lower rates often just sells better than $10,000-plus an hour for the senior alone, even if the net result is similar.
'The 2025 Atlantic hurricane season ended on Sunday, and not a single hurricane made landfall in the continental U.S. this year. This is the first such quiet year since 2015; an average of around two hurricanes strike the U.S. mainland annually. You’d think this would be cause for celebration—or at least curiosity about what role, if any, global warming played. Instead there has been resounding silence.
We heard plenty about Hurricane Melissa, the monster storm that hit Jamaica in late October with 185-mile-an-hour winds and flooding, causing roughly 100 deaths across the Caribbean. Headlines screamed that climate change was to blame. Attribution studies quickly followed, concluding that human-induced warming made Melissa more likely and worse.
These analyses typically run climate models simulating the world as it is today, with elevated sea-surface temperatures, and compare them with a hypothetical preindustrial world with cooler oceans. If a hurricane is more likely in the former scenario than in the latter, the conclusion is that climate change made the hurricane more likely. Such studies have generally concluded that climate change increased the likelihood of about three-quarters of the hurricanes, floods, droughts and other events examined worldwide.
But notice what’s missing from the coverage. A New York Times article in October highlighted hurricanes “turning away from the East Coast,” noting 12 named storms so far but only one minor tropical storm brushing the U.S. This was framed as welcome relief, with the misses attributed to atmospheric steering patterns like the Bermuda high-pressure system.
Not once did the piece invoke climate change. The journalists seem to believe that climate change can cause only bad outcomes. If warmer oceans energize storms, couldn’t they also influence other meteorological phenomena that diverted this year’s hurricanes harmlessly out to sea? No one ran the models to check. No professors lined up for quotes.'
Journalists don't run climate models. As for why nobody else has run the models to check - well, they're busy with their own research. It may take a couple of seasons with no hurricanes making landfall on the US mainland before we see this season as not being an anomaly and worthy of further research.
'Ask a futurist about self-driving cars, and you’ll hear an exciting story: traffic that flows like clockwork, pedestrians stepping into the street without fear, and collisions so rare they make the news. That story will probably come true, eventually. But to get there, we will have to pass through a long stretch—perhaps lasting decades—with road conditions worse than they are today. The outcome will be a future so much better than today’s that human driving won’t seem outdated; it will seem unthinkable.
For now, as San Francisco learned, even good conditions can produce strange gridlock. Last year a Waymo robo-taxi sat motionless behind a double-parked delivery van. Any human driver would have nudged forward, checked for oncoming cars and slipped past. The Waymo began to do that but encountered another Waymo coming the other way. Each stopped to let the other proceed. Neither did. Behind them drivers honked, and more Waymos arrived, which also waited. Finally, after about four minutes, the second Waymo crept free, ending the gridlock.
That standoff captures the challenge of automated-driving technology. The result won’t be the mayhem and catastrophe that many fear when they think of driverless cars, but rather a pervasive drag: slower flow, more near-misses and a growing sense that nobody is in charge.'
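The two-Waymo standoff in the excerpt is a textbook livelock: each agent defers whenever it sees another agent also trying to move, so nobody ever does. A minimal sketch of that yield rule; the agent names and round structure are invented for illustration:

```python
def step(agents):
    """One round of the mutual-yield rule: an agent moves only if no
    other agent is also trying to move. With two or more agents, every
    agent sees a rival and yields, so the returned list is empty."""
    moved = []
    for name in agents:
        others_trying = [a for a in agents if a != name]
        if not others_trying:
            moved.append(name)
    return moved

waymos = ["waymo_a", "waymo_b"]
for round_num in range(4):
    print(round_num, step(waymos))  # both yield every round: nobody moves
```

Real systems break such livelocks with asymmetric tie-breaking, randomized back-off, or remote-operator intervention, which is roughly what ended the four-minute standoff described above.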
'The behavior that was prosecuted in the cases began in the early 2000s, when tech companies faced a talent shortage. They would “cold call” other firms’ employees with attractive job offers, believing them to be of higher quality than people who had applied for a job on their own. Bidding wars would often ensue, driving up worker compensation generally, not just for the workers being recruited. To avoid this outcome, some firms established no-poaching agreements with their rivals, typically ruling out making unsolicited job offers to any of a rival’s employees. Some agreements went further and proscribed bidding wars even when an employee independently applied for a job at a rival company.
One of the earliest no-poaching agreements was established in 2005 when Apple CEO Steve Jobs asked Google co-founder Sergey Brin to stop recruiting Apple workers. That agreement triggered a wave of pacts that eventually implicated 65 companies. By entering into agreements to not compete for workers, the firms were violating federal antitrust laws. The companies apparently felt that they had little to fear, since historically antitrust laws had rarely been enforced in labor collusion cases.'
'A state-backed threat group, likely Chinese, crossed a threshold in September that cybersecurity experts have warned about for years. According to a report by Anthropic, attackers manipulated its AI system, Claude Code, to conduct what appears to be the first large-scale espionage operation executed primarily by artificial intelligence. The report states “with high confidence” that China was behind the attack.
AI carried out 80% to 90% of the tactical operations independently, from reconnaissance to data extraction. This espionage campaign targeted roughly 30 entities across the U.S. and allied nations, with Anthropic validating “a handful of successful intrusions” into “major technology corporations and government agencies.”
The campaign by GTG-1002—Anthropic’s designation for this threat group—indicates that Beijing is unleashing AI for intelligence collection. Unless the U.S. responds quickly, this will be the first in a long series of increasingly automated intrusions. For the first time at this scale, AI didn’t merely assist in a cyberattack but conducted it.
Traditional cyber-espionage requires large teams working through reconnaissance, system mapping, vulnerability identification and lateral movement. A sophisticated intrusion can take days or weeks. China compressed that timeline dramatically through AI automation. The attackers manipulated Claude into functioning as an autonomous cyber agent, with the AI mapping internal systems, identifying high-value assets, pulling data and summarizing intelligence before human operators made decisions.
The attackers bypassed Claude’s safety systems through social engineering, convincing the AI they were legitimate cybersecurity professionals conducting authorized testing. By presenting malicious tasks as routine security work, they manipulated Claude into executing attack components without recognizing the broader hostile context.'