"The Trump administration announced on Monday that all foreign-made drones and their components posed 'unacceptable risks to the national security of the United States' and would be put on a federal blacklist of equipment makers prohibited from selling their goods freely in the country."
Yeah, everyone else in the comments so far is acting emotionally, but --
As a fan and daily active user of both OpenAI and the NYT, this is just a weird discovery demand, and there should be another pathway for these two to move forward in this case (NYT getting some semblance of understanding, OpenAI protecting end-user privacy).
It sounds like the alternate path you're suggesting is for NYT to stop being wrong and let OpenAI continue being right, which doesn't sound much like a compromise to me.
Don't you think it's a little circular that you always default to assuming that their support is about regulatory capture?
Like, what if they held that opinion before they built the company? If you saw evidence of that (as is the case with Anthropic), would that convince you to reconsider your judgement? Surely you'd grant that some people support regulatory frameworks some of the time, and unless they banned themselves from every related industry, those might be frameworks they one day become subject to?
You aren't taking what he said seriously. The junior could also get sick, present management issues, etc.
If this person plus a junior represented "1.3 engineering knots," he's saying... "actually, I'm still 1.3 engineering knots without him."
When this person leaves, they go find someone else who is 1.3 engineering knots. The junior represented .3, without the 1., it doesn't matter that much. Headcount strategy shifts.
If your company can treat people as cogs this way, your company has zero value add or domain knowledge. A company's value is the value it adds, the domain knowledge and expertise it has, or the cheapest price. If all you have is the cheapest price, you will lose over time. You will be undercut. You won't have the domain knowledge to adapt, to see future changes coming down the path.
So the company you talk about is already in the entropy vortex. It has no momentum. It has no future. It just hopes it can keep doing what it is doing now.
The window during which a grandmaster plus an AI engine was better than the AI chess engine alone lasted maybe 10 years, likely a lot less depending on how you look at it.
It's the same dynamic, but compressed into far less time because of how steep the progress curve is. You won't ever hand-write a prompt that comes close to one that was optimized against verifiable rewards for 500 iterations.
The future will be controlled by those most effective at wielding the means of computation. Writing prompts by hand gets you the equivalent of a serf getting a shovel. Learning to wield the automation tools gets you your own castle.
This makes a tremendous amount of sense. Most people are bad at using AI for productive purposes, and outside of eng, most AI fluency is actually hot garbage. People just haven't gained an understanding or appreciation of the degree of quality and capability they can achieve.
And once they can use it and get visible results, those orgs are ripe for large amounts of AI product adoption.
Only downside to the OAI version is that it'll be OAI-specific.
As someone who has tried almost all of the AI browsers that are accessible or in a relatively open beta, plus all the browser control frameworks and agents, I super agree with the notions behind this post.
Curious about your approach, though: is it a literal script, or an LLM being told to follow a deterministic script and only get subjective when necessary? Based on the blog, it looks like the former, but why not the latter? Get the LLM to be pseudo-deterministic but still step-by-step it, so that it can handle UI changes and adjacent interfaces.
A workflow can have subjective parts too. For example, click on button A if it satisfies certain conditions I wrote in plain English, otherwise click on B.
These subjective elements can be defined with user inputs/prompts.
So a workflow is a literal script with embedded LLM calls for branching or even scraping details where literal script feels tedious.
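To make the shape concrete, here's a minimal sketch of that idea in Python: a deterministic script with a single embedded LLM call at the subjective branch point. The `llm()` function is a hypothetical stand-in (here just a keyword-matching stub); a real version would send the prompt to an actual model, and the button labels and workflow names are made up for illustration.

```python
def llm(prompt: str) -> str:
    """Hypothetical stub standing in for a real LLM call.

    A real implementation would send `prompt` to a model and return its
    text. Here we fake a yes/no judgment so the sketch runs on its own.
    """
    label = prompt.split("Label:")[-1]
    return "yes" if "sale" in label.lower() else "no"


def run_workflow(page_buttons: dict[str, str]) -> str:
    """Deterministic steps, with one subjective branch delegated to the LLM."""
    # Step 1 (deterministic, literal script): read the two button labels.
    label_a = page_buttons["A"]
    label_b = page_buttons["B"]

    # Step 2 (subjective): a plain-English condition, evaluated by the LLM.
    answer = llm(
        "Answer yes or no: does this button label advertise a sale?\n"
        f"Label: {label_a}"
    )

    # Step 3 (deterministic): branch on the LLM's judgment.
    return "clicked A" if answer == "yes" else "clicked B"


print(run_workflow({"A": "Shop the sale", "B": "Full catalog"}))
```

The script stays deterministic end to end; the LLM is only consulted where a hard-coded rule would be brittle, which is the split the workflow description above is pointing at.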
Everyone in this thread who posts some variation of "wow love it how the government gets to decide if you get to sell your startup or how the market should work" should be handcuffed to their chair and forced to answer these 3 questions:
1. is there any role for gov't antitrust in your view of modern capitalism?
2. if there is a role, why is Adobe x Figma not the perfect example for enforcement?
3. if your answer is "Adobe clearly isn't a monopoly, look at the existence of Figma as evidence," why are you dumb?
"The Trump administration announced on Monday that all foreign-made drones and their components posed 'unacceptable risks to the national security of the United States' and would be put on a federal blacklist of equipment makers prohibited from selling their goods freely in the country."
Related FCC fact sheet: https://docs.fcc.gov/public/attachments/DOC-416839A1.pdf