Fin is the most useless thing ever. There's no obvious way to get reports in front of a human in a timely manner, and there's no reason to believe Fin interactions are even retained.
Which ultimately means no loyalty. I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports, or outage reports at all.
I do understand that Anthropic is operating at a tremendous scale and can't have enough humans in the loop. This sounds like a good use for AI classification and triage, really!
> I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports or down reports at all.
Amen to this.
Being in business means having to respond to customer enquiries at some point.
Given the billions being pumped into Anthropic's pockets, and given the millions their senior leadership no doubt pay themselves, I'm sure they could spare a bit of cash to get off their backsides and sort out the customer service.
I simply do not buy the "poor Anthropic, they are operating at scale, they are too busy winning to deal with customer service" argument that comes up time and time again.
The fact is there are many large businesses, many large governments that are able to deal with customers "at scale".
Scale means you respond a bit slower, maybe a few days or at most a couple of weeks. But complete silence for months or years is inexcusable.
All of my experiences with "Fin" match those of my friends and colleagues... namely, that "Fin" is a synonym for "black hole". I've got "tickets" opened with "Fin" months ago that have not received even a modicum of a reply.
No need to engage with an article that makes naked assertions with little backing.
Ok, fine then...:
"But they have no more consciousness, sensitivity, and sentience than a hammer." -- naked assertion, no backing, no definition, no operationalization, no scientific or philosophical work shown (and this is a spicy one, because there have been philosophical turf wars on this for half a century; you can't just ASSERT that)
"Every device made by man has an off switch. We can use it sometimes." -- I have stories. Semi-explosive near-death stories. At any rate... uh, not quite?
Look, at the very least he's sloppy here. Mostly just a raw opinion piece, I guess, but not really backed by much that's real. Just so you know, this cost me more time than the text even deserves.
Oh that's too bad, it was an interesting concept while it was running. I did notice that it takes a lot more effort to do real-world journalism than to write an encyclopedia. And accreditation is a tricky thing in a pseudonymous community.
Ah, it is/was a volunteer community, so not really a thing the WMF needs to put much effort in besides running the server. I bet even I could take over the job if I really wanted to, and not because I'm amazing or anything. Well at least, if this were 2010 or so. By now scraper mitigation might be a challenge. <scratches head>
"what is 16929481231+22312333222?" is an easy way to test this claim. Pick large enough numbers and there's no way all the sums of that size would fit into the dataset (you don't need to stick to + either, but it's the simplest thing that works)
For my contribution to the conversation: Earlier/cheaper models can't do it either; they make mistakes, they need a calculator/Jupyter kernel/what have you. 'Medium' models will put the numbers underneath each other and do it 'properly' in a table, checking themselves after. Claude Opus 4.6 (the current Rolls-Royce today) just says the answer in one go sometimes (it's a monster). But all of them end up spending many seconds and thousands of tokens on a task that takes a calculator or an ALU a fraction of a second.
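For what it's worth, the probe described above is trivial to script: exact integer arithmetic is effectively free for a CPU, and random large operands make it unlikely the sum appears verbatim in any training set. A minimal sketch (the `probe` helper is my own illustration, not anything the models expose):

```python
import random

# The specific sum mentioned upthread -- instant and exact in ordinary code.
print(16929481231 + 22312333222)  # 39241814453

def probe(digits=11):
    """Generate an addition question with operands large enough that
    the answer is unlikely to exist verbatim in a training corpus."""
    a = random.randrange(10**(digits - 1), 10**digits)
    b = random.randrange(10**(digits - 1), 10**digits)
    return f"what is {a}+{b}?", a + b

question, answer = probe()
# Paste `question` into a model, compare its reply against `answer`.
```

The point of the comparison stands either way: the CPU does this in nanoseconds, while a model burns seconds and thousands of tokens.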
Right, the wikipedia rules are not that different from the HN rules. A human needs to be responsible for what finally goes on the page. And that's fair enough. There's some experimental (non-wikimedia) wikis that use AI for editing, but they haven't taken off yet.
It REALLY depends on how you're using the AI. I get the strong impression a lot of people are still at the "I'll write a few prompts and see what happens" stage, and hoping for an answer from the magical oracle; as opposed to really using the tool. This never fails to disappoint.
I might be slightly wrong, but probably not by a lot, yet. Sure there's an element of "holding-it-wrong-ism" in my position. But ... it does actually take practice to get it right, and best practices are badly documented!
Most Wikipedia work is taking paywalled academic content and summarizing it in an encyclopedic format.
For programming, agentic AI can find most of what it needs because everything is open access on Arxiv, blogs, or in the codebase itself. That's why, given good prompting, it can play "magical oracle" and answer questions directly.
For most other professional topics, citations are locked behind paywalls. Wikipedia editors get free access to academic libraries, but the readers don't. That's why consumer tools suck.
Once the big AI companies integrate with proprietary databases in fields like history or the social sciences, Wikipedia will stop being the place people go to get questions answered.