Hacker News | woctordho's comments

There is DataClaw https://github.com/peteromallet/dataclaw , which uploads your Claude Code chats and more to HuggingFace in a single command. Nowadays there are many similar tools.

Apart from local AI, a serious option is an aggregated API such as new-api [0]. An API provider that aggregates thousands of accounts has much better stability than a single account. It's also cheaper than the official API because of how the subscription model works; see e.g. the analysis in [1].

[0] https://github.com/QuantumNous/new-api

[1] https://she-llac.com/claude-limits


>An API provider that aggregates thousands of accounts has much better stability than a single account

Isn't this almost certainly against ToS, at least if you're using "plans" (as opposed to paying per-token)?


You don't even need to be a customer served by Anthropic or OpenAI, so the Terms of Service are irrelevant. That's how I, living in China, use almost-free Claude and GPT, which they don't sell here.

Wait, is this just something like openrouter, that routes your requests to different API providers, where you're paying per-token rates? Or is this taking advantage of fixed price plans, by offering an API interface for them, even though they're only supposed to be used with the official tools?

It's taking advantage of fixed price plans or even free plans.

Considering it has things like a Turnstile "handler", I'm assuming it attempts to abuse the free chat interface.

That seems like Anthropic's problem.

It's going to quickly become your problem when they figure out you're breaching the ToS and ban your account.

The whole point of these services is that it’s not your account. It’s very much Anthropic’s problem, and honestly I don’t care that they’re getting ripped off.

So... I make another account? Problem solved.

It would be interesting to show Brainfuck code rather than the actual email on the webpage. A lot of OCR systems struggle with this kind of repeated symbol where the exact count is required.
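A minimal sketch of the idea (the generator below is hypothetical, not any existing tool): emit Brainfuck that prints each character by clearing a cell and incrementing it to the character's code point, so the address only appears as long runs of `+` whose exact count an OCR system has to get right.

```python
def email_to_bf(text: str) -> str:
    """Naive Brainfuck generator: for each character, clear the
    current cell with [-], increment it to the character's code
    point with a run of '+', then print it with '.'.
    The email never appears literally in the page source."""
    return "".join("[-]" + "+" * ord(ch) + "." for ch in text)

# "a" becomes "[-]", then 97 plus signs, then "."
print(email_to_bf("a"))
```

A smarter generator would reuse the previous cell value to shorten the runs, but the long, exact-count runs are precisely what makes this hard to OCR.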

It's not a problem at all. Humans also read books written by humans.


If someone sees this: the ninja package on PyPI [0] is currently stuck at version 1.13.0. There is an issue in 1.13.0 that prevents it from building projects on Windows. The issue was fixed upstream in 1.13.1 almost a year ago, but the PyPI package hasn't been updated, see [1], and many downstream projects have had to stay on 1.11. I hope it gets updated soon.

[0] https://pypi.org/project/ninja/

[1] https://github.com/scikit-build/ninja-python-distributions/i...
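Until the PyPI package is updated, a typical workaround (sketched below for a scikit-build-core project, which is an assumption; the `==1.11.*` pin mirrors what affected downstream projects do, and the Windows-only marker is standard PEP 508 syntax) is to exclude the broken release in the build requirements:

```toml
# pyproject.toml (excerpt)
[build-system]
requires = [
    "scikit-build-core",
    # ninja 1.13.0 on PyPI breaks Windows builds; pin the last known-good line
    "ninja==1.11.*; platform_system == 'Windows'",
]
build-backend = "scikit_build_core.build"
```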


Why is a C++ project being distributed on PyPI at all?


Probably for the same reason other binaries are distributed via npm: the lack of a cross-platform, general-purpose package manager and registry.


Also for cases where a python project needs to depend on it.


Kinda weird to have the language toolchain wrap the build system, should be the other way around.


Yes, but I mean... this is Python we're talking about. There are several build systems / coordinators written in Python (SCons, colcon, etc.), not to mention Python packages that themselves contain compiled bits written in other languages.

I know nowadays we have formalized, cross-platform ways to build bindings (scikit-build-core, etc.), but that is a relatively recent development; for a long-ass time it was pretty commonplace to have a setup.py full of shell-outs to native toolchains and build tools. It's not hard to imagine a person in that headspace feeling like being able to pull that stuff directly from PyPI would be an upgrade over trying to detect it missing and instructing the user to install it before trying again.


Or the lack of a tool like GoReleaser in the language ecosystem that handles that.


You may be interested in this discussion: https://discuss.python.org/t/use-of-pypi-as-a-generic-storag...


You need to bundle a supply chain attack with it. /s

Because the development world either hasn't heard of Nix or has collectively decided not to use it.


What a messy and, frankly, absurd situation to be left in: forking a project in order to provide a tool through PyPI, only to stop updating it on a broken version. That's more a disservice than a service to the community... If you're going to stay stuck, better to drop the broken release and stay stuck on the previous working one.


RCE is exactly the feature of coding agents. I'm happy that I don't need to launch OpenCode with --dangerously-skip every time.


No, it is still configurable: you can specify in your opencode.json config that it should be able to run everything. I think they just argued that it shouldn't be the default, which I agree with.
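If I remember the opencode.json schema right (check the current OpenCode docs; the exact keys and values here are my recollection, not verified), allowing everything looks roughly like this:

```json
{
  "permission": {
    "edit": "allow",
    "bash": "allow"
  }
}
```

Setting those to "ask" instead keeps the confirmation prompts, which is the safer default being argued for.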


No, the problem is that when logging in, the provider's website can provide an authentication shell command that OpenCode will send to the shell sight unseen, even if it is "rm -rf /home". This "feature" is completely unnecessary for the agent to function as an agent, or even for authentication. It's not about it being the default, it's about it being there at all and being designed that way.


Ah, yes. That's crazy. I was thinking they were referring to the lax default permissions of the agent.


And in the web UI there is a "don't ask" button.


The real barrier is not just the language keywords but the mass of documentation and discussion in English. I'm not sure whether there is a solution to this.


All those English documents and content cannot be translated in one go, but this project is trying to make another language usable going forward. Thanks for the comment, though.


Then don't feel sorrow over killing it. Living things are not so special.


This time even Unsloth could not provide bitsandbytes 4-bit models. bitsandbytes does not support new models with MoE and linear attention, and it's much less flexible than GGUF. Nowadays I think it's better to train a LoRA over a GGUF base model; see the discussion at https://github.com/huggingface/transformers/issues/40070

I'll find some time to do this, and I hope someone gets to it earlier than me.


In one word: porn.

Qwen filtered out a lot of porn during data curation, and a finetuned model can perform much better than context engineering. Abliteration can only remove censorship; it cannot add something that never existed in the training data.

This guy did some great work in the age of Qwen 3.0: https://huggingface.co/chenrm/qwen3-235b-a22b-h-corpus-lora

