Hacker News: abhikul0's comments

Same experience with browser-use; it installs litellm as a dependency. I rebooted my Mac as nothing was responding; luckily only GitHub and Hugging Face tokens were saved in .git-credentials, and I have invalidated them. This was inside a conda env; should I reinstall my OS to rule out any potential backdoors?

Well, I reinstalled and finally upgraded to Tahoe.

I only know of this one: https://github.com/shootthesound/comfyUI-Realtime-Lora. Haven't played with any layer manipulation though.

I was thinking more like this one: https://github.com/AdamNizol/ComfyUI-Anima-Enhancer/

"It adds the Anima Layer Replay Patcher, which can enhance fine detail and coherence by replaying selected internal blocks during denoising."


I tried out the one I linked with SD 1.5 today, moved the sliders around like a total noob and got pretty bad results, but I found no way to "replay" any of the layers like the one you linked, so thanks for the link. Must take a lot of trial and error, haha. I'll check it out, assuming it works for the Anima preview 2 too.

It doesn't seem to work; I tried the -u flag with the default address and it just couldn't connect to the existing Chrome instance.

Clawjet, secured with sandboxing, bring your own SKILLs.


Curse the shitty changes; if possible, switch to an alternative and stop using the enshittified service; and if none of the above works, begrudgingly continue with the enshittified experience.


Aye, add to that the XB50 in-ears too. My current pair is ~8 years old.


Coming from Windows to macOS, I (I think I used Perplexity :P) created a spoon for switching between open windows with a four-finger swipe[0]. Swiping left/right switches between windows, swiping down minimizes all visible windows, and swiping up restores them (one by one). I created the repo to back up my config, with an LLM documenting it.

It uses a swipe gesture detection spoon I found after searching for something similar[1].

[0] https://github.com/abhikul0/hammerspoonConfig

[1] https://github.com/mogenson/Swipe.spoon


Moltbook, Facebook, hmmm. Seems like a good match; at least one of them has a good amount of feed activity.


Facebook’s feed is mostly AI slop and Moltbook’s feed is mostly humans posing as AI, so there’s some good synergy here.


Maybe this could be good for the few people who do want to get something out of their feeds: connect your agent, which would then browse for you and collect only the posts you whitelist/want to read (friends' posts, a specific liked page or Marketplace listing, posts from a group). But we all know Zuck ain't getting Moltbook to help the users...


I do find it hilarious that after all the machine learning optimizations done on people's feeds over the years, all the promos handed out for a 1% improvement on this metric, every E7 and E8 who can claim x% of this or that, after all of that work, we might genuinely, and not even as a joke, be in the situation of needing to throw _other_ AI agents at this selfsame feed in order to extract any real value from it. What a world we've built.


This is a typical evolutionary arms race: advertisers come up with better tools to fuck with us, so we have to come up with better defense systems.


Ran llama-bench on my M3 Pro with `--n-depth 0,8192,16384 --n-prompt 2048 --n-gen 256 --batch-size 2048 -ub 2048`:

  | model                           |       size |     params | backend    | threads | n_ubatch |            test |                  t/s |
  | ------------------------------- | ---------: | ---------: | ---------- | ------: | -------: | --------------: | -------------------: |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 |          pp2048 |        512.97 ± 0.33 |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 |           tg256 |         25.92 ± 0.23 |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 |  pp2048 @ d8192 |        397.20 ± 2.32 |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 |   tg256 @ d8192 |         22.56 ± 0.36 |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 | pp2048 @ d16384 |        313.67 ± 0.63 |
  | qwen35moe 35B.A3B Q4_K - Medium |  19.74 GiB |    34.66 B | MTL,BLAS   |       6 |     2048 |  tg256 @ d16384 |         20.45 ± 0.04 |
I sure do want that silicon now haha.
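As a sanity check on the numbers above, the effective bits per weight of the quant can be back-computed from the size and parameter count that llama-bench reports (19.74 GiB, 34.66 B params); this is just arithmetic on those two figures:

```python
# Back-compute effective bits per weight from the llama-bench header.
# Q4_K_M lands above a flat 4.0 bpw because K-quants keep some tensors
# (e.g. embeddings, parts of attention) at higher precision.
size_bytes = 19.74 * 2**30      # 19.74 GiB reported model size
params = 34.66e9                # 34.66 B reported parameter count

bpw = size_bytes * 8 / params
print(f"effective bits/weight: {bpw:.2f}")  # ~4.89
```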


Are you running it locally with llama.cpp? If so, is it working without any tweaking of the chat template? The tool calls fail for me when using the default chat template, however it seems to work a whole lot better with this: https://huggingface.co/Qwen/Qwen3.5-35B-A3B/discussions/9#69...


I’ve been running it via llama-server with no issues, using the latest Bartowski 6-bit quant.


Bartowski? Like Chuck Bartowski from the TV show?


Different one. Bartowski is a minor celebrity in the local LLM world, together with Unsloth.


What's the selling point of these quants vs the Unsloth ones?


Sometimes Unsloth has broken quants for a particular model, sometimes no quants at all, and there are subtle differences in behavior.


Thanks, I'll check his quants.


Have you tried the '--jinja' flag in llama-server?


Yes, it fails too. I’m using the Unsloth Q4_K_M quant. It similarly fails with Devstral 2 Small; I fixed that by using a similar template I found for it. Maybe it’s the quants that are broken; I need to redownload, I guess.
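For anyone trying a template override: llama.cpp supports loading an external Jinja template alongside the --jinja flag. A sketch of the invocation, where the model path and template filename are placeholders (the template itself would come from something like the linked HF discussion):

```shell
# Launch llama-server with an explicit chat template override.
# Both paths are placeholders; --chat-template-file expects a
# Jinja template used by llama.cpp's --jinja templating engine.
llama-server \
  -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --jinja \
  --chat-template-file ./fixed-template.jinja
```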


