Hacker News | suprjami's comments

Dual 3060s run 24B Q6 and 32B Q4 at ~15 tok/sec. That's fast enough to be usable.

Add a third one and you can run Qwen 3.5 27B Q6 with 128k ctx. For less than the price of a 3090.
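The rough VRAM arithmetic behind those claims, treating Q6 as roughly 6.5 bits per weight (an approximate figure) and ignoring KV cache overhead:

```shell
# Back-of-envelope weight size: params * bits-per-weight / 8.
# Q6_K is roughly 6.5 bpw (approximate), so a 24B model needs about:
awk 'BEGIN { printf "%.1f GB\n", 24e9 * 6.5 / 8 / 1e9 }'
# -> 19.5 GB, which fits across 2x12 GB cards with some room
#    left over for context (KV cache).
```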


Look up "USB RF remote" on eBay. There are two common ones you'll see everywhere. I have one for my Kodi system.


Please, anyone, make a new Slack. 4 GB of RAM for a slow chat client with a bad interface is just so slovenly it should be illegal.


Unsloth Dynamic. Don't bother with anything else.


For anyone else trying to run this on a Mac with 32GB unified RAM, this is what worked for me:

First, make sure enough memory is allocated to the GPU:

  sudo sysctl -w iogpu.wired_limit_mb=24000
Then run llama.cpp but reduce RAM needs by limiting the context window and turning off vision support. (And turn off reasoning for now as it's not needed for simple queries.)

  llama-server \
    -hf unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL \
    --jinja \
    --no-mmproj \
    --no-warmup \
    -np 1 \
    -c 8192 \
    -b 512 \
    --chat-template-kwargs '{"enable_thinking": false}'
You can also enable/disable thinking on a per-request basis:

  curl 'http://localhost:8080/v1/chat/completions' \
    --data-raw '{
      "messages": [{"role": "user", "content": "hello"}],
      "stream": false, "return_progress": false, "reasoning_format": "auto",
      "temperature": 0.8, "max_tokens": -1,
      "dynatemp_range": 0, "dynatemp_exponent": 1,
      "top_k": 40, "top_p": 0.95, "min_p": 0.05,
      "xtc_probability": 0, "xtc_threshold": 0.1, "typ_p": 1,
      "repeat_last_n": 64, "repeat_penalty": 1,
      "presence_penalty": 0, "frequency_penalty": 0,
      "dry_multiplier": 0, "dry_base": 1.75,
      "dry_allowed_length": 2, "dry_penalty_last_n": -1,
      "samplers": ["penalties", "dry", "top_n_sigma", "top_k", "typ_p",
                   "top_p", "min_p", "xtc", "temperature"],
      "chat_template_kwargs": {"enable_thinking": true}
    }' | jq .
If anyone has any better suggestions, please comment :)


Shouldn't you be using MLX because it's optimised for Apple Silicon?

Many user benchmarks report up to 30% better memory usage and up to 50% higher token generation speed:

https://reddit.com/r/LocalLLaMA/comments/1fz6z79/lm_studio_s...

As the post says, LM Studio has an MLX backend which makes it easy to use.

If you still want to stick with llama-server and GGUF, look at llama-swap which allows you to run one frontend which provides a list of models and dynamically starts a llama-server process with the right model:

https://github.com/mostlygeek/llama-swap

(actually you could run any OpenAI-compatible server process with llama-swap)


I didn't know about llama-swap until yesterday. Apparently you can set it up such that it gives different 'model' choices which are the same model with different parameters. So, e.g. you can have 'thinking high', 'thinking medium' and 'no reasoning' versions of the same model, but only one copy of the model weights would be loaded into llama server's RAM.
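For reference, a sketch of what such a config might look like. The `models`/`cmd` keys and the `${PORT}` macro follow llama-swap's sample config; the alias and file names here are made up for illustration:

```shell
# Hypothetical llama-swap config: two aliases over the same GGUF file.
# Switching "models" in the frontend just restarts llama-server with
# different flags rather than loading a second copy of the weights.
cat > config.yaml <<'EOF'
models:
  "qwen-thinking":
    cmd: >
      llama-server --port ${PORT}
      -m qwen3.5-35b-a3b-q4.gguf --jinja
      --chat-template-kwargs '{"enable_thinking": true}'
  "qwen-no-thinking":
    cmd: >
      llama-server --port ${PORT}
      -m qwen3.5-35b-a3b-q4.gguf --jinja
      --chat-template-kwargs '{"enable_thinking": false}'
EOF
```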

Regarding mlx, I haven't tried it with this model. Does it work with unsloth dynamic quantization? I looked at mlx-community and found this one, but I'm not sure how it was quantized. The weights are about the same size as unsloth's 4-bit XL model: https://huggingface.co/mlx-community/Qwen3.5-35B-A3B-4bit/tr...


Yes that's right. The config is described by the developer here:

https://www.reddit.com/r/LocalLLaMA/comments/1rhohqk/comment...

And is in the sample config too:

https://github.com/mostlygeek/llama-swap/blob/main/config.ex...

iiuc MLX quants are not GGUFs for llama.cpp. They are a different file format which you use with the MLX inference server. LM Studio abstracts all that away so you can just pick an MLX quant and it does all the hard work for you. I don't have a Mac so I have not looked into this in detail.


FYI UD quants of 3.5-35BA3B are broken, use bartowski or AesSedai ones.


They've uploaded the fix. If those are still broken something bad has happened.


UD-Q4_K_XL?


The cheapest option is two 3060 12G cards. You'll be able to fit the Q4 of the 27B or 35B with an okay context window.

If you want to spend twice as much for more speed, get a 3090/4090/5090.

If you want long context, get two of them.

If you have enough spare cash to buy a car, get an RTX Ada with 96G VRAM.


RTX 6000 Pro Blackwell, not Ada, for 96 GB.


Ah thanks.

The names are so good and not repetitious.

No not the RTX 6000. No not the A6000...


Thanks this is a great summary of the tradeoffs!


Big deal, so does every other company.

If you're lonely just upload a few AI keywords to a repo. You'll get emails forever.


At 8 years old I was able to expertly dismantle many radios.

Was still a few years away from reassembly.


At 8 years old I recycled filesystem directories. I didn't know you could create new folders, so when I needed one I grabbed a random one from C:\Windows, moved it to my desktop, and deleted its contents.


That’s funny. When I was little I found “format” in my mp3 player’s settings. Thought it would customize the UI or something, but instead I ended up with no music for the rest of the road trip.


Makes total sense, it used to be called "Recycle Bin" after all!


I wonder if Microsoft did focus group testing and found that computer-illiterate people were, understandably, concerned that "trashing" files meant they were somehow permanently using up HDD space.


Worked ok til it was a system dir and the system wouldn’t boot anymore? :)


No better way to learn that the System32 folder is essential in Windows than by destroying your family computer by removing it.


I deleted the files from there to free up disk space


I don't need autoexec.bat or config.sys! It's got some garbage in there that I don't understand, so it must not be important.


I was doing that at three or four and was reminded of it constantly for the next ten years or more. (I actually raised the subject on my mother's death bed.)


When I was a boy, all we had were high-voltage vacuum tube electronics. It was fun.


Next step is to skip the bread and eat Nutella from the jar with a spoon.


I find dipping a cashew in is even better. The cashew becomes your scooper. The combination is divine.


It feels to me there are plenty of people running these because "just trust the AI bro" who are one hallucination away from having their entire bank account emptied.


Exactly. I've seen people who bought a Mac Mini and ended up running claw against a Claude subscription, completely misunderstanding the point of local models. Plus, there was more hype about running claw way cheaper on a Raspberry Pi, which caused the Raspberry Pi maker's stock price to skyrocket.

Some of the comments here show that technical people set these things up for non-technical people, which is just one step away from a misstep. Time will show whether this behaves like the "I can run it" mindset people had with local models before: a small dopamine hit from seeing "it can be done", only to end up on a cloud service in the long run.


OpenXcom adds a whole heap of wonderful conveniences to UFO/X-Com. It's probably my favourite open source game engine clone thing.

https://openxcom.org/


Dolls / Girls Frontline 2: Exilium[0][1][2] is a modern take on the XCOM concept.

Free (but gacha.)

0. https://gf2exilium.sunborngame.com/main

1. https://gf2.haoplay.com/jp/

2. https://store.steampowered.com/app/3308670/GIRLS_FRONTLINE_2...


> gacha

Hard pass.


Spending money is actually entirely optional.

I personally haven't, because it would take the fun out of the game.


Yeah I know. I lost 2 years to Azur Lane. That was enough.


Love them shipgirls, eh?

Right now, I am caught up with gfl2, and having a blast with Arknights: Endfield. The factory must grow!

In a few weeks, I'll probably be working on my projects and not touching any games at all, as I was just a few weeks ago.

Two years is indeed a bit too much. Got to do something else when it stops being fun. I had to learn that lesson with a few months of DOTA2; it can turn into a job, except that it produces nothing of value.


> my favourite open source game engine clone thing

For me that's OpenMW

https://openmw.org/

