Qwen3.5-35B-A3B-Uncensored-Claude-Opus-4.6-Affine
Author: EvilEnginer • Source: r/LocalLLaMA
So, some people asked me to do the merge for the Qwen 3.5-35B A3B model, because it has only 3 billion active parameters and can run on an older GPU (RTX 3060 12GB).

Introducing: [https://huggingface.co/LuffyTheFox/Qwen3.5-35B-A3B-Uncensored-Claude-Opus-4.6-Affine](https://huggingface.co/LuffyTheFox/Qwen3.5-35B-A3B-Uncensored-Claude-Opus-4.6-Affine)

**This model has been made via merging:**

1. The most popular model by HauhauCS on HuggingFace: [https://huggingface.co/HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive](https://huggingface.co/HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive)
2. The Qwen 3.5 35B A3B Claude Opus 4.6 distilled model by Jackrong: [https://huggingface.co/Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled](https://huggingface.co/Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled)
3.
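The post lists the source models but does not state the merge method, weights, or tooling. Merges like this are commonly produced with mergekit; the sketch below is a minimal hypothetical recipe, assuming a plain linear merge with equal weights over the two named repositories (the third component is cut off in the post, and the actual method and weights are assumptions, not confirmed by the author):

```yaml
# Hypothetical mergekit recipe — merge method and weights are NOT from the
# post; only the two model IDs below come from the linked repositories.
models:
  - model: HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive
    parameters:
      weight: 0.5
  - model: Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled
    parameters:
      weight: 0.5
merge_method: linear   # simplest choice; slerp/ties are common alternatives
dtype: bfloat16
```

With mergekit installed, a config like this would typically be run as `mergekit-yaml config.yml ./merged-model`; the actual recipe behind the published merge may differ.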
Related Events
- Follow-up: Qwen3 30B a3b at 7-8 t/s on a Raspberry Pi 5 8GB (source included) (Uncategorized • 3/20/2026)
- MacBook M5 Pro and Qwen3.5 = Local AI Security System (Security • 3/21/2026)
- What LLMs are you keeping your eye on? (LLMs • 3/20/2026)
- Mistral Small 4 vs Qwen3.5-9B on document understanding benchmarks, but it does better than GPT-4.1 (LLMs • 3/20/2026)
- RTX 5060 Ti 16GB Local LLM Findings: 30B Still Wins, 35B UD Is Surprisingly Fast (LLMs • 3/20/2026)