Small models (Qwen 3.5 0.8B, Llama 3.2 1B, Gemma 3 1B) stuck in repetitive loops
Posted by lionellee77 on r/LocalLLaMA
I'm working with small models (~1B parameters) and frequently encounter issues where the output gets stuck in loops, repeatedly generating the same sentences or phrases.
This happens especially consistently when temperature is set low (e.g., 0.1-0.3).
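For anyone wondering why low temperature makes this worse: temperature divides the logits before the softmax, so small values sharpen the distribution toward near-greedy decoding, which is exactly the regime where repetition loops usually show up. A minimal sketch in plain Python (the logit values are made up for illustration, no real model involved):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically
    # stable softmax; low temperature sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.5]        # hypothetical logits for 3 tokens
low = softmax_with_temperature(logits, 0.2)
high = softmax_with_temperature(logits, 1.5)
print(low)   # top token dominates -> near-greedy, loop-prone
print(high)  # probability mass spread out -> more diverse sampling
```

At temperature 0.2 the top token gets >90% of the mass, so the sampler keeps picking the same continuation; at 1.5 the mass is spread across alternatives.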
What I've tried:

* Increasing temperature above 1.0 — helps somewhat but doesn't fully solve the issue
* Setting repetition_penalty and other penalty parameters
* Adjusting top_p and top_k

Larger models from the same families (e.g., 3B+) don't exhibit this problem.
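For reference, here's roughly what repetition_penalty does under the hood in most samplers (the CTRL-style rule; the logits and token IDs below are made up for illustration): logits of already-generated tokens are divided by the penalty if positive and multiplied by it if negative, so repeated tokens become less likely either way.

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    # CTRL-style repetition penalty: shrink positive logits and push
    # negative logits further down for tokens already in the context.
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

logits = [3.0, 1.0, -0.5]                 # hypothetical logits for 3 tokens
penalized = apply_repetition_penalty(logits, [0, 2], 1.3)
print(penalized)  # tokens 0 and 2 are now less likely than before
```

One caveat worth knowing: with a ~1B model the penalized logit can still dominate the tiny vocabulary of plausible continuations, which may be why tuning the penalty alone doesn't break the loop.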
Has anyone else experienced this?