Experiment: How far can a 28M model go in business email generation?
Posted by AdhesivenessSea9511 on r/LocalLLaMA
I've been experimenting with training a small (~28M-parameter) Transformer model on synthetic business email data.
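For scale: the post doesn't give the architecture, but a minimal sketch of a GPT-2-style decoder that lands near 28M parameters might look like this (vocab size, depth, width, and context length are illustrative guesses, not the author's settings):

```python
# Minimal sketch of a ~28M-parameter decoder-only model using Hugging Face
# transformers. All sizes below are assumptions chosen to hit the parameter
# budget, not the author's actual config.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=16_000,  # assumed small BPE vocab for synthetic emails
    n_positions=512,    # assumed context window
    n_embd=384,         # hidden width
    n_layer=12,         # transformer blocks
    n_head=6,           # attention heads
)
model = GPT2LMHeadModel(config)

# Token embeddings are tied to the output head, so they are counted once.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # ~27.6M with these sizes
```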
It's definitely not perfect and still struggles with instruction-following, but I was surprised that it can sometimes produce reasonably coherent email-like text.
The model is very small compared to typical LLMs, so this was more of an experiment to see how far structured generation can go under tight parameter constraints.
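To make "tight parameter constraints" concrete, here is a back-of-envelope budget for the config sketched above; roughly a quarter of the model is the embedding table alone (the numbers follow from the assumed sizes, not from the post):

```python
# Back-of-envelope parameter budget for the assumed config (biases and
# layernorms ignored; they contribute well under 1%).
V, d, L, ctx = 16_000, 384, 12, 512
embed = V * d            # token embeddings, tied with the output head
pos   = ctx * d          # learned positional embeddings
attn  = 4 * d * d        # Q, K, V, and output projections per layer
mlp   = 2 * d * (4 * d)  # MLP up- and down-projections per layer
total = embed + pos + L * (attn + mlp)
print(f"embeddings: {(embed + pos) / 1e6:.1f}M, "
      f"blocks: {L * (attn + mlp) / 1e6:.1f}M, total: {total / 1e6:.1f}M")
# embeddings: 6.3M, blocks: 21.2M, total: 27.6M
```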
Some generations are messy or drift off-topic, but occasionally it produces outputs that *almost* look usable.
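Off-topic drift is typical of models this small. The post doesn't say how sampling was done, but temperature, nucleus sampling, and a length cap are the usual knobs; a hypothetical decoding call against the model above:

```python
# Hypothetical decoding sketch. The prompt ids are stand-ins; a real run needs
# the tokenizer the model was trained with. Lower temperature and a length cap
# are standard ways to curb the drift described above.
import torch

prompt_ids = torch.tensor([[1, 2, 3]])  # placeholder token ids
out = model.generate(
    prompt_ids,
    max_new_tokens=120,  # emails are short; cap length to limit drift
    do_sample=True,
    temperature=0.7,     # lower temperature -> more conservative sampling
    top_p=0.9,           # nucleus sampling
)
```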