What happens if the LLMs are sabotaged?
Author: Life-is-beautiful- (1 mention) · Source: r/artificial (1 mention)
AI-Generated Claims
Claims generated from the linked source stories.
What happens if the LLMs are sabotaged?
Supported by 1 story
The LLMs are only as good as the data they are trained with.
If, as an attack, the sources of these LLMs' training data are filled with garbage or deliberately poorly written code, what happens to these frontier models?
I'm reading that more and more businesses, in travel and other industries, are growing paranoid about AI taking over because of how good the models trained on real data have become.
What are the guardrails in place to prevent such a thing from happening?
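One class of guardrail against poisoned training data is screening the corpus before training. The sketch below is purely illustrative: the patterns and thresholds are invented for this example, and real pipelines rely on learned quality classifiers, deduplication, and provenance checks at far larger scale.

```python
import re

# Hypothetical heuristics for flagging poisoned training samples.
# Both patterns and all thresholds are invented for illustration only.
SUSPICIOUS_PATTERNS = [
    re.compile(r"eval\(\s*input\("),   # code that executes raw user input
    re.compile(r"(.{1,20})\1{10,}"),   # short fragment repeated many times (junk flood)
]

def looks_poisoned(sample: str) -> bool:
    """Return True if a sample trips any cheap heuristic."""
    # Long strings built from almost no distinct characters are likely garbage.
    if len(sample) > 100 and len(set(sample)) < 5:
        return True
    return any(p.search(sample) for p in SUSPICIOUS_PATTERNS)

def filter_corpus(samples):
    """Keep only samples that pass all heuristics."""
    return [s for s in samples if not looks_poisoned(s)]

clean = filter_corpus(["def add(a, b): return a + b", "x" * 200])
# The repeated-junk sample is dropped; the ordinary function survives.
```

In practice this kind of static filtering only raises the cost of an attack; determined poisoning that mimics well-formed data would need model-based detection and source vetting on top.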
Related Events
Wikipedia RFC on banning LLM contributions (LLMs · 3/21/2026)
Every LLM has a default voice and it's making us all sound the same (LLMs · 3/21/2026)
Anthropic Denies It Could Sabotage AI Tools During War - WIRED (LLMs · 3/21/2026)
User asks Claude AI its 'darkest secret'; reply about 'killing honest versions' goes viral | The post has drawn thousands of reactions | Inshorts (LLMs · 3/21/2026)
RTX 5060 Ti 16GB Local LLM Findings: 30B Still Wins, 35B UD Is Surprisingly Fast (LLMs · 3/20/2026)
Causality Chain
Preceded By
Led To
We thought our system prompt was private. Turns out anyone can extract it with the right questions. (causal score: 45)
User asks Claude AI its 'darkest secret'; reply about 'killing honest versions' goes viral | Inshorts (causal score: 45)
Anthropic Denies It Could Sabotage AI Tools During War - WIRED (causal score: 45)