The Fragile Giants: How a Single Parameter Can Shatter Large Language Models
Groundbreaking research reveals that removing a single critical parameter, a so-called 'super weight', can catastrophically collapse LLM performance, increasing perplexity by roughly 1000x. The discovery enables novel quantization techniques and exposes surprising fragility in billion-parameter AI systems.
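The core claim, that a single outsized weight can dominate a network's behavior, can be illustrated with a toy example. The sketch below is not the paper's method or models; it is a minimal NumPy demonstration in which one planted large-magnitude weight (a stand-in for a "super weight") is ablated, and the output shifts far more than when an ordinary weight is ablated. All names and values here are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy network: two layers with a ReLU in between.
# We plant one outsized weight to mimic a "super weight", then compare
# the output drift from zeroing it vs. zeroing a typical weight.

rng = np.random.default_rng(0)

def forward(W1, W2, x):
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    return W2 @ h

d = 16
W1 = rng.normal(0.0, 0.1, (d, d))
W2 = rng.normal(0.0, 0.1, (d, d))
W1[3, 7] = 5.0                   # planted "super weight" (illustrative)

x = np.ones(d)                   # fixed deterministic input
base = forward(W1, W2, x)

W1_super = W1.copy()
W1_super[3, 7] = 0.0             # ablate the super weight
W1_plain = W1.copy()
W1_plain[5, 5] = 0.0             # ablate an ordinary weight

drift_super = np.linalg.norm(forward(W1_super, W2, x) - base)
drift_plain = np.linalg.norm(forward(W1_plain, W2, x) - base)
print(f"super-weight ablation drift:    {drift_super:.4f}")
print(f"ordinary-weight ablation drift: {drift_plain:.4f}")
```

In a real LLM the effect is far more dramatic than in this toy: the paper's finding is that ablating one specific scalar out of billions produces the reported perplexity explosion, which is why quantization schemes benefit from handling such weights with special care.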