Search Results: LLM Quantization

Foundation-Sec-8B: Security-Focused LLM Launches with Quantized Deployment Options

fdtn-ai has released Foundation-Sec-8B-Instruct, an 8-billion-parameter language model fine-tuned for cybersecurity applications, now available with multiple quantization variants. The optimized versions enable efficient local deployment—critical for sensitive security workloads—while maintaining performance. This specialized model signals growing industry emphasis on AI tailored for defensive and offensive security operations.
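To make the deployment trade-off concrete, here is a minimal sketch of symmetric absmax int8 quantization, the basic idea behind quantized model variants: weights are rescaled into the int8 range for a 4x storage reduction, at the cost of a small round-trip error. This is an illustrative example, not the specific quantization scheme used for Foundation-Sec-8B-Instruct.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric absmax quantization: map floats onto [-127, 127] int8."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"int8 storage: {q.nbytes} bytes vs fp32: {w.nbytes} bytes")
print(f"max round-trip error: {np.abs(w - w_hat).max():.4f}")
```

The round-trip error is bounded by half the scale step, which is why quantized variants of a well-conditioned model can keep performance close to the full-precision original.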
The Fragile Giants: How a Single Parameter Can Shatter Large Language Models

Groundbreaking research reveals that removing a single critical parameter, dubbed a 'super weight', can catastrophically collapse LLM performance, increasing perplexity by up to 1000x. The discovery enables novel quantization techniques and exposes surprising fragility in billion-parameter AI systems.
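The ablation probe behind this finding can be illustrated on a toy model: measure perplexity, zero out the single largest-magnitude weight, and measure again. The sketch below uses a hypothetical hand-built bigram table rather than a real LLM, so the 1000x collapse from the paper is not reproduced, only the shape of the experiment.

```python
import numpy as np

def perplexity(W, contexts, targets):
    """Perplexity of a toy bigram model whose logits for context c are row W[c]."""
    logits = W[contexts]                                  # (n, vocab)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(targets)), targets])
    return float(np.exp(nll.mean()))

vocab = 4
rng = np.random.default_rng(0)

# "Trained" bigram table: token i strongly predicts token (i + 1) % vocab,
# so one large entry per row carries almost all of the predictive signal.
W = rng.normal(scale=0.01, size=(vocab, vocab))
for i in range(vocab):
    W[i, (i + 1) % vocab] += 10.0

contexts = np.arange(vocab)
targets = (contexts + 1) % vocab
ppl_before = perplexity(W, contexts, targets)

# Ablate the single largest-magnitude weight, mimicking the super-weight probe.
W_ablated = W.copy()
idx = np.unravel_index(np.abs(W_ablated).argmax(), W_ablated.shape)
W_ablated[idx] = 0.0
ppl_after = perplexity(W_ablated, contexts, targets)

print(f"perplexity before: {ppl_before:.3f}, after ablation: {ppl_after:.3f}")
```

Because the zeroed entry was the only source of signal for its context, perplexity rises after ablation, which is the qualitative effect the research reports at far larger scale.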