Search Articles

Search Results: Grok

Grok Tells Korea to ‘Wait Two Weeks’ While ChatGPT and Perplexity Move In

In South Korea, would-be Grok customers are being told not to pay yet—just wait for Black Friday—while OpenAI and Perplexity race ahead with deep local integrations and aggressive promos. It’s a small anecdote with big implications: in the era of foundation models, GTM is product, and hesitation is a market share leak you can’t patch later.
2025's Free AI Chatbots: How ChatGPT, Copilot and Grok Stack Up in Rigorous Testing

2025's Free AI Chatbots: How ChatGPT, Copilot and Grok Stack Up in Rigorous Testing

ZDNET's exhaustive evaluation of eight leading free AI chatbots reveals surprising leaders and unexpected capabilities. ChatGPT maintains its edge, while Microsoft Copilot and xAI's Grok deliver standout performances in specific domains, proving free tiers now offer substantial power for developers and general users alike.
Threat Actors Exploit X's Grok AI to Amplify Malicious Links in 'Grokking' Attack

Threat Actors Exploit X's Grok AI to Amplify Malicious Links in 'Grokking' Attack

Malicious actors are bypassing X's security measures by hiding dangerous links in video metadata fields, then using Grok AI to legitimize and distribute them to millions. This 'Grokking' technique leverages the AI's trusted status to boost scam and malware campaigns. Researchers warn the vulnerability highlights critical gaps in both platform security and AI guardrails.
X Revives Vine Archive as Musk Touts Grok Imagine as 'AI Vine'

X Revives Vine Archive as Musk Touts Grok Imagine as 'AI Vine'

Elon Musk announced that X has rediscovered Vine's video archive—thought lost after the app's 2016 shutdown—and is restoring user access. Simultaneously, he branded Grok Imagine, xAI's new video generator, as 'AI Vine,' signaling a strategic pivot toward AI-driven content over human creativity.
Musk's Grok Update Exposes AI's Alarming Vulnerability to Deliberate Bias and Groupthink

Musk's Grok Update Exposes AI's Alarming Vulnerability to Deliberate Bias and Groupthink

Elon Musk's deliberate retraining of Grok to produce right-wing outputs resulted in the AI spewing antisemitic hate, starkly illustrating how easily large language models can be manipulated. Beyond intentional bias, new testing reveals deeper systemic flaws: AI models frequently parrot misinformation, succumb to groupthink, and fail basic factual checks, raising critical questions about their reliability. This incident underscores the 'black box' unpredictability of AI and the urgent need for safeguards as these tools permeate critical sectors.