_Source: ZDNET, "The key to AI implementation might just be a healthy skepticism - here's why" (Nov. 12, 2025), by Joe McKendrick._

The Hype Hangover Arrives on Schedule

If 2023–2024 were the years of "just ship something with GPT," 2025 is the year engineering leaders start asking awkward questions. Questions like:

  • "What, exactly, did this model improve?"
  • "Why did we trust that answer?"
  • "Should this be a foundation model, a tiny model, or just SQL with better indexing?"

According to new IEEE survey data reported by ZDNET, organizations are still bullish but no longer blindly so:

  • 39% of tech leaders say they’ll use genAI regularly but selectively (up 20% year-over-year).
  • 35% are rapidly integrating it and expecting bottom-line results.
  • 91% plan to ramp up "agentic AI" for data analysis within a year.

This is no longer an experiment. It’s an inflection point: generative and agentic systems are crossing into production, and healthy skepticism is emerging as a core implementation strategy—not a drag on innovation, but a prerequisite for it.

Why "Healthy Skepticism" Is Now an Engineering Best Practice

IEEE senior member Santhosh Sivasubraman nails the moment: we’re in the skepticism phase of the adoption curve. For technical teams, that skepticism translates into a handful of concrete disciplines.

1. Stop Treating LLMs Like Oracles

Half of surveyed leaders flagged over-reliance on AI and potential inaccuracies as their top concern—and they’re right. The confidence of LLM outputs is a UX feature, not an accuracy guarantee. In practice:

  • Teams over-index on model eloquence instead of calibration.
  • Hallucinations quietly leak into docs, code, analytics, and decision support.
  • Probabilistic systems are deployed into deterministic expectations.

A healthy engineering posture:

  • Default to simpler analytics, deterministic rules, or search when they’re sufficient.
  • Require model-grounded answers (via retrieval) for any business- or safety-critical output.
  • Instrument everything: log prompts, responses, context sources, and user feedback for post-hoc analysis (see the logging sketch below).

If your architecture diagram is 90% LLM box and 10% everything else, you don’t have an AI system; you have a liability.
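To make the "instrument everything" point concrete, here is a minimal logging sketch in Python. It assumes nothing about your model provider: the JSONL file and the `log_llm_interaction()` / `record_feedback()` helpers are illustrative stand-ins for whatever telemetry pipeline you already run.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical sink; swap for your real log store or event stream.
LOG_PATH = Path("llm_audit_log.jsonl")

def log_llm_interaction(prompt: str, response: str, context_sources: list[str],
                        model: str, latency_ms: float) -> str:
    """Append one structured record per model call so answers can be audited post hoc."""
    record = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "model": model,
        "prompt": prompt,
        "response": response,
        "context_sources": context_sources,  # which documents grounded the answer
        "latency_ms": latency_ms,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

def record_feedback(record_id: str, rating: int, comment: str = "") -> None:
    """Attach user feedback to a prior interaction (append-only for simplicity)."""
    entry = {"feedback_for": record_id, "rating": rating, "comment": comment}
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

Even a sink this crude is enough to answer "why did we trust that answer?" after the fact, which is the whole point.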

2. Demand Verifiable Productivity, Not Vibes

There’s a popular narrative: "If 50% of employees use ChatGPT, that’s a 10% productivity boost." As Dayforce CDO Carrie Rasmussen tells ZDNET, that assumption deserves side-eye. For technical orgs, that means:

  • Define "active usage" with real thresholds (e.g., daily/weekly interactions that materially change a workflow).
  • Tie AI usage to measurable outcomes: code merged, tickets closed, defects reduced, cases resolved, leads converted.
  • A/B test AI-assisted vs. non-assisted flows instead of declaring victory on anecdote (a minimal comparison sketch follows below).

Skepticism isn’t anti-AI; it’s anti-hand-waving. The teams that win will be the ones that instrument genAI like any other system: with metrics, baselines, and regression alarms.
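One way to operationalize that, as a sketch rather than a prescription: export per-ticket cycle times for an AI-assisted cohort and a non-assisted control, then check whether the observed difference survives a simple permutation test. The numbers below are made up; the cohort definitions are yours to defend.

```python
import random
from statistics import mean

def permutation_test(assisted: list[float], control: list[float], trials: int = 10_000) -> float:
    """Estimate how often a random relabeling beats the observed difference in means."""
    observed = mean(control) - mean(assisted)  # positive = assisted cohort is faster
    pooled = assisted + control
    n = len(assisted)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = mean(pooled[n:]) - mean(pooled[:n])
        if diff >= observed:
            hits += 1
    return hits / trials  # small value -> unlikely the speedup is noise

assisted_hours = [5.1, 4.2, 6.0, 3.8, 4.9]   # hypothetical AI-assisted ticket cycle times
control_hours  = [6.3, 5.8, 7.1, 6.0, 5.5]   # hypothetical non-assisted baseline

print(f"assisted mean: {mean(assisted_hours):.1f}h, control mean: {mean(control_hours):.1f}h")
print(f"estimated p-value: {permutation_test(assisted_hours, control_hours):.3f}")
```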

Where Builders Actually Want AI to Work

The IEEE data, as surfaced by ZDNET, reveals a pragmatic wish list. Technical leaders want AI to:

  • Identify vulnerabilities and prevent cyberattacks in real time (47%).
  • Accelerate software development (39%, and rising).
  • Optimize supply chain and warehouse automation (35%).
  • Automate customer service (32%).
  • Power education, research, drug discovery, and infrastructure automation.

Strip away the buzzwords, and a pattern emerges:

  • These are systems problems: latency, reliability, risk, compliance, explainability.
  • They demand hybrid stacks: search, rules, event streams, monitoring, plus models.
  • They live in environments where "probably correct" is not good enough.

Engineering implication: genAI belongs as a layer in robust systems, not a replacement for them.
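A minimal sketch of "a layer, not a replacement": deterministic rules answer first, search over your own data answers next, and the model is only the fallback. `handle_with_rules()`, `search_kb()`, and `ask_llm()` are hypothetical placeholders for your own components, not real library calls.

```python
from typing import Optional

def handle_with_rules(query: str) -> Optional[str]:
    """Deterministic answers for the cases where 'probably correct' isn't good enough."""
    canned = {"reset password": "Use the self-service portal at /account/reset."}
    return next((v for k, v in canned.items() if k in query.lower()), None)

def search_kb(query: str) -> Optional[str]:
    """Stand-in for keyword or vector search over your own documents."""
    return None  # wire up your search index here

def ask_llm(query: str) -> str:
    """Last resort: the probabilistic layer, ideally grounded with retrieved context."""
    return f"[LLM draft answer for: {query}]"

def answer(query: str) -> tuple[str, str]:
    """Return (answer, source_layer) so you can track how often the LLM was actually needed."""
    for layer, fn in (("rules", handle_with_rules), ("search", search_kb)):
        result = fn(query)
        if result is not None:
            return result, layer
    return ask_llm(query), "llm"

print(answer("How do I reset my password?"))
```

Returning the source layer alongside the answer also tells you how often the model was actually needed, which is useful data for skeptics and enthusiasts alike.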

Dayforce’s Playbook: Small Models, RAG, and Role-Based Agents

Dayforce offers a concrete glimpse into how serious product organizations are threading this needle. Instead of sprinting to build a monolithic in-house LLM, Rasmussen describes a more disciplined route:

  • Use OpenAI foundational models where they make sense.
  • Wrap them with retrieval-augmented generation (RAG) for knowledge search.
  • Explore "small LLMs" for targeted advantages—e.g., sales intelligence—rather than a vanity "Dayforce-1" model.

In other words: architecture first, ego later. For developers and architects, this approach aligns with emerging best practices:

  1. Start with retrieval and context, not just generation.
  2. Isolate use-cases where latency, privacy, or domain adaptation justify smaller domain-specific models.
  3. Treat LLMs as pluggable components; don’t hardwire your stack to a single vendor or paradigm.
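A sketch of points 1 and 3 together, under the assumption that you own the retrieval layer: any model that satisfies a minimal `Generator` protocol sits behind the same retrieval-first wrapper, and the wrapper records which documents shaped each answer. `retrieve()` and `EchoModel` are illustrative stand-ins, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Protocol

class Generator(Protocol):
    """Any provider-specific client can satisfy this; nothing is hardwired to one vendor."""
    def generate(self, prompt: str) -> str: ...

@dataclass
class GroundedAnswer:
    text: str
    sources: list[str]  # audit trail: which documents shaped the answer

def retrieve(query: str, k: int = 3) -> list[tuple[str, str]]:
    """Return (doc_id, snippet) pairs from your own index; hard-coded here for illustration."""
    return [("kb/leave-policy.md", "Employees accrue 1.5 days of leave per month.")][:k]

def answer_with_rag(query: str, model: Generator) -> GroundedAnswer:
    docs = retrieve(query)
    context = "\n".join(snippet for _, snippet in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return GroundedAnswer(text=model.generate(prompt), sources=[doc_id for doc_id, _ in docs])

class EchoModel:  # trivial stand-in so the sketch runs without any vendor dependency
    def generate(self, prompt: str) -> str:
        return "1.5 days per month (per kb/leave-policy.md)."

print(answer_with_rag("How much leave do I accrue?", EchoModel()))
```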

Rasmussen also points toward role-based and agentic patterns:

  • AI as "coach, creator, researcher, collaborator" wired into email, SharePoint, HubSpot, etc.
  • Domain-specific agents for sellers, HR, or operations staff.

But crucially: "We are finding they're not all ready for primetime." That’s the right instinct. Mature teams pilot agents behind guardrails before declaring them autonomous.
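What "behind guardrails" can look like in its simplest form, as a hedged sketch: an explicit tool allowlist and a hard step budget for pilot agents. The tool names and limits here are hypothetical.

```python
# Guardrails for a piloted agent: read-only tools only, plus a hard step budget.
ALLOWED_TOOLS = {"search_tickets", "summarize_thread"}  # hypothetical, deliberately read-only
MAX_STEPS = 5

def authorize_step(step_count: int, requested_tool: str) -> None:
    """Check before the agent acts, not after; escalating to a human is the default."""
    if step_count >= MAX_STEPS:
        raise RuntimeError("Step budget exhausted: hand off to a human.")
    if requested_tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{requested_tool}' is outside this agent's allowlist.")
```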

Job Fears, Ethics, and the Skills That Suddenly Matter

Underneath the architectural questions is a human one: "What do I tell my employees?" Rasmussen hears it constantly. The IEEE survey adds a revealing data point: "AI ethical practices" is now the top in-demand skill for 2026 (44%, up 9 points), outpacing even machine learning and software development. For practitioners, this isn’t abstract:

  • Ethics becomes design: what data can we use, how do we handle bias, who is accountable?
  • Communication becomes risk management: wild CEO statements about "AI replacing jobs" can tank trust faster than any bug.
  • Upskilling becomes a retention strategy: rewrite job descriptions, provide training pathways, and deliberately create AI-enhanced roles.
Dayforce’s "AI champions" model is particularly telling:

  • Seed each function with early adopters who experiment responsibly.
  • Let them be storytellers and first-line support for AI tooling.
  • Use their experiences to refine guardrails, documentation, and patterns.

For technical leaders, this is a blueprint: treat AI adoption as both a platform rollout and a cultural migration, curated from the inside.

Designing AI Systems That Survive First Contact With Reality

The through-line in all of this is deceptively simple: the companies that are making progress on AI are the ones that refuse to romanticize it. If you’re leading engineering, architecture, or security today, that perspective translates into a practical checklist:

  • Start from problems, not models:

    • Target workflows where latency, volume, or cognitive load are real bottlenecks.
    • Avoid "genAI for genAI’s sake"; it ages badly.
  • Put retrieval before reasoning:

    • Use RAG to ground answers in your own data.
    • Keep an audit trail of which documents influenced which responses.
  • Guardrail everything:

    • Policy filters, content moderation, data loss prevention, and strong access controls.
    • Kill switches for misbehaving agents; timeouts and circuit breakers like any other distributed system (a minimal circuit-breaker sketch follows this checklist).
  • Measure hard outcomes:

    • Adoption without outcome is theater.
    • Benchmark against non-AI baselines; if AI doesn’t move the needle, refactor or retire it.
  • Treat ethics and trust as system requirements:

    • Document data lineage.
    • Be explicit with employees: where AI is used, what it changes, and what it does not.
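
To make the circuit-breaker bullet concrete, here is a minimal sketch that treats an LLM dependency like any other flaky remote service: after a few consecutive failures the breaker opens and calls fail fast until a cooldown elapses. The thresholds are arbitrary examples, not recommendations.

```python
import time
from typing import Callable, Optional

class CircuitBreaker:
    """Fail fast when the model endpoint is misbehaving, instead of piling up retries."""

    def __init__(self, max_failures: int = 3, cooldown_s: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at: Optional[float] = None

    def call(self, fn: Callable, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.cooldown_s:
                raise RuntimeError("Circuit open: skip the model and serve the fallback path.")
            # Half-open: allow one probe call; a single failure reopens the circuit.
            self.opened_at = None
            self.failures = self.max_failures - 1
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result
```

A manual kill switch can then be as simple as a feature flag that forces the breaker open.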

Healthy skepticism isn’t about slowing down AI innovation. It’s about earning the right to scale.

The organizations that will own the next decade of AI aren’t the ones yelling the loudest about "full automation" or slapping chatbots on every surface. They’re the ones quietly building verifiable systems, retraining their people, constraining their models, and insisting that every impressive demo graduate into a reliable product—or get turned off.

That’s not caution. That’s competence.