At Davos, Meta's CTO announced internal delivery of the first high-profile AI models from its new Superintelligence Labs, calling them "very good." Meanwhile, OpenAI is rolling out age prediction globally and testing chatbot ads with advertisers, signaling a shift toward monetization and safety features. The announcements highlight the industry's dual focus on advancing capability while grappling with commercial and ethical pressures.
The annual World Economic Forum in Davos has once again served as a backdrop for major tech announcements, and this year's event has been particularly rich in signals about the direction of the AI industry. Two developments, one from Meta and one from OpenAI, offer a snapshot of the current state of play: one focused on raw capability, the other on the infrastructure of deployment and safety.
Meta's Superintelligence Labs: A First Glimpse
Meta's CTO, Andrew Bosworth, confirmed that the company's newly formed Superintelligence Labs has delivered its first high-profile AI models internally this month. In a statement reported by Reuters, Bosworth characterized the models as "very good." This is the first tangible output from the lab, which was established in 2025 to consolidate Meta's advanced AI research efforts under the leadership of former Scale AI CEO Alexandr Wang.
The significance of this announcement lies in its timing and context. Meta has been playing catch-up in the generative AI race, with its Llama models often seen as strong open-source contenders but not necessarily leading the frontier. The creation of a dedicated "Superintelligence" lab was a clear signal of ambition to compete at the highest level. Delivering internal models, even if not yet public, represents a critical milestone. It suggests the lab has moved from theoretical research and recruitment to tangible product development.
However, the term "very good" is deliberately vague. It lacks specifics about performance benchmarks, parameter counts, or capabilities compared to models from OpenAI, Google, or Anthropic. This pattern of announcement is familiar: companies often tout internal breakthroughs before public release, building anticipation while gauging the competitive landscape. The real test will come when these models are released, either as open-source weights or through API access, and subjected to independent evaluation. The community will be watching closely to see if Meta's investment in a dedicated superintelligence effort yields models that can shift the market dynamics, particularly in the open-source domain where Meta has historically focused.
OpenAI's Dual Track: Monetization and Safety
OpenAI's activities at Davos and in recent announcements present a different set of priorities: establishing revenue streams and implementing safety guardrails as its user base scales globally.
Age Prediction Rollout: OpenAI announced it is rolling out age prediction on ChatGPT globally. The system aims to identify accounts likely owned by minors and apply automatic content protections. This move comes as OpenAI prepares to add adult content to its platform, a significant expansion that necessitates more robust age-gating. The company stated that users identified as under 18 will be directed to a model with stricter safety settings, which will block adult content and apply other protections.
This is a complex technical and ethical challenge. Age prediction, especially at a global scale, is fraught with potential for error and bias. The company will need to be transparent about the accuracy of its system and provide clear appeal mechanisms for users who are misclassified. The announcement also raises questions about data privacy and the methods used for prediction. While the stated goal is safety, the implementation details will be critical in determining whether this is an effective measure or a potential source of user frustration and privacy concerns.
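The policy described above, directing accounts that may belong to minors to a model with stricter settings, can be sketched as simple routing logic. Everything here is hypothetical: OpenAI has not published its implementation, and the names, thresholds, and confidence handling below are illustrative assumptions only. The one design point the sketch encodes is the stated policy of erring toward caution, so a low-confidence prediction is treated the same as an under-18 one:

```python
from dataclasses import dataclass


@dataclass
class AgeSignal:
    """Hypothetical output of an age-prediction system."""
    predicted_age: int
    confidence: float  # 0.0 to 1.0


def select_model(signal: AgeSignal,
                 adult_threshold: int = 18,
                 min_confidence: float = 0.8) -> str:
    """Route a session to a model tier based on predicted age.

    Errs on the side of caution: a prediction below the adult
    threshold OR below the confidence floor gets the restricted
    tier (adult content blocked, stricter safety settings).
    """
    if (signal.predicted_age < adult_threshold
            or signal.confidence < min_confidence):
        return "restricted"
    return "standard"


# A confident adult prediction goes to the standard tier;
# a likely minor, or an uncertain prediction, does not.
print(select_model(AgeSignal(predicted_age=25, confidence=0.95)))  # standard
print(select_model(AgeSignal(predicted_age=15, confidence=0.99)))  # restricted
print(select_model(AgeSignal(predicted_age=30, confidence=0.40)))  # restricted
```

Note that the low-confidence branch is exactly where the appeal mechanisms mentioned above become important: under this policy, misclassified adults land in the restricted tier by design.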
Testing Chatbot Ads: Concurrently, reports indicate that OpenAI has begun offering its new chatbot ads to dozens of advertisers. The initial model charges per ad view rather than per click, and asks for spending commitments of under $1 million. This is a significant step toward monetizing ChatGPT's massive user base, which has grown rapidly since the launch of GPT-4o and subsequent models. The move mirrors the evolution of other digital platforms, which often begin with a free service and later introduce advertising to generate revenue.
The community sentiment around this is mixed. On one hand, it's a logical business step for a company with enormous infrastructure costs. On the other, it introduces the potential for commercial influence into what many users perceive as an objective tool. The choice of a "per view" model rather than "per click" is interesting; it may be an attempt to align incentives with user experience rather than driving clicks, but it still fundamentally changes the nature of the interaction. The test with a limited number of advertisers suggests a cautious, measured rollout, likely to gauge user reaction and refine the ad format before a broader launch.
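Per-view pricing is impression-based billing, conventionally quoted as CPM (cost per thousand views). The arithmetic below is a toy illustration of what a sub-$1M commitment buys under that model; the $25 CPM figure is an assumed example, not a reported OpenAI rate:

```python
def campaign_cost(views: int, cpm_usd: float) -> float:
    """Cost of an impression-priced campaign.

    CPM = cost per thousand ad views, the standard unit
    for per-view (as opposed to per-click) billing.
    """
    return views / 1000 * cpm_usd


# 1,000 views at a $25 CPM cost exactly $25.
print(campaign_cost(1_000, 25.0))  # 25.0

# At that (assumed) rate, a $1M commitment caps out at
# 40 million ad views.
max_views = int(1_000_000 / 25.0 * 1000)
print(max_views)  # 40000000
```

The design difference from per-click billing is that revenue here does not depend on the user ever acting on the ad, which is one reading of the incentive-alignment point above.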
Counter-Perspectives and Broader Implications
These developments are not happening in a vacuum. They are part of a larger, often contentious, industry conversation.
The Open-Source vs. Closed-Source Debate: Meta's push with its Superintelligence Labs and its commitment to open-source models like Llama exists in tension with OpenAI's increasingly commercial and closed approach. While Meta releases its models, allowing for broad experimentation and adaptation, OpenAI is building a monetized ecosystem. This divergence creates a clear fork in the road for developers and businesses: do they build on open, customizable models with potentially lower costs but more responsibility, or on proprietary, state-of-the-art models with easier access but less control and potential vendor lock-in? The success of Meta's new models will directly influence this choice.
Safety as a Moving Target: OpenAI's age prediction feature highlights the ongoing struggle to define and implement safety at scale. As AI systems become more capable and integrated into daily life, the pressure to protect vulnerable users increases. However, the methods used—like algorithmic age estimation—can be imperfect and raise their own ethical questions. The community is actively debating where the line should be drawn between protection and overreach, and between automated systems and human oversight.
The Economics of AI: The discussions at Davos underscore the immense financial pressures facing leading AI labs. The cost of training and running state-of-the-art models is astronomical, driving the need for diverse revenue streams. Advertising is one path; enterprise deals, like the one OpenAI signed with ServiceNow (also announced this week), are another. The push for monetization is inevitable, but it will shape the product's evolution in ways that may not always align with user expectations or the original vision of creating beneficial AGI.
Looking Ahead
The announcements from Meta and OpenAI at Davos provide a clear view of the industry's current trajectory. On one front, the race for capability continues unabated, with new labs and models emerging to push the boundaries of what's possible. On another, the practical challenges of deploying these technologies at a global scale—monetization, safety, and regulation—are coming to the forefront.
For developers and technologists, the landscape is both exciting and complex. The availability of powerful models, whether from Meta's open-source efforts or OpenAI's API, provides unprecedented building blocks. Yet, the choices made by these companies regarding licensing, safety, and business models will have long-lasting effects on the ecosystem. The "very good" models from Meta's Superintelligence Labs may offer a new tool in the open-source arsenal, while OpenAI's experiments with ads and age prediction will test the boundaries of what users will accept in a commercial AI service. The coming months will reveal how these parallel tracks converge and diverge, shaping the future of AI development and its role in society.