White House Hits Pause on AI Preemption Order
The White House has paused a draft executive order that would have used federal power to challenge or override state artificial intelligence laws, after significant pushback over states’ rights and concerns that it would weaken protections against deepfakes, fraud, and discrimination. For enterprises, this means the current patchwork of state-level AI and privacy rules will remain in place for the foreseeable future, and organizations will need to design AI compliance programs that account for multiple, sometimes conflicting, state requirements.
States Move to Rein In Algorithmic and AI-Driven Pricing
Several U.S. states are advancing legislation to curb algorithmic and AI-driven pricing practices that rely on behavioral data, location, and personal history to set individualized prices. Lawmakers are increasingly focused on the risk that these models can overcharge or unfairly target vulnerable consumers, especially when their inner workings are opaque. Any organization using dynamic pricing or yield management will need to be ready to explain how its models work, document fairness and non-discrimination, and provide auditable records when regulators come calling.
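For teams running dynamic pricing today, one practical first step is to log every pricing decision with enough context to reconstruct it later. The sketch below is a minimal illustration of such an audit record; the field names and the PricingDecision structure are our own assumptions, not requirements from any specific bill.

```python
# Minimal sketch of an auditable pricing-decision record (illustrative assumptions only).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class PricingDecision:
    customer_segment: str          # coarse segment, not raw personal data
    model_version: str             # which pricing model produced the price
    input_features: dict           # the features the model actually saw
    list_price: float
    offered_price: float
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(decision: PricingDecision, path: str = "pricing_audit.jsonl") -> None:
    """Append the decision to an append-only audit log (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

# Example: record why a given customer saw a given price.
log_decision(PricingDecision(
    customer_segment="loyalty_tier_2",
    model_version="yield-model-2025.11",
    input_features={"region": "US-CA", "demand_index": 0.82},
    list_price=129.00,
    offered_price=117.50,
))
```

A log like this is what makes "explain how the model works" answerable after the fact: each record ties an offered price to the model version and inputs that produced it.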
Google Brings AI Image Verification to Gemini
Google is rolling out an AI image verification feature in the Gemini app that lets users upload a picture and ask whether it was created or edited using Google AI. The feature relies on SynthID invisible watermarks and will be extended with C2PA-style content credentials, so images can carry more robust proof of origin over time. For security, communications, and brand leaders, this is an early signal that content provenance will become a standard part of digital asset management and an important tool in defending against deepfakes and impersonation.
Sources:
https://blog.google/technology/ai/ai-image-verification-gemini-app/
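For teams thinking about where provenance checks might slot into a digital asset pipeline, the sketch below shows the layered approach these features imply: check for embedded content credentials first, fall back to a watermark detector, and otherwise mark the asset unverified. The helper functions here (read_content_credentials, detect_invisible_watermark) are hypothetical placeholders, not Google or C2PA APIs.

```python
# Conceptual provenance check for an incoming image asset.
# read_content_credentials() and detect_invisible_watermark() are hypothetical
# stand-ins for a C2PA manifest reader and a watermark-detection service.
from enum import Enum

class Provenance(Enum):
    CREDENTIALED = "content_credentials_present"
    WATERMARKED = "invisible_watermark_detected"
    UNVERIFIED = "no_provenance_signal"

def read_content_credentials(image_path: str) -> dict | None:
    """Placeholder: parse an embedded C2PA-style manifest if one exists."""
    return None  # assume none for this sketch

def detect_invisible_watermark(image_path: str) -> bool:
    """Placeholder: call a SynthID-style watermark-detection service."""
    return False  # assume no watermark for this sketch

def classify_provenance(image_path: str) -> Provenance:
    # Prefer signed content credentials, which can carry an edit history;
    # fall back to watermark detection, which only signals AI involvement.
    if read_content_credentials(image_path):
        return Provenance.CREDENTIALED
    if detect_invisible_watermark(image_path):
        return Provenance.WATERMARKED
    return Provenance.UNVERIFIED

print(classify_provenance("incoming/press_photo.jpg"))
```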
Microsoft Agent 365: A Control Plane for AI “Digital Employees”
Microsoft is introducing Agent 365 as a control plane for AI agents inside the Microsoft 365 ecosystem, treating them as digital employees that can be registered, governed, and monitored alongside human users. The platform promises a central registry of agents, granular access control tied into identity and compliance services, and visibility into how agents interact with people and data. Organizations now face a strategic choice: lean into this vendor-defined model for agent governance, or design a more neutral architecture that can span multiple clouds, platforms, and security stacks.
Sources:
https://news.microsoft.com/ignite-2025-book-of-news/
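Whether or not an organization adopts Microsoft's model, the underlying idea is a registry entry per agent with an accountable owner, scoped permissions, and a lifecycle state. The sketch below is a vendor-neutral illustration of what such a record might contain; none of the field names come from Agent 365 itself.

```python
# Vendor-neutral sketch of an AI-agent registry entry (field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str
    display_name: str
    owner: str                     # accountable human or team
    scopes: list[str]              # least-privilege access grants
    data_sources: list[str]        # systems the agent may read or write
    status: str = "active"         # active | suspended | retired
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def can_access(self, scope: str) -> bool:
        """Simple policy check: only active agents with an explicit grant pass."""
        return self.status == "active" and scope in self.scopes

triage_bot = AgentRecord(
    agent_id="agt-0042",
    display_name="Invoice Triage Agent",
    owner="finance-ops@example.com",
    scopes=["invoices:read", "tickets:create"],
    data_sources=["erp", "service-desk"],
)

print(triage_bot.can_access("invoices:read"))   # True
print(triage_bot.can_access("payments:write"))  # False
```

The same record shape works whether the registry lives inside one vendor's control plane or in a neutral system of record that spans multiple platforms.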
AWS–IDC: Agentic AI Deployment Expected by 2027
A new study from IDC, commissioned by AWS, finds that a clear majority of organizations expect full deployment of agentic AI by 2027, with many already piloting or running AI agents in production scenarios today. These agents are being tasked with analyzing data, recommending actions, and executing workflows with a growing degree of autonomy. Leadership teams should treat agentic AI as a near-term operating model, not a distant future concept, and begin planning for the governance, monitoring, and integration work required to make these systems safe and effective at scale.
Sources:
https://aws.amazon.com/isv/resources/agentic-ai-idc-study/
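One practical way to keep that growing autonomy bounded is to gate execution on risk: let the agent analyze and recommend freely, but require human sign-off before high-impact actions run. The sketch below is a generic illustration of that pattern; the threshold and action names are assumptions, not drawn from the IDC study.

```python
# Generic sketch of an agent step with a human-approval gate for risky actions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    name: str
    risk_score: float               # 0.0 (routine) to 1.0 (high impact)
    execute: Callable[[], str]

APPROVAL_THRESHOLD = 0.5            # assumption: above this, a human must approve

def run_with_oversight(action: ProposedAction,
                       approver: Callable[[ProposedAction], bool]) -> str:
    """Execute routine actions automatically; route risky ones to a human."""
    if action.risk_score < APPROVAL_THRESHOLD:
        return action.execute()
    if approver(action):
        return action.execute()
    return f"action '{action.name}' blocked pending review"

# Example usage with a stand-in approver that rejects everything risky.
reorder = ProposedAction("reorder_inventory", 0.2, lambda: "purchase order created")
refund = ProposedAction("issue_bulk_refund", 0.9, lambda: "refunds issued")

deny_all = lambda action: False
print(run_with_oversight(reorder, deny_all))  # runs automatically
print(run_with_oversight(refund, deny_all))   # blocked pending review
```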
Survey: Half of Organizations Already Run 10+ AI Agents
A recent survey of nearly one thousand business and IT leaders reports that half of respondents work in organizations that already have ten or more AI agents running in production. At the same time, relatively few have fully mature governance frameworks, formal ownership, or rigorous testing approaches for these agents. This gap between adoption and control suggests that many enterprises are taking on operational and security risk without a clear understanding of where agents are deployed, what they can access, and how to shut them down quickly if something goes wrong.
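Closing that gap starts with something mundane: a complete inventory of agents and a fast way to suspend any of them. The sketch below shows one minimal shape for that; the inventory layout and the suspend() helper are hypothetical and not tied to any particular platform.

```python
# Minimal sketch of an agent inventory with an emergency suspend ("kill switch").
inventory = {
    "agt-0042": {"owner": "finance-ops", "access": ["erp"], "status": "active"},
    "agt-0107": {"owner": "marketing", "access": ["crm", "email"], "status": "active"},
    "agt-0311": {"owner": "unknown", "access": ["file-share"], "status": "active"},
}

def agents_touching(system: str) -> list[str]:
    """Answer 'which agents can reach this system?' from the inventory."""
    return [aid for aid, rec in inventory.items() if system in rec["access"]]

def suspend(agent_id: str) -> None:
    """Flip the agent to suspended; downstream gateways should honor this flag."""
    inventory[agent_id]["status"] = "suspended"

# Example: an incident on the CRM -> find and suspend every agent that can touch it.
for agent_id in agents_touching("crm"):
    suspend(agent_id)

print({aid: rec["status"] for aid, rec in inventory.items()})
```

An inventory this simple also surfaces the riskiest entries immediately, such as agents with no known owner.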
Cisco Closes NeuralFabric Deal and Advances Security Reasoning Model
Cisco has completed its acquisition of NeuralFabric and is positioning a Security Reasoning Model as the AI layer that will correlate signals and drive decisions across its security portfolio and data fabric. The aim is to move beyond isolated tools toward an integrated, AI-driven platform that can interpret complex telemetry and recommend or automate responses. Customers will need to assess how this reasoning layer fits with their existing SOC tooling, what degree of visibility and override control they maintain, and how much strategic dependence they are comfortable placing on a single vendor’s AI stack.
Gartner: 40% of Agentic AI Projects Will Be Canceled by 2027
Gartner projects that more than forty percent of agentic AI projects will be canceled by the end of 2027, largely due to rising costs, unclear business value, and weak risk controls. The firm also warns about “agent washing,” where conventional AI tools are rebranded as agents without delivering genuine autonomy or measurable outcomes. For executive teams, this is a reminder to combine ambition with discipline: insist on clear ROI, define milestones and exit criteria, and ensure governance and ethics are built in from the start rather than bolted on later.
Topics We’re Tracking (But Didn’t Make the Cut)
Dropped Topic: Smaller AI product feature updates and incremental releases
Why It Didn’t Make the Cut: Limited strategic or risk impact for most enterprises compared with the major regulatory, governance, and security trends highlighted today.
Why It Caught Our Eye: Illustrates how quickly AI capabilities are proliferating and why leaders need a framework to distinguish signal from noise in the AI product landscape.
Quick Disclaimer and Sources Note: The author used AI in part to create this newscast. Our goal is to be transparent and show you how we sourced the info we used.
This newscast was developed using only public sources of information.
The Exchange Daily is a production of Metora Solutions. For more information about how to participate in this daily newscast, contact us at podcasts@metorasolutions.com.