The Exchange
The Exchange - Vision meets Reality
The Exchange Daily - December 8, 2025

Today's Show Notes: AI copyright, national lab AI, sovereign AI in Canada, deepfake defense, React2Shell, and a fresh hospital ransomware case.

New York Times vs Perplexity AI raises stakes for copyright and AI governance

The New York Times is testing where courts will draw the line on AI training and answer engines by suing Perplexity AI for allegedly copying and republishing millions of its articles without permission, including paywalled content. The complaint also accuses Perplexity of fabricating stories while displaying Times branding, pulling the legal conversation beyond scraping and into product design and output liability.

For technology and security leaders, this case is a live rehearsal of your own exposure when you mix internal, licensed, and public data into AI systems that generate answers rather than links. It raises questions about how you document the provenance of training data, respect robots.txt and paywalls, govern retrieval-augmented generation, and handle takedown or correction requests when an AI system gets things wrong at scale.
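One concrete governance step from the questions above is gating RAG ingestion on robots.txt and recording provenance at fetch time. The sketch below is a minimal illustration using Python's standard library; the function name and record fields are hypothetical, and a real pipeline would also need paywall, license, and takedown handling.

```python
# Minimal sketch: check robots.txt before ingesting a URL into a RAG corpus,
# and emit a provenance record either way. Fails closed if robots.txt is
# unreachable. Helper name and record schema are illustrative assumptions.
from urllib import robotparser
from urllib.parse import urlparse
from datetime import datetime, timezone

def check_ingestion(url: str, user_agent: str = "example-rag-bot") -> dict:
    """Return a provenance record noting whether robots.txt permits fetching url."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
        allowed = rp.can_fetch(user_agent, url)
    except OSError:
        allowed = False  # robots.txt unreachable: fail closed, do not ingest
    return {
        "url": url,
        "user_agent": user_agent,
        "robots_allowed": allowed,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
```

Keeping the record even for disallowed URLs gives you an audit trail when a takedown or correction request arrives later.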


DOE’s AMP2 biotech platform shows Genesis Mission turning into AI infrastructure

At Pacific Northwest National Laboratory, the Department of Energy has switched on the Anaerobic Microbial Phenotyping Platform (AMP2). This largely autonomous biotech system combines robotics and AI to accelerate microbial research. DOE is framing AMP2 as a flagship early project under the Genesis Mission national AI science agenda and as a prototype for even larger autonomous lab infrastructure.

For CIOs and CTOs working in regulated research and production environments, AMP2 is a reference design for an AI factory that blends autonomous labs, safety controls, and human oversight. It’s a reminder that AI strategy isn’t just about software and cloud models anymore, but about how you connect those models to physical systems, data acquisition, and compliance workflows in a way that regulators and boards can live with.


ServiceNow’s CA$110 million bet on AI infrastructure for Canada’s public sector

ServiceNow is putting down 110 million Canadian dollars to support AI adoption across Canada’s public sector, pairing Canadian-hosted AI-ready infrastructure with a new national Center of Excellence and roughly one hundred new high-skilled jobs. The company is positioning this as a way for government agencies to run AI workloads on the Now platform while keeping data residency and sovereignty requirements front and center.

For public sector and highly regulated enterprises, this move is a signal of where large platforms are heading as governments demand more control over where AI runs and how telemetry is shared. It offers a template for the kinds of commitments you can seek in multi-year AI contracts, including local hosting, dedicated governance teams, and shared responsibility models that reach beyond traditional SaaS boundaries.


Monday AI Market Maker: Imper.ai’s $28 million launch to fight AI impersonation

Imper.ai stepped out of stealth with 28 million dollars in funding to build a real-time defense layer against AI-driven impersonation, from deepfake video calls to synthetic voice and chat. Its pitch is to sit across collaboration and communication channels, fusing signals from identity, devices, behavior, and content to flag high-risk interactions before someone approves a payment or shares sensitive data.

For CISOs and fraud leaders, this is an early view of what an identity-focused control plane for social engineering in the AI era might look like. It invites hard questions about where you place these controls, how they integrate with existing identity and access management stacks, and how you measure success when the main value is preventing a single catastrophic mistake rather than logging millions of clean transactions.


React2Shell critical React Server Components flaw tests KEV-driven patch governance

The React2Shell vulnerability, tracked as CVE-2025-55182, is a maximum-severity remote code execution flaw in React Server Components that’s already attracting the attention of both vendors and threat actors. Because it affects server-side React and popular frameworks like Next.js under default configurations, many modern web and AI front ends are in scope even if teams don’t think of themselves as running React on the server.

For enterprise architects, this is exactly the sort of issue CISA’s Known Exploited Vulnerabilities catalog was designed for, and a test of how quickly your organization can identify where the framework is in use, push patches, and verify that third-party providers have done the same. It should also reinforce the value of software bills of materials, automated dependency discovery, and clear ownership for shared frameworks that can otherwise fall through the cracks.
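The dependency-discovery step can be surprisingly mechanical. Below is a minimal sketch that walks a repository tree and flags manifests declaring server-side React frameworks. The watchlist package names are illustrative assumptions; confirm the actual affected packages and version ranges against vendor advisories for CVE-2025-55182.

```python
# Minimal sketch: scan a directory tree for package.json manifests that
# declare packages on a watchlist. The watchlist contents are assumptions
# for illustration, not the authoritative affected-package list.
import json
from pathlib import Path

WATCHLIST = {"react-server-dom-webpack", "next", "react-dom"}  # assumed names

def find_candidates(root: str) -> list[dict]:
    """Return manifests under `root` that declare watchlisted packages."""
    hits = []
    for pkg in Path(root).rglob("package.json"):
        if "node_modules" in pkg.parts:
            continue  # scan declared manifests only, not installed trees
        try:
            data = json.loads(pkg.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        flagged = sorted(WATCHLIST & deps.keys())
        if flagged:
            hits.append({"manifest": str(pkg), "packages": flagged})
    return hits
```

A sweep like this finds first-party code; the harder half of the exercise, as noted above, is verifying that SaaS and third-party providers have patched on their side.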

Sources:

https://react2shell.com/


LockBit 5.0 at Insight Hospital underlines the healthcare ransomware crisis

LockBit 5.0 has listed Insight Hospital and Medical Center in Chicago on its leak site, threatening to release stolen data and adding another name to the growing list of health systems under ransomware pressure. Reporting to date points to the familiar mix of data theft, extortion, and potential disruption of hospital operations, even as investigators work to confirm the full scope and impact.

For healthcare executives and leaders handling regulated data, this incident is another reminder that ransomware is a long tail operational risk, not just a weekend headline. It reinforces the need for realistic tabletop exercises, segmented clinical networks, well-practiced backup and recovery plans, and frank conversations with boards about how much downtime and data loss your current architecture would actually tolerate.


Topics We’re Tracking (But Didn’t Make the Cut)

Dropped Topic: CISA and global partners issue AI-in-OT security guidance

  • Why It Didn’t Make the Cut: We covered the initial release of this guidance in a recent edition, and today’s developments didn’t materially change the recommendations for most operators.

  • Why It Caught Our Eye: The document continues to mature into a de facto checklist for how critical infrastructure owners should govern AI models attached to safety-critical operational technology.


Dropped Topic: New Brickworm and Brickstorm malware disclosures targeting critical infrastructure

  • Why It Didn’t Make the Cut: The reporting’s still evolving, and many details are better handled in a focused, cyber-heavy edition rather than a quick headline mention.

  • Why It Caught Our Eye: Joint advisories from U.S. and Canadian partners point to an ongoing campaign that blends access operations, pre-positioning, and potential sabotage against IT and OT systems.


Quick Disclaimer and Sources Note: The author used AI in part to create this newscast. Our goal is to be transparent and show you how we sourced the info we used.

This newscast was developed using only public sources of information.

This update was assembled using a mix of human editorial judgment, public records, and reputable national and sector-specific news sources, with help from artificial intelligence tools to summarize and organize information. All information is drawn from publicly available sources listed above. Every effort is made to keep details accurate as of publication time. Still, readers should always confirm time-sensitive items such as policy changes, budget figures, and timelines with official documents and briefings.

All original content, formatting, and presentation are copyright 2025 Metora Solutions LLC, all rights reserved. For more information about our work and other projects, drop us a note at info@metorasolutions.com.
