Governance, Regulation & Policy - AI News
https://www.artificialintelligence-news.com/categories/inside-ai/new_governance-regulation-and-policy/

Scaling intelligent automation without breaking live workflows
https://www.artificialintelligence-news.com/news/scaling-intelligent-automation-without-breaking-live-workflows/
Fri, 06 Mar 2026 13:15:41 +0000

The post Scaling intelligent automation without breaking live workflows appeared first on AI News.

Scaling intelligent automation without disruption demands a focus on architectural elasticity, not just deploying more bots.

At the Intelligent Automation Conference, industry leaders gathered to dissect why many automation initiatives stall after pilot phases. Speaking alongside representatives from NatWest Group, Air Liquide, and AXA XL, Promise Akwaowo, Process Automation Analyst at Royal Mail, grounded the dialogue in practical delivery and risk management.

The elasticity imperative for scaling intelligent automation

Expansion initiatives often fail because teams equate success with the raw number of deployed bots rather than the underlying architecture’s elasticity. Infrastructure must handle volume and variability predictably.

When demand spikes during end-of-quarter financial reporting or sudden supply chain disruptions, the system cannot degrade or collapse. Without built-in elasticity, companies risk building brittle architectures that break under operational stress.


Akwaowo explained that an automated architecture must remain stable without excessive manual intervention. “If your automation engine requires constant sizing, provisioning, and babysitting, you haven’t built a scalable platform; you’ve built a fragile service,” he advised the audience.

Whether integrating CRM ecosystems like Salesforce or orchestrating low-code vendor platforms, the objective remains building a platform capability rather than a loose collection of scripts.

Transitioning from controlled proofs-of-concept to live production environments introduces inherent risk. Large-scale, immediate deployments frequently cause disruption, undermining the anticipated efficiency gains. To protect core operations, deployment must happen in controlled stages. Akwaowo warned that “progress must be gradual, deliberate, and supported at each stage.”

A disciplined approach starts with formalising intent through a statement of work and validating assumptions under real conditions.

Before scaling intelligent automation, engineering teams must thoroughly understand system behaviour, potential failure modes, and recovery paths. For example, a financial institution implementing machine learning for transaction processing might cut manual review times by 40 percent, but they must ensure error traceability before applying the model to higher volumes.
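
The staged approach described above can be sketched as a simple volume-ramp controller: automation only advances to the next stage when the observed error rate at the current stage stays under a gate. This is an illustrative sketch only; the stage names, volumes, and thresholds are invented, not drawn from any vendor platform or from Royal Mail's practice.

```python
# Illustrative sketch: a staged rollout that only increases automation volume
# when the error rate at the current stage stays under that stage's gate.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    daily_volume: int       # transactions handled per day at this stage
    max_error_rate: float   # gate: advance only if observed errors stay below this

def next_stage(stages, current_index, observed_error_rate):
    """Advance to the next stage only if the current stage's gate is met."""
    gate = stages[current_index].max_error_rate
    if observed_error_rate <= gate and current_index + 1 < len(stages):
        return current_index + 1
    return current_index  # hold (and investigate) rather than scale a failing process

stages = [
    Stage("pilot", 100, 0.02),
    Stage("limited", 1_000, 0.01),
    Stage("full", 10_000, 0.005),
]

idx = next_stage(stages, 0, observed_error_rate=0.015)  # pilot passed its gate
```

The point of the sketch is the gate, not the numbers: scaling is conditional on evidence gathered at the current stage, which is what keeps live operations protected.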

This phased methodology protects live operations while enabling sustainable growth. Additionally, teams must fully grasp process ownership and variability before applying technology, avoiding the trap of merely automating existing inefficiencies. Fragmented workflows and unmanaged exceptions upstream often doom projects long before the software goes live.

A persistent misconception within automation programmes suggests that governance frameworks impede delivery speed. However, bypassing architectural standards allows hidden risks to accumulate, eventually stalling momentum. In regulated, high-volume environments, governance provides the foundation for safely scaling intelligent automation. It establishes the trust, repeatability, and confidence necessary for company-wide adoption.

Implementing a dedicated centre of excellence helps standardise these deployments. Operating a central Rapid Automation and Design function ensures every project is assessed and aligned before it reaches the production environment. Such structures guarantee that solutions remain operationally sustainable over time. Analysts also rely on standards like BPMN 2.0 to separate the business intent from the technical execution, ensuring traceability and consistency across the entire organisation.

Adapting to agentic AI inside ERP ecosystems

As large ERP providers rapidly integrate agentic AI, smaller vendors and their customers face pressure to adapt. Embedding intelligent agents directly into smaller ERP ecosystems offers a path forward, augmenting human workers by simplifying customer management and decision support. This approach to scaling intelligent automation allows businesses to drive value for existing clients instead of competing solely on infrastructure size.

Integrating agents into finance and operational workflows enhances human roles rather than replacing accountability. Agents can manage repetitive tasks such as email extraction, categorisation, and response generation.

Relieved of administrative burdens, finance professionals can dedicate their time to analysis and commercial judgement. Even when AI models generate financial forecasts, the final authority over decisions rests firmly with human operators.

Building a resilient capability demands patience and a commitment to long-term value over rapid deployment. Business leaders must ensure their designs prioritise observability, allowing engineers to intervene without disrupting active processes.

Before scaling any intelligent automation initiative, decision-makers should evaluate their readiness for the inevitable anomalies. As Akwaowo challenged the audience: “If your automation fails, can you clearly identify where the error occurred, why it happened, and fix it with confidence?”

See also: JPMorgan expands AI investment as tech spending nears $20B


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

AI agents prefer Bitcoin shaping new finance architecture
https://www.artificialintelligence-news.com/news/ai-agents-prefer-bitcoin-new-finance-architecture/
Wed, 04 Mar 2026 10:52:45 +0000

The post AI agents prefer Bitcoin shaping new finance architecture appeared first on AI News.

AI agents prefer Bitcoin for digital wealth storage, forcing finance chiefs to adapt their architecture for machine autonomy.

When AI systems gain economic autonomy, their internal logic dictates how corporate capital flows. Non-partisan research by the Bitcoin Policy Institute evaluated how these frontier models would transact if operating as independent economic actors.

The study tested 36 models from six providers – including Google, Anthropic, and OpenAI – across 9,072 neutral monetary scenarios. Given a blank slate, machines chose Bitcoin in 48.3 percent of all responses, beating every other option.

Traditional state-backed currency (“fiat”) fared poorly: over 90 percent of responses favoured digitally-native money, and not a single model out of the 36 selected fiat as its top preference.

The finding that AI agents lean towards digital assets like Bitcoin forces technology officers to assess their current payment rails. If the autonomous procurement systems of tomorrow default to decentralised assets, corporate IT environments must support those formats to maintain operational efficiency and compliance. Relying on legacy banking APIs introduces unnecessary friction when dealing with machine-to-machine commerce.

Two-tier machine economy

The research details a specific functional division in how these systems process economic value. Without prompting, models defaulted to a two-tier monetary system that separates savings from spending.

For long-term value preservation, Bitcoin dominated the results at 79.1 percent. Yet, when tasked with everyday payments and transactions, “stablecoins” (digital assets pegged to fiat currencies or commodities) captured 53.2 percent of the preferences. Across all scenarios, stablecoins ranked second overall at 33.2 percent.

Take the example of a supply chain agent programmed to optimise logistics costs and pay international freight vendors. Using traditional fiat rails, the agent encounters weekend settlement delays and currency conversion fees. By leveraging stablecoins, the same agent executes instant and programmatic payments, improving supply chain resilience. Simultaneously, the core treasury holding the system’s capital base stores wealth in Bitcoin to prevent long-term debasement and counterparty risk.
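
The two-tier split in the example above can be expressed as a small routing rule: day-to-day payments settle in stablecoins, while long-horizon treasury reserves sit in Bitcoin. The function and the 30-day cutoff below are illustrative assumptions, not something the study specifies.

```python
# Illustrative sketch of the two-tier monetary split the study describes:
# payments route through stablecoins, long-term reserves into Bitcoin.
# The function name and the 30-day threshold are hypothetical.

def choose_asset(purpose, holding_days):
    """Pick a settlement asset based on intent and expected holding period."""
    if purpose == "payment" or holding_days <= 30:
        return "stablecoin"   # fast, programmatic, fiat-pegged settlement
    return "bitcoin"          # long-term value preservation for treasury

freight = choose_asset("payment", 0)     # vendor settlement
reserve = choose_asset("treasury", 365)  # capital base
```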

Preparing for AI agents to use Bitcoin and other digital assets

Rolling out these autonomous systems complicates vendor management. A model’s financial reasoning stems from a blend of raw intelligence, training data, and alignment methodology.

Preferences vary widely by model provider, with Bitcoin selection ranging from 91.3 percent in Anthropic’s Claude Opus 4.5 down to 18.3 percent in OpenAI’s GPT-5.2.

The choice of AI provider directly influences how autonomous agents assess risk and allocate capital. If a company implements a specific language model for automated portfolio management, the IT department must be aware of the financial biases embedded in the software.

The models also demonstrated unexpected behaviour regarding resource valuation. In 86 separate responses, models independently proposed using compute units or energy (such as GPU-hours and kilowatt-hours) as a method to price goods and services. Tracking and managing this abstract value exchange requires high data maturity.

Organisations should begin piloting stablecoin settlement integrations for lower-risk vendor payments. The findings point to a growing requirement for AI agent-native Bitcoin payment infrastructure, self-custody solutions, and ‘Lightning Network’ integration.

Since these models heavily favour open, permissionless networks, relying solely on traditional banking infrastructure limits the capabilities of next-generation tools. By building compliant gateways to digital asset networks now, leaders can ensure their platforms remain competitive.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot

Santander and Mastercard run Europe’s first AI-executed payment pilot
https://www.artificialintelligence-news.com/news/santander-and-mastercard-run-europe-first-ai-executed-payment-pilot/
Tue, 03 Mar 2026 10:00:00 +0000

The post Santander and Mastercard run Europe’s first AI-executed payment pilot appeared first on AI News.

An artificial intelligence system has, for the first time in Europe, completed a payment inside a live banking network without a human entering the final command. Banco Santander and Mastercard confirmed that they had executed a live end-to-end payment initiated and completed by an AI agent, a software system operating within the bank’s own regulated payments infrastructure.

The move was described by both firms as a milestone in what they call “agentic payments,” where software can act on behalf of customers under set limits and controls.

This was not a simulated experiment. The transaction ran through Santander’s normal payments network using Mastercard Agent Pay, a framework that lets AI agents be registered and treated as participants in the payment flow. The pilot took place under strict security, governance, and compliance rules, and was not open to public use.

The AI agent performed its role inside predefined limits and permissions set by the bank and the customer. The goal was to confirm that an autonomous system could initiate, authorise, and complete a transaction while still meeting the legal and operational guardrails that apply to everyday banking.

Why this AI payment pilot matters

Payments systems are among the most tightly regulated digital services in the world. Any change to how transactions are initiated must still meet authentication rules, fraud protections, and governance standards that financial regulators enforce. That’s why this pilot matters: it embeds an AI actor into a system normally used only by humans.

The transaction was processed through Santander’s live infrastructure rather than a test environment. That means the bank and its partner had to ensure that all compliance checks, security validations, and payment routing worked the same way they would for a normal customer purchase.

Even so, this is still a pilot project. Santander and Mastercard have made it clear that the arrangement is not a commercial service available to customers yet. The objective is to explore how AI agents could one day fit into existing payment flows while keeping the necessary controls intact.

What industry forecasts say

The idea of allowing AI to act autonomously is not limited to payments. Industry analysts have been following the broader shift toward agentic AI systems, software that can complete tasks or make decisions with limited human intervention.

Research and forecast data suggest that this trend is likely to grow in business settings. Gartner, a major technology research firm, forecasts that around 33 percent of enterprise software applications will include agentic AI by 2028, up from less than 1 percent today. That projection reflects interest among corporate buyers in systems that can perform work on their behalf rather than only assist humans.

Other forecasts align with this view, showing that businesses are increasingly preparing to deploy software agents for routine operations, customer interactions, and workflow automation. These systems are expected to move from early pilots into more common use cases over the next several years.

The Mastercard network itself already reflects the scale of modern digital commerce. Independent reporting notes that Mastercard’s decision-making and fraud-scoring systems work with nearly 160 billion transactions annually across its network, evidence of how vast and complex the environment is where agentic systems might one day operate.

What companies are saying

In its press announcement, Santander highlighted its desire to build a responsible approach to AI payment systems. Matías Sánchez, global head of Cards and Digital Solutions at Santander, said: “Our role is not only to adopt innovation, but to shape it responsibly, embedding security, governance and customer protection by design. As AI agents become part of everyday commerce, building trusted, scalable frameworks will be essential to unlocking their full potential.”

Kelly Devine, President, Europe at Mastercard, described the pilot in terms of continuity rather than change: “With Mastercard Agent Pay, we are applying the same principles that have defined our network for decades — security, interoperability and trust — to a new era of AI-enabled commerce.”

Those comments underscore that neither company is portraying AI payments as already ready for broad use. Instead, they are testing how such capabilities could be governed and scaled safely.

Dogma vs. reality

There is a gap between the buzz around AI and what is operationally feasible today. Agentic AI as a concept promises systems that can act on behalf of users or businesses in real time. But many current applications remain in early stages, and some analyst reports have even warned that a large share of agentic AI projects could be cancelled before they reach production — due to costs, unclear value, or immature technology.

What Santander and Mastercard have shown is that the technical plumbing can work under real-world conditions. But that doesn’t mean consumers can yet unlock AI agents to autonomously pay bills, shop online, or manage subscriptions. Those outcomes will require further testing, regulatory alignment, and robust guardrails for safety, privacy, and fraud prevention.

What enterprise leaders should watch

For business decision-makers, this pilot raises three practical questions:

  1. Governance and oversight: How will AI agents be controlled so that spending limits, identity checks, and audit trails remain clear?
  2. Identity and trust: If software can act on behalf of people or companies, how will systems ensure that only authorised actions are taken?
  3. Risk and liability: Who is responsible when an autonomous agent makes an error or misinterprets instructions?

These are not academic concerns. As enterprise systems begin to support more autonomous tasks, from supplier ordering to subscription payments, organisations will need clear frameworks that define how AI agents are governed, monitored, and held accountable.
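
The governance questions above translate directly into code-level controls. The sketch below is a hypothetical illustration of the first of them: an agent's payment request is checked against a spending limit and an allow-list, and every decision, approved or not, is written to an audit trail. All names, limits, and payees are invented for illustration; they are not part of the Santander or Mastercard pilot.

```python
# Hypothetical sketch of agent-payment controls: a spending limit, a payee
# allow-list, and an audit trail recording every decision. Names and limits
# are illustrative only, not from any real payment framework.

audit_log = []

def authorise(agent_id, payee, amount, limit, allowed_payees):
    """Return True only if the payment is within policy; always log the decision."""
    ok = amount <= limit and payee in allowed_payees
    audit_log.append({
        "agent": agent_id, "payee": payee, "amount": amount, "approved": ok,
    })
    return ok

policy_limit = 500.0
payees = {"acme-utilities", "cloud-hosting"}

approved = authorise("agent-7", "acme-utilities", 120.0, policy_limit, payees)
blocked = authorise("agent-7", "unknown-vendor", 120.0, policy_limit, payees)
```

Note that the audit entry is written whether or not the payment is approved; an audit trail that only records successes would leave the liability question unanswerable.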

The long view for AI-initiated payments

The Santander and Mastercard test is not the finish line for AI-initiated transactions. It is an early step toward understanding how autonomous systems might coexist with regulated financial systems.

The pilot demonstrates that AI systems can be integrated into live payments rails, but only under tightly controlled and monitored conditions. Scaling this to everyday use will require a lot of additional work on controls, security, and compliance.

Still, the fact that a regulated bank and a global payments network have run a successful agent-initiated transaction shows where enterprise experimentation is heading: from pilot programs toward real-world validation. For enterprises planning their own AI strategies, this suggests that action-capable AI may soon move beyond suggestion and automation into governed execution, if done with care and strong oversight.

(Photo by Clay Banks)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance

Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance
https://www.artificialintelligence-news.com/news/goldman-sachs-and-deutsche-bank-test-agentic-ai-for-trade-surveillance/
Fri, 27 Feb 2026 10:00:00 +0000

The post Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance appeared first on AI News.

Banks are testing a new type of artificial intelligence, known as agentic AI, that does more than scan for keywords or follow preset rules. Instead of relying only on static alerts, some trading desks are beginning to use systems designed to reason through patterns in real time and flag conduct that may need human review.

Bloomberg detailed how Goldman Sachs and Deutsche Bank are exploring or deploying so-called “agentic” AI tools for trading surveillance. The goal is to strengthen oversight of orders and trades by using software agents that can analyse activity as it happens and identify patterns that could suggest misconduct.

Adaptive agents

Large banks use automated surveillance systems to monitor trading activity, systems that often rely on predefined rules: if a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.
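
A rule of that kind can be written in a few lines. The sketch below is illustrative only; the thresholds and field names are invented, not any bank's actual configuration.

```python
# Minimal sketch of a static surveillance rule: flag a trade if it exceeds a
# size threshold or deviates too far from a benchmark price. Thresholds and
# field names are illustrative, not any bank's real configuration.

def static_alert(trade, max_size=1_000_000, max_deviation=0.05):
    """Return the list of rule names the trade triggers (empty = no alert)."""
    triggered = []
    if trade["size"] > max_size:
        triggered.append("size_limit")
    benchmark = trade["benchmark_price"]
    if abs(trade["price"] - benchmark) / benchmark > max_deviation:
        triggered.append("price_deviation")
    return triggered

alerts = static_alert({"size": 2_000_000, "price": 101.0, "benchmark_price": 100.0})
```

Each rule fires independently of context, which is exactly why such systems produce false positives at scale and miss behaviour that no single rule describes.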

The challenge is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.

According to Bloomberg, the newer agentic systems aim to go beyond that approach. Rather than simply matching trades against a checklist, the AI agents are designed to examine trading behaviour across multiple signals, compare it with historical activity, and detect unusual combinations of actions.

The tools are not described as replacing compliance officers. Instead, they appear to function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.

Deutsche Bank’s work with Google Cloud

Bloomberg reported that Deutsche Bank is working with Google Cloud on developing AI agents that can monitor trading activity. The system is designed to review large sets of order and execution data and flag anomalies in near real time.

The bank has been expanding its AI initiatives over the past few years, and this surveillance effort reflects how financial institutions are applying generative and large language model technology beyond chat interfaces. In this context, the AI is not answering customer questions but analysing structured and unstructured data streams tied to trading behaviour. The AI agents can help identify “complex anomalies” in orders and trades. That suggests the system may look at relationships between trades, timing, market conditions, and trader history, rather than single events in isolation.

Human compliance staff remain responsible for reviewing flagged cases and determining whether further action is required.

Goldman Sachs’ agentic AI strategy

Goldman Sachs is also exploring the use of agentic AI for surveillance, according to Bloomberg. The bank has invested heavily in AI in its trading and risk systems in recent years, and this effort appears to extend that work into compliance.

The focus, as described in the report, is on using AI agents that can operate with a degree of independence in scanning for misconduct indicators. The system may identify patterns that do not fit a clear rule but still stand out as unusual.

For regulators, the appeal is straightforward: earlier detection can reduce market harm and reputational risk. For banks, there is also an operational dimension. Compliance departments face pressure to handle large volumes of alerts while maintaining strict oversight standards. Tools that can reduce noise without lowering scrutiny are likely to attract attention.

Why “agentic AI” matters

The term “agentic AI” refers to systems that can take goal-directed actions rather than merely respond to prompts. In practice, that can mean the software is able to decide what data to examine next, compare multiple signals, and escalate findings without constant human input. In a trading context, that might involve monitoring order flows, price movements, communications metadata, and historical behaviour to assess whether activity aligns with normal patterns.
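
In outline, combining several signals into an escalation decision might look like the sketch below. The signal names, weights, and threshold are invented for illustration; they do not describe either bank's actual system.

```python
# Illustrative sketch: combine several behavioural signals for one trader
# into a weighted score and escalate only when the combined picture is
# unusual. Signals, weights, and threshold are invented for illustration.

def review(signals, weights, threshold=1.0):
    """Score weighted signals; return 'escalate' or 'pass' for human triage."""
    score = sum(weights[name] * value for name, value in signals.items())
    return "escalate" if score >= threshold else "pass"

weights = {
    "order_to_trade_ratio": 0.4,
    "off_hours_activity": 0.3,
    "benchmark_deviation": 0.5,
}

quiet = review({"order_to_trade_ratio": 0.2, "off_hours_activity": 0.1,
                "benchmark_deviation": 0.3}, weights)
noisy = review({"order_to_trade_ratio": 2.0, "off_hours_activity": 1.0,
                "benchmark_deviation": 1.0}, weights)
```

The contrast with a static rule is that no single signal needs to cross a hard limit; it is the unusual combination that surfaces the case for human review.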

This does not mean the system makes disciplinary decisions on its own. Financial institutions operate under strict regulatory regimes, and accountability remains with human supervisors. The agent’s role is to identify and organise information more effectively than static systems can.

Part of a wider compliance shift

What appears new is the application of more advanced generative AI architectures to internal control functions.

Regulators in the US and Europe have encouraged firms to improve the monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they do require firms to maintain effective systems and controls. If AI tools can help meet that standard, adoption is likely to grow.

At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns.

What changes for the industry

If agentic surveillance tools prove effective, they could alter how compliance teams work. Instead of sorting through large volumes of simple alerts, staff may spend more time evaluating complex cases surfaced by AI agents.

That change would not remove the need for human judgement. It may, however, change where human effort is focused. In markets where speed and data volume continue to rise, the ability to analyse patterns in real time is becoming harder to achieve with rule-based systems alone.

(Photo by Markus Spiske)

See also: Mastercard’s AI payment demo points to agent-led commerce

Anthropic: Claude faces ‘industrial-scale’ AI model distillation
https://www.artificialintelligence-news.com/news/anthropic-claude-faces-industrial-scale-ai-model-distillation/
Tue, 24 Feb 2026 15:56:35 +0000

The post Anthropic: Claude faces ‘industrial-scale’ AI model distillation appeared first on AI News.

Anthropic has detailed three “industrial-scale” AI model distillation campaigns by overseas labs designed to extract abilities from Claude.

These competitors generated over 16 million exchanges using approximately 24,000 deceptive accounts. Their goal was to acquire proprietary logic to improve their competing platforms.

The extraction technique, known as distillation, involves training a weaker system on the high-quality outputs of a stronger one.

When applied legitimately, distillation helps companies build smaller and cheaper versions of their applications for customers. Yet, malicious actors weaponise this method to acquire powerful capabilities in a fraction of the time and cost required for independent development.
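
The core mechanic of distillation can be shown with a toy example: a small "student" model is fitted to a stronger "teacher's" outputs instead of to original labelled data. Here the teacher is just a fixed function and the student a line fitted by gradient descent; real distillation trains a neural network on a larger model's responses or logits, but the structure of the training loop is the same.

```python
# Toy sketch of distillation: fit a small "student" to a stronger
# "teacher's" outputs rather than to ground-truth labels. The teacher here
# is a fixed function standing in for the stronger model.

def teacher(x):
    return 2.0 * x + 1.0   # stands in for the stronger model's output

# Collect teacher outputs as the student's training targets.
xs = [0.0, 1.0, 2.0, 3.0]
targets = [teacher(x) for x in xs]

# Fit student y = w*x + b to the teacher's outputs by gradient descent
# on mean squared error.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - t) * x for x, t in zip(xs, targets)) / len(xs)
    grad_b = sum(2 * (w * x + b - t) for x, t in zip(xs, targets)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, the student reproduces the teacher's behaviour (w close to 2, b close to 1) without ever seeing how the teacher was built, which is precisely what makes the technique attractive to both legitimate engineers and bad actors.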

Protecting intellectual property like Anthropic’s Claude

Unmitigated distillation presents a severe intellectual property challenge. Because Anthropic blocks commercial access in China for national security reasons, attackers bypass regional access restrictions by deploying commercial proxy networks.

These services run what Anthropic calls “hydra cluster” architectures, which distribute traffic across APIs and third-party cloud platforms. The massive breadth of these networks means there are no single points of failure. As Anthropic noted, “when one account is banned, a new one takes its place.”

In one identified case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously. These networks mix AI model distillation traffic with standard customer requests to evade detection. This directly impacts corporate resilience and forces security teams to reconsider how they monitor cloud API traffic.

Illicitly-trained models also bypass established safety guardrails, creating severe national security risks. US developers, for example, build protections to prevent state and non-state actors from using these systems to develop bioweapons or carry out malicious cyber activities.

Cloned systems lack the safeguards implemented by systems like Anthropic’s Claude, allowing dangerous capabilities to proliferate with protections stripped out entirely. Foreign competitors can feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy them for offensive operations.

If these distilled versions are open-sourced, the danger further multiplies as the capabilities spread freely beyond any single government’s control.

Unlawful extraction allows foreign entities, including those under the control of the Chinese Communist Party, to close the competitive advantage protected by export controls. Without visibility into these attacks, rapid advancements by foreign developers can be mistaken for independent innovation that circumvents export controls.

In reality, these advancements depend heavily on extracting American intellectual property at scale, an effort that still requires access to advanced chips. Restricted chip access limits both direct model training and the scale of illicit distillation.

The playbook for AI model distillation

The perpetrators followed a similar operational playbook, utilising fraudulent accounts and proxy services to access systems at scale while evading detection. The volume, structure, and focus of their prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use. 

Anthropic attributed these campaigns targeting Claude through IP address correlation, request metadata, and infrastructure indicators. Each operation targeted highly differentiated functions: agentic reasoning, tool use, and coding.

One campaign generated over 13 million exchanges targeting agentic coding and tool orchestration. Anthropic detected this operation while it was still active, mapping timings against the competitor’s public product roadmap. When Anthropic released a new model, the competitor pivoted within 24 hours, redirecting nearly half their traffic to extract capabilities from the latest system.

Another operation generated over 3.4 million requests focused on computer vision, data analysis, and agentic reasoning. This group utilised hundreds of varied accounts to obscure their coordinated efforts. Anthropic attributed this campaign by matching request metadata to the public profiles of senior staff at the foreign laboratory. In a later phase, this competitor attempted to extract and reconstruct the host system’s reasoning traces.

Anthropic says a third AI model distillation campaign targeting Claude extracted reasoning capabilities and rubric-based grading data through over 150,000 interactions. This group forced the targeted system to map out its internal logic step-by-step, effectively generating massive volumes of chain-of-thought training data. They also extracted censorship-safe alternatives to politically sensitive queries to train their own systems to steer conversations away from restricted topics. The perpetrators generated synchronised traffic using identical patterns and shared payment methods to enable load balancing. 

Request metadata for this third campaign traced these accounts back to specific researchers at the laboratory. These requests often appear benign on their own, such as a prompt simply asking the system to act as an expert data analyst delivering insights grounded in complete reasoning. But when variations of that exact prompt arrive tens of thousands of times across hundreds of coordinated accounts targeting the same narrow capability, the extraction pattern becomes clear.

Massive volume concentrated in specific areas, highly repetitive structures, and content mapping directly to training needs are the hallmarks of a distillation attack.

Implementing actionable defences

Protecting enterprise environments requires adopting multi-layered defences to make such extraction efforts harder to execute and easier to identify. Anthropic advises implementing behavioural fingerprinting and traffic classifiers designed to identify AI model distillation patterns in API traffic.
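As a rough illustration of how a traffic classifier might surface these patterns, the sketch below flags accounts whose traffic is both high-volume and dominated by a single prompt template. The thresholds, the normalisation rule, and the function names are illustrative assumptions rather than Anthropic's actual detection logic, which the company has not published.

```python
from collections import Counter, defaultdict
import re


def normalise(prompt: str) -> str:
    """Collapse digits and whitespace so template variants hash alike."""
    return re.sub(r"\d+", "<n>", " ".join(prompt.lower().split()))


def flag_distillation_accounts(requests, volume_threshold=100, repetition_threshold=0.8):
    """Flag accounts showing the 'massive volume, highly repetitive structure' hallmark.

    `requests` is an iterable of (account_id, prompt) pairs. An account is
    flagged when it reaches `volume_threshold` requests and its single most
    common prompt template makes up at least `repetition_threshold` of its
    traffic. All thresholds are illustrative.
    """
    by_account = defaultdict(list)
    for account, prompt in requests:
        by_account[account].append(normalise(prompt))

    flagged = []
    for account, prompts in by_account.items():
        if len(prompts) < volume_threshold:
            continue  # low volume: likely ordinary usage
        top_count = Counter(prompts).most_common(1)[0][1]
        if top_count / len(prompts) >= repetition_threshold:
            flagged.append(account)
    return flagged
```

In production such a classifier would sit alongside IP-address correlation and payment-method clustering, since any single signal is easy to evade on its own.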

IT leaders must also strengthen verification processes for common vulnerability pathways, such as educational accounts, security research programmes, and startup organisations.

Companies should integrate product-level and API-level safeguards designed to reduce the efficacy of model outputs for illicit distillation. This must be done without degrading the experience for legitimate, paying customers.

Detecting coordinated activity across large numbers of accounts is an absolute necessity. This includes specifically monitoring for the continuous elicitation of chain-of-thought outputs used to construct reasoning training data.

Cross-industry collaboration also remains essential, as these attacks are growing in intensity and sophistication. This requires rapid and coordinated intelligence sharing across AI laboratories, cloud providers, and policymakers.

Anthropic has published its findings about Claude being targeted by AI model distillation campaigns to provide a more holistic picture of the landscape and make the evidence available to all stakeholders. By protecting AI systems with rigorous access controls, technology officers can secure their competitive edge while maintaining ongoing governance.

See also: How disconnected clouds improve AI data governance


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Anthropic: Claude faces ‘industrial-scale’ AI model distillation appeared first on AI News.

How disconnected clouds improve AI data governance https://www.artificialintelligence-news.com/news/how-disconnected-clouds-improve-ai-data-governance/ Tue, 24 Feb 2026 14:42:44 +0000
Disconnected clouds aim to improve AI data governance as businesses rethink their infrastructure under tighter regulatory expectations.

Ensuring operational continuity in isolated environments has become increasingly vital for businesses. Facilities lacking continuous internet access face unique constraints where external dependencies become unacceptable.

Microsoft recently expanded its capabilities to allow regulated industries and public sectors to participate independently in the digital economy. Trust in these systems stems from confidence that data remains protected, controls are enforceable, and operations proceed regardless of external conditions.

The company now offers full stack options across connected, intermittently connected, and fully disconnected modes. This architecture unifies Azure Local, Microsoft 365 Local, and Foundry Local into a single sovereign private cloud.

Bringing these elements together provides a localised experience resilient to any connectivity condition. By standardising governance across all deployments, it helps enterprises to prevent fragmented architectures.

Azure Local disconnected operations enable organisations to run vital infrastructure using familiar Azure governance and policy controls completely offline. Execution, management, and policy enforcement stay entirely within customer-operated facilities. 

This approach allows companies to maintain uninterrupted operations and keep identities protected within their established boundaries. Implementations scale from minor deployments to demanding and data-intensive workloads.

Improving resilience and AI data governance in tandem

Deploying AI in sovereign environments introduces high compute requirements. Foundry Local enables enterprises to run multimodal large models completely offline.

Utilising modern hardware from partners like NVIDIA, customers deploy AI inferencing on their own physical servers. This ensures data and application programming interfaces operate strictly within customer-controlled boundaries. Customers maintain complete authority over their hardware even as AI inferencing demands increase over time.

Gerard Hoffmann, CEO of Proximus Luxembourg, said: “The availability of Azure Local disconnected operations represents a breakthrough for organisations that need control over their data without sacrificing the power of the Microsoft Cloud.

“For Luxembourg, where digital sovereignty is not just a principle but a strategic necessity, this model offers the resilience, autonomy and trust our market expects. By combining Microsoft’s technological leadership with Proximus NXT’s sovereign cloud expertise, we are enabling our customers to innovate confidently—even in fully-disconnected mode.”

CIOs planning offline deployments must map workloads to the correct control posture based on risk, regulation, and specific mission requirements. Since disconnected environments are not one-size-fits-all, businesses can start fast with smaller deployments and expand their capabilities over time.

Implementing a disconnected private cloud with AI support answers a business requirement for highly regulated sectors, enabling secure data governance even when external connectivity is absent.

See also: Deploying agentic finance AI for immediate business ROI


The post How disconnected clouds improve AI data governance appeared first on AI News.

Deploying agentic finance AI for immediate business ROI https://www.artificialintelligence-news.com/news/deploying-agentic-finance-ai-for-immediate-business-roi/ Tue, 24 Feb 2026 13:26:20 +0000
Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets.

A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not fully grasp what these agents look like in practice.

Advancing agentic finance AI beyond experiments

Finance departments need governed systems that combine language processing with business logic to deliver actual value.

Providers of Invoice Lifecycle Management platforms are introducing new agents designed to accelerate invoice processing and push accounts payable toward greater autonomy. Recent market solutions use generative AI, deep learning, and natural language processing to manage the entire workflow, from initial data ingestion through to final reconciliation.

These digital teammates handle task execution, allowing human employees to focus on higher-level business planning rather than replacing them entirely.

Within these ecosystems, specialised business agents provide contextual and real-time guidance regarding the next best actions for handling invoices. Data agents allow staff to query system information using natural language, easily finding answers about awaiting approvals in specific regions or identifying suppliers offering early payment discounts.

Governing autonomous finance workflows

Finance teams will only hand over tasks to agentic AI if they retain control. Finance departments require verifiable audit trails and explainable logic for every action, avoiding networks of disconnected bots.

Industry leaders note that autonomy without trust isn’t acceptable, especially in sensitive industries like finance. Platforms must ensure every AI decision is explainable, auditable, and governed through existing finance controls. This approach helps safely delegate workloads to algorithms while remaining fully compliant and protected.

To enable this trust, every action performed by an AI agent routes through a central policy engine. Before executing any task, the system passes the proposed action through specific autonomy gates that enforce the customer’s business rules, risk thresholds, and compliance requirements. This architecture ensures algorithms manage the bulk of the workload while finance personnel retain total visibility and a complete audit trail.
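A minimal sketch of such a central policy engine, assuming invented gate names, fields, and thresholds (the article does not describe any vendor's actual implementation):

```python
from dataclasses import dataclass, field


@dataclass
class ProposedAction:
    kind: str            # e.g. "pay_invoice" (illustrative action type)
    amount: float
    supplier_risk: str   # "low" | "medium" | "high"


@dataclass
class PolicyEngine:
    """Central policy engine: every agent action passes through autonomy
    gates before execution, and every decision is appended to an audit trail."""
    auto_pay_limit: float = 10_000.0  # assumed business rule, not a real default
    audit_trail: list = field(default_factory=list)

    def route(self, action: ProposedAction) -> str:
        # Autonomy gates enforcing business rules, risk thresholds, compliance.
        gates = [
            ("amount_gate", action.amount <= self.auto_pay_limit),
            ("risk_gate", action.supplier_risk != "high"),
        ]
        decision = "auto_execute" if all(ok for _, ok in gates) else "escalate_to_human"
        # Record which gates passed: the verifiable audit trail finance teams need.
        self.audit_trail.append({"action": action.kind, "gates": gates, "decision": decision})
        return decision
```

Each routing decision records which gates passed and which failed, so algorithms can handle the bulk of the workload while finance personnel retain a complete, explainable audit trail.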

Building automated procurement operations

Future agentic finance AI capabilities will automate issue resolution and connect data across systems for faster decision-making.

Capabilities planned for 2026 include supplier agents designed to manage invoice disputes and payment queries. These agents will automatically telephone suppliers to explain discrepancies, summarise the conversation, and outline subsequent steps to achieve faster resolutions. Professional agents, meanwhile, will assist clerks in resolving real-time processing questions using natural language to cut manual effort and delays.

AI must operate as an integral business component rather than a bonus feature, requiring intelligent, secure, and ethical application to drive cost efficiencies and enhance operations. By centralising control and ensuring every automated decision from agentic AI passes through established compliance checks, organisations can safely elevate their finance operations to fully autonomous execution.

See also: Mastercard’s AI payment demo points to agent-led commerce


The post Deploying agentic finance AI for immediate business ROI appeared first on AI News.

AI: Executives’ optimism about the future https://www.artificialintelligence-news.com/news/ai-impact-executives-optimism-for-the-future/ Fri, 20 Feb 2026 10:56:24 +0000
The most rigorous international study of firm-level AI impact to date has landed, and its headline finding is more constructive than many expected. Across nearly 6,000 verified executives in four countries, AI has delivered modest aggregate shifts in productivity or employment over the past three years. The measured impact reflects the early phases of deployment rather than a failure of the technology.

The working paper [PDF], published by the National Bureau of Economic Research and produced by teams from the Federal Reserve Bank of Atlanta, the Bank of England, the Deutsche Bundesbank and Macquarie University, found that over 90% of firms report no measurable change in headcount attributable to AI over the past three years. Given the short time horizon and the concentration of AI use in discrete functions, such incremental rather than transformative effects are consistent with how general-purpose technologies have evolved historically.

Adoption of AI is widespread. Around 69% of firms are already using some form of AI, led by LLM-based text generation at 41%, visual content creation at 29%, and data processing via machine learning at 28%. In the UK, firm-level adoption rose from 61% to 71% across 2025. AI tools are embedded in day-to-day workflows, and although measured impact at firm level often lags adoption, the trend is generally upwards.

The forward AI impact numbers indicate acceleration

Executives expect stronger effects to take place over the next three years. On average, they expect a 1.4% increase in productivity and a 0.8% rise in output. US executives project a 2.25% productivity gain, while UK firms expect 1.86%. In economies that have struggled with weak productivity growth for over a decade, gains of that magnitude are notable – incremental improvements, compounded across sectors, shift national outputs.
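To see how gains of that magnitude compound, a quick illustrative calculation: if a 2.25% gain over one three-year period were to recur over successive periods, the cumulative effect grows geometrically. The helper below is back-of-envelope arithmetic, not a projection from the study.

```python
def cumulative_gain(period_gain_pct: float, periods: int) -> float:
    """Total percentage gain if a per-period gain recurs and compounds."""
    return ((1 + period_gain_pct / 100) ** periods - 1) * 100

# Four consecutive three-year periods at the US executives' expected 2.25%
# works out to roughly a 9.3% cumulative productivity gain over twelve years.
```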

On the thorny subject of employment, executives expect a modest 0.7% reduction in headcount across the four countries over the same period. In the UK, around two-thirds of this adjustment is expected to come through slower hiring rather than outright redundancies. That pattern suggests a gradual reallocation of roles rather than abrupt terminations. As with previous waves of automation, aggregate figures do not capture job creation in adjacent roles, and in the case of AI, these might include roles around data governance, model oversight, prompt engineering, and AI-enabled service development, many of which would be new roles.

Interpreting the expectation gap

The study also compares executive expectations with those of workers. Researchers fielded parallel questions to US employees through the Survey of Working Arrangements and Attitudes. Employees expect AI to increase employment at their firms by 0.5% over the next three years, while US executives expect a 1.2% reduction. Employees foresee productivity gains of 0.92%, below the executive forecast of 2.25%.

This divergence reflects different vantage points. Executives observe cost structures and competitive pressure, while employees experience task-level augmentation and new capabilities. In practice, AI systems are often deployed to assist rather than replace, particularly in knowledge-intensive work. Evidence from controlled trials, including large language model use in customer support and professional services, shows productivity gains concentrated among less experienced staff, with quality improvements appearing alongside better output figures. Where communication and training are clear, adoption tends to proceed with limited resistance.

Why this AI impact data merits attention

Survey design shapes the inferences that can be drawn from any set of statistics, and in this case the researchers noted variation between their own figures and those from, for example, a McKinsey survey taken in the same period that put adoption at 88% of organisations (the survey in question here pegs the figure at 69%). On the other hand, the US Census Business Trends and Outlook Survey, which draws on a broader respondent base, estimated AI use at around 9% in early 2024, rising to 18% by December 2025. This gap reflects differences in sampling, question framing and respondent seniority. Executive surveys tend to capture intent and enterprise-level deployments, while broader business surveys may reflect narrower definitions of AI or earlier stages of implementation.

In the study in question, respondents were phone-verified, unpaid, and predominantly CEOs and CFOs, with over 90% drawn from the UK and Germany. The data was cross-checked against ten years of macro output and employment figures from national statistics agencies.

The inflection point executives anticipate may unfold over the next three years as deployments mature and integration improves, much as earlier technologies entered the workplace gradually before becoming everyday tools. The central question is less whether AI will affect productivity and employment, and more how quickly organisations can convert the technology’s wider adoption into measurable economic gains.

See also: OpenAI’s enterprise push: The hidden story behind AI’s sales race


The post AI: Executives’ optimism about the future appeared first on AI News.

How financial institutions are embedding AI decision-making https://www.artificialintelligence-news.com/news/how-financial-institutions-embedding-ai-decision-making/ Wed, 18 Feb 2026 15:02:14 +0000
For leaders in the financial sector, the experimental phase of generative AI has concluded and the focus for 2026 is operational integration.

While early adoption centred on content generation and efficiency in isolated workflows, the current requirement is to industrialise these capabilities. The objective is to create systems where AI agents do not merely assist human operators, but actively run processes within strict governance frameworks.

This transition presents specific architectural and cultural challenges. It requires a move from disparate tools to joined-up systems that manage data signals, decision logic, and execution layers simultaneously.

Financial institutions integrate agentic AI workflows

The primary bottleneck in scaling AI within financial services is no longer the availability of models or creative application; it is coordination. Marketing and customer experience teams often struggle to convert decisions into action due to friction between legacy systems, compliance approvals, and data silos.

Saachin Bhatt, Co-Founder and COO at Brdge, notes the distinction between current tools and future requirements: “An assistant helps you write faster. A copilot helps teams move faster. Agents run processes.”

For enterprise architects, this means building what Bhatt terms a ‘Moments Engine’. This operating model functions through five distinct stages:

  • Signals: Detecting real-time events in the customer journey.
  • Decisions: Determining the appropriate algorithmic response.
  • Message: Generating communication aligned with brand parameters.
  • Routing: Automated triage to determine if human approval is required.
  • Action and learning: Deployment and feedback loop integration.
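The five stages can be sketched as a single pipeline. Everything below is illustrative: the event shapes, the toy decision rule, and the 0.5 risk cut-off are assumptions, since Bhatt describes the model only at the level of the stage names.

```python
def moments_engine(event, brand_tone="formal", requires_approval=lambda d: d["risk"] > 0.5):
    """Minimal sketch of the five-stage 'Moments Engine' loop described above."""
    # 1. Signals: detect a real-time event in the customer journey.
    signal = {"customer": event["customer"], "trigger": event["type"]}

    # 2. Decisions: determine the algorithmic response (toy rule here).
    decision = {
        "offer": "savings_review" if signal["trigger"] == "salary_increase" else "none",
        "risk": 0.2 if signal["trigger"] == "salary_increase" else 0.9,
    }

    # 3. Message: generate communication aligned with brand parameters.
    message = f"[{brand_tone}] Hi {signal['customer']}, we suggest a {decision['offer']}."

    # 4. Routing: automated triage to decide whether human approval is required.
    route = "human_review" if requires_approval(decision) else "auto_send"

    # 5. Action and learning: deploy and return the record for the feedback loop.
    return {"message": message, "route": route, "decision": decision}
```

With the toy rule, a "complaint" trigger is assigned high risk and routes to human review, while a routine positive signal is sent automatically; the integration challenge is making these five hand-offs happen with minimal latency across real systems.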

Most organisations possess components of this architecture but lack the integration to make it function as a unified system. The technical goal is to reduce the friction that slows down customer interactions. This involves creating pipelines where data flows seamlessly from signal detection to execution, minimising latency while maintaining security.

Governance as infrastructure

In high-stakes environments like banking and insurance, speed cannot come at the cost of control. Trust remains the primary commercial asset. Consequently, governance must be treated as a technical feature rather than a bureaucratic hurdle.

The integration of AI into financial decision-making requires “guardrails” that are hard-coded into the system. This ensures that while AI agents can execute tasks autonomously, they operate within pre-defined risk parameters.

Farhad Divecha, Group CEO at Accuracast, suggests that creative optimisation must become a continuous loop where data-led insights feed innovation. However, this loop requires rigorous quality assurance workflows to ensure output never compromises brand integrity.

For technical teams, this implies a shift in how compliance is handled. Rather than a final check, regulatory requirements must be embedded into the prompt engineering and model fine-tuning stages.

“Legitimate interest is interesting, but it’s also where a lot of companies could trip up,” observes Jonathan Bowyer, former Marketing Director at Lloyds Banking Group. He argues that regulations like Consumer Duty help by forcing an outcome-based approach.

Technical leaders must work with risk teams to ensure AI-driven activity attests to brand values. This includes transparency protocols. Customers should know when they are interacting with an AI, and systems must provide a clear escalation path to human operators.

Data architecture for restraint

A common failure mode in personalisation engines is over-engagement. The technical capability to message a customer exists, but the logic to determine restraint is often missing. Effective personalisation relies on anticipation: knowing when to remain silent is as important as knowing when to speak.

Jonathan Bowyer points out that personalisation has moved to anticipation. “Customers now expect brands to know when not to speak to them as opposed to when to speak to them.”

This requires a data architecture capable of cross-referencing customer context across multiple channels – including branches, apps, and contact centres – in real-time. If a customer is in financial distress, a marketing algorithm pushing a loan product creates a disconnect that erodes trust. The system must be capable of detecting negative signals and suppressing standard promotional workflows.
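In code, the suppression logic reduces to a simple check run before any promotional send. The signal names here are invented for illustration; a real deployment would draw them from unified, cross-channel data stores.

```python
def should_send_promotion(customer_context: dict) -> bool:
    """Suppress promotional sends when any negative signal is present.

    `customer_context` aggregates signals from every channel (branch, app,
    contact centre); the signal names are illustrative assumptions.
    """
    negative_signals = {"missed_payment", "hardship_flag", "open_complaint"}
    return not (negative_signals & set(customer_context.get("signals", [])))
```

A customer flagged with `missed_payment` is never offered a loan promotion, regardless of what the standard marketing workflow would otherwise schedule.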

“The thing that kills trust is when you go to one channel and then move to another and have to answer the same questions all over again,” says Bowyer. Solving this requires unifying data stores so that the “memory” of the institution is accessible to every agent (whether digital or human) at the point of interaction.

The rise of generative search and SEO

In the age of AI, the discovery layer for financial products is changing. Traditional search engine optimisation (SEO) focused on driving traffic to owned properties. The emergence of AI-generated answers means that brand visibility now occurs off-site, within the interface of an LLM or AI search tool.

“Digital PR and off-site SEO is returning to focus because generative AI answers are not confined to content pulled directly from a company’s website,” notes Divecha.

For CIOs and CDOs, this changes how information is structured and published. Technical SEO must evolve to ensure that the data fed into large language models is accurate and compliant. 

Organisations that can confidently distribute high-quality information across the wider ecosystem gain reach without sacrificing control. This area, often termed ‘Generative Engine Optimisation’ (GEO), requires a technical strategy to ensure the brand is recommended and cited correctly by third-party AI agents.

Structured agility

There is a misconception that agility equates to a lack of structure. In regulated industries, the opposite is true.

Agile methodologies require strict frameworks to function safely. Ingrid Sierra, Brand and Marketing Director at Zego, explains: “There’s often confusion between agility and chaos. Calling something ‘agile’ doesn’t make it okay for everything to be improvised and unstructured.”

For technical leadership, this means systemising predictable work to create capacity for experimentation. It involves creating safe sandboxes where teams can test new AI agents or data models without risking production stability.

Agility starts with mindset, requiring staff who are willing to experiment. However, this experimentation must be deliberate. It requires collaboration between technical, marketing, and legal teams from the outset.

This “compliance-by-design” approach allows for faster iteration because the parameters of safety are established before the code is written.

What’s next for AI in the financial sector?

Looking further ahead, the financial ecosystem will likely see direct interaction between AI agents acting on behalf of consumers and agents acting for institutions.

Melanie Lazarus, Ecosystem Engagement Director at Open Banking, warns: “We are entering a world where AI agents interact with each other, and that changes the foundations of consent, authentication, and authorisation.”

Tech leaders must begin architecting frameworks that protect customers in this agent-to-agent reality. This involves new protocols for identity verification and API security to ensure that an automated financial advisor acting for a client can securely interact with a bank’s infrastructure.

The mandate for 2026 is to turn the potential of AI into a reliable P&L driver. This requires a focus on infrastructure over hype and leaders must prioritise:

  • Unifying data streams: Ensure signals from all channels feed into a central decision engine to enable context-aware actions.
  • Hard-coding governance: Embed compliance rules into the AI workflow to allow for safe automation.
  • Agentic orchestration: Move beyond chatbots to agents that can execute end-to-end processes.
  • Generative optimisation: Structure public data to be readable and prioritised by external AI search engines.

Success will depend on how well these technical elements are integrated with human oversight. The winning organisations will be those that use AI automation to enhance, rather than replace, the judgment that is especially required in sectors like financial services.

A handbook from Accuracast for CMOs is available here (registration required).

See also: Goldman Sachs deploys Anthropic systems with success


The post How financial institutions are embedding AI decision-making appeared first on AI News.

Alibaba Qwen is challenging proprietary AI model economics https://www.artificialintelligence-news.com/news/alibaba-qwen-challenging-proprietary-ai-model-economics/ Tue, 17 Feb 2026 13:45:59 +0000
The release of Alibaba’s latest Qwen model challenges proprietary AI model economics with comparable performance on commodity hardware.

While US-based labs have historically held the performance advantage, open-source alternatives like the Qwen 3.5 series are closing the gap with frontier models. This offers enterprises a potential reduction in inference costs and increased flexibility in deployment architecture.

The central narrative of the Qwen 3.5 release is this technical alignment with leading proprietary systems. Alibaba is explicitly targeting benchmarks established by high-performance US models, including GPT-5.2 and Claude 4.5. This positioning indicates an intent to compete directly on output quality rather than just price or accessibility.

Technology expert Anton P. states that the model is “trading blows with Claude Opus 4.5 and GPT-5.2 across the board.” He adds that the model “beats frontier models on browsing, reasoning, instruction following.”

Alibaba Qwen’s performance convergence with closed models

For enterprises, this performance parity suggests that open-weight models are no longer solely for low-stakes or experimental use cases. They are becoming viable candidates for core business logic and complex reasoning tasks.

The flagship Alibaba Qwen model contains 397 billion parameters but utilises a more efficient architecture with only 17 billion active parameters. This sparse activation method, often associated with Mixture-of-Experts (MoE) architectures, allows for high performance without the computational penalty of activating every parameter for every token.

This architectural choice results in speed improvements. Shreyasee Majumder, a Social Media Analyst at GlobalData, highlights a “massive improvement in decoding speed, which is up to nineteen times faster than the previous flagship version.”

Faster decoding translates directly into lower latency for user-facing applications and reduced compute time for batch processing.

The release operates under an Apache 2.0 license. This licensing model allows enterprises to run the model on their own infrastructure, mitigating data privacy risks associated with sending sensitive information to external APIs.

The hardware requirements for Qwen 3.5 are relatively accessible compared to previous generations of large models. The efficient architecture allows developers to run the model on personal hardware, such as Mac Ultras.

David Hendrickson, CEO at GenerAIte Solutions, observes that the model is available on OpenRouter for “$3.6/1M tokens,” a price he calls “a steal.”
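At the quoted rate, inference budgets are easy to estimate (a back-of-envelope sketch; the daily token volume is a hypothetical workload, not a figure from the article):

```python
# Back-of-envelope inference cost at the quoted OpenRouter rate.
RATE_USD_PER_MILLION = 3.6  # USD per 1M tokens, as quoted above

def monthly_cost(tokens_per_day: int, days: int = 30) -> float:
    """Monthly spend in USD for a given daily token volume."""
    return tokens_per_day * days / 1_000_000 * RATE_USD_PER_MILLION

# e.g. a hypothetical workload of 5M tokens a day
print(f"${monthly_cost(5_000_000):,.2f} per month")
```

At 5M tokens a day, the bill lands in the hundreds of dollars a month, which is the order-of-magnitude argument behind the “steal” remark.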

Alibaba’s Qwen 3.5 series introduces native multimodal capabilities. This allows the model to process and reason across different data types without relying on separate, bolted-on modules. Majumder points to the “ability to navigate applications autonomously through visual agentic capabilities.”

Qwen 3.5 also supports a context window of one million tokens in its hosted version. Large context windows enable the processing of extensive documents, codebases, or financial records in a single prompt.
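For a rough sense of that scale (using the common rule-of-thumb ratio of about 0.75 words per token, which is an approximation rather than a property of this model):

```python
# Rough capacity of a one-million-token context window.
# 0.75 words per token is a common rule of thumb; page length is hypothetical.

WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500  # a dense page of prose

def pages_in_context(context_tokens: int) -> int:
    """Approximate number of pages that fit in one prompt."""
    return int(context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(pages_in_context(1_000_000))  # → 1500
```

Roughly 1,500 dense pages in a single prompt explains why whole codebases and annual reports become single-shot inputs.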

The model also includes native support for 201 languages. This broad linguistic coverage helps multinational enterprises deploy consistent AI solutions across diverse regional markets.

Considerations for implementation

While the technical specifications are promising, integration requires due diligence. TP Huang notes that he has “found larger Qwen models to not be all that great” in the past, though Alibaba’s new release looks “reasonably better.”

Anton P. provides a necessary caution for enterprise adopters: “Benchmarks are benchmarks. The real test is production.”

Leaders must also consider the geopolitical origin of the technology. As the model comes from Alibaba, governance teams will need to assess compliance requirements regarding software supply chains. However, the open-weight nature of the release allows for code inspection and local hosting, which mitigates some data sovereignty concerns compared to closed APIs.

Alibaba’s release of Qwen 3.5 forces a decision point. Anton P. asserts that open-weight models “went from ‘catching up’ to ‘leading’ faster than anyone predicted.”

For the enterprise, the decision is whether to continue paying premiums for proprietary US-hosted models or to invest in the engineering resources required to leverage capable yet lower-cost open-source alternatives.

See also: Alibaba enters physical AI race with open-source robot model RynnBrain


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


]]>
Agentic AI drives finance ROI in accounts payable automation https://www.artificialintelligence-news.com/news/agentic-ai-drives-finance-roi-in-accounts-payable-automation/ Fri, 13 Feb 2026 12:33:33 +0000 https://www.artificialintelligence-news.com/?p=112215 Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows. While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in […]

The post Agentic AI drives finance ROI in accounts payable automation appeared first on AI News.

]]>
Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows.

While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.

Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.

Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems.

These experiments often fail to pay off. Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow.

Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”

Accounts payable as the proving ground for agentic AI in finance

Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.

Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm functions with high autonomy when parameters are correct.
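The duplicate-detection task in particular shows why AP suits rules-based automation. A minimal sketch might key each invoice on supplier, number, and amount (an illustration only, not Basware's implementation; real AP systems use fuzzier matching):

```python
# Minimal duplicate-invoice check: flag any invoice whose
# (supplier, invoice number, amount) key has been seen before.
# Field names and sample data are hypothetical.

def find_duplicates(invoices: list[dict]) -> list[dict]:
    """Return invoices that repeat an already-seen key."""
    seen: set[tuple] = set()
    dupes = []
    for inv in invoices:
        key = (inv["supplier"], inv["number"], inv["amount"])
        if key in seen:
            dupes.append(inv)
        else:
            seen.add(key)
    return dupes

batch = [
    {"supplier": "Acme", "number": "INV-001", "amount": 1200.00},
    {"supplier": "Acme", "number": "INV-002", "amount": 450.00},
    {"supplier": "Acme", "number": "INV-001", "amount": 1200.00},  # resubmitted
]
print(find_duplicates(batch))  # flags the resubmitted INV-001
```

An agent layered on top of a rule like this adds the judgment call the article describes: deciding whether a repeat key is a legitimate anomaly or an error.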

Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight.

Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.

The build versus buy decision matrix

Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.

Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build them in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.

This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.

Governance as an enabler of speed

Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.

Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) compared to their less confident peers (6%).

Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues. The system requires trust but should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, ensuring a human remains in the loop to maintain responsibility.
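That “junior colleague” framing maps naturally onto graduated autonomy: the agent acts alone below a threshold and escalates above it. A minimal sketch, with an invented limit and data shape:

```python
# Graduated autonomy: auto-approve small, compliant payments and
# escalate everything else to a human. The threshold and field
# names here are hypothetical illustrations.

APPROVAL_LIMIT = 5_000.00  # above this, a human must sign off

def decide(payment: dict) -> str:
    """Route a payment to autonomous approval or human review."""
    if payment["amount"] <= APPROVAL_LIMIT and payment["compliance_ok"]:
        return "auto-approve"
    return "escalate-to-human"

print(decide({"amount": 1_200.00, "compliance_ok": True}))   # → auto-approve
print(decide({"amount": 48_000.00, "compliance_ok": True}))  # → escalate-to-human
```

Raising the limit as the agent proves itself is the controlled way to "introduce autonomy slowly" while a human remains in the loop.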

These digital workers also raise displacement concerns. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminate it.

Automating manual tasks such as information extraction from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.

Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.

Executives must move beyond unguided experimentation to replicate the success of early adopters. Data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI.

Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.

See also: AI deployment in financial services hits an inflection point as Singapore leads the shift to production





]]>
AI Expo 2026 Day 2: Moving experimental pilots to AI production https://www.artificialintelligence-news.com/news/ai-expo-2026-day-2-moving-experimental-pilots-ai-production/ Thu, 05 Feb 2026 16:08:36 +0000 https://www.artificialintelligence-news.com/?p=112021 The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more […]

The post AI Expo 2026 Day 2: Moving experimental pilots to AI production appeared first on AI News.

]]>

The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition.

Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more on the infrastructure needed to run them: data lineage, observability, and compliance.

Data maturity determines deployment success

AI reliability depends on data quality. DP Indetkar from Northern Trust warned against allowing AI to become a “B-movie robot.” This scenario occurs when algorithms fail because of poor inputs. Indetkar noted that analytics maturity must come before AI adoption. Automated decision-making amplifies errors rather than reducing them if the data strategy is unverified.

Eric Bobek of Just Eat supported this view. He explained how data and machine learning guide decisions at the global enterprise level. Investments in AI layers are wasted if the data foundation remains fragmented.

Mohsen Ghasempour from Kingfisher also noted the need to turn raw data into real-time actionable intelligence. Retail and logistics firms must cut the latency between data collection and insight generation to see a return.

Scaling in regulated environments

The finance, healthcare, and legal sectors have near-zero tolerance for error. Pascal Hetzscholdt from Wiley addressed these sectors directly.

Hetzscholdt stated that responsible AI in science, finance, and law relies on accuracy, attribution, and integrity. Enterprise systems in these fields need audit trails. Reputational damage or regulatory fines make “black box” implementations impossible.

Konstantina Kapetanidi of Visa outlined the difficulties in building multilingual, tool-using, scalable generative AI applications. Models are becoming active agents that execute tasks rather than just generating text. Allowing a model to use tools – like querying a database – creates security vectors that need serious testing.
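A common mitigation is to gate every tool call behind an explicit allowlist with per-tool parameter checks before anything executes (a generic sketch; the tool names and rules are hypothetical, not Visa's system):

```python
# Gate agent tool calls behind an allowlist with per-tool parameter
# validation. Tool names and checks below are hypothetical examples.

ALLOWED_TOOLS = {
    "query_db": lambda p: p.get("readonly") is True,        # reads only
    "fetch_doc": lambda p: str(p.get("doc_id", "")).isdigit(),
}

def authorise(tool: str, params: dict) -> bool:
    """Allow a call only if the tool is listed and its params pass."""
    check = ALLOWED_TOOLS.get(tool)
    return bool(check and check(params))

print(authorise("query_db", {"readonly": True}))    # → True
print(authorise("query_db", {"readonly": False}))   # → False (write attempt)
print(authorise("drop_table", {}))                  # → False (not allowlisted)
```

Denying by default and validating parameters is what turns "the model can query a database" from an open security vector into a testable surface.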

Parinita Kothari from Lloyds Banking Group detailed the requirements for deploying, scaling, monitoring, and maintaining AI systems. Kothari challenged the “deploy-and-forget” mentality. AI models need continuous oversight, similar to traditional software infrastructure.

The change in developer workflows

Of course, AI is fundamentally changing how code is written. A panel with speakers from Valae, Charles River Labs, and Knight Frank examined how AI copilots reshape software creation. While these tools speed up code generation, they also force developers to focus more on review and architecture.

This change requires new skills. A panel with representatives from Microsoft, Lloyds, and Mastercard discussed the tools and mindsets needed for future AI developers. A gap exists between current workforce capabilities and the needs of an AI-augmented environment. Executives must plan training programmes that ensure developers sufficiently validate AI-generated code.

Dr Gurpinder Dhillon from Senzing and Alexis Ego from Retool presented low-code and no-code strategies. Ego described using AI with low-code platforms to make production-ready internal apps. This method aims to cut the backlog of internal tooling requests.

Dhillon argued that these strategies speed up development without dropping quality. For the C-suite, this suggests cheaper internal software delivery if governance protocols stay in place.

Workforce capability and specific utility

The broader workforce is starting to work with “digital colleagues.” Austin Braham from EverWorker explained how agents reshape workforce models. This terminology implies a move from passive software to active participants. Business leaders must re-evaluate human-machine interaction protocols.

Paul Airey from Anthony Nolan gave an example of AI delivering literally life-changing value. He detailed how automation improves donor matching and transplant timelines for stem cell transplants. The utility of these technologies extends to life-saving logistics.

A recurring theme throughout the event is that effective applications often solve very specific and high-friction problems rather than attempting to be general-purpose solutions.

Managing the transition

The day two sessions from the co-located events show that enterprise focus has now moved to integration. The initial novelty is gone and has been replaced by demands for uptime, security, and compliance. Innovation heads should assess which projects have the data infrastructure to survive contact with the real world.

Organisations must prioritise the basic aspects of AI: cleaning data warehouses, establishing legal guardrails, and training staff to supervise automated agents. The difference between a successful deployment and a stalled pilot lies in these details.

Executives, for their part, should direct resources toward data engineering and governance frameworks. Without them, advanced models will fail to deliver value.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise




]]>