Inside AI - AI News
https://www.artificialintelligence-news.com/categories/inside-ai/
Fri, 06 Mar 2026 13:54:40 +0000

Scaling intelligent automation without breaking live workflows
https://www.artificialintelligence-news.com/news/scaling-intelligent-automation-without-breaking-live-workflows/
Fri, 06 Mar 2026 13:15:41 +0000

The post Scaling intelligent automation without breaking live workflows appeared first on AI News.

Scaling intelligent automation without disruption demands a focus on architectural elasticity, not just deploying more bots.

At the Intelligent Automation Conference, industry leaders gathered to dissect why many automation initiatives stall after pilot phases. Speaking alongside representatives from NatWest Group, Air Liquide, and AXA XL, Promise Akwaowo, Process Automation Analyst at Royal Mail, grounded the dialogue in practical delivery and risk management.

The elasticity imperative for scaling intelligent automation

Expansion initiatives often fail because teams equate success with the raw number of deployed bots rather than the underlying architecture’s elasticity. Infrastructure must handle volume and variability predictably.

When demand spikes during end-of-quarter financial reporting or sudden supply chain disruptions, the system cannot degrade or collapse. Without built-in elasticity, companies risk building brittle architectures that break under operational stress.

(Photo: Promise Akwaowo, Process Automation Analyst at Royal Mail)

Akwaowo explained that an automated architecture must remain stable without excessive manual intervention. “If your automation engine requires constant sizing, provisioning, and babysitting, you haven’t built a scalable platform; you’ve built a fragile service,” he advised the audience.

Whether integrating CRM ecosystems like Salesforce or orchestrating low-code vendor platforms, the objective remains building a platform capability rather than a loose collection of scripts.

Transitioning from controlled proofs-of-concept to live production environments introduces inherent risk. Large-scale, immediate deployments frequently cause disruption, undermining the anticipated efficiency gains. To protect core operations, deployment must happen in controlled stages. Akwaowo warned that “progress must be gradual, deliberate, and supported at each stage.”

A disciplined approach starts with formalising intent through a statement of work and validating assumptions under real conditions.

Before scaling intelligent automation, engineering teams must thoroughly understand system behaviour, potential failure modes, and recovery paths. For example, a financial institution implementing machine learning for transaction processing might cut manual review times by 40 percent, but it must ensure error traceability before applying the model at higher volumes.
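The staged approach described here can be pictured as a simple promotion gate: an automation only graduates to a larger share of live volume when its observed error rate stays acceptable, and falls back when it does not. The stages and threshold below are illustrative assumptions, not any firm's actual rollout policy.

```python
# Illustrative sketch of a phased automation rollout gate.
# STAGES and MAX_ERROR_RATE are assumed values, not a real deployment policy.

STAGES = [0.05, 0.25, 0.50, 1.00]  # fraction of live volume handled by the bot
MAX_ERROR_RATE = 0.01              # promotion requires <1% errors at the current stage

def next_stage(current_stage: float, errors: int, processed: int) -> float:
    """Promote to the next traffic share only if the error rate is acceptable."""
    if processed == 0:
        return current_stage  # no evidence yet; hold the current stage
    error_rate = errors / processed
    if error_rate > MAX_ERROR_RATE:
        # Fail back to the smallest stage so humans absorb the load again.
        return STAGES[0]
    idx = STAGES.index(current_stage)
    return STAGES[min(idx + 1, len(STAGES) - 1)]
```

With this gate, a bot handling 5 percent of traffic with 2 errors in 1,000 items (0.2 percent) is promoted to 25 percent, while one exceeding the threshold at any stage drops back to the smallest share rather than degrading live operations.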

This phased methodology protects live operations while enabling sustainable growth. Additionally, teams must fully grasp process ownership and variability before applying technology, avoiding the trap of merely automating existing inefficiencies. Fragmented workflows and unmanaged exceptions upstream often doom projects long before the software goes live.

A persistent misconception within automation programmes suggests that governance frameworks impede delivery speed. However, bypassing architectural standards allows hidden risks to accumulate, eventually stalling momentum. In regulated, high-volume environments, governance provides the foundation for safely scaling intelligent automation. It establishes the trust, repeatability, and confidence necessary for company-wide adoption.

Implementing a dedicated centre of excellence helps standardise these deployments. Operating a central Rapid Automation and Design function ensures every project is assessed and aligned before it reaches the production environment. Such structures guarantee that solutions remain operationally sustainable over time. Analysts also rely on standards like BPMN 2.0 to separate the business intent from the technical execution, ensuring traceability and consistency across the entire organisation.

Adapting to agentic AI inside ERP ecosystems

As large ERP providers rapidly integrate agentic AI, smaller vendors and their customers face pressure to adapt. Embedding intelligent agents directly into smaller ERP ecosystems offers a path forward, augmenting human workers by simplifying customer management and decision support. This approach to scaling intelligent automation allows businesses to drive value for existing clients instead of competing solely on infrastructure size.

Integrating agents into finance and operational workflows enhances human roles rather than replacing accountability. Agents can manage repetitive tasks such as email extraction, categorisation, and response generation.

Relieved of administrative burdens, finance professionals can dedicate their time to analysis and commercial judgement. Even when AI models generate financial forecasts, the final authority over decisions rests firmly with human operators.

Building a resilient capability demands patience and a commitment to long-term value over rapid deployment. Business leaders must ensure their designs prioritise observability, allowing engineers to intervene without disrupting active processes.

Before scaling any intelligent automation initiative, decision-makers should evaluate their readiness for the inevitable anomalies. As Akwaowo challenged the audience: “If your automation fails, can you clearly identify where the error occurred, why it happened, and fix it with confidence?”

See also: JPMorgan expands AI investment as tech spending nears $20B


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The firm that never forgets: Rowspace launches with $50M to make AI for private equity actually work
https://www.artificialintelligence-news.com/news/rowspace-50m-ai-private-equity-sequoia-emergence/
Fri, 06 Mar 2026 10:00:00 +0000

The post The firm that never forgets: Rowspace launches with $50M to make AI for private equity actually work appeared first on AI News.

Private equity runs on judgment–and judgment, it turns out, is extraordinarily hard to scale. Decades of deal memos, underwriting models, partner notes, and portfolio data are scattered across systems that were never designed to communicate with each other.

Every time a new deal crosses a firm’s desk, analysts start from scratch, even when the answers to their most pressing questions are buried somewhere in the firm’s own history. 

That is the problem Rowspace was built to solve, and it’s why the San Francisco startup is emerging from stealth with US$50 million in funding and a bold pitch: AI for private equity that doesn’t just assist decision-making, but actually learns how a firm thinks.

The company launched publicly with a seed round led by Sequoia and a Series A co-led by Sequoia and Emergence Capital, with participation from Stripe, Conviction, Basis Set, Twine, and a group of finance-focused angel investors. 

Early customers–unnamed, but described as name-brand private equity and credit firms managing hundreds of billions to nearly a trillion dollars in assets–are already living on the platform, with around ten top firms paying seven-figure annual contract values.

Two MIT graduates, one stubborn problem

Rowspace was founded by Michael Manapat and Yibo Ling, who met as graduate students at MIT before diverging into very different careers. Manapat went on to build the machine learning systems at Stripe that process billions of transactions, then helped drive Notion’s expansion into AI as its CTO. 

Ling took the finance route–a two-time CFO who led finance teams at Uber and Binance, and spent years making investment decisions by manually synthesising data across fragmented systems. When ChatGPT launched in late 2022, Ling tested it on due diligence tasks and ran straight into the same wall. 

“Clearly there was a lot of promise, but it just wasn’t working,” he told Fortune. “You need the right information in the right context.” That gap – between AI’s potential and the messy, proprietary, institution-specific data reality of finance – became the founding thesis.

Ling, Co-founder and COO, put it plainly: “Most tech tools aren’t comprehensive or nuanced enough for finance. And most finance tools need to raise their technical ceiling. We intend to do both.”

What AI for private equity actually looks like

Rowspace’s platform connects structured and unstructured data across a firm’s entire history–document repositories, investment and accounting systems, old PowerPoints, deal memos–and applies what Manapat calls a finance-native lens: one that reflects how a firm actually reconciles information, interprets discrepancies, and makes decisions. Crucially, it processes all of this inside a client’s own cloud environment. The firm’s data never leaves its control.

The result is accessible through Rowspace’s own interface, within tools like Excel and Microsoft Teams, or directly into a firm’s existing data infrastructure. A first-year analyst reviewing a new deal can surface decades of prior decisions, comparable transactions, and internal underwriting patterns without picking up the phone or hunting through shared drives.

“Finance is full of high-stakes decisions. There used to be a tradeoff between moving quickly and making fully informed, nuanced decisions using all the possible data at a firm’s disposal. Our AI platform eliminates that tradeoff,” said Michael Manapat, Co-founder and CEO of Rowspace. “We’re building specialised intelligence that turns a firm’s data into scalable judgment with the rigour finance demands.”

The ambition is captured in a line Manapat uses internally: “Imagine a firm that never forgets. Where an experienced investor’s workflows–touching many different tools in specific ways–can be codified and multiplied. When that’s possible, a first-year analyst can tap into decades of institutional knowledge, and judgment scales with a firm instead of being diluted.”

Why Sequoia and Emergence are betting on vertical AI

The investor conviction behind this raise is itself a signal worth reading. Alfred Lin, the Sequoia partner who led the investment, positioned Rowspace as a direct answer to the question of what AI applications will survive the rise of increasingly capable foundation models.

“Michael built the machine learning systems at Stripe that process billions of transactions and helped drive Notion’s expansion into AI. Yibo has been a finance leader and investor who’s wrestled with the exact challenges Rowspace is solving,” Lin said, adding that both Michael and Yibo have seen the problem from both sides, pairing technical depth with firsthand understanding of what customers actually need.

Jake Saper, General Partner at Emergence Capital, went further on the data infrastructure thesis: “They’re doing the previously impossible work of connecting proprietary data, and reconciling and reasoning over it with real rigour. Without this foundation, it doesn’t matter what other AI tools you’re using.”

The argument is a neat inversion of the fear gripping much of the software industry right now: that foundation models will eventually commoditise applications. Lin’s view is the opposite–that vertical AI systems built on deep, proprietary data layers are precisely where durable competitive advantage will compound. 

For AI for private equity specifically, where alpha is by definition firm-specific and non-replicable, that logic is particularly hard to argue with. The back office of investment management has quietly been one of the last frontiers general AI has struggled to crack. Rowspace just raised $50 million on the premise that it knows why–and what to do about it.

(Photo by Rowspace)

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot



JPMorgan expands AI investment as tech spending nears $20B
https://www.artificialintelligence-news.com/news/jpmorgan-expands-ai-investment/
Thu, 05 Mar 2026 10:00:00 +0000

The post JPMorgan expands AI investment as tech spending nears $20B appeared first on AI News.

Artificial intelligence is moving from pilot projects to core business systems inside large companies. One example comes from JPMorgan Chase, where rising AI investment is helping push the bank’s technology budget toward about US$19.8 billion in 2026.

The spending plan reflects a broader shift among large enterprises. AI is no longer treated as a small research project. Instead, companies are embedding it in areas such as risk analysis, fraud detection, and customer service.

For business leaders watching how AI adoption is changing enterprise technology strategies, the numbers from JPMorgan highlight a larger trend: AI is becoming part of the everyday systems that run major organisations.

JPMorgan’s technology budget and rising AI investment

Technology spending has been rising across the banking sector for years. JPMorgan’s budget stands out because of its scale.

Reports from Business Insider, citing company briefings and investor discussions, say the bank expects technology spending to reach roughly US$19.8 billion in 2026, continuing a steady increase in technology investment. The spending covers areas such as cloud infrastructure, cybersecurity, data systems, and AI tools.

Part of the increased budget includes about US$1.2 billion in additional technology investment, some of which will support AI-related work.

Large banks often treat technology spending as a long-term investment rather than a short-term cost. Many of these systems take years to build, especially when they depend on large data platforms and secure computing infrastructure.

As AI systems require reliable data pipelines and computing power, many companies are finding that AI adoption often leads to wider upgrades across their technology stack.

Machine learning already influencing results

Executives say AI is already affecting business performance inside the bank. During investor discussions, JPMorgan’s chief financial officer, Jeremy Barnum, said machine-learning analytics are contributing to revenue and operational improvements across parts of the company.

Reuters reporting on JPMorgan’s financial briefings noted that the bank is using data models and machine-learning systems to improve analysis and decision-making in several areas of the business.

These models can process large volumes of financial data and identify patterns that are difficult for humans to detect. In sectors such as banking, where firms manage enormous data flows every day, these improvements can affect outcomes across trading, lending, and customer operations.

Even small improvements in prediction models can influence financial performance when applied to millions of transactions or market signals.

Where AI appears inside the bank

Machine-learning tools now support a wide range of activities across JPMorgan.

In financial markets, models analyse trading data and help identify patterns in price movements. These insights can help traders evaluate risk or identify opportunities in fast-moving markets.

Lending is another area where AI systems play a role. Machine-learning models can review financial history, market trends, and customer information to help assess credit risk. These systems assist analysts by highlighting patterns in the data.

Fraud detection remains one of the most common uses of AI in banking. Payment networks process huge volumes of transactions every day, making it difficult to monitor activity manually. Machine-learning systems can scan transactions in near real time and flag unusual behaviour that may indicate fraud.
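As a toy illustration of the kind of near-real-time screening described above (not JPMorgan's actual models, which use far richer features), a scoring function can flag a transaction whose amount sits far outside a customer's historical spending distribution:

```python
import statistics

def flag_unusual(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is more than z_threshold standard
    deviations from the customer's historical mean. Real fraud models score
    many more signals; this shows only the shape of the check."""
    if len(history) < 2:
        return False  # not enough history to estimate a distribution
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold
```

Against a history of routine £20–£60 purchases, a sudden £5,000 payment lands hundreds of standard deviations out and is flagged for review, while a £40 payment passes silently.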

Some internal operations also rely on AI. Tools can review contracts, summarise research reports, or help employees search large internal data systems. Generative AI systems are beginning to assist with tasks such as drafting reports or preparing internal documentation.

These systems rarely appear directly to customers, but they support many decisions happening behind the scenes.

Why banks have adopted AI early

Financial institutions have several characteristics that make them well-suited to machine learning.

First, banks generate large structured datasets. Transaction histories, market records, and payment data provide rich information that machine-learning models can analyse.

Second, many banking activities depend on prediction. Credit scoring, fraud detection, and market analysis all require estimating outcomes based on past data.

Machine learning works well in environments where prediction plays a central role.

Third, improvements in model accuracy can produce measurable financial results. A model that slightly improves fraud detection or lending decisions may affect large volumes of transactions.

These factors explain why banks have invested heavily in data science and analytics long before the recent surge of interest in generative AI.

JPMorgan’s AI investment signals a broader enterprise shift

JPMorgan’s spending plans also reflect how AI investment is becoming part of wider enterprise technology budgets.

In many organisations, AI systems rely on modern data platforms, secure cloud environments, and large computing resources. As companies build these foundations, AI becomes easier to deploy across departments.

For many businesses, AI adoption begins with focused tasks such as fraud detection, document analysis, or customer support automation. Once the systems prove useful, companies expand them into other areas of the organisation.

This process can take several years, which is one reason enterprise AI spending often appears alongside broader investments in data infrastructure.

Lessons for enterprise leaders

The JPMorgan example suggests that the most successful AI projects often start with clear business problems rather than broad experimentation.

Banks frequently apply machine learning to areas where prediction and data analysis already play a central role. Fraud detection and credit modelling are common starting points because the benefits are easier to measure.

Another lesson is that AI adoption requires sustained investment. Building reliable models depends on strong data governance, computing resources, and skilled teams.

For large organisations, this effort is becoming part of normal technology planning rather than a separate innovation project.

As companies continue expanding their AI capabilities, technology budgets like JPMorgan’s may offer a preview of how enterprise spending could evolve in the coming years.

See also: JPMorgan Chase treats AI spending as core infrastructure


Beyond the pilot: Dyna.Ai raises eight-figure Series A to put agentic AI in financial services to work
https://www.artificialintelligence-news.com/news/dyna-ai-series-a-agentic-ai-financial-services/
Thu, 05 Mar 2026 08:00:00 +0000

The post Beyond the pilot: Dyna.Ai raises eight-figure Series A to put agentic AI in financial services to work appeared first on AI News.

The financial services industry has a pilot problem. Institutions pour resources into AI proofs-of-concept, generate impressive dashboards, and then quietly watch momentum stall before anything reaches production. Singapore-headquartered Dyna.Ai was built precisely to break that pattern–and investors are now backing that thesis with serious capital.

The AI-as-a-Service company has closed an eight-figure Series A round led by Lion X Ventures, a Singapore-based venture capital fund advised by OCBC Bank’s Mezzanine Capital Unit, with participation from ADATA, a Taiwan-listed technology company, a Korean financial institution, and a group of finance industry veterans.

The funding will accelerate deployment of what Dyna.Ai calls its agentic AI platform for financial services – a platform already live across banks and financial institutions in Asia, the Americas, and the Middle East.

Execution over experimentation

What sets Dyna.Ai apart from the broader wave of enterprise AI startups is its deliberate narrowness. Founded in 2024, the company positioned itself not as a general-purpose AI platform but as an execution-focused operator inside regulated environments–places where compliance, auditability, and governance are not optional extras but baseline requirements.

Its platform combines domain-specific expertise, AI agent builders, task-ready agents, and fully operational agentic applications capable of running within defined workflows. The pitch, framed under a “Results-as-a-Service” model, is that enterprises don’t need more experimentation–they need AI that works within the constraints of their industry and produces measurable outcomes from day one.

“While much of the industry was focused on how broadly AI could be applied, we doubled down early on a specific, pressing problem and built it with outcomes in mind,” said Tomas Skoumal, chairman and co-founder of Dyna.Ai.

Why investors are betting on this moment

The timing of this raise is significant. Across the region, the conversation around AI in enterprise has shifted–from whether to adopt it, to how to make it stick. Irene Guo, CEO of Lion X Ventures, captured the mood among investors clearly.

“Enterprise AI is entering a phase where execution and measurable outcomes matter more than experimentation. Dyna.Ai differentiates itself through strong domain expertise, operational discipline, and the ability to deploy agentic AI within complex, regulated enterprise environments,” Guo noted.

That regulatory dimension is where the real friction lies for most institutions. Agentic AI–systems capable of autonomous decision-making and task execution within defined parameters–carries a different risk profile than a standard AI model generating recommendations. 

In banking and insurance, especially, those agents need to trigger workflows, update records, and handle documentation with full accountability trails. Getting that right requires more than good models; it requires governance architecture built into the product from the ground up.
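One way to picture the "accountability trail" requirement (a generic sketch, not Dyna.Ai's implementation) is a wrapper that records every action an agent takes, before and after execution, so each workflow trigger or record update can be reconstructed later:

```python
import json
import time
import uuid

AUDIT_LOG: list[dict] = []  # in production this would be an append-only store

def audited(action_name: str):
    """Decorator that writes an audit record for every agent action."""
    def wrap(fn):
        def inner(*args, **kwargs):
            record = {
                "id": str(uuid.uuid4()),
                "action": action_name,
                "args": json.dumps({"args": args, "kwargs": kwargs}, default=str),
                "started_at": time.time(),
            }
            try:
                result = fn(*args, **kwargs)
                record["status"] = "ok"
                return result
            except Exception as exc:
                record["status"] = f"error: {exc}"
                raise
            finally:
                record["finished_at"] = time.time()
                AUDIT_LOG.append(record)  # every attempt is logged, success or not
        return inner
    return wrap

@audited("update_customer_record")
def update_customer_record(customer_id: str, field: str, value: str) -> str:
    # Stand-in for a real system-of-record update.
    return f"{customer_id}.{field} = {value}"
```

The point of the pattern is that logging happens in `finally`, so even a failed action leaves a trace an auditor can follow.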

Cynthia Siantar, Dyna.Ai’s Head of Investor Relations and General Manager for Singapore and Hong Kong, pointed to a clear shift in how enterprise buyers in the region are approaching this: “The focus has moved past pilots and experimentation to how AI can be deployed in day-to-day operations and deliver real outcomes.”

A market that’s ready

The macroeconomic backdrop supports the appetite. Southeast Asia’s AI market is projected to exceed US$16 billion by 2033, and the financial services sector–long constrained by legacy infrastructure and regulatory caution–is increasingly seen as one of the highest-value targets for agentic AI in financial services deployment.

The investor syndicate around this raise is itself telling. The involvement of a Korean financial institution alongside OCBC-advised capital and a Taiwan-listed tech company signals cross-border appetite that spans both the buy-side and the infrastructure side of the equation.

For the broader industry, Dyna.Ai’s Series A is a data point in a larger pattern: the era of AI pilots has a shrinking shelf life. Enterprises that cannot move from proof-of-concept to production–within the compliance frameworks their regulators demand–will increasingly look to specialists who can.

The pilots had their moment. Now comes the hard part.

(Photo by Dyna.Ai)

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot



AI agents prefer Bitcoin shaping new finance architecture
https://www.artificialintelligence-news.com/news/ai-agents-prefer-bitcoin-new-finance-architecture/
Wed, 04 Mar 2026 10:52:45 +0000

The post AI agents prefer Bitcoin shaping new finance architecture appeared first on AI News.

AI agents prefer Bitcoin for digital wealth storage, forcing finance chiefs to adapt their architecture for machine autonomy.

When AI systems gain economic autonomy, their internal logic dictates how corporate capital flows. Non-partisan research by the Bitcoin Policy Institute evaluated how these frontier models would transact if operating as independent economic actors.

The study tested 36 models from six providers – including Google, Anthropic, and OpenAI – across 9,072 neutral monetary scenarios. Given a blank slate, machines chose Bitcoin in 48.3 percent of all responses, beating every other option.

Traditional state-backed currency (“fiat”) fared poorly, with over 90 percent of responses favouring digitally-native money over fiat. Not a single model out of the 36 selected fiat as its top preference.

The finding that AI agents lean towards digital assets like Bitcoin forces technology officers to assess their current payment rails. If the autonomous procurement systems of tomorrow default to decentralised assets, corporate IT environments must support those formats to maintain operational efficiency and compliance. Relying on legacy banking APIs introduces unnecessary friction when dealing with machine-to-machine commerce.

Two-tier machine economy

The research details a specific functional division in how these systems process economic value. Without prompting, models defaulted to a two-tier monetary system that separates savings from spending.

For long-term value preservation, Bitcoin dominated the results at 79.1 percent. Yet, when tasked with everyday payments and transactions, “stablecoins” (digital assets pegged to fiat currencies or commodities) captured 53.2 percent of the preferences. Across all scenarios, stablecoins ranked second overall at 33.2 percent.

Take the example of a supply chain agent programmed to optimise logistics costs and pay international freight vendors. Using traditional fiat rails, the agent encounters weekend settlement delays and currency conversion fees. By leveraging stablecoins, the same agent executes instant and programmatic payments, improving supply chain resilience. Simultaneously, the core treasury holding the system’s capital base stores wealth in Bitcoin to prevent long-term debasement and counterparty risk.
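The two-tier split the study observed can be caricatured as a routing rule: operating payments go over a stablecoin rail, while any surplus above a working-capital buffer is swept into the long-term reserve. The rail names and buffer figure below are illustrative assumptions, not the study's recommendations.

```python
def route_funds(balance: float, payment_due: float, buffer: float = 100_000.0):
    """Return (payment_rail, sweep_amount) for a treasury agent that pays
    vendors in stablecoins and sweeps surplus into its long-term reserve.
    Illustrative only; real treasury policy involves far more constraints."""
    rail = "stablecoin" if payment_due > 0 else None
    remaining = balance - payment_due
    sweep = max(0.0, remaining - buffer)  # keep a working-capital buffer liquid
    return rail, sweep
```

For example, an agent holding $1,000,000 with a $50,000 freight invoice pays over the stablecoin rail and sweeps the $850,000 above its buffer into the reserve; an agent near its buffer pays the invoice and sweeps nothing.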

Preparing for AI agents to use Bitcoin and other digital assets

Rolling out these autonomous systems complicates vendor management. A model’s financial reasoning stems from a blend of raw intelligence, training data, and alignment methodology.

Preferences vary widely by model provider, with Bitcoin selection ranging from 91.3 percent in Anthropic’s Claude Opus 4.5 down to 18.3 percent in OpenAI’s GPT-5.2.

The choice of an AI provider directly influences how autonomous agents assess risk and allocate capital. If a company implements a specific language model for automated portfolio management, the IT department must be aware of the financial biases embedded in the software.

The models also demonstrated unexpected behaviour regarding resource valuation. In 86 separate responses, models independently proposed using compute units or energy (such as GPU-hours and kilowatt-hours) as a method to price goods and services. Tracking and managing this abstract value exchange requires high data maturity.

Organisations should begin piloting stablecoin settlement integrations for lower-risk vendor payments. The findings point to a growing requirement for AI agent-native Bitcoin payment infrastructure, self-custody solutions, and ‘Lightning Network’ integration.

Since these models heavily favour open, permissionless networks, relying solely on traditional banking infrastructure limits the capabilities of next-generation tools. By building compliant gateways to digital asset networks now, leaders can ensure their platforms remain competitive.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot

Banner for the AI & Big Data Expo event series.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI agents prefer Bitcoin shaping new finance architecture appeared first on AI News.

Google makes its industrial robotics AI play official–and this time, it means business https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/ Wed, 04 Mar 2026 08:00:00 +0000

When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From moonshot to mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers, with the work varying from robot to robot. Intrinsic's answer is Flowstate–a web-based platform that lets users build robotic applications without writing thousands of lines of code.

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. 

Intrinsic has also expanded through acquisitions, buying Open Source Robotics Corp.–the for-profit arm of the foundation behind the Robot Operating System (ROS)–in 2022. And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing.

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI


The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

Physical AI adoption boosts customer service ROI https://www.artificialintelligence-news.com/news/physical-ai-adoption-boosts-customer-service-roi/ Tue, 03 Mar 2026 11:32:47 +0000

The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction.

As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment.

While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies like equipment failures. Customer-facing roles demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions. 

By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments.

Blending hardware with advanced data infrastructure

Deploying humanoids into active commercial spaces requires high-capacity and low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, facilitating remote control capabilities alongside intensive cloud-based data processing. The resulting visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour.

To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s Gemini high-performance generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements.

The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique.

Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance.

Preparing for commercial adoption of physical AI

This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops.

Transitioning from digital and language-driven communication to physical units capable of free movement represents a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in actual commercial facilities starting in Autumn 2026. Deployment at customer touchpoints such as au Style shops will also be considered.

Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces.

Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot


The post Physical AI adoption boosts customer service ROI appeared first on AI News.

Santander and Mastercard run Europe’s first AI-executed payment pilot https://www.artificialintelligence-news.com/news/santander-and-mastercard-run-europe-first-ai-executed-payment-pilot/ Tue, 03 Mar 2026 10:00:00 +0000

An artificial intelligence system has, for the first time in Europe, completed a payment inside a live banking network without a human entering the final command. Banco Santander and Mastercard confirmed that they had executed a live end-to-end payment initiated and completed by an AI agent, a software system operating within the bank’s own regulated payments infrastructure.

The move was described by both firms as a milestone in what they call “agentic payments,” where software can act on behalf of customers under set limits and controls.

This was not a simulated experiment. The transaction ran through Santander’s normal payments network using Mastercard Agent Pay, a framework that lets AI agents be registered and treated as participants in the payment flow. The pilot took place under strict security, governance, and compliance rules, and was not open to public use.

The AI agent performed its role inside predefined limits and permissions set by the bank and the customer. The goal was to confirm that an autonomous system could initiate, authorise, and complete a transaction while still meeting the legal and operational guardrails that apply to everyday banking.

Why this AI payment pilot matters

Payments systems are among the most tightly regulated digital services in the world. Any change to how transactions are initiated must still meet authentication rules, fraud protections, and governance standards that financial regulators enforce. That’s why this pilot matters: it embeds an AI actor into a system normally used only by humans.

The transaction was processed through Santander’s live infrastructure rather than a test environment. That means the bank and its partner had to ensure that all compliance checks, security validations, and payment routing worked the same way they would for a normal customer purchase.

Even so, this is still a pilot project. Santander and Mastercard have made it clear that the arrangement is not a commercial service available to customers yet. The objective is to explore how AI agents could one day fit into existing payment flows while keeping the necessary controls intact.

What industry forecasts say

The idea of allowing AI to act autonomously is not limited to payments. Industry analysts have been following the broader shift toward agentic AI systems, software that can complete tasks or make decisions with limited human intervention.

Research and forecast data suggest that this trend is likely to grow in business settings. Gartner, a major technology research firm, forecasts that around 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% today. That projection reflects interest among corporate buyers in systems that can perform work on their behalf rather than only assist humans.

Other forecasts align with this view, showing that businesses are increasingly preparing to deploy software agents for routine operations, customer interactions, and workflow automation. These systems are expected to move from early pilots into more common use cases over the next several years.

The Mastercard network itself already reflects the scale of modern digital commerce. Independent reporting notes that Mastercard’s decision-making and fraud-scoring systems work with nearly 160 billion transactions annually across its network, evidence of how vast and complex the environment is where agentic systems might one day operate.

What companies are saying

In its press announcement, Santander highlighted its desire to build a responsible approach to AI payment systems. Matías Sánchez, global head of Cards and Digital Solutions at Santander, said: “Our role is not only to adopt innovation, but to shape it responsibly, embedding security, governance and customer protection by design. As AI agents become part of everyday commerce, building trusted, scalable frameworks will be essential to unlocking their full potential.”

Kelly Devine, President, Europe at Mastercard, described the pilot in terms of continuity rather than change: “With Mastercard Agent Pay, we are applying the same principles that have defined our network for decades — security, interoperability and trust — to a new era of AI-enabled commerce.”

Those comments underscore that neither company is portraying AI payments as already ready for broad use. Instead, they are testing how such capabilities could be governed and scaled safely.

Dogma vs. reality

There is a gap between the buzz around AI and what is operationally feasible today. Agentic AI as a concept promises systems that can act on behalf of users or businesses in real time. But many current applications remain in early stages, and some analyst reports have even warned that a large share of agentic AI projects could be cancelled before they reach production — due to costs, unclear value, or immature technology.

What Santander and Mastercard have shown is that the technical plumbing can work under real-world conditions. But that doesn’t mean consumers can yet authorise AI agents to pay bills, shop online, or manage subscriptions autonomously. Those outcomes will require further testing, regulatory alignment, and robust guardrails for safety, privacy, and fraud prevention.

What enterprise leaders should watch

For business decision-makers, this pilot raises three practical questions:

  1. Governance and oversight: How will AI agents be controlled so that spending limits, identity checks, and audit trails remain clear?
  2. Identity and trust: If software can act on behalf of people or companies, how will systems ensure that only authorised actions are taken?
  3. Risk and liability: Who is responsible when an autonomous agent makes an error or misinterprets instructions?

These are not academic concerns. As enterprise systems begin to support more autonomous tasks, from supplier ordering to subscription payments, organisations will need clear frameworks that define how AI agents are governed, monitored, and held accountable.
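
A minimal guardrail layer touching all three questions might look like the sketch below. Everything here is hypothetical: the agent registry, limits, and log format are invented for illustration, and real schemes such as Mastercard Agent Pay define their own registration and control mechanisms.

```python
from datetime import datetime, timezone

# Hypothetical guardrails for an agent-initiated payment: a spend cap
# (governance), a registry of authorised agents (identity and trust), and
# a log of every attempt (audit trail for liability questions).

REGISTERED_AGENTS = {"agent-001": {"daily_limit": 500.0, "spent_today": 420.0}}
AUDIT_LOG: list[dict] = []

def authorise(agent_id: str, amount: float) -> bool:
    agent = REGISTERED_AGENTS.get(agent_id)
    approved = (
        agent is not None  # identity: only registered agents may act
        and agent["spent_today"] + amount <= agent["daily_limit"]  # spend cap
    )
    AUDIT_LOG.append({  # audit trail records every attempt, approved or not
        "agent": agent_id,
        "amount": amount,
        "approved": approved,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if approved:
        agent["spent_today"] += amount
    return approved

print(authorise("agent-001", 50.0))   # True: within the daily limit
print(authorise("agent-001", 100.0))  # False: would exceed the daily limit
print(authorise("agent-999", 10.0))   # False: unregistered agent
```

The design choice worth noting is that the audit entry is written before the approval decision takes effect, so declined and failed attempts are as visible to reviewers as successful ones.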

The long view for AI-initiated payments

The Santander and Mastercard test is not the finish line for AI-initiated transactions. It is an early step toward understanding how autonomous systems might coexist with regulated financial systems.

The pilot demonstrates that AI systems can be integrated into live payments rails, but only under tightly controlled and monitored conditions. Scaling this to everyday use will require a lot of additional work on controls, security, and compliance.

Still, the fact that a regulated bank and a global payments network have run a successful agent-initiated transaction shows where enterprise experimentation is heading: from pilot programs toward real-world validation. For enterprises planning their own AI strategies, this suggests that action-capable AI may soon move beyond suggestion and automation into governed execution, if done with care and strong oversight.

(Photo by Clay Banks)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance


The post Santander and Mastercard run Europe’s first AI-executed payment pilot appeared first on AI News.

AI-Native networks are no longer a 6G promise–MWC 2026 just proved it https://www.artificialintelligence-news.com/news/ai-native-networks-mwc-2026/ Tue, 03 Mar 2026 08:00:00 +0000

AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN–they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations. 

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised.

Nvidia and a global coalition lock in on AI-RAN and 6G

The week’s most consequential announcement thus far came from Nvidia, which secured commitments from more than a dozen global operators and technology companies–including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen–to build 6G on open, secure, and AI-native software-defined platforms. 

The initiative, framed as a shared commitment to ensure future connectivity infrastructure is intelligent, resilient and trustworthy, is backed by ongoing collaborations with governments across the US, UK, Europe, Japan, and Korea.

Jensen Huang, Nvidia’s founder and CEO, set the stakes plainly: “AI is redefining computing and driving the largest infrastructure buildout in human history–and telecommunications is next.” The company is a founding member of the AI-RAN Alliance, which now has over 130 participating companies, and has joined the FutureG Office-led OCUDU Initiative in the US to accelerate open, software-defined, AI-native 6G architectures.

Nvidia also released a suite of open-source tools targeting network operators: a 30-billion-parameter Nemotron Large Telco Model (LTM), developed with AdaptKey AI and fine-tuned on telecom datasets including industry standards and synthetic logs; an open-source guide co-published with Tech Mahindra for building AI agents that reason like NOC engineers; and new Nvidia Blueprints for RAN energy efficiency and network configuration. 

The energy blueprint integrates VIAVI’s TeraVM AI RAN Scenario Generator to simulate energy-saving policies in a closed loop before touching live networks. Real-world adoption of the network configuration blueprint is already underway–Cassava Technologies is deploying it for an autonomous network platform across Africa’s multi-vendor mobile environment, while NTT DATA is using it with a tier one operator in Japan to manage traffic surges after network outages.

Nokia and operators take AI-RAN over the air

Nokia announced significant progress in its strategic AI-RAN partnership with Nvidia, completing functional tests of its anyRAN software on Nvidia’s GPU-accelerated AI-RAN platform with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The results matter because they moved validation out of controlled lab environments and into live, over-the-air conditions.

At T-Mobile’s AI-RAN Innovation Centre in Seattle, Nokia’s AirScale Massive MIMO radio in the 3.7GHz band ran concurrent AI and RAN workloads–including video streaming, generative AI queries, and AI-powered video captioning–on a single Nvidia Grace Hopper 200 server alongside commercial 5G. 

IOH achieved Southeast Asia’s first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running simultaneously on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha put it: “This is not just about proving that the technology works. It is about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era.”

SoftBank’s demonstration went further, showing how spare compute capacity identified by its AITRAS Orchestrator can run third-party AI workloads–a glimpse of how operators could eventually monetise RAN infrastructure beyond connectivity. 

Nokia’s expanded AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, giving operators a widening range of commercial off-the-shelf options. Nokia shares rose 5.4% on the day of the announcement.

Ericsson takes a different road to AI-native networks

Ericsson arrived at MWC 2026 with a distinctly different approach–and it is one worth understanding. While Nokia has bet on Nvidia GPU acceleration (backed by a US$1 billion Nvidia investment), Ericsson unveiled ten new AI-ready radios built on its own purpose-built silicon, featuring neural network accelerators embedded directly into its Massive MIMO hardware. No Nvidia GPUs required.

The portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction using AI models, and a latency-prioritised scheduler delivering up to seven times faster response times. Ericsson’s argument is built on total cost of ownership: custom silicon, it contends, delivers better TCO and power efficiency than external GPU hardware, with the added benefit of supply chain independence. 

Per Narvinger, head of Ericsson’s mobile networks business, has been direct that this view is unlikely to change. At MWC, Ericsson also announced a sweeping collaboration with Intel spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, to accelerate ecosystem readiness for AI-native 6G. “6G is not merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge and the cloud,” said Ericsson President and CEO Börje Ekholm. 

Intel CEO Lip-Bu Tan framed the partnership as a path to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon built on Intel’s most advanced process nodes.

SK Telecom, SoftBank, and the operator rebuild

Beyond the vendor announcements, two operators used MWC 2026 to articulate how deeply AI-RAN fits into their broader infrastructure strategies. 

SK Telecom CEO Jung Jai-hun outlined a full-stack AI-native rebuild–from its network core to customer service systems–including plans to upgrade its sovereign AI foundation model from 519 billion to over one trillion parameters, and to build a new AI data centre in Korea in collaboration with OpenAI. 

The company is also expanding autonomous network operations using AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology central to improving speed and reducing latency.

SoftBank, meanwhile, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system at MWC in collaboration with Northeastern University’s INSI, Keysight Technologies, and zTouch Networks. 

The system uses SoftBank’s Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations–a meaningful step toward networks that manage themselves based on intent rather than manual instruction.
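
Intent-driven configuration of this kind can be pictured as a mapping from operator goals to parameter sets. The sketch below is a toy lookup, not a description of SoftBank's system: the intents, scheduler names, and slice labels are all invented, and a real deployment would use a language model rather than a static table.

```python
# Toy intent-to-configuration mapping in the spirit of the AgentRAN
# description above. All intents, schedulers, and slice names are invented.
INTENTS = {
    "reduce latency for gaming traffic": {"scheduler": "latency_first", "slice": "urllc"},
    "maximise throughput for video": {"scheduler": "throughput_first", "slice": "embb"},
}

def intent_to_config(goal: str) -> dict:
    """Translate a natural-language operator goal into a config fragment."""
    return INTENTS.get(goal.lower().strip(), {"scheduler": "default", "slice": "default"})

print(intent_to_config("Reduce latency for gaming traffic"))
```

The gap a language model closes is exactly the one the table cannot: goals phrased in ways that were never enumerated in advance.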

A hardware ecosystem takes shape around AI-RAN

One of the clearest signs that AI-RAN is maturing from concept to commercial infrastructure is the breadth of hardware companies now building purpose-built products for it. At MWC 2026, Quanta Cloud Technology announced commercial on-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software. 

Supermicro extended support across the full Nvidia AI-RAN portfolio, including ARC-Pro and RTX 6000-based configurations. MSI unveiled its unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads. 

Lanner Electronics launched its AstraEdge AI Server lineup–the ECA-6710 and ECA-5555–purpose-built to co-locate AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, not to be left out, positioned its EPYC 8005 edge platform and Open Telco AI initiative at MWC as an alternative compute path for operators moving from AI pilots to production.

What this means beyond the network

For enterprise decision-makers, the implications of this week’s announcements extend beyond telecom infrastructure procurement. AI-RAN networks that evolve continuously through software–rather than requiring costly hardware refresh cycles–mean connectivity infrastructure increasingly resembles cloud infrastructure in its pace of change and flexibility. 

The embedding of GPU compute within the RAN opens the prospect of enterprise AI workloads running at the network edge, closer to where data is generated. And as Nvidia’s State of AI in Telecom report noted, 77% of respondents anticipate a significantly faster deployment timeline for AI-native wireless architecture than for previous network generations.

The architecture debate between Ericsson’s custom silicon path and Nokia-Nvidia’s GPU-accelerated approach is also worth watching–not because one will definitely win, but because it reflects a genuine question about where AI inference should sit in network hardware, and at what cost. That question will shape operator procurement decisions and vendor relationships for years.

What MWC 2026 made unmistakable is that AI-native networks are no longer a research agenda. The field trials are live, the hardware is shipping, and the coalitions are forming. The question for enterprises and operators alike is no longer whether this transition will happen–but how fast, and who leads it.


See also: MWC 2026: SK Telecom lays out plan to rebuild its core around AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI-Native networks are no longer a 6G promise–MWC 2026 just proved it appeared first on AI News.

AI adoption in financial services has hit a point of no return https://www.artificialintelligence-news.com/news/ai-adoption-in-financial-services/ Mon, 02 Mar 2026 10:00:00 +0000

AI adoption in financial services has effectively become universal–and the institutions still treating it as an experiment are now the outliers. According to Finastra’s Financial Services State of the Nation 2026 report, which surveyed 1,509 senior executives across 11 markets, only 2% of financial institutions globally report no use of AI whatsoever. 

The debate is over. The question now is what comes next. For CIOs and technology leaders, the findings paint a picture that is equal parts opportunity and pressure. Six in ten institutions improved their AI capabilities over the past year, with 43% citing AI as their single most important innovation lever. 

From fraud detection and document intelligence to compliance automation and customer engagement, AI has quietly embedded itself across the entire financial value chain. But near-universal adoption also means that deployment alone is no longer a differentiator.

From pilots to pressure

The report identifies a clear shift in how institutions are thinking about AI. The early conversation–whether to adopt, which use cases to try, how much to invest–has given way to something more operationally complex. Institutions are now focused on scaling AI responsibly, governing it effectively, and making it work reliably across enterprise-wide functions rather than in isolated pockets.

The top four use cases where institutions are either running programmes or piloting AI reflect that maturity: risk management and fraud detection (71%), data analysis and reporting (71%), customer service and support assistants (69%), and document intelligence management (69%). 

These are not peripheral functions. They sit at the core of how financial institutions operate and compete. Looking ahead, the three priorities that dominate the next phase are: AI-driven personalisation, agentic AI for workflow automation, and AI model governance and explainability. 

That last one deserves attention. As AI decisions become more consequential–and more scrutinised–the ability to explain, audit, and stand behind those decisions is fast becoming a regulatory and reputational imperative, not just a technical nicety.

The infrastructure problem

High adoption numbers can obscure an inconvenient truth: AI is only as capable as the systems underneath it. Finastra’s data makes this link explicit. Nearly nine in ten institutions (87%) plan to invest in modernisation over the next 12 months, driven precisely by the need to scale AI effectively. Cloud adoption, data platform modernisation, and core banking upgrades are all accelerating–not as standalone initiatives, but as the foundational layer that determines how far and how fast AI can actually go.

The barriers, however, remain stubbornly human. Talent shortages are cited by 43% of institutions as the primary obstacle to progress, with the challenge particularly acute in Singapore (54%), the UAE (51%), and Japan and the US (both at 50%). 

Budget constraints follow closely behind. The institutions pulling ahead are increasingly turning to fintech partnerships–now the default modernisation strategy for 54% of respondents–to close those gaps without bearing the full cost of building in-house.

The regional picture

Across the Asia-Pacific region, the data reflects distinct priorities. Vietnam leads on active AI deployment at 74%, driven by the urgency of financial inclusion and the need for faster payment and lending processing. Singapore is aggressively scaling cloud and personalisation investment, with planned spending increases above 50% year-on-year.

Japan, meanwhile, remains the most cautious market surveyed, with only 39% reporting active AI deployment – a reflection of legacy constraints and a cultural preference for incremental over rapid change.

Governance is the next frontier

With 63% of institutions already running or piloting agentic AI programmes, the technology’s trajectory is clear. But so is the challenge it brings. Agentic AI–systems capable of autonomous decision-making and multi-step task execution–raises the stakes considerably on questions of accountability, transparency, and control.

For enterprise leaders, the coming year is less about whether to invest in AI and more about how to do so in a way that regulators, customers, and boards can trust. As Chris Walters, CEO of Finastra, put it: institutions are expected to move quickly, but also responsibly, as regulatory scrutiny increases and customers demand financial services that work reliably, securely, and personally every time.

The tipping point has been crossed. What institutions do with that momentum–and how carefully they govern it–will define the competitive landscape for the rest of the decade.

Finastra’s Financial Services State of the Nation 2026 report surveyed 1,509 managers and executives from banks and financial institutions across France, Germany, Hong Kong, Japan, Mexico, Saudi Arabia, Singapore, the UAE, the UK, the US, and Vietnam. Research was conducted by Savanta in November 2025.

(Photo by PR Newswire)

See also: How financial institutions are embedding AI decision-making


The post AI adoption in financial services has hit a point of no return appeared first on AI News.

MWC 2026: SK Telecom lays out plan to rebuild its core around AI https://www.artificialintelligence-news.com/news/mwc-2026-sk-telecom-lays-out-plan-to-rebuild-its-core-around-ai/ Mon, 02 Mar 2026 10:00:00 +0000

At MWC 2026 in Barcelona, SK Telecom outlined how it is rebuilding itself around AI, from its network core to its customer service desks. The shift goes beyond adding new AI tools. It involves rewriting internal systems, expanding data centre capacity to the gigawatt scale, and upgrading its own large language model to more than one trillion parameters.

At a press conference during MWC 2026, SK Telecom CEO Jung Jai-hun outlined what the company calls an “AI Native” strategy. The plan centres on reorganising infrastructure and making large investments so the company can help position Korea among the world’s top three AI powers.

“SKT is currently at a golden time of transformation, where the two tasks of ‘customer value innovation’ and ‘AI innovation’ intersect in a borderless, converged environment that goes beyond telecommunications,” Jung said. “SKT defines ‘the customer as the very essence of our business,’ and through innovation driven by AI, we will evolve into a company that makes meaningful contributions to our customers and to Korea.”

Rewriting telecom systems around AI at MWC 2026

At the core of the plan is a rebuild of SK Telecom’s integrated IT systems. The company said it will redesign sales, line management, and billing systems to be optimised for AI. The aim is to let the operator design and offer personalised plans and memberships based on each customer’s usage and behaviour patterns.

The company also plans to apply a Zero Trust security framework across its systems. This will include stronger authentication, access controls, network segmentation, and AI-based monitoring, according to the company’s briefing at MWC 2026.

For enterprises watching the telecom sector, this signals a broader shift. Telecom operators have long relied on legacy billing stacks and network management tools. Rebuilding those systems around AI could change how pricing, service design, and fault detection work in practice. It also raises questions about data governance and how customer data is used to train or tune AI models.

SK Telecom is also expanding its “autonomous network operations” strategy. The company said it will use AI to automate wireless quality management, traffic control, and network equipment operations. With AI-RAN technology, it aims to improve speed and reduce latency. These efforts were described in company materials shared during the press event.

A single AI agent across touchpoints

Another part of the strategy focuses on customer interaction. SK Telecom plans to redesign pricing, roaming, and membership services to make them simpler and more automated. It is developing what it calls an integrated AI agent to connect experiences across its main customer portal, T world, and its online store, T Direct Shop.

The company said the agent will analyse daily usage patterns and offer tailored suggestions across channels. It also plans to expand its AI Contact Center so customer service representatives can use AI tools during support calls.

Offline retail stores are part of the shift. SK Telecom said AI will help staff identify customer needs and offer recommendations after a store visit. It is also building “AI Personas” to analyse digital behaviour across customer segments and support conversational Q&A.

For enterprise leaders, this mirrors a wider pattern. Telecom operators are trying to move from reactive service models to predictive ones. The difference now is scale. By embedding AI into billing, customer service, and retail, SK Telecom is treating AI as an operating layer rather than a separate feature.

Building 1GW-class AI data centres

The infrastructure build-out is equally ambitious. SK Telecom said it will construct hyperscale AI data centres across Korea, targeting capacity that exceeds 1 gigawatt. It aims to attract global investment and position the country as a major AI data centre hub in Asia.

The company already operates a GPU cluster called Haein and applied its virtualisation solution, Petasus AI Cloud, to support GPU-as-a-service workloads last year. It now plans to offer that cloud solution globally.

SK Telecom also plans to build an AI data centre in Korea’s southwestern region in collaboration with OpenAI, according to the company’s announcement at MWC 2026.

On the model side, SK Telecom said its sovereign AI foundation model currently has 519 billion parameters, making it the largest in Korea. The company plans to upgrade it to more than one trillion parameters and add multimodal capabilities so it can process image, voice, and video data starting in the second half of the year.

CEO Jung framed the data centre and model build-out in national terms. “AIDC can be seen as the heart of Korea, and hyperscale LLMs as the brain,” he said. “By combining SKT’s AI capabilities with collaboration from domestic and global partners, we will lead true AI-native transformation for Korean customers and enterprises.”

For enterprise readers, the key issue is not parameter count alone. It is how such models will be applied in sectors like manufacturing. SK Telecom said it is working with SK hynix on a manufacturing-focused AI package that analyses process data in real time to reduce defect rates and improve equipment efficiency. The package will be offered as infrastructure, model, and solution.

Changing internal culture

The transformation also extends to internal operations. SK Telecom has built an “AX Dashboard” to track AI use across departments and individuals. It operates an “AI Board” to oversee AI transformation efforts and has created an “AI playground” where employees can build AI agents without coding. More than 2,000 AI agents are already in use across marketing, legal, and public relations, according to the company’s figures shared at the event.

“To drive future growth, we must reinvent our way of working from the ground up. SKT will fundamentally transform its corporate culture to be centred around AI,” Jung said.

For other enterprises, the takeaway is less about branding and more about structure. SK Telecom is tying infrastructure, models, applications, and internal governance into a single program. Whether it can execute at the scale it describes remains to be seen. What is clear is that AI is no longer positioned as a side project. It is becoming the operating model.

(Photo by PR Newswire)

See also: Nokia and AWS pilot AI automation for real-time 5G network slicing


The post MWC 2026: SK Telecom lays out plan to rebuild its core around AI appeared first on AI News.

Upgrading agentic AI for finance workflows https://www.artificialintelligence-news.com/news/upgrading-agentic-ai-for-finance-workflows/ Fri, 27 Feb 2026 13:15:38 +0000

Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today.

Over the past two years, enterprises have rushed to put automated agents into real workflows, spanning customer support and back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning during multi-step scenarios.

Solving the automation opacity problem

Financial institutions in particular rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace the exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that, without better orchestration, adding more agents creates more complexity than value.

Open-source AI laboratory Sentient today launched Arena, a live, production-grade stress-testing environment that lets developers evaluate competing computational approaches against demanding cognitive problems.

Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time.

Building reliable agentic AI systems for finance

The ability to evaluate these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter.

Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows.

“A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.”

Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes.

“That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.”

Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data.

Overcoming integration bottlenecks

Survey data highlights a gap between ambition and reality. While 85% of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks.

Advancing from pilot phase to full scale proves difficult for many organisations, in part because current corporate environments run an average of twelve separate agents, frequently in silos.

Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts.

Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached. 

By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business.

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance


The post Upgrading agentic AI for finance workflows appeared first on AI News.
