AI Market Trends - AI News
https://www.artificialintelligence-news.com/categories/inside-ai/ai-market-trends/
Wed, 04 Mar 2026 07:50:49 +0000

Google makes its industrial robotics AI play official–and this time, it means business
https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/
Wed, 04 Mar 2026 08:00:00 +0000

When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From Moonshot to Mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers, work that varies with each particular robot. Intrinsic’s answer is Flowstate–a web-based platform that allows users to build robotic applications without having to write thousands of lines of code.

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. 

Intrinsic has also expanded through acquisitions–buying Open Source Robotics Corp., the for-profit arm of the foundation behind the Robot Operating System (ROS), in 2022. And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing.

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

AI adoption in financial services has hit a point of no return
https://www.artificialintelligence-news.com/news/ai-adoption-in-financial-services/
Mon, 02 Mar 2026 10:00:00 +0000

AI adoption in financial services has effectively become universal–and the institutions still treating it as an experiment are now the outliers. According to Finastra’s Financial Services State of the Nation 2026 report, which surveyed 1,509 senior executives across 11 markets, only 2% of financial institutions globally report no use of AI whatsoever. 

The debate is over. The question now is what comes next. For CIOs and technology leaders, the findings paint a picture that is equal parts opportunity and pressure. Six in ten institutions improved their AI capabilities over the past year, with 43% citing AI as their single most important innovation lever. 

From fraud detection and document intelligence to compliance automation and customer engagement, AI has quietly embedded itself across the entire financial value chain. But near-universal adoption also means that deployment alone is no longer a differentiator.

From pilots to pressure

The report identifies a clear shift in how institutions are thinking about AI. The early conversation–whether to adopt, which use cases to try, how much to invest–has given way to something more operationally complex. Institutions are now focused on scaling AI responsibly, governing it effectively, and making it work reliably across enterprise-wide functions rather than in isolated pockets.

The top four use cases where institutions are either running programmes or piloting AI reflect that maturity: risk management and fraud detection (71%), data analysis and reporting (71%), customer service and support assistants (69%), and document intelligence management (69%). 

These are not peripheral functions. They sit at the core of how financial institutions operate and compete. Looking ahead, the three priorities that dominate the next phase are: AI-driven personalisation, agentic AI for workflow automation, and AI model governance and explainability. 

That last one deserves attention. As AI decisions become more consequential–and more scrutinised–the ability to explain, audit, and stand behind those decisions is fast becoming a regulatory and reputational imperative, not just a technical nicety.

The infrastructure problem

High adoption numbers can obscure an inconvenient truth: AI is only as capable as the systems underneath it. Finastra’s data makes this link explicit. Nearly nine in ten institutions (87%) plan to invest in modernisation over the next 12 months, driven precisely by the need to scale AI effectively. Cloud adoption, data platform modernisation, and core banking upgrades are all accelerating–not as standalone initiatives, but as the foundational layer that determines how far and how fast AI can actually go.

The barriers, however, remain stubbornly human. Talent shortages are cited by 43% of institutions as the primary obstacle to progress, with the challenge particularly acute in Singapore (54%), the UAE (51%), and Japan and the US (both at 50%). 

Budget constraints follow closely behind. The institutions pulling ahead are increasingly turning to fintech partnerships–now the default modernisation strategy for 54% of respondents–to close those gaps without bearing the full cost of building in-house.

The regional picture

Across the Asia-Pacific, the data reflects distinct priorities. Vietnam leads on active AI deployment at 74%, driven by the urgency of financial inclusion and the need for faster payment and lending processing. Singapore is aggressively scaling cloud and personalisation investment, with planned spending increases above 50% year-on-year. 

Japan, meanwhile, remains the most cautious market surveyed, with only 39% reporting active AI deployment — a reflection of legacy constraints and a cultural preference for incremental over rapid change.

Governance is the next frontier

With 63% of institutions already running or piloting agentic AI programmes, the technology’s trajectory is clear. But so is the challenge it brings. Agentic AI–systems capable of autonomous decision-making and multi-step task execution–raises the stakes considerably on questions of accountability, transparency, and control.

For enterprise leaders, the coming year is less about whether to invest in AI and more about how to do so in a way that regulators, customers, and boards can trust. As Chris Walters, CEO of Finastra, put it: institutions are expected to move quickly, but also responsibly, as regulatory scrutiny increases and customers demand financial services that work reliably, securely, and personally every time.

The tipping point has been crossed. What institutions do with that momentum–and how carefully they govern it–will define the competitive landscape for the rest of the decade.

Finastra’s Financial Services State of the Nation 2026 report surveyed 1,509 managers and executives from banks and financial institutions across France, Germany, Hong Kong, Japan, Mexico, Saudi Arabia, Singapore, the UAE, the UK, the US, and Vietnam. Research was conducted by Savanta in November 2025.

(Photo by PR Newswire)

See also: How financial institutions are embedding AI decision-making



Mastercard’s AI payment demo points to agent-led commerce
https://www.artificialintelligence-news.com/news/mastercard-ai-payment-demo-points-to-agent-led-commerce/
Mon, 23 Feb 2026 10:00:00 +0000

A recent demonstration from Mastercard suggests that payment systems may be heading toward a future where software agents, not people, complete purchases. During the India AI Impact Summit 2026, Mastercard showed what it described as its first fully authenticated “agentic commerce” transaction.

In the demo, as reported by Times of India, an AI agent searched for a product, assessed the website, and completed the purchase using stored payment credentials, without the user opening an app or entering card details. The company said the transaction took place inside a secure payment framework designed to verify both the user and the AI acting on their behalf.

The demonstration was controlled, not a public rollout. Mastercard executives told reporters that broader deployment would depend on regulatory approval and ecosystem readiness. Still, the test highlights a change that many enterprises may need to prepare for: the possibility that customers – or corporate systems – will increasingly rely on AI agents to initiate and complete transactions.

Assisted checkout to delegated spending

Digital payments have usually focused on reducing friction for human users through tokenisation, saved credentials, and one-click checkout. Agentic commerce goes further. Instead of helping a user complete a purchase, the system allows software to handle the process from start to finish once permission rules are in place.

The model relies on several building blocks already used in modern payments: identity verification, tokenised card data, and risk monitoring. What changes is who performs the action. If AI agents can act in defined limits, like spending caps or merchant restrictions, checkout may change from a user interaction to a background workflow.

For enterprises, the implication is clear: if software can spend money automatically, procurement rules, approval chains, and audit trails need to account for machine decisions, not just human ones. Finance teams may need clearer policies on when an AI agent can commit funds, how liability is assigned if something goes wrong, and how fraud detection should treat automated transactions.
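
The permission rules described above–spending caps and merchant restrictions checked before an agent may act–can be sketched as a simple pre-authorisation guard. This is an illustration only, not Mastercard's framework; the field names and limits are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SpendingPolicy:
    per_purchase_cap: float      # maximum value of a single purchase
    allowed_merchants: set[str]  # merchant allowlist set by the account owner

def authorise(policy: SpendingPolicy, merchant: str, amount: float) -> bool:
    """Approve a purchase request only if it satisfies every rule."""
    if merchant not in policy.allowed_merchants:
        return False
    if amount > policy.per_purchase_cap:
        return False
    return True

# Rules the account owner sets once, ahead of any agent activity.
policy = SpendingPolicy(per_purchase_cap=200.0,
                        allowed_merchants={"books-r-us", "officesupply"})
```

In a production system the same check would also write an audit record for each decision, so finance teams can reconstruct why an agent was or was not allowed to spend.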

Payment networks position for machine customers

Mastercard is not alone in exploring this direction. Across the payments sector, providers are testing ways to embed transactions into AI-driven tools and digital assistants. The goal is to ensure that when autonomous software begins purchasing goods or services, payment networks remain part of the trust and verification layer.

In public statements tied to the summit demo, Mastercard framed the effort as building infrastructure that allows AI agents to transact safely on behalf of users. That framing points to a broader industry race: not to build smarter AI shopping tools, but to control the authentication systems that make those tools safe enough for financial use.

For banks and fintech firms, the change could affect how customer identity is managed. Traditional authentication often assumes a person is present, entering a password or approving a prompt. Agentic commerce assumes the opposite: the user may not be involved at the moment of purchase. That means identity systems must verify both the account owner’s prior consent and the agent’s authority at the time of transaction.
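
That dual check–prior consent from the account owner plus the agent's authority at transaction time–can be sketched as a signed mandate. This is a minimal illustration assuming a symmetric-key scheme for brevity; a real payment network would use asymmetric signatures, expiry windows, and revocation, and all names here are hypothetical.

```python
import hashlib
import hmac

# Key registered when the account owner granted consent (illustrative).
OWNER_SECRET = b"owner-registered-key"

def sign_mandate(agent_id: str, scope: str) -> str:
    """The owner delegates a scope to a named agent by signing a mandate."""
    msg = f"{agent_id}|{scope}".encode()
    return hmac.new(OWNER_SECRET, msg, hashlib.sha256).hexdigest()

def verify_mandate(agent_id: str, scope: str, signature: str) -> bool:
    """At purchase time, re-derive the signature and compare in constant time."""
    expected = sign_mandate(agent_id, scope)
    return hmac.compare_digest(expected, signature)

token = sign_mandate("shopping-agent-7", "groceries<=100GBP")
```

The point of the sketch is the shape of the check: the signature binds a specific agent to a specific scope, so a different agent presenting the same token fails verification.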

Merchants may need API-ready storefronts

If AI agents begin acting as buyers, merchant systems may also need to adapt. Online stores built mainly for human browsing may struggle if automated agents become a meaningful share of customers.

To support machine-driven purchases, product catalogues, pricing data, and checkout processes may need to be accessible through structured APIs, not only visual web pages. Inventory accuracy, transparent pricing, and clear return policies become more important when decisions are made by software trained to compare options instantly.
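
The difference is easy to see in miniature: product data exposed as explicit fields that an agent can filter directly, rather than scraped from rendered HTML. The field names below are illustrative, not any standard schema.

```python
import json

# A tiny machine-readable catalogue (hypothetical fields).
catalogue = [
    {"sku": "CHAIR-01", "name": "Office chair", "price_gbp": 89.00,
     "in_stock": True, "delivery_days": 2, "returns_days": 30},
    {"sku": "DESK-02", "name": "Standing desk", "price_gbp": 249.00,
     "in_stock": False, "delivery_days": 10, "returns_days": 30},
]

def agent_query(items, max_price, max_delivery_days):
    """How a buying agent might shortlist: explicit fields, no HTML parsing."""
    return [i for i in items
            if i["in_stock"]
            and i["price_gbp"] <= max_price
            and i["delivery_days"] <= max_delivery_days]

matches = agent_query(catalogue, max_price=100.0, max_delivery_days=5)
print(json.dumps(matches, indent=2))
```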

This could also influence competition. If agents optimise for price and delivery speed, merchants with inconsistent data or hidden fees may be filtered out before a human even sees them.

Security risks move, not disappear

While agentic commerce promises convenience, it also introduces new risks. A compromised AI assistant with payment authority could execute purchases at scale before detection. Fraud models that look for unusual user behaviour may need updating to distinguish between legitimate automated spending and malicious activity.

Regulators are likely to take a cautious approach. Mastercard’s own comments that the system still awaits approvals suggest that compliance frameworks for AI-initiated payments are still taking shape.

In enterprises deploying AI internally, similar concerns apply. Automated purchasing agents integrated into enterprise resource planning systems could streamline routine procurement, but they also expand the attack surface. Access controls and spending thresholds will matter more when software can execute financial actions without real-time human confirmation.

Where commerce may head

Mastercard’s demonstration does not mean agent-led payments will reach consumers immediately. Yet it offers a glimpse of how commerce may change as AI systems move from advisory roles into operational ones.

If the model matures, the most visible change may be that checkout disappears as a distinct step. Instead of visiting a site and paying, users or companies may set rules, and their software will handle the rest.

For enterprises, the important takeaway is less about Mastercard’s AI technology and more about the direction of travel. As AI agents gain the authority to act, payment systems, identity frameworks, and digital storefronts may need to treat software not as a tool, but as a participant in the transaction.

(Photo by Cova Software)



AI: Executives’ optimism about the future
https://www.artificialintelligence-news.com/news/ai-impact-executives-optimism-for-the-future/
Fri, 20 Feb 2026 10:56:24 +0000

The most rigorous international study of firm-level AI impact to date has landed, and its headline finding is more constructive than many expected. Across nearly 6,000 verified executives in four countries, AI has delivered modest aggregate shifts in productivity or employment over the past three years. The measured impact reflects the early phases of deployment rather than a failure of the technology.

The working paper [PDF], published by the National Bureau of Economic Research and produced by teams from the Federal Reserve Bank of Atlanta, the Bank of England, the Deutsche Bundesbank and Macquarie University, found that over 90% of firms report no measurable change in headcount attributable to AI over the past three years. Given the short time horizon and the concentration of AI use in discrete functions, such incremental rather than transformative effects are consistent with how general-purpose technologies have evolved historically.

Adoption of AI is widespread. Around 69% of firms are already using some form of AI, led by LLM-based text generation at 41%, data processing via machine learning at 28% and visual content creation at 29%. In the UK, firm-level adoption rose from 61% to 71% across 2025. AI tools are embedded in day-to-day workflows, and although measured impact at firm level often lags adoption, the trend is generally upwards.

The forward AI impact numbers indicate acceleration

Executives expect stronger effects to take place over the next three years. On average, they expect a 1.4% increase in productivity and a 0.8% rise in output. US executives project a 2.25% productivity gain, while UK firms expect 1.86%. In economies that have struggled with weak productivity growth for over a decade, gains of that magnitude are notable – incremental improvements, compounded across sectors, shift national outputs.
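
The compounding point can be made concrete with back-of-the-envelope arithmetic. The 1.4% annual gain is the survey's average expectation; the ten-year horizon is an assumption chosen here purely for illustration.

```python
# A 1.4% annual productivity gain, compounded, becomes a meaningful
# level shift in output over a decade.
annual_gain = 0.014
years = 10

cumulative = (1 + annual_gain) ** years - 1
print(f"1.4% a year for {years} years ≈ {cumulative:.1%} higher output")
```

For economies used to near-flat productivity, roughly a 15% cumulative level shift over ten years is the difference the text describes as "notable".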

On the thorny subject of employment, executives expect a modest 0.7% reduction in headcount across the four countries over the same period. In the UK, around two-thirds of this adjustment is expected to come through slower hiring rather than outright redundancies. That pattern suggests a gradual reallocation of roles rather than abrupt terminations. As with previous waves of automation, aggregate figures do not capture job creation in adjacent roles, and in the case of AI, these might include roles around data governance, model oversight, prompt engineering, and AI-enabled service development, many of which would be new roles.

Interpreting the expectation gap

The study also compares executive expectations with those of workers. Researchers fielded parallel questions to US employees through the Survey of Working Arrangements and Attitudes. Employees expect AI to increase employment at their firms by 0.5% over the next three years, while US executives expect a 1.2% reduction. Employees foresee productivity gains of 0.92%, below the executive forecast of 2.25%.

This divergence reflects different vantage points. Executives observe cost structures and competitive pressure, while employees experience task-level augmentation and new capabilities. In practice, AI systems are often deployed to assist rather than replace, particularly in knowledge-intensive work. Evidence from controlled trials, including large language model use in customer support and professional services, shows productivity gains concentrated among less experienced staff, with quality improvements appearing alongside better output figures. Where communication and training are clear, adoption tends to proceed with limited resistance.

Why this AI impact data merits attention

Survey design shapes the inferences any statistics will support, and in this case the researchers noted variation between their own figures and others'. A McKinsey survey taken in the same period put adoption at 88% of organisations, against the 69% found here. On the other hand, the US Census Business Trends and Outlook Survey, which draws on a broader respondent base, estimated AI use at around 9% in early 2024, rising to 18% by December 2025. The gaps reflect differences in sampling, question framing and respondent seniority. Executive surveys tend to capture intent and enterprise-level deployments, while broader business surveys may reflect narrower definitions of AI or earlier stages of implementation.

In the study in question, respondents were phone-verified, unpaid, and predominantly CEOs and CFOs, with over 90% drawn from the UK and Germany. The data was cross-checked against ten years of macro output and employment figures from national statistics agencies.

The inflection point executives anticipate may unfold over the next three years as deployments mature and integration improves, much as earlier technologies entered the workplace gradually before becoming everyday tools. The central question is less whether AI will affect productivity and employment, and more how quickly organisations can convert the technology’s wider adoption into measurable economic gains.

See also: OpenAI’s enterprise push: The hidden story behind AI’s sales race


Agentic AI drives finance ROI in accounts payable automation
https://www.artificialintelligence-news.com/news/agentic-ai-drives-finance-roi-in-accounts-payable-automation/
Fri, 13 Feb 2026 12:33:33 +0000

Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows.

While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.

Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.

Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems.

These experiments often fail to pay off. Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow.

Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”

Accounts payable as the proving ground for agentic AI in finance

Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.

Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm functions with high autonomy when parameters are correct.
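
The rules-based core of duplicate detection can be sketched in a few lines: fingerprint each invoice by the fields that identify it and flag repeats. Real systems, including Basware's context-aware models, are far richer; the normalisation and field choices below are assumptions for illustration.

```python
import hashlib

def fingerprint(vendor_id: str, invoice_no: str, amount_cents: int) -> str:
    """Derive a stable key from identifying fields, normalising formatting."""
    key = f"{vendor_id}|{invoice_no.strip().upper()}|{amount_cents}"
    return hashlib.sha256(key.encode()).hexdigest()

seen: set[str] = set()

def is_duplicate(vendor_id: str, invoice_no: str, amount_cents: int) -> bool:
    """Flag an invoice whose fingerprint has been processed before."""
    fp = fingerprint(vendor_id, invoice_no, amount_cents)
    if fp in seen:
        return True
    seen.add(fp)
    return False

first = is_duplicate("V100", "inv-0042", 12500)    # first sighting
second = is_duplicate("V100", "INV-0042 ", 12500)  # same invoice, noisier formatting
```

Even this toy version shows why structured data matters: the normalisation step is what lets the system treat two differently formatted submissions of one invoice as the same document.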

Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight.

Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.

The build versus buy decision matrix

Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.

Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build them in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.

This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.

Governance as an enabler of speed

Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.

Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) compared to their less confident peers (6%).

Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues. The system requires trust but should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, ensuring a human remains in the loop to maintain responsibility.
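
The "junior colleague" approach can be sketched as a routing rule: the agent pays autonomously below a threshold and escalates to a human reviewer above it. The threshold value and labels here are assumptions, not any Basware configuration.

```python
# Graduated autonomy: a hypothetical per-payment approval threshold.
APPROVAL_THRESHOLD_GBP = 1_000.00

def route_payment(amount_gbp: float, passed_checks: bool) -> str:
    """Decide whether the agent may settle an invoice on its own."""
    if not passed_checks:
        return "reject"                # failed compliance or fraud checks
    if amount_gbp <= APPROVAL_THRESHOLD_GBP:
        return "auto-approve"          # within the agent's delegated autonomy
    return "escalate-to-human"         # a human stays in the loop

decisions = [route_payment(250.0, True),
             route_payment(5_000.0, True),
             route_payment(100.0, False)]
```

Raising the threshold over time, as small-scale deployments prove reliable, is one concrete way to "introduce autonomy slowly" while keeping a human responsible for large decisions.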

Digital workers raise concerns regarding displacement. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminating it.

Automating manual tasks such as information extraction from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.

Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.

Executives must move beyond unguided experimentation to replicate the success of early adopters. Data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI.

Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.

See also: AI deployment in financial services hits an inflection point as Singapore leads the shift to production


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Agentic AI drives finance ROI in accounts payable automation appeared first on AI News.

Barclays bets on AI to cut costs and boost returns https://www.artificialintelligence-news.com/news/barclays-bets-on-ai-to-cut-costs-and-boost-returns/ Wed, 11 Feb 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112155 Barclays recorded a 12 % jump in annual profit for 2025, reporting £9.1 billion in earnings before tax, up from £8.1 billion a year earlier. The bank also raised its performance targets out through 2028, aiming for a return on tangible equity (RoTE) of more than 14 %, up from a previous goal of above […]

The post Barclays bets on AI to cut costs and boost returns appeared first on AI News.

Barclays recorded a 12% jump in annual profit for 2025, reporting £9.1 billion in earnings before tax, up from £8.1 billion a year earlier. The bank also raised its performance targets through 2028, aiming for a return on tangible equity (RoTE) of more than 14%, up from a previous goal of above 12% by 2026. A growing US business and cost reductions underpinned this outcome, with Barclays citing AI as a key driver of those efficiency gains.

At a time when many large companies are still experimenting with AI pilots, Barclays is tying the technology directly to its cost structure and profit outlook. In public statements and investor filings, leadership positions AI as one of the levers that can help the bank sustain lower costs and improved returns, especially as macroeconomic conditions shift.

Barclays’ 12% profit rise this week matters not just for its shareholders, but because it reflects a broader trend: traditional, highly regulated firms are now positioning AI as a core part of running the business, not something kept in separate innovation labs. For companies outside tech, linking AI to measurable results such as profit and efficiency marks a shift toward operational use over hype.

Why AI matters for cost discipline

Barclays has said that technology such as AI is part of its plan to cut costs and make its operations more efficient. That includes trimming parts of the legacy technology stack and rethinking where and how work happens. Investment in AI tools complements broader cost savings goals that stretch back multiple years.

For many large companies, labour and legacy systems still make up a large chunk of operating expenses. Using AI to automate repetitive tasks or streamline data processing can reduce that burden. In Barclays’ case, these efficiencies are part of the bank’s rationale for setting higher performance targets, even though margins remain under pressure in parts of its business.

It’s important to be specific about what these efficiencies mean in practice. AI technologies (for example, models that assist with risk analysis, customer service workflows, and internal reporting) can reduce the hours staff spend on manual work. That doesn’t always mean cutting jobs outright, but it can lower the overall cost base, especially in functions that are routine or transaction-driven.

From investment to impact

Investments in AI don’t translate to results overnight. Barclays’ approach combines these tools with structural cost reduction programmes, helping the bank manage expenses at a time when revenue growth alone isn’t enough to lift returns to desired levels.

Barclays’ performance targets for 2028 reflect this dual focus. The bank’s leadership has said that its plans include returning more than £15 billion to shareholders between 2026 and 2028, supported by improved efficiency and profit strength.

Often, companies talk about technology investment in vague terms. Barclays’ latest figures make the link between tech and profit more concrete: the 12% profit rise was reported in the same breath as the role of technology in trimming costs. It’s not the only factor; improved market conditions and growth in the US also helped. But it’s clearly part of the narrative that management is presenting to investors.

This emphasis on cost discipline and profit impact sets Barclays apart from firms that treat AI as a long-term bet or a future project. Here, AI is integrated into ongoing cost management and financial planning, giving the bank a plausible pathway to stronger returns in the years ahead.

What this means for legacy firms

Barclays is far from unique in exploring AI for cost savings and efficiency. Other banks have also flagged technology investments as part of broader restructuring efforts. But what makes Barclays’ case noteworthy is the scale of the strategy and the way it is tied to measured performance targets, not just experimentation or small-scale pilots.

In traditional industries, especially ones as regulated as banking, adopting AI is harder than in tech startups. Firms must navigate compliance, risk, customer privacy, and legacy systems that weren’t designed for automation. Yet Barclays’ public comments suggest that the bank is now comfortable enough with these tools to anchor part of its financial forecast on them. That signals a degree of maturity in how the institution operationalises AI.

Barclays isn’t simply building isolated AI projects; leadership is weaving technology into cost discipline, modernisation of systems, and long-term planning. That shift matters because it shows how legacy firms, even those with large, complex operations, can start to move beyond pilots and into business-wide use cases that affect the bottom line.

For other end-user companies evaluating AI investments, Barclays offers a working example: a large, regulated company can use technology to help hit cost and profitability targets, not just to explore new capabilities.

(Photo by Jose Marroquin)

See also: Goldman Sachs tests autonomous AI agents for process-heavy work

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


AI Expo 2026 Day 2: Moving experimental pilots to AI production https://www.artificialintelligence-news.com/news/ai-expo-2026-day-2-moving-experimental-pilots-ai-production/ Thu, 05 Feb 2026 16:08:36 +0000 https://www.artificialintelligence-news.com/?p=112021 The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition. Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more […]

The post AI Expo 2026 Day 2: Moving experimental pilots to AI production appeared first on AI News.


The second day of the co-located AI & Big Data Expo and Digital Transformation Week in London showed a market in a clear transition.

Early excitement over generative models is fading. Enterprise leaders now face the friction of fitting these tools into current stacks. Day two sessions focused less on large language models and more on the infrastructure needed to run them: data lineage, observability, and compliance.

Data maturity determines deployment success

AI reliability depends on data quality. DP Indetkar from Northern Trust warned against allowing AI to become a “B-movie robot.” This scenario occurs when algorithms fail because of poor inputs. Indetkar noted that analytics maturity must come before AI adoption. Automated decision-making amplifies errors rather than reducing them if the data strategy is unverified.

Eric Bobek of Just Eat supported this view. He explained how data and machine learning guide decisions at the global enterprise level. Investments in AI layers are wasted if the data foundation remains fragmented.

Mohsen Ghasempour from Kingfisher also noted the need to turn raw data into real-time actionable intelligence. Retail and logistics firms must cut the latency between data collection and insight generation to see a return.

Scaling in regulated environments

The finance, healthcare, and legal sectors have near-zero tolerance for error. Pascal Hetzscholdt from Wiley addressed these sectors directly.

Hetzscholdt stated that responsible AI in science, finance, and law relies on accuracy, attribution, and integrity. Enterprise systems in these fields need audit trails. Reputational damage or regulatory fines make “black box” implementations impossible.

Konstantina Kapetanidi of Visa outlined the difficulties in building multilingual, tool-using, scalable generative AI applications. Models are becoming active agents that execute tasks rather than just generating text. Allowing a model to use tools – like querying a database – creates security vectors that need serious testing.

Parinita Kothari from Lloyds Banking Group detailed the requirements for deploying, scaling, monitoring, and maintaining AI systems. Kothari challenged the “deploy-and-forget” mentality. AI models need continuous oversight, similar to traditional software infrastructure.

The change in developer workflows

Of course, AI is fundamentally changing how code is written. A panel with speakers from Valae, Charles River Labs, and Knight Frank examined how AI copilots reshape software creation. While these tools speed up code generation, they also force developers to focus more on review and architecture.

This change requires new skills. A panel with representatives from Microsoft, Lloyds, and Mastercard discussed the tools and mindsets needed for future AI developers. A gap exists between current workforce capabilities and the needs of an AI-augmented environment. Executives must plan training programmes that ensure developers sufficiently validate AI-generated code.

Dr Gurpinder Dhillon from Senzing and Alexis Ego from Retool presented low-code and no-code strategies. Ego described using AI with low-code platforms to make production-ready internal apps. This method aims to cut the backlog of internal tooling requests.

Dhillon argued that these strategies speed up development without dropping quality. For the C-suite, this suggests cheaper internal software delivery if governance protocols stay in place.

Workforce capability and specific utility

The broader workforce is starting to work with “digital colleagues.” Austin Braham from EverWorker explained how agents reshape workforce models. This terminology implies a move from passive software to active participants. Business leaders must re-evaluate human-machine interaction protocols.

Paul Airey from Anthony Nolan gave an example of AI delivering literally life-changing value. He detailed how automation improves donor matching and transplant timelines for stem cell transplants. The utility of these technologies extends to life-saving logistics.

A recurring theme throughout the event is that effective applications often solve very specific and high-friction problems rather than attempting to be general-purpose solutions.

Managing the transition

The day two sessions from the co-located events show that enterprise focus has now moved to integration. The initial novelty is gone and has been replaced by demands for uptime, security, and compliance. Innovation heads should assess which projects have the data infrastructure to survive contact with the real world.

Organisations must prioritise the basic aspects of AI: cleaning data warehouses, establishing legal guardrails, and training staff to supervise automated agents. The difference between a successful deployment and a stalled pilot lies in these details.

Executives, for their part, should direct resources toward data engineering and governance frameworks. Without them, advanced models will fail to deliver value.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise https://www.artificialintelligence-news.com/news/ai-expo-2026-day-1-governance-data-readiness-enable-agentic-enterprise/ Wed, 04 Feb 2026 16:33:34 +0000 https://www.artificialintelligence-news.com/?p=112005 While the prospect of AI acting as a digital co-worker dominated the day one agenda at the co-located AI & Big Data Expo and Intelligent Automation Conference, the technical sessions focused on the infrastructure to make it work. A primary topic on the exhibition floor was the progression from passive automation to “agentic” systems. These […]

The post AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise appeared first on AI News.

While the prospect of AI acting as a digital co-worker dominated the day one agenda at the co-located AI & Big Data Expo and Intelligent Automation Conference, the technical sessions focused on the infrastructure to make it work.

A primary topic on the exhibition floor was the progression from passive automation to “agentic” systems. These tools reason, plan, and execute tasks rather than following rigid scripts. Amal Makwana from Citi detailed how these systems act across enterprise workflows. This capability separates them from earlier robotic process automation (RPA).

Scott Ivell and Ire Adewolu of DeepL described this development as closing the “automation gap”. They argued that agentic AI functions as a digital co-worker rather than a simple tool. Real value is unlocked by reducing the distance between intent and execution. Brian Halpin from SS&C Blue Prism noted that organisations typically must master standard automation before they can deploy agentic AI.

This change requires governance frameworks capable of handling non-deterministic outcomes. Steve Holyer of Informatica, alongside speakers from MuleSoft and Salesforce, argued that architecting these systems requires strict oversight. A governance layer must control how agents access and utilise data to prevent operational failure.

Data quality blocks deployment

The output of an autonomous system relies on the quality of its input. Andreas Krause from SAP stated that AI fails without trusted, connected enterprise data. For GenAI to function in a corporate context, it must access data that is both accurate and contextually relevant.

Meni Meller of Gigaspaces addressed the technical challenge of “hallucinations” in LLMs. He advocated for the use of eRAG (retrieval-augmented generation) combined with semantic layers to fix data access issues. This approach allows models to retrieve factual enterprise data in real-time.
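As a rough illustration of the retrieval-augmented pattern described here, the sketch below retrieves the most relevant enterprise documents for a query and grounds the prompt in them. This is not Gigaspaces' eRAG implementation: a production system would use embeddings, a vector index, and a semantic layer, whereas this dependency-free sketch ranks by simple word overlap, and all names and documents are hypothetical.

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query. Real systems use
    embeddings and a vector index; overlap keeps the sketch dependency-free."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model: instruct it to answer only from retrieved data."""
    context = "\n".join(retrieve(query, documents, k=2))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Q3 revenue for the EMEA region was 4.2 million euros.",
    "The refund policy allows returns within 30 days of purchase.",
    "Headcount in the Berlin office grew to 120 employees in 2025.",
]
print(build_prompt("What was Q3 revenue in EMEA?", docs))
```

The key point is the explicit grounding instruction: the model is steered toward retrieved facts rather than its own parametric guesses, which is what reduces hallucination.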

Storage and analysis also present challenges. A panel featuring representatives from Equifax, British Gas, and Centrica discussed the necessity of cloud-native, real-time analytics. For these organisations, competitive advantage comes from the ability to execute analytics strategies that are scalable and immediate.

Physical safety and observability

The integration of AI extends into physical environments, introducing safety risks that differ from software failures. A panel including Edith-Clare Hall from ARIA and Matthew Howard from IEEE RAS examined how embodied AI is deployed in factories, offices, and public spaces. Safety protocols must be established before robots interact with humans.

Perla Maiolino from the Oxford Robotics Institute provided a technical perspective on this challenge. Her research into Time-of-Flight (ToF) sensors and electronic skin aims to give robots both self-awareness and environmental awareness. For industries such as manufacturing and logistics, these integrated perception systems prevent accidents.

In software development, observability remains a parallel concern. Yulia Samoylova from Datadog highlighted how AI changes the way teams build and troubleshoot software. As systems become more autonomous, the ability to observe their internal state and reasoning processes becomes necessary for reliability.

Infrastructure and adoption barriers

Implementation demands reliable infrastructure and a receptive culture. Julian Skeels from Expereo argued that networks must be designed specifically for AI workloads. This involves building sovereign, secure, and “always-on” network fabrics capable of handling high throughput.

Of course, the human element remains unpredictable. Paul Fermor from IBM Automation warned that traditional automation thinking often underestimates the complexity of AI adoption. He termed this the “illusion of AI readiness”. Jena Miller reinforced this point, noting that strategies must be human-centred to ensure adoption. If the workforce does not trust the tools, the technology yields no return.

Ravi Jay from Sanofi suggested that leaders need to ask operational and ethical questions early on in the process. Success depends on deciding where to build proprietary solutions versus where to buy established platforms.

The sessions from day one of the co-located events indicate that, while technology is moving toward autonomous agents, deployment requires a solid data foundation.

CIOs should focus on establishing data governance frameworks that support retrieval-augmented generation. Network infrastructure must be evaluated to ensure it supports the latency requirements of agentic workloads. Finally, cultural adoption strategies must run parallel to technical implementation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground https://www.artificialintelligence-news.com/news/china-hyperscalers-agentic-ai-commerce-battleground/ Fri, 30 Jan 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=111928 The artificial intelligence industry’s pivot toward agentic AI – systems capable of autonomously executing multi-step tasks – has dominated technology discussions in recent months. But while Western firms focus on foundational models and cross-platform interoperability, China’s technology giants are racing to dominate through commerce integration, a divergence that could reshape how enterprises deploy autonomous systems […]

The post China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground appeared first on AI News.

The artificial intelligence industry’s pivot toward agentic AI – systems capable of autonomously executing multi-step tasks – has dominated technology discussions in recent months.

But while Western firms focus on foundational models and cross-platform interoperability, China’s technology giants are racing to dominate through commerce integration, a divergence that could reshape how enterprises deploy autonomous systems globally.

Alibaba, Tencent and ByteDance have rapidly upgraded their AI platforms to support agentic commerce, marking a pivot from conversational AI tools to agents capable of completing entire transaction cycles, from product discovery through payment.

Just last week, Alibaba upgraded its Qwen chatbot to allow transaction completion directly in the interface, connecting the AI agent to its wider ecosystem, including Taobao, Alipay, Amap and travel platform Fliggy. The integration supports over 400 core digital tasks, allowing users to compare personalised recommendations across platforms and complete payments without leaving the chatbot environment.

“The agentic transformation of commercial services lets the maximal integration of user services and enhances user stickiness,” Shaochen Wang, research analyst at Counterpoint Research, told CNBC, referring to stronger long-term user engagement that creates sustainable competitive advantages.

The super app advantage

Before that, ByteDance upgraded its Doubao AI chatbot in December to autonomously handle tasks, including ticket bookings, through integrations with Douyin, the Chinese version of TikTok. The upgraded model was introduced on a ZTE-developed prototype smartphone as a system-level AI assistant; however, some planned features were later scaled back due to privacy and security concerns raised by rivals.

Tencent President Martin Lau indicated during the company’s May 2025 earnings call that AI agents could become core components of the WeChat ecosystem, which serves over one billion users with integrated messaging, payments, e-commerce and services.

The positioning reflects China’s structural advantage in agentic AI deployment: integrated ecosystems that eliminate the fragmentation constraining Western competitors.

“AI agents will be foundational to the evolution of super apps, with success depending on deep integration in payments, logistics, and social engagement,” Charlie Dai, VP and principal analyst at Forrester, told CNBC. “Chinese firms like Alibaba, Tencent and ByteDance all benefit from integrated ecosystems, rich behavioural data, and consumer familiarity with super apps.”

Western companies face more fragmented data environments and stricter privacy regulations that slow cross-service integration, despite leading in foundational AI model development and global reach, Dai noted.

Agentic AI’s enterprise trajectory

Commercial applications signal broader enterprise implications as agentic AI moves from auxiliary tools to autonomous actors capable of executing complex workflows. Industry experts expect multi-agent systems to emerge as a defining trend in AI deployment this year, extending from consumer services into organisational production.

In a report by Global Times, Tian Feng, president of the Fast Think Institute and former dean of SenseTime’s Intelligence Industry Research Institute, predicted that the first AI agent to surpass 300 million monthly active users could emerge as early as 2026, becoming “an indispensable assistant for work and daily life” capable of autonomously executing cross-app, composite services.

Approximately half of all consumers already use AI when searching online, according to a 2025 McKinsey study. The research firm estimated that AI agents could generate more than $1 trillion in economic value for US businesses by 2030 through streamlining routine steps in consumer decision-making.

Chinese cloud providers, including smaller players like JD Cloud and UCloud, have also begun supporting agentic AI tools, though high token use has driven some providers, like ByteDance’s Volcano Engine, to introduce fixed-subscription pricing models to address cost concerns.

Divergent deployment strategies

The contrasting approaches between Chinese integration and Western scalability reflect fundamental differences in market structure and regulatory environments that will likely define competitive positioning.

“China will prioritise domestic integration and expansion in selected regions, while US firms focus on global scalability and governance,” Dai said.

US players pursuing agentic commerce include OpenAI, Perplexity, and Amazon, while Google explores positioning itself as a “matchmaker” between merchants, consumers and AI agents – approaches that reflect fragmented platform environments requiring interoperability rather than closed-loop integration.

However, the autonomous nature of agentic systems has raised regulatory questions in China. ByteDance warned users about security and privacy risks when announcing Doubao’s abilities, recommending deployment on dedicated devices rather than those containing sensitive information, given the tool’s access to device data, digital accounts and internet connectivity across multiple ports.

The rapid commercialisation of agentic AI in China’s consumer sector provides enterprise decision-makers globally with early signals of how autonomous systems may reshape customer acquisition costs, platform economics and competitive moats as these abilities mature.

(Photo by Philip Oroni)

See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


Franny Hsiao, Salesforce: Scaling enterprise AI https://www.artificialintelligence-news.com/news/franny-hsiao-salesforce-scaling-enterprise-ai/ Wed, 28 Jan 2026 15:00:44 +0000 https://www.artificialintelligence-news.com/?p=111906 Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance. Ahead of AI & Big Data Global 2026 in […]

The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.

Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.

Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.


“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end to end governance from the start,” Hsiao explains.

“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”

When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.”

Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.

Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.

“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”
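The streaming idea can be sketched generically. This is an illustrative generator only, not Agentforce Streaming's actual API: the function and parameter names are assumptions, and the sleep stands in for per-chunk model latency.

```python
import time
from typing import Iterator

def stream_response(full_answer: str, chunk_size: int = 8) -> Iterator[str]:
    """Yield the answer in small chunks as they become available, so the UI
    can render partial text instead of waiting for the full completion."""
    for i in range(0, len(full_answer), chunk_size):
        yield full_answer[i:i + chunk_size]
        time.sleep(0.01)  # stand-in for per-chunk generation latency

# The UI appends each chunk as it arrives; total wait is unchanged,
# but the user sees output almost immediately.
rendered = ""
for chunk in stream_response("The reasoning engine is still working, here is a partial answer..."):
    rendered += chunk
print(rendered)
```

The total time to the final token is unchanged; what improves is time-to-first-token, which is what users actually perceive as responsiveness.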

Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.

“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.

Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.

“A technician can photograph a faulty part, error code, or serial number while offline. Then an on-device LLM can then identify the asset or error, and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.

Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
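The offline-first pattern described here can be sketched as a local action queue that flushes on reconnect. The class and field names below are hypothetical, not Salesforce's implementation; a real client would persist the queue to disk and push to the cloud with retries and conflict handling.

```python
class OfflineActionQueue:
    """Buffer field actions locally while offline; flush to the cloud
    when connectivity returns, keeping the server the source of truth."""

    def __init__(self):
        self.pending = []   # local buffer; durable (on-disk) in a real app
        self.synced = []    # stand-in for the cloud-side record

    def record(self, action: dict, online: bool):
        """Apply immediately when online, otherwise queue for later."""
        if online:
            self._push([action])
        else:
            self.pending.append(action)

    def on_reconnect(self):
        """Called when the device regains a connection."""
        self._push(self.pending)
        self.pending = []

    def _push(self, actions):
        self.synced.extend(actions)  # a real client would POST with retries

q = OfflineActionQueue()
q.record({"asset": "pump-7", "note": "error E42 photographed"}, online=False)
q.record({"asset": "pump-7", "note": "replaced seal"}, online=False)
q.on_reconnect()
print(len(q.synced), len(q.pending))
```

The design choice worth noting is that local work never blocks on the network: the technician's workflow completes against the queue, and reconciliation is deferred.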

Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”

Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:

“This includes specific action categories, including any ‘CUD’ (Creating, Uploading, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”

This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.
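The gateway idea can be sketched as a simple policy check in front of the agent's actions. The action string format and verb list below are illustrative assumptions, not Salesforce's actual implementation; the point is only that high-stakes categories are routed through a human approver while read-only actions proceed autonomously.

```python
# Hypothetical list of high-stakes verbs, per the "CUD" and customer-contact
# categories described above (not Salesforce's actual policy set).
HIGH_STAKES_VERBS = {"create", "update", "upload", "delete", "contact_customer"}

def requires_human_approval(action: str) -> bool:
    """Route create/update/delete-style or customer-contact actions
    through a human gate; everything else runs autonomously."""
    verb = action.split(":", 1)[0].lower()
    return verb in HIGH_STAKES_VERBS

def execute(action: str, approver=None) -> str:
    """Run an action, blocking high-stakes ones until a human approves."""
    if requires_human_approval(action):
        if approver is None or not approver(action):
            return f"BLOCKED pending approval: {action}"
    return f"EXECUTED: {action}"

print(execute("read:account_balance"))                    # autonomous
print(execute("delete:customer_record/123"))              # blocked, no approver
print(execute("delete:customer_record/123", approver=lambda a: True))
```

Because the gate is a deterministic policy layer outside the model, it also limits what prompt manipulation can achieve: even a compromised plan cannot execute a high-stakes action without the human sign-off.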

Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.

“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.
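The kind of turn-by-turn record described above could be modelled roughly like this; the class and field names are assumptions based on the quote, not Salesforce's actual STDM schema:

```python
from dataclasses import dataclass, field

# Rough illustration of a per-turn trace record. Field names follow
# the quote above; the class is hypothetical, not the real STDM.
@dataclass
class TurnTrace:
    user_question: str
    planner_steps: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    retrieved_chunks: list = field(default_factory=list)
    response: str = ""
    latency_ms: float = 0.0
    errors: list = field(default_factory=list)

trace = TurnTrace(user_question="Where is my order?")
trace.planner_steps.append("lookup_order")
trace.tool_calls.append({"tool": "order_api", "input": {"id": 123}})
trace.response = "Your order ships tomorrow."
print(trace.user_question)
```

Capturing inputs, outputs, timing, and errors per turn is what makes the downstream analytics possible: each turn becomes one queryable row.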

This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.

“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need a common language,” argues Hsiao.

Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent to Agent Protocol).

“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”

However, communication is useless if the agents interpret data differently. To address fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.

Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”

“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.

Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.

See also: Databricks: Enterprise AI adoption shifts to agentic systems

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.

White House compares industrial revolution with AI era https://www.artificialintelligence-news.com/news/white-house-predicts-ai-growth-with-comparison-industrial-and-artificial-intelligence-revolutions/ Wed, 28 Jan 2026 12:04:00 +0000 https://www.artificialintelligence-news.com/?p=111895

The post White House compares industrial revolution with AI era appeared first on AI News.

A White House paper titled “Artificial Intelligence and the Great Divergence” sets out parallels between the effects of the industrial revolution in the 18th and 19th centuries and the current times, with artificial intelligence positioned as guiding the way the world’s economies will be shaped.

Artificial intelligence now sits at the centre of US economic strategy and represents a significant portion of the country’s economic activity, characterised by the building of AI infrastructure, most notably data centres. The paper says AI investment raised US GDP by 1.3% in the first half of 2025, and compares this with investment in the railway network during the industrial revolution.

“Artificial Intelligence and the Great Divergence” says long-term growth depends primarily on gains in productivity, and AI is the tool to achieve those gains. It presents a range of estimates of AI’s impact on GDP, from single-digit increases to 20% productivity growth inside a decade. It also floats more extreme scenarios in which GDP grows at more than 45% as AI substitutes for human labour over the longer term.

Capital deployment in the form of building AI infrastructure, not growing consumption or public spending, is now creating US economic growth. Investment in data processing equipment, buildings, infrastructure, and software grew 28% in early 2025, and AI-related infrastructure represented around a quarter of all US investment in 2025.

Training compute capacity used by AI models has increased roughly four-fold per year since 2010, and the length of tasks AI systems can complete has doubled every seven months for six years, the paper states. The cost per token of AI output has fallen by factors ranging from nine to nine hundred per year, depending on task and model.

By late 2025, around 78% of organisations reported using AI, up from 55% in 2024, and it’s claimed that 40% of US workers use generative AI in their jobs. Nearly half of US businesses now pay for AI subscriptions. The report presents these figures as evidence that AI has moved from experimentation into routine production.

Internationally, the document frames AI as a factor in the divergence of economic prosperity, with AI increasing America’s GDP growth faster than Europe’s or China’s. The US currently leads in private AI investment, model development, and compute capacity, while the EU’s share of world GDP has fallen since 1980 and the continent lags on comparable AI metrics – investment, construction, software development, and overall capacity. China remains a major player in AI, but the report notes that much of its model training relies on US-designed hardware.

The White House publication advocates for an integrated national strategy with investment incentives at its core. The One Big Beautiful Bill Act gave significant financial breaks for data centres and IT infrastructure, and created favourable conditions for speedy facility construction, in line with the Act’s aim to lift GDP growth by more than a percentage point per year over the medium term. The report argues that deregulation in the AI industry supports productivity by lowering costs, increasing competition, and speeding innovation. Trade agreements and foreign policy reinforce this approach, with overseas partners committing to large purchases of US-derived AI chips and infrastructure.

The paper notes that AI data centres are electricity-intensive, and projects that demand for power by AI infrastructure could reach up to 12% of domestic electricity consumption by 2028. It links the success of AI to energy availability and the ability of the power grid to deliver, positioning the control of energy supply as a prerequisite for international leadership in AI.

The report concludes that the countries leading in AI investment and adoption will experience higher-than-average growth. The United States is aligning a raft of policies to secure its leading position in the sector. Businesses that build systems in line with its national goals will be part of a dominant economic force shaping the next phase of global growth.

(Image source: “Chicago Thaws into Spring” by Trey Ratcliff is licensed under CC BY-NC-SA 2.0.)



Databricks: Enterprise AI adoption shifts to agentic systems https://www.artificialintelligence-news.com/news/databricks-enterprise-ai-adoption-shifts-agentic-systems/ Tue, 27 Jan 2026 17:26:45 +0000 https://www.artificialintelligence-news.com/?p=111880

The post Databricks: Enterprise AI adoption shifts to agentic systems appeared first on AI News.

According to Databricks, enterprise AI adoption is shifting to agentic systems as organisations embrace intelligent workflows.

Generative AI’s first wave promised business transformation but often delivered little more than isolated chatbots and stalled pilot programmes. Technology leaders found themselves managing high expectations with limited operational utility. However, new telemetry from Databricks suggests the market has turned a corner.

Data from over 20,000 organisations – including 60 percent of the Fortune 500 – indicates a rapid shift toward “agentic” architectures where models do not just retrieve information but independently plan and execute workflows.

This evolution represents a fundamental reallocation of engineering resources. Between June and October 2025, the use of multi-agent workflows on the Databricks platform grew by 327 percent. This surge signals that AI is graduating to a core component of system architecture.

The ‘Supervisor Agent’ drives enterprise adoption of agentic AI

Driving this growth is the ‘Supervisor Agent’. Rather than relying on a single model to handle every request, a supervisor acts as an orchestrator, breaking down complex queries and delegating tasks to specialised sub-agents or tools.

Since its launch in July 2025, the Supervisor Agent has become the leading agent use case, accounting for 37 percent of usage by October. This pattern mirrors human organisational structures: a manager does not perform every task but ensures the team executes them. Similarly, a supervisor agent manages intent detection and compliance checks before routing work to domain-specific tools.

Technology companies currently lead this adoption, building nearly four times more multi-agent systems than any other industry. Yet the utility extends across sectors. A financial services firm, for instance, might employ a multi-agent system to handle document retrieval and regulatory compliance simultaneously, delivering a verified client response without human intervention.
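A stripped-down sketch of the supervisor pattern, with hypothetical sub-agents standing in for the document-retrieval and compliance steps in the example above; the routing logic and names are illustrative only:

```python
# Illustrative supervisor-agent pattern: an orchestrator breaks a
# request into tasks and delegates each to a specialised sub-agent.
# Sub-agent names and the routing rules are assumptions, not
# Databricks' implementation.
def retrieval_agent(query: str) -> str:
    return f"documents for: {query}"

def compliance_agent(query: str) -> str:
    return f"compliance check passed for: {query}"

SUB_AGENTS = {
    "retrieve": retrieval_agent,
    "compliance": compliance_agent,
}

def supervisor(query: str) -> dict:
    """Decompose the request and delegate each task to a sub-agent."""
    tasks = ["retrieve", "compliance"]  # intent detection stubbed out
    return {task: SUB_AGENTS[task](query) for task in tasks}

result = supervisor("client onboarding rules")
print(result["compliance"])
```

As in the managerial analogy above, the supervisor performs no domain work itself; its job is decomposition, delegation, and assembling the verified response.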

Traditional infrastructure under pressure

As agents graduate from answering questions to executing tasks, underlying data infrastructure faces new demands. Traditional Online Transaction Processing (OLTP) databases were designed for human-speed interactions with predictable transactions and infrequent schema changes. Agentic workflows invert these assumptions.

AI agents now generate continuous, high-frequency read and write patterns, often creating and tearing down environments programmatically to test code or run scenarios. The scale of this automation is visible in the telemetry data. Two years ago, AI agents created just 0.1 percent of databases; today, that figure sits at 80 percent.

Furthermore, 97 percent of database testing and development environments are now built by AI agents. This capability allows developers and “vibe coders” to spin up ephemeral environments in seconds rather than hours. Over 50,000 data and AI apps have been created since the Public Preview of Databricks Apps, with a 250 percent growth rate over the past six months.

The multi-model standard

Vendor lock-in remains a persistent risk for enterprise leaders as they seek to increase agentic AI adoption. The data indicates that organisations are actively mitigating this by adopting multi-model strategies. As of October 2025, 78 percent of companies utilised two or more Large Language Model (LLM) families, such as GPT, Claude, Llama, and Gemini.

The sophistication of this approach is increasing. The proportion of companies using three or more model families rose from 36 percent to 59 percent between August and October 2025. This diversity allows engineering teams to route simpler tasks to smaller and more cost-effective models while reserving frontier models for complex reasoning.
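Such tiered routing might look something like the following sketch; the model names and the word-count heuristic are assumptions for illustration, not from the report:

```python
# Illustrative cost-aware model router: simple requests go to a small
# model, complex reasoning goes to a frontier model. The model names
# and the complexity heuristic are hypothetical.
MODELS = {"small": "llama-small", "frontier": "claude-frontier"}

def route(prompt: str) -> str:
    """Pick a model tier from a crude complexity heuristic (word count)."""
    tier = "frontier" if len(prompt.split()) > 20 else "small"
    return MODELS[tier]

print(route("summarise this ticket"))  # short prompt, routed to the small model
```

In practice the routing signal would be richer than word count (task type, required reasoning depth, latency budget), but the economics are the same: reserve the expensive model for the requests that need it.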

Retail companies are setting the pace, with 83 percent employing two or more model families to balance performance and cost. A unified platform capable of integrating various proprietary and open-source models is rapidly becoming a prerequisite for the modern enterprise AI stack.

In contrast to the batch-processing legacy of big data, agentic AI operates overwhelmingly in the moment: the report highlights that 96 percent of all inference requests are processed in real-time.

This is particularly evident in sectors where latency correlates directly with value. The technology sector processes 32 real-time requests for every single batch request. In healthcare and life sciences, where applications may involve patient monitoring or clinical decision support, the ratio is 13 to one. For IT leaders, this reinforces the need for inference serving infrastructure capable of handling traffic spikes without degrading user experience.

Governance accelerates enterprise AI deployments

Perhaps the most counter-intuitive finding for many executives is the relationship between governance and velocity. Often viewed as a bottleneck, rigorous governance and evaluation frameworks function as accelerators for production deployment.

Organisations using AI governance tools put over 12 times more AI projects into production compared to those that do not. Similarly, companies employing evaluation tools to systematically test model quality achieve nearly six times more production deployments.

The rationale is straightforward. Governance provides necessary guardrails – such as defining how data is used and setting rate limits – which gives stakeholders the confidence to approve deployment. Without these controls, pilots often get stuck in the proof-of-concept phase due to unquantified safety or compliance risks.

The value of ‘boring’ enterprise automation from agentic AI

While autonomous agents often conjure images of futuristic capabilities, current enterprise value from agentic AI lies in automating the routine, mundane, yet necessary tasks. The top AI use cases vary by sector but focus on solving specific business problems:

  • Manufacturing and automotive: 35% of use cases focus on predictive maintenance.
  • Health and life sciences: 23% of use cases involve medical literature synthesis.
  • Retail and consumer goods: 14% of use cases are dedicated to market intelligence.

Furthermore, 40 percent of the top AI use cases address practical customer concerns such as customer support, advocacy, and onboarding. These applications drive measurable efficiency and build the organisational muscle required for more advanced agentic workflows.

For the C-suite, the path forward involves less focus on the “magic” of AI and more on the engineering rigour surrounding it. Dael Williamson, EMEA CTO at Databricks, highlights that the conversation has shifted.

“For businesses across EMEA, the conversation has moved on from AI experimentation to operational reality,” says Williamson. “AI agents are already running critical parts of enterprise infrastructure, but the organisations seeing real value are those treating governance and evaluation as foundations, not afterthoughts.”

Williamson emphasises that competitive advantage is shifting back towards how companies build, rather than simply what they buy.

“Open, interoperable platforms allow organisations to apply AI to their own enterprise data, rather than relying on embedded AI features that deliver short-term productivity but not long-term differentiation.”

In highly regulated markets, this combination of openness and control is “what separates pilots from competitive advantage.”

See also: Anthropic selected to build government AI assistant pilot



