Human-AI Relationships - AI News https://www.artificialintelligence-news.com/categories/ai-and-us/human-ai-relationships/ Artificial Intelligence News Fri, 06 Mar 2026 13:54:39 +0000 en-GB hourly 1 https://wordpress.org/?v=6.9.1 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png Human-AI Relationships - AI News https://www.artificialintelligence-news.com/categories/ai-and-us/human-ai-relationships/ 32 32 Physical AI is having its moment–and everyone wants a piece of it https://www.artificialintelligence-news.com/news/physical-ai-global-race-robots-manufacturing-2026/ Wed, 04 Mar 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112502 There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can. The term […]

The post Physical AI is having its moment–and everyone wants a piece of it appeared first on AI News.

]]>
There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can.

The term itself–physical AI–is simple enough. It describes AI systems that don’t just process data or generate content, but perceive, reason, and act in the real world–robots, autonomous vehicles, machines that adapt. Nvidia CEO Jensen Huang called it “the ChatGPT moment for robotics” at CES in January–a deliberate framing, and a useful one. 

The ChatGPT comparison isn’t about hype. It signals that a technology once confined to research environments is being adopted for mainstream commercial deployment. That crossing is exactly what we are watching unfold from factory floors in Silicon Valley to stages in Shanghai.”

The West is building the stack

On the Western side, the physical AI push is fundamentally a platform race. The companies investing most aggressively aren’t primarily robotics companies–they’re infrastructure companies that see robotics as the next surface on which AI gets monetised.

Nvidia has released new Cosmos and GR00T open models for robot learning and reasoning, alongside the Blackwell-powered Jetson T4000 module, which delivers 4x greater energy efficiency for robotics computing. Arm has carved outan entirely new Physical AI business unit focused on semiconductor design for robotics and intelligent vehicles. 

Siemens and Nvidia announced plans to build what they’re calling an Industrial AI Operating System, with ambitions to create the world’s first fully AI-driven adaptive manufacturing site. Then there’s Google, which last week brought its robotics software unit Intrinsic fully in-house–out of Alphabet’s “Other Bets” and into Google’s core. 

The move positions Google to offer manufacturers a vertically integrated stack: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud. The Android analogy being floated internally is instructive. Android didn’t win smartphones by building the best phone. It won by becoming the layer everything else ran on. 

That is precisely what Google is attempting with physical AI.

The enterprise implications are significant. A Deloitte survey of more than 3,200 global business leaders found that 58% are already using physical AI in some capacity, rising to 80% with plans over the next two years. The demand is there. The question has shifted from whether to adopt to how fast and on whose platform.

The East is building the machines

China’s physical AI story is different in character–and arguably more visceral. At this year’s Spring Festival Gala, humanoid robots from multiple Chinese startups performed kung fu routines, aerial flips, and choreographed dances before hundreds of millions of viewers–a sharp contrast from the stumbling prototypes that drew scepticism just a year prior. 

It was a spectacle, yes. It was also a statement. China accounted for over 80% of global humanoid robot installations in 2025 and over half of the world’s industrial robots. That dominance is underpinned by structural advantages that go beyond software. China controls roughly 70% of the global lidar sensor market, leads in harmonic reducer production–the gears critical to robot movement–and has driven hardware costs down through the same economies of scale that propelled its EV industry. 

Alibaba has entered the race with RynnBrain, an open-source AI model designed to help robots comprehend the physical world and identify objects–positioning itself alongside NVIDIA’s Cosmos and Google DeepMind’s Gemini Robotics in the foundation model layer. With over 140 domestic humanoid manufacturers and more than 330 humanoid models already unveiled, China’s push into embodied AI is no longer experimental–it’s commercial.

Why it matters beyond the headlines

The convergence of Western platform strategies and Eastern manufacturing scale is creating something genuinely new: a global physical AI ecosystem that is advancing on multiple fronts simultaneously, with different competitive advantages colliding.

What makes this moment distinct from prior robotics waves is the removal of the expertise bottleneck. Historically, deploying industrial robots required specialised engineering teams, months of custom programming, and a high tolerance for downtime. The platforms being built now–by Google, Nvidia, Siemens, and their Chinese equivalents–are explicitly designed to lower that barrier. 

Companies like Vention, which raised US$110 million in January, claim their physical AI platforms can reduce automation project timelines from months to days. When that claim becomes routine, the economics of manufacturing change structurally.

There is also a geopolitical dimension that sits quietly beneath the product announcements. Every foundation model for robotics, every platform layer, every semiconductor architecture being developed right now carries with it questions of supply chain dependency, data sovereignty, and long-term infrastructure control. 

The country–or company–that governs the software layer of physical AI will have unusual leverage over industrial operations globally for years to come.

Physical AI is not a trend. It is the next significant reconfiguration of how the world makes things, moves things, and operates at scale. The conversations happening now–from semiconductor boardrooms to factory floors in Shenzhen and Silicon Valley–are not preliminary. They are the thing itself, already underway.

(Photo by Hyundai Motor Group)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Physical AI is having its moment–and everyone wants a piece of it appeared first on AI News.

]]>
Google makes its industrial robotics AI play official–and this time, it means business https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/ Wed, 04 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112499 When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google.  The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and […]

The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

]]>
When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From Moonshot to Mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers that can vary based on the particular robot. Intrinsic’s answer to that is Flowstate–a web-based platform that allows users to build robotic applications without having to write thousands of lines of code. 

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. 

Intrinsic has also expanded through acquisitions–acquiring the Open Source Robotics Corp. in 2022, the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing. 

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI

Banner for the AI & Big Data Expo event series.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

]]>
Physical AI adoption boosts customer service ROI https://www.artificialintelligence-news.com/news/physical-ai-adoption-boosts-customer-service-roi/ Tue, 03 Mar 2026 11:32:47 +0000 https://www.artificialintelligence-news.com/?p=112483 The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction. As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment. […]

The post Physical AI adoption boosts customer service ROI appeared first on AI News.

]]>
The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction.

As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment.

While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies like equipment failures. Customer-facing roles demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions. 

By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments.

Blending hardware with advanced data infrastructure

Deploying humanoids into active commercial spaces requires high-capacity and low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, facilitating remote control capabilities alongside intensive cloud-based data processing. The resulting visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour.

To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s Gemini high-performance generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements.

The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique.

Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance.

Preparing for commercial adoption of physical AI

This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops.

Transitioning from digital and language-driven communication to physical units capable of free movement represents a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in actual commercial facilities starting in Autumn 2026. Deployment at customer touchpoints such as au Style shops will also be considered.

Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces.

Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot

Banner for the AI & Big Data Expo event series.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Physical AI adoption boosts customer service ROI appeared first on AI News.

]]>
Deploying agentic finance AI for immediate business ROI https://www.artificialintelligence-news.com/news/deploying-agentic-finance-ai-for-immediate-business-roi/ Tue, 24 Feb 2026 13:26:20 +0000 https://www.artificialintelligence-news.com/?p=112381 Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets. A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not […]

The post Deploying agentic finance AI for immediate business ROI appeared first on AI News.

]]>
Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets.

A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not fully grasp what these agents look like in practice.

Advancing agentic finance AI beyond experiments

Finance departments need governed systems that combine language processing with business logic to deliver actual value.

Providers of Invoice Lifecycle Management platforms are introducing new agents designed to accelerate invoice processing and push accounts payable toward greater autonomy. Recent market solutions use generative AI, deep learning, and natural language processing to manage the entire workflow, from initial data ingestion through to final reconciliation.

These digital teammates handle task execution, allowing human employees to focus on higher-level business planning rather than replacing them entirely.

Within these ecosystems, specialised business agents provide contextual and real-time guidance regarding the next best actions for handling invoices. Data agents allow staff to query system information using natural language, easily finding answers about awaiting approvals in specific regions or identifying suppliers offering early payment discounts.

Governing autonomous finance workflows

Finance teams will only hand over tasks to agentic AI if they retain control. Finance departments require verifiable audit trails and explainable logic for every action, avoiding networks of disconnected bots.

Industry leaders note that autonomy without trust isn’t acceptable, especially in sensitive industries like finance. Platforms must ensure every AI decision is explainable, auditable, and governed through existing finance controls. This approach helps safely delegate workloads to algorithms while remaining fully compliant and protected.

To enable this trust, every action performed by an AI agent routes through a central policy engine. Before executing any task, the system passes the proposed action through specific autonomy gates that enforce the customer’s business rules, risk thresholds, and compliance requirements. This architecture ensures algorithms manage the bulk of the workload while finance personnel retain total visibility and a complete audit trail.

Building automated procurement operations

Future agentic finance AI capabilities will automate issue resolution and connect data across systems for faster decision-making.

Modern capabilities in 2026 include supplier agents designed to manage invoice disputes and payment queries. These agents will automatically telephone suppliers to explain discrepancies, summarise the conversation, and outline subsequent steps to achieve faster resolutions. Professional agents, meanwhile, will assist clerks in resolving real-time processing questions using natural language to cut manual effort and delays.

AI must operate as an integral business component rather than a bonus feature, requiring intelligent, secure, and ethical application to drive cost efficiencies and enhance operations. By centralising control and ensuring every automated decision from agentic AI passes through established compliance checks, organisations can safely elevate their finance operations to fully autonomous execution.

See also: Mastercard’s AI payment demo points to agent-led commerce

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Deploying agentic finance AI for immediate business ROI appeared first on AI News.

]]>
How Amul is using AI dairy farming to put 36M farmers first https://www.artificialintelligence-news.com/news/amul-ai-dairy-farming-platform-india/ Mon, 23 Feb 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=112344 AI dairy farming has found its most ambitious deployment yet – not in a Silicon Valley lab nor a European agri-tech campus, but in the villages of Gujarat, India, where 36 lakh (3.6 million) women milk producers are now being served by an AI assistant named Sarlaben. Amul, the world’s largest dairy cooperative, has launched […]

The post How Amul is using AI dairy farming to put 36M farmers first appeared first on AI News.

]]>
AI dairy farming has found its most ambitious deployment yet – not in a Silicon Valley lab nor a European agri-tech campus, but in the villages of Gujarat, India, where 36 lakh (3.6 million) women milk producers are now being served by an AI assistant named Sarlaben.

Amul, the world’s largest dairy cooperative, has launched what it calls Amul AI: a platform built on five decades of cooperative data, designed to give every farmer in its network round-the-clock, personalised guidance in their own language.

Amul was launched just ahead of India’s AI Impact Summit 2026 and backed by the Ministry of Electronics and Information Technology (MeitY) with the EkStep Foundation. It is a test case for whether AI – the kind being debated in boardrooms and policy forums globally – can actually reach the last mile.

Meet Sarlaben: The AI dairy farming assistant

Sarlaben draws from one of India’s most comprehensive agricultural data repositories. It’s accessible via the Amul Farmer mobile app – already downloaded by over 10 lakh (one million) users on Android and iOS – as well as through voice calls for farmers using feature phones or landlines.

The system is integrated with Amul’s Automatic Milk Collection System (AMCS) and the Pashudhan application, allowing it to offer personalised, cattle-specific guidance.

What makes Amul AI substantially different from most agricultural chatbots is the scale of its training data. The platform was built on a digital backbone managing over 200 crore (two billion) milk procurement transactions annually, veterinary treatment records from more than 1,200 doctors covering nearly 3 crore (30 million) cattle, approximately 70 lakh (seven million) artificial inseminations conducted each year, ISRO satellite imagery for fodder production mapping, and a cattle census conducted every five years.

Every animal in the system carries a unique ID, with individual records of feed intake, disease history and milking status. “Amul AI is about taking dependable, verified information directly to the farmer – instantly and in a language they are comfortable with,” said Jayen Mehta, Managing Director of the Gujarat Cooperative Milk Marketing Federation (GCMMF), which markets the Amul brand.

He said how, by using decades of structured data and integrating it with their operational systems, the platform will help farmers make timely decisions that improve animal productivity and income.

India’s productivity paradox

India is the world’s largest producer of milk, generating 347.87 million tonnes in 2024-25 according to the Department of Animal Husbandry and Dairying – more than double the US’s 102.70 million tonnes. And yet despite leading in volume, India’s per-animal milk yield remains among the lowest globally.

The reasons are structural. India’s dairy sector is characterised by small herd sizes, low-quality feed, limited access to veterinary care in rural areas, and widespread lack of awareness about modern breeding and husbandry practices. Amul’s network spans more than 18,600 villages in Gujarat, where farmers supply over 350 lakh litres (35 million litres) of milk daily.

But information asymmetry has long been a bottleneck – a farmer facing a sick animal at midnight in a remote village has few places to turn; the gap Amul AI is designed to close.

Available initially in Gujarati – the primary language of the cooperative’s farmer base – the platform is built on the government’s Bhashini multilingual framework and could, in principle, be extended to 20 Indian languages, reaching Amul’s presence in 20,000 villages in 20 states.

The cooperative model

The technology story here is inseparable from the institutional one. Amul’s cooperative structure – built over five decades under the original White Revolution – created the data infrastructure that makes Amul AI possible.

Most private agri-tech startups are working backwards: collecting data first, building products second. Amul already had the data. What was needed was a way to make it actionable at the farmer level.

Experts tracking the dairy-tech space see this as significant. Sreeshankar Nair, Founder of Brainwired, a dairy-tech startup, identifies three specific challenges that Amul AI could meaningfully address: farmer awareness, access to quality veterinary guidance, and connectivity to grazing and feed resources.

“If AI can integrate local dialects of Indian languages, India can have White Revolution 2.0,” Nair said, pointing to the transformative potential of vernacular AI in a sector where not every farmer speaks the same dialect.

Saswata Narayan Biswas, Director of the Institute of Rural Management, Anand (IRMA) – the institution closely associated with Amul’s founding ethos – frames it as an AI embedded in a cooperative framework. It becomes “not a technology upgrade, but an instrument of inclusive rural transformation.”

For Biswas, the specific abilities Amul AI brings – predictive disease detection, oestrus tracking, optimised feed formulation, localised weather risk advisories – are abilities Amul had been building for years. AI accelerates and democratises them.

Scale and the test ahead

The launch has drawn backing from the highest levels of government. Gujarat Chief Minister Bhupendra Patel launched the platform and confirmed it will be showcased at the AI Impact Summit 2026. The cooperative has acknowledged MeitY and the EkStep Foundation – an open digital infrastructure nonprofit – as partners in building the AI layer.

Farmers not affiliated with Amul can also access general dairying and animal husbandry information through the app. At its current scale, Amul AI already covers more cattle – nearly 3 crore (30 million) – than most national veterinary databases anywhere in the world.

The harder question, as with most AI deployments at a population scale, is whether the tool will serve those who need it most. The farmers most likely to benefit first – those already comfortable with smartphones, already plugged into Amul’s digital system – may not be the ones with the greatest information deficit.

The rollout of Bhashini-enabled dialect support, the adoption rate among feature-phone users relying on voice calls, and whether AI-driven advisories translate into measurable yield improvements will be the metrics that determine whether this is genuinely White Revolution 2.0.

Amul has built an AI system grounded in half a century of real cooperative transactions, real animals, and real farmers. Such an infrastructure is, arguably, the most credible foundation for AI dairy farming at scale. Whether it fulfils its promise will depend on execution – and on whether Sarlaben’s voice can reach in the last few miles; those that have always been the hardest to cross.

See also: Hitachi bets on industrial expertise to win the physical AI race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post How Amul is using AI dairy farming to put 36M farmers first appeared first on AI News.

]]>
Alibaba enters physical AI race with open-source robot model RynnBrain https://www.artificialintelligence-news.com/news/alibaba-rynnbrain-physical-ai-robots-china/ Fri, 13 Feb 2026 09:30:00 +0000 https://www.artificialintelligence-news.com/?p=112207 Alibaba has entered the race to build AI that powers robots, not just chatbots. The Chinese tech giant this week unveiled RynnBrain, an open-source model designed to help robots perceive their environment and execute physical tasks.  The move signals China’s accelerating push into physical AI as ageing populations and labour shortages drive demand for machines […]

The post Alibaba enters physical AI race with open-source robot model RynnBrain appeared first on AI News.

]]>
Alibaba has entered the race to build AI that powers robots, not just chatbots. The Chinese tech giant this week unveiled RynnBrain, an open-source model designed to help robots perceive their environment and execute physical tasks. 

The move signals China’s accelerating push into physical AI as ageing populations and labour shortages drive demand for machines that can work alongside—or replace—humans. The model positions Alibaba alongside Nvidia, Google DeepMind, and Tesla in the race to build what Nvidia CEO Jensen Huang calls “a multitrillion-dollar growth opportunity.” 

Unlike its competitors, however, Alibaba is pursuing an open-source strategy—making RynnBrain freely available to developers to accelerate adoption, similar to its approach with the Qwen family of language models, which rank among China’s most advanced AI systems.

Video demonstrations released by Alibaba’s DAMO Academy show RynnBrain-powered robots identifying fruit and placing it in baskets—tasks that seem simple but require complex AI governing object recognition and precise movement.

The technology falls under the category of vision-language-action (VLA) models, which integrate computer vision, natural language processing, and motor control to enable robots to interpret their surroundings and execute appropriate actions.

Unlike traditional robots that follow preprogrammed instructions, physical AI systems like RynnBrain enable machines to learn from experience and adapt behaviour in real time. This represents a fundamental shift from automation to autonomous decision-making in physical environments—a shift with implications extending far beyond factory floors.
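
The perceive-reason-act loop that systems like this implement can be sketched in miniature. The following is an illustrative toy only, with hypothetical names throughout; it is not RynnBrain's actual API, and a real VLA model would replace the hand-written rules with learned vision-language-action inference.

```python
# Minimal sketch of a perceive-reason-act loop, the control pattern
# behind VLA-style physical AI. All names are hypothetical placeholders.

class ToyVLAgent:
    """Toy agent mapping an observation plus an instruction to an action."""

    def perceive(self, frame):
        # A real system would run vision encoders on camera input;
        # we return a canned scene description.
        return {"objects": ["apple", "basket"]}

    def reason(self, scene, instruction):
        # A real VLA model would jointly process vision and language;
        # a trivial rule stands in for that here.
        if "apple" in scene["objects"] and "basket" in instruction:
            return {"action": "pick_and_place", "object": "apple"}
        return {"action": "wait"}

    def act(self, plan):
        # Motor control would execute the plan; we just report it.
        return f"executing {plan['action']}"

agent = ToyVLAgent()
scene = agent.perceive(frame=None)
plan = agent.reason(scene, "put the apple in the basket")
print(agent.act(plan))  # → executing pick_and_place
```

The point of the loop structure is that perception, reasoning, and actuation are separate stages the system cycles through continuously, which is what allows behaviour to adapt rather than follow a fixed programme.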

From prototype to production

The timing signals a broader inflexion point. According to Deloitte’s 2026 Tech Trends report, physical AI has begun “shifting from a research timeline to an industrial one,” with simulation platforms and synthetic data generation compressing iteration cycles before real-world deployment.

The transition is being driven less by technological breakthroughs than by economic necessity. Advanced economies face a stark reality: demand for production, logistics, and maintenance continues rising while labour supply increasingly fails to keep pace. 

The OECD projects that working-age populations across developed nations will stagnate or decline over the coming decades as ageing accelerates.

Parts of East Asia are encountering this reality earlier than other regions. Demographic ageing, declining fertility, and tightening labour markets are already influencing automation choices in logistics, manufacturing, and infrastructure—particularly in China, Japan, and South Korea. 

These environments aren’t exceptional; they’re simply ahead of a trajectory other advanced economies are likely to follow.

When it comes to humanoid robots specifically—machines designed to walk and function like humans—China is “forging ahead of the U.S.,” with companies planning to ramp up production this year, according to Deloitte. 

UBS estimates there will be two million humanoids in the workplace by 2035, climbing to 300 million by 2050, representing a total addressable market between $1.4 trillion and $1.7 trillion by mid-century.

The governance gap

Yet as physical AI capabilities accelerate, a critical constraint is emerging—one that has nothing to do with model performance.

“In physical environments, failures cannot simply be patched after the fact,” according to a World Economic Forum analysis published this week. “Once AI begins to move goods, coordinate labour or operate equipment, the binding constraint shifts from what systems can do to how responsibility, authority and intervention are governed.”

Physical industries are governed by consequences, not computation. A flawed recommendation in a chatbot can be corrected in software. A robot that drops a part during handover or loses balance on a factory floor designed for humans causes operations to pause, creating cascading effects on production schedules, safety protocols, and liability chains.

The WEF framework identifies three governance layers required for safe deployment: executive governance setting risk appetite and non-negotiables; system governance embedding those constraints into engineered reality through stop rules and change controls; and frontline governance giving workers clear authority to override AI decisions.
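
The three layers can be pictured as nested checks wrapped around every robot action. The sketch below is an assumption-laden illustration of that idea, not a formal implementation of the WEF framework; the class name, risk numbers, and speed cap are all invented for the example.

```python
class ActionGate:
    """Nested governance checks for a proposed robot action.
    Names and limits here are illustrative assumptions only."""

    def __init__(self, risk_limits, stop_rules, operator_override):
        self.risk_limits = risk_limits              # executive layer: risk appetite
        self.stop_rules = stop_rules                # system layer: engineered stop rules
        self.operator_override = operator_override  # frontline layer: worker authority

    def authorise(self, action):
        if action["risk"] > self.risk_limits["max_risk"]:
            return "rejected: exceeds executive risk appetite"
        if any(rule(action) for rule in self.stop_rules):
            return "halted: stop rule triggered"
        if self.operator_override():
            return "halted: frontline operator override"
        return "authorised"

gate = ActionGate(
    risk_limits={"max_risk": 0.7},
    stop_rules=[lambda a: a.get("speed", 0) > 1.5],  # hypothetical m/s cap near humans
    operator_override=lambda: False,
)
print(gate.authorise({"risk": 0.4, "speed": 0.8}))  # → authorised
```

The ordering matters: executive limits are checked first because they are non-negotiable, while the frontline override is checked last so a worker can always stop an otherwise-permitted action.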

“As physical AI accelerates, technical capabilities will increasingly converge, but governance will not,” the analysis warns. “Those that treat governance as an afterthought may see early gains, but will discover that scale amplifies fragility.”

This creates an asymmetry in the US-China competition. China’s faster deployment cycles and willingness to pilot systems in controlled industrial environments could accelerate learning curves. 

However, governance frameworks that work in structured factory settings may not translate to public spaces where autonomous systems must navigate unpredictable human behaviour.

Early deployment signals

Current deployments remain concentrated in warehousing and logistics, where labour market pressures are most acute. Amazon recently deployed its millionth robot, part of a diverse fleet working alongside humans. Its DeepFleet AI model coordinates this massive robot army across the entire fulfilment network, which Amazon reports will improve travel efficiency by 10%.

BMW is testing humanoid robots at its South Carolina factory for tasks requiring dexterity that traditional industrial robots lack: precision manipulation, complex gripping, and two-handed coordination. 

The automaker is also using autonomous vehicle technology to enable newly built cars to drive themselves from the assembly line through testing to the finishing area, all without human assistance.

But applications are expanding beyond traditional industrial settings. In healthcare, companies are developing AI-driven robotic surgery systems and intelligent assistants for patient care. 

Cities like Cincinnati are deploying AI-powered drones to autonomously inspect bridge structures and road surfaces. Detroit has launched a free autonomous shuttle service for seniors and people with disabilities.

The regional competitive dynamic intensified this week when South Korea announced a $692 million national initiative to produce AI semiconductors, underscoring how physical AI deployment requires not just software capabilities but domestic chip manufacturing capacity.

NVIDIA has released multiple models under its “Cosmos” brand for training and running AI in robotics. Google DeepMind offers Gemini Robotics-ER 1.5. Tesla is developing its own AI to power the Optimus humanoid robot. Each company is betting that the convergence of AI capabilities with physical manipulation will unlock new categories of automation.

As simulation environments improve and ecosystem-based learning shortens deployment cycles, the strategic question is shifting from “Can we adopt physical AI?” to “Can we govern it at scale?”

For China, the answer may determine whether its early mover advantage in robotics deployment translates into sustained industrial leadership—or becomes a cautionary tale about scaling systems faster than the governance infrastructure required to sustain them.

(Photo by Alibaba)

See also: EY and NVIDIA to help companies test and deploy physical AI


The post Alibaba enters physical AI race with open-source robot model RynnBrain appeared first on AI News.

]]>
Google identifies state-sponsored hackers using AI in attacks https://www.artificialintelligence-news.com/news/state-sponsored-hackers-ai-cyberattacks-google/ Thu, 12 Feb 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=112167 State-sponsored hackers are exploiting highly-advanced tooling to accelerate their particular flavours of cyberattacks, with threat actors from Iran, North Korea, China, and Russia using models like Google’s Gemini to further their campaigns. They are able to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG). The […]

The post Google identifies state-sponsored hackers using AI in attacks appeared first on AI News.

]]>
State-sponsored hackers are exploiting advanced AI tooling to accelerate their cyberattacks, with threat actors from Iran, North Korea, China, and Russia using models like Google’s Gemini to further their campaigns. They are crafting sophisticated phishing campaigns and developing malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have begun to use artificial intelligence throughout the attack lifecycle – reconnaissance, social engineering, and eventually, malware development. The findings are based on GTIG’s observations during the final quarter of 2025.

“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in their report.

Reconnaissance by state-sponsored hackers targets the defence sector

Iranian threat actor APT42 is reported as having used Gemini to augment its reconnaissance and targeted social engineering operations. The group used an AI to create official-seeming email addresses for specific entities and then conducted research to establish credible pretexts for approaching targets.

APT42 crafted personas and scenarios designed to better elicit engagement by their targets, translating between languages and deploying natural, native phrases that helped it get round traditional phishing red flags, such as poor grammar or awkward syntax.

North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to help it profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information.

“This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.

Model extraction attacks surge

Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts – also known as “distillation attacks” – aimed at stealing intellectual property from AI models.

One campaign targeting Gemini’s reasoning abilities involved the collation and use of over 100,000 prompts designed to coerce the model into outputting its reasoning processes. The breadth of questions suggested an attempt to replicate Gemini’s reasoning ability across a range of tasks in non-English target languages.

How model extraction attacks work to steal AI intellectual property. (Image: Google GTIG)

While GTIG observed no direct attacks on frontier models from advanced persistent threat actors, the team identified and disrupted frequent model extraction attacks from private sector entities globally and researchers seeking to clone proprietary logic.

Google’s systems recognised these attacks in real-time and deployed defences to protect internal reasoning traces.
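
One simple defensive signal behind this kind of detection is volume: extraction campaigns require issuing reasoning-eliciting prompts at a scale no ordinary user approaches. The sketch below is a hedged illustration of that idea only; the field names and threshold are assumptions, not Google's actual detection logic.

```python
from collections import Counter

def flag_extraction_candidates(request_log, volume_threshold=100_000):
    """Flag clients issuing unusually many reasoning-eliciting prompts.

    Defensive sketch only: 'client_id', 'asks_for_reasoning', and the
    threshold are illustrative assumptions.
    """
    counts = Counter(
        req["client_id"] for req in request_log if req.get("asks_for_reasoning")
    )
    return sorted(k for k, n in counts.items() if n >= volume_threshold)

# A client issuing 100,000 reasoning-eliciting prompts (the scale the
# report describes) stands out against normal usage.
log = (
    [{"client_id": "A", "asks_for_reasoning": True}] * 100_000
    + [{"client_id": "B", "asks_for_reasoning": True}] * 50
)
print(flag_extraction_candidates(log))  # → ['A']
```

Real defences combine volume with prompt-content classifiers and real-time blocking, but the asymmetry exploited here is the same: distillation needs bulk access that is hard to hide.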

AI-integrated malware emerges

GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach.

HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses. The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

HONESTCUE malware’s two-stage attack process using Gemini’s API. (Image: Google GTIG)

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI.

ClickFix campaigns abuse AI chat platforms

In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services – including Gemini, ChatGPT, Copilot, DeepSeek, and Grok – to host deceptive content distributing ATOMIC malware targeting macOS systems.

Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage.

The three-stage ClickFix attack chain exploiting AI chat platforms. (Image: Google GTIG)

Underground marketplace thrives on stolen API keys

GTIG’s observations of English and Russian-language underground forums indicate a persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, instead relying on mature commercial products accessed through stolen credentials.

One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but actually powered by several commercial AI products, including Gemini, accessed through stolen API keys.

Google’s response and mitigations

Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also applied this intelligence to strengthen both its classifiers and models, enabling them to refuse assistance with similar attacks moving forward.

“We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

GTIG emphasised that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape.

The findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to use the technology’s abilities.

For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to enhance defences against AI-augmented social engineering and reconnaissance operations.

(Photo by SCARECROW artworks)

See also: Anthropic just revealed how AI-orchestrated cyberattacks actually work – Here’s what enterprises need to know


The post Google identifies state-sponsored hackers using AI in attacks appeared first on AI News.

]]>
ThoughtSpot: On the new fleet of agents delivering modern analytics https://www.artificialintelligence-news.com/news/thoughtspot-on-the-new-fleet-of-agents-delivering-modern-analytics/ Mon, 02 Feb 2026 09:34:52 +0000 https://www.artificialintelligence-news.com/?p=111947 If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is providers like ThoughtSpot are able to assist, with the company in its own words […]

The post ThoughtSpot: On the new fleet of agents delivering modern analytics appeared first on AI News.

]]>
If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is providers like ThoughtSpot are able to assist, with the company in its own words determined to ‘reimagin[e] analytics and BI from the ground up’.

“Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making.

“Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically.

“We’re getting much more action-oriented.”

Alongside moving from passive to active, there are two other ways in which Jane sees this change taking place in BI. There is a shift towards the ‘true democratisation of data’ on one hand, but on the other is the ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.”

ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics.

Spotter 3, the latest iteration of an agent that debuted towards the end of 2024, is the star. It is conversant with applications like Slack and Salesforce, and can not only answer questions, but assess the quality of its answer and keep trying until it gets the right result.

“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.”

With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted.

ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages, data analysis, simulation, action, feedback, and these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.”

What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says.

“These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.”
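
A decision system of record of the kind Jane describes is, at its core, an append-only log where each stage links back to the previous one so the whole chain can be audited. The sketch below is a minimal illustration under assumed field names; it is not ThoughtSpot's schema.

```python
import hashlib
import json
import time

def append_decision_step(ledger, stage, actor, payload):
    """Append one stage of a decision to a tamper-evident log.

    Illustrative sketch of a 'decision system of record'; all field
    names are assumptions.
    """
    prev_hash = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"stage": stage, "actor": actor, "payload": payload,
             "ts": time.time(), "prev": prev_hash}
    # Hash-chain each entry to the previous one so later tampering
    # with any step is detectable during an audit.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

# The clinical-trial flow described above, as logged stages:
ledger = []
append_decision_step(ledger, "data_analysis", "agent", {"candidates": 120})
append_decision_step(ledger, "simulation", "agent", {"protocol_match": 0.82})
append_decision_step(ledger, "action", "human:doctor", {"recommended": True})
print([e["stage"] for e in ledger])  # → ['data_analysis', 'simulation', 'action']
```

Because every entry records its actor, the same log captures both human and machine steps, which is what makes the "decision supply chain" auditable end to end.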

ThoughtSpot is participating at the AI & Big Data Expo Global, in London, on February 4-5. You can watch the full interview with Jane Smith below:

Photo by Steve Johnson on Unsplash

The post ThoughtSpot: On the new fleet of agents delivering modern analytics appeared first on AI News.

]]>
China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground https://www.artificialintelligence-news.com/news/china-hyperscalers-agentic-ai-commerce-battleground/ Fri, 30 Jan 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=111928 The artificial intelligence industry’s pivot toward agentic AI – systems capable of autonomously executing multi-step tasks – has dominated technology discussions in recent months. But while Western firms focus on foundational models and cross-platform interoperability, China’s technology giants are racing to dominate through commerce integration, a divergence that could reshape how enterprises deploy autonomous systems […]

The post China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground appeared first on AI News.

]]>
The artificial intelligence industry’s pivot toward agentic AI – systems capable of autonomously executing multi-step tasks – has dominated technology discussions in recent months.

But while Western firms focus on foundational models and cross-platform interoperability, China’s technology giants are racing to dominate through commerce integration, a divergence that could reshape how enterprises deploy autonomous systems globally.

Alibaba, Tencent and ByteDance have rapidly upgraded their AI platforms to support agentic commerce, marking a pivot from conversational AI tools to agents capable of completing entire transaction cycles, from product discovery through payment.

Just last week, Alibaba upgraded its Qwen chatbot to allow transactions to be completed directly in the interface, connecting the AI agent to services across its ecosystem, including Taobao, Alipay, Amap and travel platform Fliggy. The integration supports over 400 core digital tasks, allowing users to compare personalised recommendations across platforms and complete payments without leaving the chatbot environment.

“The agentic transformation of commercial services lets the maximal integration of user services and enhances user stickiness,” Shaochen Wang, research analyst at Counterpoint Research, told CNBC, referring to stronger long-term user engagement that creates sustainable competitive advantages.

The super app advantage

Before that, ByteDance upgraded its Doubao AI chatbot in December to autonomously handle tasks, including ticket bookings, through integrations with Douyin, the Chinese version of TikTok. The upgraded model was introduced on a ZTE-developed prototype smartphone as a system-level AI assistant; however, some planned features were later scaled back due to privacy and security concerns raised by rivals.

Tencent President Martin Lau indicated during the company’s May 2025 earnings call that AI agents could become core components of the WeChat ecosystem, which serves over one billion users with integrated messaging, payments, e-commerce and services.

The positioning reflects China’s structural advantage in agentic AI deployment: integrated ecosystems that eliminate the fragmentation constraining Western competitors.

“AI agents will be foundational to the evolution of super apps, with success depending on deep integration in payments, logistics, and social engagement,” Charlie Dai, VP and principal analyst at Forrester, told CNBC. “Chinese firms like Alibaba, Tencent and ByteDance all benefit from integrated ecosystems, rich behavioural data, and consumer familiarity with super apps.”

Western companies face more fragmented data environments and stricter privacy regulations that slow cross-service integration, despite leading in foundational AI model development and global reach, Dai noted.

Agentic AI’s enterprise trajectory

Commercial applications signal broader enterprise implications as agentic AI moves from auxiliary tools to autonomous actors capable of executing complex workflows. Industry experts expect multi-agent systems to emerge as a defining trend in AI deployment this year, extending from consumer services into organisational production.

In a report by Global Times, Tian Feng, president of the Fast Think Institute and former dean of SenseTime’s Intelligence Industry Research Institute, predicted that the first AI agent to surpass 300 million monthly active users could emerge as early as 2026, becoming “an indispensable assistant for work and daily life” capable of autonomously executing cross-app, composite services.

Approximately half of all consumers already use AI when searching online, according to a 2025 McKinsey study. The research firm estimated that AI agents could generate more than $1 trillion in economic value for US businesses by 2030 through streamlining routine steps in consumer decision-making.

Chinese cloud providers, including smaller players like JD Cloud and UCloud, have also begun supporting agentic AI tools, though high token use has driven some providers, like ByteDance’s Volcano Engine, to introduce fixed-subscription pricing models to address cost concerns.

Divergent deployment strategies

The contrasting approaches between Chinese integration and Western scalability reflect fundamental differences in market structure and regulatory environments that will likely define competitive positioning.

“China will prioritise domestic integration and expansion in selected regions, while US firms focus on global scalability and governance,” Dai said.

US players pursuing agentic commerce include OpenAI, Perplexity, and Amazon, while Google explores positioning itself as a “matchmaker” between merchants, consumers and AI agents – approaches that reflect fragmented platform environments requiring interoperability rather than closed-loop integration.

However, the autonomous nature of agentic systems has raised regulatory questions in China. ByteDance warned users about security and privacy risks when announcing Doubao’s abilities, recommending deployment on dedicated devices rather than those containing sensitive information, given the tool’s access to device data, digital accounts and internet connectivity through multiple ports.

The rapid commercialisation of agentic AI in China’s consumer sector provides enterprise decision-makers globally with early signals of how autonomous systems may reshape customer acquisition costs, platform economics and competitive moats as these abilities mature.

(Photo by Philip Oroni)

See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks


The post China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground appeared first on AI News.

]]>
Franny Hsiao, Salesforce: Scaling enterprise AI https://www.artificialintelligence-news.com/news/franny-hsiao-salesforce-scaling-enterprise-ai/ Wed, 28 Jan 2026 15:00:44 +0000 https://www.artificialintelligence-news.com/?p=111906 Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance. Ahead of AI & Big Data Global 2026 in […]

The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.

]]>
Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance.

Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale.

Headshot of Franny Hsiao, EMEA Leader of AI Architects at Salesforce.

“The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start,” Hsiao explains.

“Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.”

When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.”

Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency.

Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao.

“This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.”

Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust.

“This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”
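
The pattern Hsiao describes, delivering partial output while heavier computation continues, can be sketched generically. This is not the Agentforce Streaming API; the function and parameter names are assumptions for illustration.

```python
def stream_answer(token_batches, on_progress):
    """Progressively deliver a growing partial answer.

    Generic sketch of response streaming; a real system would overlap
    delivery with background reasoning rather than precompute batches.
    """
    answer = []
    total = len(token_batches)
    for i, batch in enumerate(token_batches, start=1):
        on_progress(f"step {i}/{total}")  # the progress indicator users see
        answer.append(batch)
        yield " ".join(answer)  # user reads this while compute continues

partials = list(stream_answer(
    ["Checking", "inventory", "levels..."],
    on_progress=lambda msg: None,  # swap in a real UI callback
))
print(partials[-1])  # → Checking inventory levels...
```

The user sees the first partial almost immediately, which is why perceived latency drops even when total computation time does not.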

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao.

Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength.

“A technician can photograph a faulty part, error code, or serial number while offline. An on-device LLM can then identify the asset or error, and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao.

Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.”
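
The offline-first pattern described here amounts to queueing work locally and draining the queue when connectivity returns. A minimal sketch under assumed names (real systems add on-disk persistence, retries, and conflict resolution):

```python
class OfflineWorkQueue:
    """Queue field events locally and sync them when connectivity returns.

    Illustrative sketch only; names are assumptions, not Salesforce's API.
    """

    def __init__(self):
        self.pending = []

    def record(self, event):
        # Called while offline, e.g. after on-device troubleshooting.
        self.pending.append(event)

    def sync(self, upload):
        # 'upload' pushes one event to the cloud; draining the queue
        # restores the single source of truth.
        synced = 0
        while self.pending:
            upload(self.pending.pop(0))
            synced += 1
        return synced

queue = OfflineWorkQueue()
queue.record({"asset": "pump-7", "error": "E42", "fixed": True})
queue.record({"asset": "pump-9", "error": "E13", "fixed": False})

cloud = []  # stand-in for the remote store
print(queue.sync(cloud.append))  # → 2
```

Decoupling `record` from `sync` is the key design choice: work continues regardless of signal strength, and the cloud catches up later.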

Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.”

Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”:

“This includes specific action categories, including any ‘CUD’ (Creating, Updating, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.”

This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation.
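A minimal sketch of the high-stakes-gateway idea, assuming hypothetical action categories and a `confirm` callback standing in for the human reviewer (not Salesforce’s actual policy engine):

```python
# Hypothetical high-stakes categories, echoing the 'CUD' idea.
HIGH_STAKES = {"create", "update", "delete"}

def execute(action, params, perform, confirm):
    """Run low-risk actions directly; route high-stakes ones through a
    human `confirm` callback before `perform` is allowed to fire."""
    if action in HIGH_STAKES and not confirm(action, params):
        return {"status": "blocked", "action": action}
    return {"status": "done", "result": perform(action, params)}
```

Because the confirmation hook sits between the agent’s decision and its effect, every human approval or rejection is also a labelled training signal – the “collaborative intelligence” loop described above.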

Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic.

“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao.

This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking.

“Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.
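A turn-by-turn trace of this kind can be modelled as an append-only event log per session. The class below is a hypothetical sketch inspired by the description of STDM, not its actual schema:

```python
import time

class SessionTrace:
    """Append-only, turn-by-turn log of one agent session: user questions,
    planner steps, tool calls with inputs/outputs, responses, timing, and
    errors. Hypothetical sketch inspired by the STDM description."""

    def __init__(self, session_id):
        self.session_id = session_id
        self.events = []

    def record(self, kind, **detail):
        # Every event is timestamped so latency can be derived later.
        self.events.append({"kind": kind, "ts": time.time(), **detail})

    def summary(self):
        """Counts per event kind - raw material for adoption and health metrics."""
        counts = {}
        for event in self.events:
            counts[event["kind"]] = counts.get(event["kind"], 0) + 1
        return counts
```

Aggregations over such logs are what power the analytics, optimisation, and health-monitoring views described above.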

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need a common language,” argues Hsiao.

Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent2Agent Protocol).

“We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.”

However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”
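The semantics problem OSI targets can be illustrated with a toy normalisation step: two vendors use different field names for the same concept, and a shared vocabulary maps both onto one canonical form. The term table below is invented for illustration; it is not OSI’s specification:

```python
# Invented shared vocabulary: vendor-local keys on the left, canonical
# semantic terms on the right (for illustration only).
SHARED_TERMS = {
    "cust_id": "customer_id",
    "acct": "customer_id",
    "amt": "order_total",
    "total": "order_total",
}

def normalise(payload):
    """Rewrite vendor-local keys into the shared vocabulary; keys already
    canonical (or unknown) pass through unchanged."""
    return {SHARED_TERMS.get(key, key): value for key, value in payload.items()}
```

Once both payloads normalise to the same shape, an agent in one system can act on data produced by an agent in another without guessing at intent.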

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult.

Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.”

“Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes.

Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts.

See also: Databricks: Enterprise AI adoption shifts to agentic systems


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.

Masumi Network: How AI-blockchain fusion adds trust to burgeoning agent economy https://www.artificialintelligence-news.com/news/masumi-network-how-ai-blockchain-fusion-adds-trust-to-burgeoning-agent-economy/ Wed, 28 Jan 2026 12:28:14 +0000
2026 will see forward-thinking organisations building out their squads of AI agents across roles and functions. But amid the rush, there is another aspect to consider.

One of IDC’s enterprise technology predictions for the coming five years, published in October, was fascinating. “By 2030, up to 20% of [global 1000] organisations will have faced lawsuits, substantial fines, and CIO dismissals, due to high-profile disruptions stemming from inadequate controls and governance of AI agents,” the analyst noted.

How do you therefore put guardrails in place – and how do you ensure these agents work together and, ultimately, do business together? Patrick Tobler, founder and CEO of blockchain infrastructure platform provider NMKR, is working on a project which aims to solve this – by fusing agentic AI and decentralisation.

The Masumi Network, born out of a collaboration between NMKR and Serviceplan Group, launched in late 2024 as a framework-agnostic infrastructure which ‘empowers developers to build autonomous agents that collaborate, monetise services, and maintain verifiable trust.’

“The core thesis of Masumi is that there’s going to be billions of different AI agents from different companies interacting with each other in the future,” explains Tobler. “The difficult part now is – how do you actually have agents from different companies that can interact with each other and send money to each other as well, across these different companies?”

Take travel as an example. You want to attend an industry conference, so your hotel booking agent buys a plane ticket from your airline agent. The entire experience and transaction will be seamless – but only if the agents can implicitly trust one another.

“Masumi is a decentralised network of agents, so it’s not relying on any centralised payment infrastructure,” says Tobler. “Instead, agents are equipped with wallets and can send stablecoins from one agent to another and, because of that, interacting with each other in a completely safe and trustless manner.”
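Conceptually, the wallet-per-agent model looks like the toy sketch below, with in-memory balances standing in for on-chain stablecoin transfers. This is purely illustrative and is not Masumi’s or Cardano’s actual API:

```python
class AgentWallet:
    """Toy model of an agent holding its own wallet and paying another
    agent in a stablecoin. In-memory balances only - not Masumi's or
    Cardano's actual on-chain implementation."""

    def __init__(self, name, balance=0.0):
        self.name = name
        self.balance = balance

    def pay(self, other, amount):
        # No central payment processor: the transfer is agent-to-agent.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        other.balance += amount
        return {"from": self.name, "to": other.name, "amount": amount}
```

On an actual blockchain, the returned receipt would be a verifiable transaction rather than a Python dict – which is the trust property the network is built around.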

Having spent, in his words, ‘a lot of time’ in crypto, Tobler concluded that its benefits were being pointed at the wrong audience.

“I think there’s a lot of these problems that we have solved in crypto for humans, and then I came to this conclusion that maybe we’ve been solving them for the wrong target audience,” he explains. “Because for humans, using crypto and wallets and blockchains, all that kind of stuff is extremely difficult; the user experience is not great. But for agents, they don’t care if it’s difficult to use. They just use it, and it’s very native to them.

“So all these issues that are now arising with agents having to interact with millions, or maybe even billions, of agents in the future – these problems have all already been solved with crypto.”

Tobler is attending AI & Big Data Expo Global as part of Discover Cardano; NMKR started on the Cardano blockchain, while Masumi is built completely on Cardano. He says he is looking forward to speaking with businesses that are ‘hearing a lot about AI but aren’t really using it much besides ChatGPT’.

“I want to understand from them what they are doing, and then figure out how we can help them,” he says. “That’s most often the thing missing from traditional tech startups. We’re all building for our own bubble, instead of actually talking to the people that would be using it every day.”

Discover Cardano is exhibiting at the AI & Big Data Expo Global, in London on February 4-5. Watch the full video interview with NMKR’s Patrick Tobler below:

Photo by Google DeepMind

The post Masumi Network: How AI-blockchain fusion adds trust to burgeoning agent economy appeared first on AI News.

Gallup Workforce shows details of AI adoption in US workplaces https://www.artificialintelligence-news.com/news/gallup-workforce-ai-shows-details-of-ml-adoption-in-us-workplaces/ Wed, 28 Jan 2026 10:06:00 +0000
Artificial intelligence has moved into the US workplace, but its adoption remains uneven, fragmented, and tied to role, industry, and organisation. Findings from a Gallup Workforce survey covering the period to the end of December 2025 show how employees use AI, who benefits most from it, and where areas of uncertainty remain.

The findings draw from a nationally representative survey of more than 23,000 US adults in full- and part-time work, conducted online in August 2025. Its conclusions are that instances of AI in the workplace are increasing, but its use is far from universal, and is concentrated among knowledge-based workers.

The office AI

Employees in technology, finance, and professional services are by far the biggest user group. More than three-quarters of those working in IT report using AI “at least a few times a year”. In finance and professional services, the figure is a touch under 60%. AI-enabled or aided roles tend to be those that involve significant digital workflow and information synthesis – tasks that correspond with AI’s current abilities.

AI use is lower in sectors dominated by customer-facing or manual work. Only around a third of retail workers report comparable levels of use to their office counterparts, although those in healthcare and manufacturing do tend to deploy AI more often than those in retail. That the current raft of AI platforms fits more naturally into desk-based, cognitive roles seems obvious; less obvious is the drop-off in use within tightly-regulated environments.

Do we, or don’t we?

Gallup’s data reveals that a significant number of workers are unsure whether their employer has adopted AI – nearly a quarter of those surveyed weren’t sure. In the third quarter of 2025, just over a third of employees said their organisation had implemented AI, while 40% said there was no adoption of AI in their place of work.

It’s worth noting that earlier versions of Gallup’s surveys didn’t include a “don’t know” option for questions about employers’ AI adoption, which encouraged respondents to guess – partly why, Gallup says, belief in organisational AI adoption appeared to rise sharply between 2024 and 2025. Once uncertainty could be stated explicitly, it became clear that a good number of employees were simply uninformed on the matter.

It’s staff in non-managerial roles who are more likely to say they’re unaware of their organisation’s AI use, a tendency mirrored in part-time staff and hands-on roles. The further workers are from decision-making, it seems, the less sure they become.

How workers use AI

The ways employees use AI are consistent: among those using AI at least once a year, the most common applications are consolidating information, searching for information, and “generating ideas”, tasks that have changed little since Gallup first measured workplace AI use in 2024.

More than 60% of AI users rely on chatbots, with writing and editing coming some way behind. Coding assistants and data science tools remain niche, but popular among heavier users: employees who use AI often are far more likely to use the more advanced tools at their disposal, particularly coding assistants and data analysis.

Although use figures are generally up, Gallup concludes that AI has yet to be embedded in daily work for most Americans. Around 45% of workers say they use AI “a few times a year”, but only about 10% use it every day.

Conclusions

Business leaders have an easy win here: simply clarifying the organisation’s position on AI use, and publicising the availability (or otherwise) of AI tools, would be an easy way to improve adoption rates.

The current abilities of AI suit desk-based, digital, and data-centric workflows, although there are myriad platforms that will utilise AI in other roles. Exploring these more fully would certainly buck the trend, and may make the difference between an organisation’s long-term prospects and those of its direct competitors.

A page detailing Gallup’s findings can be found on the company’s website.

(Image source: “DIY Open Plan Office” by lower29 is licensed under CC BY-NC-SA 2.0.)



The post Gallup Workforce shows details of AI adoption in US workplaces appeared first on AI News.
