Service Industry AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/service-industry-ai/

Scaling intelligent automation without breaking live workflows https://www.artificialintelligence-news.com/news/scaling-intelligent-automation-without-breaking-live-workflows/ Fri, 06 Mar 2026 13:15:41 +0000
Scaling intelligent automation without disruption demands a focus on architectural elasticity, not just deploying more bots.

At the Intelligent Automation Conference, industry leaders gathered to dissect why many automation initiatives stall after pilot phases. Speaking alongside representatives from NatWest Group, Air Liquide, and AXA XL, Promise Akwaowo, Process Automation Analyst at Royal Mail, grounded the dialogue in practical delivery and risk management.

The elasticity imperative for scaling intelligent automation

Expansion initiatives often fail because teams equate success with the raw number of deployed bots rather than the underlying architecture’s elasticity. Infrastructure must handle volume and variability predictably.

When demand spikes during end-of-quarter financial reporting or sudden supply chain disruptions, the system cannot degrade or collapse. Without built-in elasticity, companies risk building brittle architectures that break under operational stress.
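
As a toy illustration of that requirement, an elastic platform sizes its worker pool from observed backlog rather than running a fixed bot count. The rates and bounds below are invented for illustration, not figures from Royal Mail or any vendor:

```python
import math

def workers_needed(queue_depth: int, per_worker_rate: int = 50,
                   min_workers: int = 2, max_workers: int = 40) -> int:
    """Toy elasticity rule: size the bot pool from the observed backlog.

    per_worker_rate is how many items one worker clears per interval;
    the bounds keep the pool from collapsing to zero or growing unbounded.
    All numbers here are illustrative assumptions.
    """
    wanted = math.ceil(queue_depth / per_worker_rate)
    return max(min_workers, min(max_workers, wanted))
```

A pool sized this way absorbs an end-of-quarter spike by scaling towards its ceiling, then shrinks back once the backlog clears, without manual provisioning.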

Akwaowo explained that an automated architecture must remain stable without excessive manual intervention. “If your automation engine requires constant sizing, provisioning, and babysitting, you haven’t built a scalable platform; you’ve built a fragile service,” he advised the audience.

Whether integrating CRM ecosystems like Salesforce or orchestrating low-code vendor platforms, the objective remains building a platform capability rather than a loose collection of scripts.

Transitioning from controlled proofs-of-concept to live production environments introduces inherent risk. Large-scale, immediate deployments frequently cause disruption, undermining the anticipated efficiency gains. To protect core operations, deployment must happen in controlled stages. Akwaowo warned that “progress must be gradual, deliberate, and supported at each stage.”

A disciplined approach starts with formalising intent through a statement of work and validating assumptions under real conditions.

Before scaling intelligent automation, engineering teams must thoroughly understand system behaviour, potential failure modes, and recovery paths. For example, a financial institution implementing machine learning for transaction processing might cut manual review times by 40 percent, but they must ensure error traceability before applying the model to higher volumes.

This phased methodology protects live operations while enabling sustainable growth. Additionally, teams must fully grasp process ownership and variability before applying technology, avoiding the trap of merely automating existing inefficiencies. Fragmented workflows and unmanaged exceptions upstream often doom projects long before the software goes live.
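
The traceability requirement can be made concrete as a gate that refuses to raise volumes until every failure has been recorded with enough context to trace it. A minimal Python sketch; the class, thresholds, and field names are hypothetical, not any institution's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class RolloutGate:
    """Blocks volume increases until error-traceability criteria are met.

    Illustrative sketch: the thresholds and record fields are assumptions.
    """
    max_error_rate: float = 0.01
    processed: int = 0
    errors: list = field(default_factory=list)

    def record(self, txn_id: str, ok: bool, reason: str = "") -> None:
        self.processed += 1
        if not ok:
            # Every failure is kept with enough context to trace it later.
            self.errors.append({"txn_id": txn_id, "reason": reason})

    @property
    def error_rate(self) -> float:
        return len(self.errors) / self.processed if self.processed else 0.0

    def may_scale_up(self, min_volume: int = 1000) -> bool:
        # Only raise volumes after enough real traffic has been observed
        # and the observed failure rate sits inside the agreed bound.
        return self.processed >= min_volume and self.error_rate <= self.max_error_rate
```

Run at pilot volume first; only when `may_scale_up()` holds, and each entry in `errors` can be explained, does the next stage of deployment begin.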

A persistent misconception within automation programmes suggests that governance frameworks impede delivery speed. However, bypassing architectural standards allows hidden risks to accumulate, eventually stalling momentum. In regulated, high-volume environments, governance provides the foundation for safely scaling intelligent automation. It establishes the trust, repeatability, and confidence necessary for company-wide adoption.

Implementing a dedicated centre of excellence helps standardise these deployments. Operating a central Rapid Automation and Design function ensures every project is assessed and aligned before it reaches the production environment. Such structures guarantee that solutions remain operationally sustainable over time. Analysts also rely on standards like BPMN 2.0 to separate the business intent from the technical execution, ensuring traceability and consistency across the entire organisation.

Adapting to agentic AI inside ERP ecosystems

As large ERP providers rapidly integrate agentic AI, smaller vendors and their customers face pressure to adapt. Embedding intelligent agents directly into smaller ERP ecosystems offers a path forward, augmenting human workers by simplifying customer management and decision support. This approach to scaling intelligent automation allows businesses to drive value for existing clients instead of competing solely on infrastructure size.

Integrating agents into finance and operational workflows enhances human roles rather than replacing accountability. Agents can manage repetitive tasks such as email extraction, categorisation, and response generation.

Relieved of administrative burdens, finance professionals can dedicate their time to analysis and commercial judgement. Even when AI models generate financial forecasts, the final authority over decisions rests firmly with human operators.

Building a resilient capability demands patience and a commitment to long-term value over rapid deployment. Business leaders must ensure their designs prioritise observability, allowing engineers to intervene without disrupting active processes.

Before scaling any intelligent automation initiative, decision-makers should evaluate their readiness for the inevitable anomalies. As Akwaowo challenged the audience: “If your automation fails, can you clearly identify where the error occurred, why it happened, and fix it with confidence?”

See also: JPMorgan expands AI investment as tech spending nears $20B

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Scaling intelligent automation without breaking live workflows appeared first on AI News.

Physical AI adoption boosts customer service ROI https://www.artificialintelligence-news.com/news/physical-ai-adoption-boosts-customer-service-roi/ Tue, 03 Mar 2026 11:32:47 +0000
The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction.

As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment.

While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies like equipment failures. Customer-facing roles demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions. 

By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments.

Blending hardware with advanced data infrastructure

Deploying humanoids into active commercial spaces requires high-capacity and low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, facilitating remote control capabilities alongside intensive cloud-based data processing. The resulting visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour.

To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s Gemini high-performance generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements.

The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique.

Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance.

Preparing for commercial adoption of physical AI

This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops.

Transitioning from digital and language-driven communication to physical units capable of free movement represents a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in actual commercial facilities starting in Autumn 2026. Deployment at customer touchpoints such as au Style shops will also be considered.

Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces.

Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot

The post Physical AI adoption boosts customer service ROI appeared first on AI News.

Hitachi bets on industrial expertise to win the physical AI race https://www.artificialintelligence-news.com/news/hitachi-physical-ai-industrial-expertise/ Mon, 23 Feb 2026 07:00:00 +0000
Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development.

And then there is a third camp: industrial manufacturers like Hitachi and Germany’s Siemens, which are making the quieter but arguably more grounded argument that you cannot train machines to navigate the physical world without first understanding it.

That argument is now moving from boardroom strategy to factory floor deployment, as Hitachi revealed in a recent interview with Nikkei Asia.

Why Physical AI needs a better model

Kosuke Yanai, deputy director of Hitachi’s Centre for Technology Innovation-Artificial Intelligence, is direct about what separates viable physical AI from the theoretical kind. “Physical AI cannot be implemented in society without a systematic understanding that begins with foundational knowledge of physics and industrial equipment,” he told Nikkei.

Hitachi’s pitch is that it already holds much of that foundational knowledge – accumulated over decades of building railways, power infrastructure, and industrial control systems. The company has thermal fluid simulation technology that models the behaviour of gases and liquids, and signal-processing tools for monitoring equipment condition – what Yanai describes as the engineering foundation underpinning Hitachi’s ‘extensive knowledge of product design and control logic construction.’

Daikin and JR East

While Hitachi’s overarching physical AI architecture – the Integrated World Infrastructure Model (IWIM), which it describes as a mixture-of-experts system integrating multiple specialised models and data sets – remains in the concept verification stage, two real-world deployments signal that the underlying approach is already producing results.

In collaboration with Daikin Industries, Hitachi has deployed an AI system that diagnoses malfunctions in commercial air-conditioner manufacturing equipment. The system, trained on equipment maintenance records, procedure manuals, and design drawings, can now identify which component is likely failing when an anomaly is detected – the kind of operational intuition that previously existed only in the heads of experienced engineers.

With East Japan Railway (JR East), Hitachi has built an AI that identifies the root cause of malfunctions in the control devices running the Tokyo metropolitan area’s railway traffic management system, and then assists operators in formulating a response plan. In a network where delays ripple through millions of daily journeys, the ability to accelerate fault diagnosis carries real operational weight.

The R&D pipeline: Cutting development time

Hitachi’s physical AI push is also showing up in its research output. In December 2025, the company published findings from two projects presented at ASE 2025, a top-tier software engineering conference, that address a persistent bottleneck in industrial AI: the time and effort required to write and adapt control software.

In the automotive sector, Hitachi and its subsidiary Astemo developed a system that uses retrieval-augmented generation to automatically produce integration test scripts for vehicle electronic control units (ECUs) – pulling from hardware-specific API information and frontline engineering knowledge. In a pilot involving multi-core ECU testing, the technology reduced integration testing man-hours by 43% compared to manual execution.
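
Hitachi has not published implementation details, but a retrieval-augmented flow of the kind described works roughly like this: rank hardware-specific API documents against the test requirement, then ground the generation prompt in whatever was retrieved. The corpus, scoring, and prompt below are invented for illustration (a production system would use embeddings rather than naive word overlap):

```python
def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query.

    Sketch only: real RAG pipelines rank with vector embeddings.
    """
    q = set(query.lower().split())
    scored = sorted(corpus.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

def build_test_prompt(requirement: str, corpus: dict[str, str]) -> str:
    """Assemble an LLM prompt grounded in retrieved API docs, so the
    generated test script can reference hardware-specific calls."""
    context = "\n".join(corpus[d] for d in retrieve(requirement, corpus))
    return (f"Using only these ECU APIs:\n{context}\n\n"
            f"Write an integration test for: {requirement}")
```

The grounding step is what makes the generated scripts usable: the model is constrained to the APIs the target ECU actually exposes rather than inventing plausible-looking calls.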

In logistics, the company developed variability management technology that modularises robot control software into reusable components structured around a robot operating system (ROS). By mapping out the environmental variables and operational requirements of different warehouse settings in advance, the system lets operators adapt robotic picking-and-placing workflows to new products or layouts without rewriting software from scratch.
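
The variability-management idea, environment differences captured as data rather than code, can be sketched as a profile-driven workflow builder. The profiles, parameters, and step names below are hypothetical, not Hitachi's actual component model:

```python
# Variability point: per-site parameters live in data, not in the control code.
WAREHOUSE_PROFILES = {
    "site_a": {"gripper": "vacuum", "bin_height_mm": 300},
    "site_b": {"gripper": "two_finger", "bin_height_mm": 450},
}

def build_pick_place(site: str) -> list[str]:
    """Compose a picking workflow from reusable steps, parameterised per site.

    Adapting to a new layout means adding a profile, not rewriting steps.
    """
    p = WAREHOUSE_PROFILES[site]
    return [
        f"move_to_bin(height={p['bin_height_mm']})",
        f"grasp(strategy='{p['gripper']}')",
        "move_to_tote()",
        "release()",
    ]
```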

Safety a structural requirement

One thread that runs through all of Hitachi’s physical AI work is its emphasis on safety guardrails – not as a compliance checkbox, but as an engineering constraint baked into system design. Yanai told Nikkei that the company is integrating its control and reliability technology from social infrastructure development to prevent AI outputs from deviating from human-approved operating parameters.

This includes input validation to screen out data that models should not be trained on, output verification to ensure machine actions do not endanger people or property, and real-time monitoring of the AI model itself for operational anomalies.

It is a crucial distinction. Physical AI systems fail in the real world, not in a sandbox. The stakes for an AI controlling railway signalling or factory robotics are categorically different from those governing a chatbot.

Infrastructure to match ambition

On the infrastructure side, Hitachi Vantara – the group’s data and digital infrastructure arm – is positioning itself as an early adopter of NVIDIA’s RTX PRO Servers, built on the RTX PRO 6000 Blackwell Server Edition GPU, designed to accelerate agentic and physical AI workloads. The hardware is being paired with Hitachi’s iQ platform and used to build digital twins – virtual replicas of physical systems – that can simulate everything from grid fluctuations to robotic motion at scale.

The IWIM concept, meanwhile, is designed to connect Nvidia’s open-source Cosmos physical AI development platform with specialised Japanese-language LLMs and visual language models via the model context protocol (MCP) – essentially a framework to stitch together the models, simulation tools, and industrial datasets that physical AI systems require.

The broader race in physical AI is far from settled. But Hitachi’s position – that domain expertise and operational data are as important as model architecture – is increasingly hard to dismiss, particularly as deployments with partners like Daikin and JR East begin to demonstrate what that expertise is actually worth in practice.

Sources: Nikkei Asia (Feb 21, 2026); Hitachi R&D (Dec 24, 2025); Hitachi Vantara Blog (Aug 27, 2025)

See also: Alibaba enters physical AI race with open-source robot model RynnBrain

The post Hitachi bets on industrial expertise to win the physical AI race appeared first on AI News.

SS&C Blue Prism: On the journey from RPA to agentic automation https://www.artificialintelligence-news.com/news/ssc-blue-prism-on-the-journey-from-rpa-to-agentic-automation/ Tue, 17 Feb 2026 15:27:34 +0000
For organizations still wedded to the rules and structures of robotic process automation (RPA), considering agentic AI as the next step for automation may be faintly terrifying. SS&C Blue Prism, however, is here to help, taking customers on the journey from RPA to agentic automation at a pace with which they’re comfortable.

Big as it may be, this move is a necessary one. Modern workflows have reached a level of complexity that outstrips what traditional RPA was designed to do, according to Steven Colquitt, VP Software Engineering, SS&C Blue Prism. Unstructured data arrives from various sources, resembling non-deterministic real-world interactions. “Inputs can vary, outcomes can shift and decisions depend on context in real-time,” notes Colquitt.

Brian Halpin, Managing Director, Automation, SS&C Blue Prism, gives the example of a credit agreement from which you might need to extract 30 or 40 answers. He uses the word “answers” deliberately, as opposed to data points, to account for the level of reasoning that a large language model (LLM) performs.

The element of this being a journey continues to resonate, however. “We’re now saying we’re giving an AI agent the outcome that we want, but we’re not giving it the instructions on how to complete,” says Halpin. “We’re not saying, ‘follow step one, two, three, four, five.’ We’re saying, ‘I want this loan reviewed’ or ‘I want this customer onboarded.’

“Ultimately, I think that’s where the market will go,” adds Halpin. “Is it ready for that? No. Why? Because there’s trust, there’s regulations, there’s auditability […] stability, security. We know LLMs are prone to hallucinations, we know they drift, and [if] you change the underlying model, things change and responses get different.

“There’s an awful lot of learning to happen before I think companies go fully autonomous and real agentic workflows [are] driven from that sort of non-deterministic perspective,” says Halpin. “But then, there will be something else, right? There will be another model. So really, it is all a journey right now.”

SS&C Blue Prism has thousands of customers with automated processes in place, from centers of excellence (CoEs) to digital workers running in their operations, whom it hopes to upgrade into the “world of AI”, as Halpin puts it. Sometimes it’s about connecting two separate areas.

“It’s been interesting,” Halpin notes. “As I talk to [our] customers, I see a common thread among companies right now where, in a lot of cases, AI has been established as a separate unit in a company. You go over to the process automation team, and they’re maybe not even allowed to use the AI.

“So, it’s about, ‘How do you help them get that capability and blend it into their process efficiency and allow them to get to the next 20%, 30% of automation, in terms of the end-to-end process?’”

As part of this, SS&C Blue Prism is soon to launch new technology that helps organizations build and embed AI agents within workflows, as well as assist with orchestration. Those who attended TechEx Global on February 4-5, where SS&C Blue Prism participated in the Intelligent Automation conference, got the full story, as well as an understanding of the company’s ongoing path.

“[SS&C Technologies] are one of the biggest users of RPA in the world,” adds Halpin. “We have over three and a half thousand digital workers deployed [across the SS&C estate]. We’re saving hundreds of millions in run-rate benefit. We’ve about 35 AI agents in production attached to those digital workers doing […] complex tasks, and really, we just want to share that journey.”

Watch the full interview with Brian Halpin below:

Photo by Patrick Tomasso on Unsplash

The post SS&C Blue Prism: On the journey from RPA to agentic automation appeared first on AI News.

AI use surges at Travelers as call centre roles reduce https://www.artificialintelligence-news.com/news/travelers-ai-in-contact-centres-two-stage-innovation-strategy/ Fri, 30 Jan 2026 10:01:10 +0000
Mid-January saw insurance company Travelers announce a new deal that equips 10,000 engineers and data scientists with AI assistants. However, less than two weeks on, Travelers’ leadership explained that the company’s true competitive advantage lies in expertise, not AI alone, believing this is what will drive longer-term profit growth.

According to Travelers’ chief executive officer Alan Schnitzer, over 20,000 professionals at the company currently “use AI tools regularly.” He also commented on how Travelers’ claims call centres are experiencing a boost in efficiency at the hands of AI, leading to cuts in claims call centre roles.

AI technology and innovation driving growth

Travelers’ net profit has increased, according to Schnitzer, largely fuelled by the company’s intensive technology and innovation strategy. Travelers reportedly increased the total value of the insurance policies it sold by nearly 7% a year on average between 2016 and 2025. Its underlying combined ratio improved by almost eight points, falling to 83.9.

Schnitzer explained that heavy investment in technology has coincided with improved profits. “Notwithstanding an increase in our technology spending, that improvement in underlying profitability includes a 3-point or 10% improvement in our expense ratio. Over the decade, we developed the competitive advantage of an innovation skill set. Now we’re bringing all that Part 1 know-how to Innovation 2.0 at Travelers, powered by AI – and not too far off quantum computing.”

Innovation 1.0 refers to the strategy that laid the foundation for this success; the company now plans to move into a more advanced stage it calls Innovation 2.0, in which AI is the central driver.

Automation equals call centre culls

Schnitzer noted how automation has directly reduced staffing needs and improved claims efficiency, something clearly seen in recent numbers. For instance, Schnitzer said that Travelers’ “claim call centre population is down by a third,” and steps are being taken to consolidate four claims call centres into two.

Such efficiency gains have reduced loss adjustment expenses, improving the company’s loss ratio. Ultimately, investment in automation and analytics has helped Travelers “refine indemnity payouts and drive operational efficiencies.”

Schnitzer stated that over 50% of all claims made to Travelers are now eligible for straight-through processing, and customers opt for it in approximately two-thirds of cases. He went on to say: “Another 15% of all claims are processed with advanced digital tools. All of those percentages are growing.”
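
The general pattern behind straight-through processing is a triage rule: claims that meet eligibility criteria flow through untouched, borderline ones get digital assistance, and the rest go to a human. The thresholds and field names below are invented for illustration; Travelers' actual criteria are not public:

```python
def route_claim(claim: dict) -> str:
    """Triage a claim into straight-through, digital-assisted, or manual.

    Hypothetical sketch: the amount thresholds and flag names are assumptions.
    """
    if claim.get("fraud_flag"):
        return "manual"
    if claim["amount"] <= 5_000 and claim["policy_in_force"]:
        return "straight_through"      # no human touch
    if claim["amount"] <= 25_000:
        return "digital_assisted"      # advanced tools, human sign-off
    return "manual"
```

Growing the straight-through percentage then means widening the eligibility rules as confidence in the automated path increases, which matches the trajectory Schnitzer describes.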

Despite automated tools doing the bulk of claims work, the CEO said that some customers still prefer to call the company to report and discuss claims. Therefore, Travelers has set up an advanced natural language generative AI voice agent that handles initial phone calls.

Schnitzer heralded the success of this voice agent, saying: “Early customer adoption is exceeding our expectation.”

AI and automation reshaping operations in Travelers

The benefits of AI and automation are far-reaching, beyond just claims call centres, according to Schnitzer. “Other use cases enhance underwriting decision quality and efficiency and improve the experience for customers, agents, brokers and employees.”

Greg Toczydlowski, executive vice president and president of business insurance for Travelers, spoke about how gen AI agents have been used to “efficiently mine” data sources, both internally and externally. These help the company “better understand and synthesise the risk characteristics.” Toczydlowski added that the recent agent additions have boosted the speed of underwriting processes and improved segmented pricing.

He explained how the company’s commercial underwriters are performing very well, enhanced by advanced tools used to evaluate risks. Tools include models that refine pricing and summarise past claims data, streamlining the entire process.

“They’re not only executing with excellence in the market today, but they’re also helping to shape the transformation of our industry,” Toczydlowski said.

Michael Klein, executive vice president and president of personal insurance for Travelers, underscored AI’s importance in personal insurance, saying it is used to make renewal underwriting “more effective and efficient.”

Klein said, “we start with a proprietary AI-enabled predictive model that scores every account in the property portfolio. Based on this score, accounts with the highest probable risk of loss are presented to underwriters for review. From there, our renewal underwriting platform uses generative AI to consolidate data into summaries of relevant actionable information for our underwriters to evaluate.”

As a result, Klein said that there has been a 30% reduction in average handle times. Therefore, “the net result is that our underwriters focus their efforts on decisions most likely to improve profitability and do so more efficiently.”
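
The flow Klein describes, scoring every account and sending only the highest-risk ones to a human, can be sketched generically. The scoring weights below are invented stand-ins; Travelers' predictive model is proprietary:

```python
def triage_renewals(accounts: list[dict], review_capacity: int = 2):
    """Score-and-route sketch of a renewal underwriting flow:
    the highest-risk accounts go to human underwriters, the rest
    renew automatically. Weights and field names are hypothetical."""
    def risk_score(a: dict) -> float:
        # Stand-in for a proprietary predictive model.
        return 0.6 * a["prior_claims"] + 0.4 * a["property_age"] / 100
    ranked = sorted(accounts, key=risk_score, reverse=True)
    return ranked[:review_capacity], ranked[review_capacity:]
```

The capacity parameter is the operational lever: underwriter attention is spent only where the score says it is most likely to change the outcome.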

In specialty insurance, Jeffrey Klenk, president of bond & specialty insurance at Travelers, commented on how AI has cut the time to intake submissions from “hours to just minutes.” He also said AI has recently been implemented to streamline renewals.

Innovation 2.0 – AI’s impact on jobs

Despite claims call centre headcounts having already been reduced, Schnitzer did not speculate on further cuts. Instead, he emphasised the increasing productivity AI has brought to Travelers. “What I would say is that per employee is up, thanks to some productivity and efficiency initiatives, and we expect per employee to continue to go up.”

Travelers’ Innovation 1.0 strategy has been the key driver of the company’s strong 10-year profits, according to Schnitzer.

He believes that AI is set to benefit the entire P/C landscape, highlighting how recent advanced AI tools are able to “understand and execute the complex stakeholder interactions, well-defined processes, data-intensive workflows and massive amounts of unstructured data.”

Schnitzer said human expertise with AI “amplifies existing strength,” and said Travelers is investing heavily in “AI and other sophisticated technology solutions.” He said “Dozens of scaled generative AI tools are already in production. Millions of transactions are now automated… And agentic AI isn’t a future aspiration. It’s embedded in our business operations today.”

AI and automated technologies are poised to transform the insurance industry, as Travelers expects such technologies to “result in faster and more cost-effective delivery of new abilities.”

From product development to new business prospecting, underwriting speed and quality, agent and customer service, and more, AI is benefiting Travelers, its customers, and distribution partners, showcasing the technology’s broad impact across the business and industry.

(Image source: “GOES Satellites Capture Holiday Weather Travel Conditions” by NASA Goddard Photo and Video is licensed under CC BY 2.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI use surges at Travelers as call centre roles reduce appeared first on AI News.

]]>
Cold snap highlights airlines’ proactive use of AI https://www.artificialintelligence-news.com/news/cold-snap-highlights-airlines-proactive-use-of-ai-airline-industrys-use-of-ai/ Tue, 27 Jan 2026 10:55:00 +0000 https://www.artificialintelligence-news.com/?p=111861 The severe weather experienced at present in the US has placed significant strain on the airline industry in the country, with knock-on effects of changes to schedules and routes affecting the rest of the world. It’s at times like this that companies have to respond to queries from customers at a much greater rate than […]

The post Cold snap highlights airlines’ proactive use of AI appeared first on AI News.

]]>
The severe weather experienced at present in the US has placed significant strain on the airline industry in the country, with knock-on effects of changes to schedules and routes affecting the rest of the world.

It’s at times like this that companies have to respond to queries from customers at a much greater rate than during normal operations, and there are – in the specific case of the air sector – operational decisions that need to be taken quickly, yet inside the strictest safety boundaries.

Several airlines are turning to generative AI to help them during these types of events, and more generally, to help turn them into more efficient and reactive organisations.

Last year, Air France-KLM built a cloud-based generative AI ‘factory’ for use throughout the organisation, which it says makes AI development more consistent and reusable. It formed a partnership with Accenture and Google Cloud for the factory, using it to test and deploy generative AI models that produce measurable outcomes in ground operations, engineering and maintenance, and customer-facing functions. The partners have stated that enterprise deployment of generative AI has increased development speed by more than 35%.

The AI factory was built on earlier work undertaken by the airline and Accenture, which involved migrating core applications to the cloud. Since then, Air France-KLM has created a private AI assistant and RAG tools linking LLMs with internal search to support tasks like diagnosing and repairing aircraft damage.
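The RAG tools described above pair an LLM with internal search so answers are grounded in company documents. A minimal sketch of that retrieval step is below; the scoring method, document set, and function names are illustrative assumptions, not Air France-KLM's actual system.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank internal
# documents against a query by term overlap, then pass the best matches
# to an LLM as context. Real systems use embeddings, not keyword overlap.
def retrieve(query, documents, top_k=2):
    """Return the top_k documents sharing the most terms with the query."""
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(terms & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_context_prompt(query, documents):
    """Assemble a grounded prompt from the retrieved snippets."""
    snippets = retrieve(query, documents)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Fuselage dent repair procedure for A320 family aircraft",
    "Cabin crew rostering policy update",
    "Composite panel damage assessment checklist",
]
print(build_context_prompt("assess fuselage damage on an A320", docs))
```

The design point is that the model never needs retraining on internal data: retrieval selects the relevant documents at query time, and the prompt constrains the answer to that context.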

The factory is also used by employees, who are trained in the AI tools so they can apply the power of LLMs to benefit the business.

Weather and when AI is used

United Airlines is similarly exploring AI in its operations. In an interview with CIO.com, CIO Jason Birnbaum described AI as a way to “shorten decision cycles” during irregular operations such as the outages caused by the current extreme cold snap. The company’s AI journey began with the use of AI to respond to passenger enquiries.

When flights are delayed or cancelled, customer service representatives are expected to respond quickly and informatively, yet retain a company-mandated communication style – honed during the company’s ‘Every Flight Has A Story’ programme. During extended periods of disruption, maintaining the output of what the company terms ‘storytellers’ is difficult.

Jason Birnbaum said, “Considering the number of delays versus storytellers, we couldn’t have a person write a new message with every event. So we focused on prioritising the most impactful situations. […] The data piece was simple: the basic facts of the flight and the running chat between the attendants, pilots, gate agents, and the operations people associated with the flight. We fed that information — with additional data on weather, for example — into the AI model, to generate a good draft customer message.”

“The trick then was to have it understand the nuances of United Airlines’ communications style and what we wanted to emphasise. That’s where prompt engineering came in, not to train the model to understand flight data, but to use the words United prefers. Let’s take safety, for instance. We can emphasise safety without scaring people, and the AI tool is learning to make the right word choice. […] The AI model was very good at looking back in time to bring previous flight data into the current situation. Even our human storytellers didn’t include reasons for flight delays, and that kind of information can be very useful to a customer.”
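The pipeline Birnbaum describes — feeding flight facts, the operational chat log, and weather data into a model, with prompt engineering enforcing the airline's tone — can be sketched as follows. All field names, values, and the style rules are hypothetical illustrations, not United's actual system.

```python
def build_draft_prompt(flight, weather, ops_chat, style_rules):
    """Assemble a prompt asking an LLM to draft a customer message.

    The model is not trained on flight data; the facts are supplied as
    context, and the airline's communication style is enforced through
    instructions (prompt engineering).
    """
    facts = (
        f"Flight {flight['number']} from {flight['origin']} to "
        f"{flight['destination']} is delayed by {flight['delay_minutes']} minutes."
    )
    chat_log = "\n".join(f"- {msg}" for msg in ops_chat)
    return (
        "You draft customer notifications for an airline.\n"
        f"Style rules: {style_rules}\n\n"
        f"Facts: {facts}\n"
        f"Weather: {weather}\n"
        f"Operational chat log:\n{chat_log}\n\n"
        "Write a short, honest draft message explaining the delay."
    )

prompt = build_draft_prompt(
    flight={"number": "UA123", "origin": "ORD", "destination": "DEN",
            "delay_minutes": 45},
    weather="Heavy snow at ORD",
    ops_chat=["De-icing queue is long", "Crew on board, awaiting pushback"],
    style_rules="Emphasise safety without scaring people; plain language.",
)
print(prompt)
```

A human storyteller would review the generated draft before it is sent, which is consistent with the prioritisation approach Birnbaum describes for the most impactful situations.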

Boston Consulting Group’s measure of AI maturity in industries pegs airlines at ‘average’, having moved up from slightly below average in the past year. Only one of the 36 airlines surveyed met the highest criteria for being prepared for an AI-enabled future. The analysis suggests that by 2030, carriers that embed AI at the core of their workflows could achieve operating margins 5 to 6 percentage points higher than those of peers.

It’s thought that generative AI will become part of the operational core of airlines and airports, where decisions about schedules, crew allocations, aircraft rotations, and passenger recovery have to be made quickly. Microsoft claims data-driven AI systems can reduce the root causes of flight delays by up to 35% through improved disruption forecasting, which can limit the negative effects of the spread of disruption.

Airlines using AI-driven personalisation report revenue increases of around 10% to 15% per passenger, according to Microsoft, which also says that AI-based tools such as self-service customer interfaces can lead to cost reductions of up to 30%.

(Image source: “airplane” by Kuster & Wildhaber Photography is licensed under CC BY-ND 2.0.)

 


The post Cold snap highlights airlines’ proactive use of AI appeared first on AI News.

]]>
Retailers examine options for on-AI retail https://www.artificialintelligence-news.com/news/retailers-examine-options-for-on-ai-retail/ Mon, 26 Jan 2026 16:40:00 +0000 https://www.artificialintelligence-news.com/?p=111839 Big retailers are committing more heavily to agentic AI-led commerce, and accepting some loss of customer proximity and data control in the process. As reported by Retail Dive, the opening weeks of 2026 have seen Etsy, Target and Walmart push product ranges onto third-party AI platforms, forming new partnerships with Google’s Gemini and Microsoft’s Copilot, […]

The post Retailers examine options for on-AI retail appeared first on AI News.

]]>
Big retailers are committing more heavily to agentic AI-led commerce, and accepting some loss of customer proximity and data control in the process.

As reported by Retail Dive, the opening weeks of 2026 have seen Etsy, Target and Walmart push product ranges onto third-party AI platforms, forming new partnerships with Google’s Gemini and Microsoft’s Copilot, after last year’s collaborations with OpenAI’s ChatGPT. These let consumers purchase goods inside the AI’s conversation interface.

Amazon and Walmart have been investing in their own consumer-facing AI assistants, Rufus and Sparky respectively, to change how shoppers interact with their brands.

Agentic AI is beginning to redraw direct-to-consumer engagement, and industry figures regard this trend as an important moment in online retail. “I think this has the potential to disrupt retail in the same way the internet once did,” Kartik Hosanagar, a marketing professor at the Wharton School of the University of Pennsylvania, told the website’s reporters.

Partnering with AIs like ChatGPT or Gemini engages consumers wherever they happen to be and may choose to shop. Adobe’s 2025 Holiday Shopping report found that AI-driven traffic to US e-commerce sites grew 758% year on year in November 2025, and Cyber Monday saw a 670% increase in AI-referred retail visits.

“What we expect is a deepening of consumer engagement,” Katherine Black, a partner at Kearney specialising in food, drug and mass-market retail, said in an email to Retail Dive. “More shoppers will rely on AI for purchasing, and across a wider range of missions. As retailers’ capabilities within these tools improve, adoption should accelerate further.”

Meeting customers on AI platforms comes with trade-offs, according to industry observers, with questions around data ownership and the risk that retailers are sidelined. 81% of retail executives believe generative AI will erode brand loyalty by 2027, according to Deloitte’s 2026 Retail Industry Global Outlook, published earlier this month.

Retailers’ websites or apps provide a stream of behavioural data, and if discovery, evaluation, and purchase happen externally, any insight doesn’t reach the retailer. “This fundamentally changes where power sits,” Hosanagar said. “Control over the agent increasingly means control over the customer relationship.”

Google and Alphabet CEO Sundar Pichai has unveiled new commerce tools for Gemini, outlining how it will support customers from discovery to final purchase. Nikki Baird, vice president of strategy and product at Aptos, says this raises difficult questions. “What he’s describing is Google owning the data across discovery, decision and transaction. Even if some information is shared back, missing context from those stages leaves retailers with a much poorer understanding of their customers.”

Pichai reassured retailers collaboration remains central to Google. “From nearly three decades of working with retailers, we know success only comes when we work together,” he told an NRF audience. “Our aim is to use our full technology stack to help shape the next era of retail.”

Yet agentic systems’ features like instant checkout absorb the shopping experience into one platform. “If research, discovery and purchase all happen on OpenAI rather than Walmart.com, you’re effectively giving away the brand experience. At that point, the retailer risks becoming little more than a fulfilment operation,” Hosanagar said.

Amazon has not announced plans to sell directly through ChatGPT, doubling down on its own AI initiatives. Earlier this month, the company launched a dedicated site for Alexa+, its generative AI assistant that helps users research and plan purchases.

Yet participation in third-party AI commerce may become unavoidable. When OpenAI launched its Instant Checkout feature on ChatGPT last September, it suggested that enabling the function could influence how merchants are ranked in search results, in addition to price and product quality. Uploading product catalogues to AI chat platforms may be the first step in a transformation of online retail.

According to Deloitte, roughly half of retail executives expect the current multi-stage shopping process to reduce to a single AI-driven interaction by 2027. For now the industry remains at an early stage of any transition. “The real inflection point is when consumers rely on an autonomous agent to shop on their behalf,” Hosanagar told Retail Dive.

“Retailers will engage less with humans directly and more with their representatives — AI agents. That agent processes information differently, requires data in new formats and responds to persuasion in ways unlike a person.”

Today, consumers can access ChatGPT on their phones while in-store, effectively consulting an always-available expert. “It’s not just the internet in your pocket,” Baird told Retail Dive. “It’s like having a highly knowledgeable store associate who knows every retailer.”

This may prompt retailers to equip frontline staff with their own AI tools, offering instant insight into customer preferences or shopping history. Alternatively, a retailer’s AI agent could proactively notify customers when a favoured item is back in stock, helping associates convert interest into sales. “The goal is to enable store associates to perform at their best,” Baird said.

(Image source: “Shopping trauma!” by Elsie esq. is licensed under CC BY 2.0.)

 


The post Retailers examine options for on-AI retail appeared first on AI News.

]]>
Grab brings robotics in-house to manage delivery costs https://www.artificialintelligence-news.com/news/grab-brings-robotics-in-house-to-manage-delivery-costs/ Wed, 07 Jan 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111489 Rising labour costs and tighter delivery margins are pushing large platform operators like Grab to look at automation. It’s moved to bring robotics capability in-house by its acquisition of Infermove. Grab operates at a scale where small efficiency gains can have out-sized effects. Its platform supports millions of deliveries in Southeast Asia, many of them […]

The post Grab brings robotics in-house to manage delivery costs appeared first on AI News.

]]>
Rising labour costs and tighter delivery margins are pushing large platform operators like Grab to look at automation. It has moved to bring robotics capability in-house through its acquisition of Infermove.

Grab operates at a scale where small efficiency gains can have out-sized effects. Its platform supports millions of deliveries in Southeast Asia, many of them carried out by riders on scooters and bicycles in dense urban areas, producing complexity that limits how much automation could replace human labour. By acquiring a company focused on robots designed for unstructured settings, Grab is signalling that it sees physical-world AI as mature enough to deploy beyond pilot programmes.

Delivery automation close to core operations

Rather than relying on off-the-shelf systems, Grab is opting to internalise the development loop. Infermove’s technology is designed to learn from real-world movement data, including information generated by non-motorised delivery vehicles. In practical terms, that means robots trained on how people actually navigate pavements, crossings, and crowded drop-off points, rather than how those spaces appear in simulations.

For a delivery operator like Grab, that distinction matters. Simulated environments can support early development, but they often struggle with the edge cases that define real cities. Bringing that learning process in-house allows Grab to shape how automation behaves under its own operating constraints, rather than adapting its delivery network to fit a third-party system.

From an enterprise perspective, the strategic value lies in control. Owning the technology gives Grab more influence over deployment pace, operating scope, and cost trade-offs. It also reduces long-term dependence on vendors whose priorities may not match Grab’s regional footprint or economic realities.

Automation, however, is not positioned as a replacement for human riders. Even as robots take on parts of the workflow, people remain central to service delivery. Grab’s interest appears focused on selective use, like structured first-mile or last-mile segments where tasks are repetitive and distances are short. In these areas, robots may help smooth demand spikes, reduce delays during peak hours, and ease pressure during labour shortages.

Managing cost pressure without breaking service

During an internal meeting in December, Grab’s chief technology officer Suthen Thomas described Infermove’s progress as “impressive,” highlighting both the technology and its early commercial use. He also said the company would continue to operate independently, with its founder reporting directly to him. The structure suggests Grab is prioritising execution and continuity rather than rapid organisational integration.

The approach reflects a broader shift among large digital platforms. Instead of treating AI as a layer added on top of existing systems, companies are embedding it deeper into core operations. In delivery and logistics, that often means moving beyond optimisation software into physical automation, where the risks and costs are higher but the potential gains are more structural.

The timing is also telling. On-demand delivery volumes continue to grow, but margins remain under pressure. Customers expect faster service and lower fees, while operators face rising wages, fuel costs, and tighter regulation. In that environment, automation becomes less about novelty and more about sustaining service levels without eroding profitability.

Bringing robotics development closer to operations may also help align incentives around data use. Training physical AI systems requires large amounts of real-world data, which delivery platforms already generate at scale. Keeping that feedback loop internal can speed iteration and reduce the need to share sensitive operational data externally.

There are still limits. Robots designed for pavements and short routes are unlikely to replace human couriers in an entire network anytime soon. Weather, local rules, and customer acceptance will continue to shape where automation can realistically operate. Expanding in multiple countries adds further complexity, as infrastructure and regulations vary widely.

Industry forecasts suggest rapid growth in last-mile delivery robotics, but those figures offer limited guidance for operators. The more immediate question is whether automation can lower cost per delivery without introducing new failure points. That depends less on market size and more on performance in live environments.

Seen through an enterprise lens, the acquisition of Infermove is not a bet on robotics as a product category. It is a move to tighten the link between AI, data, and physical operations. For platform companies built on logistics and mobility, that integration may become a key factor in managing growth under sustained cost pressure.

(Photo by Afif Ramdhasuma)

See also: The Law Society: Current laws are fit for the AI era


The post Grab brings robotics in-house to manage delivery costs appeared first on AI News.

]]>
Arm and the future of AI at the edge https://www.artificialintelligence-news.com/news/arm-chips-and-the-future-of-ai-at-the-edge/ Tue, 23 Dec 2025 13:45:19 +0000 https://www.artificialintelligence-news.com/?p=111417 Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry. From cloud to edge Arm […]

The post Arm and the future of AI at the edge appeared first on AI News.

]]>
Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has been focused to date on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference tasks, is likely to be increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded in them: in the last year alone, the company’s designs have been behind over 30 billion chips, placed in devices of every conceivable description all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for running compute and cooling are lower. That keeps the environmental footprint of the technology as small as possible.

Secondly, putting AI in local settings means latency is much lower (with latency determined by the distance between local operations and the site of the AI model). Arm points to uses like instant translation, dynamic scheduling of control systems, and features like the near-immediate triggering of safety functions – for instance in IIoT settings.

Thirdly, ‘keeping it local’ means there’s no potentially sensitive data sent off-premise. The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface.

Arm silicon is optimised for power-constrained devices, making it well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is found woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, with supply chain issues and concentrated dependencies still fresh in many policymakers’ memories from the COVID-19 pandemic.

Arm lobbies for workforce development, working at present with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what the government there terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs, yet furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise in the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.

Of course, sectors already highly-regulated in terms of data practices are unlikely to experience relaxed governance in the future – the opposite is pretty much inevitable. All industries will be seeing more regulation and greater penalties for non-compliance in the years to come. However, to balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend that even the US hyperscalers are responding to: AWS’s Graviton range of low-cost, low-power Arm-based platforms exists to satisfy that exact demand.

Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may be seeing in the next 12 to 18 months. Global AI exports, particularly from the US and Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm is a company that can supply both big providers in these contexts (as part of their portfolios of offerings) and satisfy the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of the hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been in low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way that combines performance with responsibility.

Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly-responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)

 


The post Arm and the future of AI at the edge appeared first on AI News.

]]>
50,000 Copilot licences for Indian service companies https://www.artificialintelligence-news.com/news/service-provider-ai-implementations-india-enterprise-scale-copilot-rollouts/ Fri, 19 Dec 2025 13:19:12 +0000 https://www.artificialintelligence-news.com/?p=111397 Cognizant, Tata Consultancy Services, Infosys, and Wipro have announced plans to deploy more than 200,000 Microsoft Copilot licenses in their enterprises – over 50,000 per company – in what Microsoft is calling a new benchmark for enterprise-scale adoption of generative AI. The companies involved are framing the move as the implementation of a default tool […]

The post 50,000 Copilot licences for Indian service companies appeared first on AI News.

]]>
Cognizant, Tata Consultancy Services, Infosys, and Wipro have announced plans to deploy more than 200,000 Microsoft Copilot licences in their enterprises – over 50,000 per company – in what Microsoft is calling a new benchmark for enterprise-scale adoption of generative AI.

The companies involved are framing the move as the implementation of a default tool for hundreds of thousands of employees involved in consulting, delivery, operations, and software.

The announcement, made in Bengaluru on December 11, was timed to coincide with Microsoft CEO Satya Nadella’s visit to India. There, and across the industrialised world, there’s been growing momentum for agentic AI – AI systems that do more than chat, executing multi-step work in business processes. The four firms want to be seen as AI advisors for clients, with extensive experience drawn from their internal rollouts of AI.

Why enterprises care about Copilot

Readers will be familiar with Microsoft 365 Copilot, the AI assistant embedded in standard workplace tools Word, Excel, PowerPoint, Outlook, and Teams. It’s intended to help users draft, summarise, and analyse, turning natural-language queries into work-related outputs. Copilot combines large language models with Microsoft 365 apps and organisational data gained from Microsoft Graph, with the assistant working in the context of a user’s files, meetings, and messages. This ability is, of course, subject to access controls already in place and defined by the organisation.
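The point that Copilot's grounding is "subject to access controls already in place" can be illustrated with a permission-aware retrieval filter: the assistant may only surface documents the requesting user can already read. The ACL model, names, and data below are assumptions for illustration, not Microsoft Graph's actual API.

```python
# Sketch of permission-aware grounding: before any document is used as
# LLM context, filter it against the requesting user's access rights.
def accessible_documents(user, documents):
    """Return only the documents the given user is permitted to read."""
    return [
        doc for doc in documents
        if user in doc["acl"] or "everyone" in doc["acl"]
    ]

documents = [
    {"title": "Q3 board minutes", "acl": {"cfo", "ceo"}},
    {"title": "Holiday policy",   "acl": {"everyone"}},
    {"title": "Sales pipeline",   "acl": {"sales", "ceo"}},
]

# A sales user sees the public policy and their own team's data,
# but never the board minutes.
visible = accessible_documents("sales", documents)
print([d["title"] for d in visible])
```

The key property is that filtering happens before generation: documents a user cannot open are never placed in the model's context, so the assistant cannot leak them in an answer.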

For large organisations, the embedding of AI into workflows is important. A firm shouldn’t have to rebuild its toolchain to experiment with AI, but rather start using AI in the software and documents its workforce already uses.

The raft of benefits is practical and work-focused: faster documentation, quicker meeting follow-ups, faster draft proposals, better discovery of information from internal knowledge repositories, and, with agentic AI, the automation of repetitive tasks.

From Copilots to frontier firms and agents

Microsoft uses the term “Frontier Firms” to describe organisations that are “human-led and agent-operated”; where employees work alongside AI assistants and specialised agents that take on work processes.

The designation of ‘Frontier Firm’ status aligns with Microsoft’s messaging at Microsoft Ignite 2025, where the company described agents reinventing business processes and amplifying impact through human-agent teamwork.

In very simple terms, the company’s pitch is to move from “AI helps you write” to “AI helps run workflows.”

Why IT services firms are making public commitments

There are two reasons why the four firms are rolling out the technology at such a large scale. First, to improve internal productivity. The Times of India reports the deployments are intended to integrate Copilot into workflows in consulting, software development, operations and client delivery, with the aim of improved productivity.

At large multinational companies, margins depend on delivery efficiency and knowledge reuse, so shaving minutes from everyday tasks for tens of thousands of workers produces meaningful gains.

Second, client credibility. The consultancy companies serve global enterprises, including many Fortune 500 clients, which means their internal operating model can, and perhaps should, become their clients’ playbooks.

If consultancies can demonstrate mature governance, training, and measurable outcomes with Copilot at scale in their own operations, it strengthens their messaging, better able to sell similar transformations to potential and existing clients.

Hyperscalers’ investment in India

The Copilot announcement came immediately after Microsoft said it would invest $17.5 billion in India between 2026 and 2029, money destined for cloud and AI infrastructure, skilling, and operations. The company describes this as its largest investment in Asia to date. Other major tech firms are making parallel moves: Reuters reported in December 2025 that Amazon/AWS planned to invest over $35 billion in India by 2030, expanding its operations and AI capabilities, for example.

Together, such moves underscore India’s growing position as a massive enterprise market and strategic hub for AI talent and cloud infrastructure. For India’s IT services leaders, Copilot is being positioned as a way to stay ahead of the competitive curve and define “AI-first delivery.”

(Image source: “Gobbling Indian view of Clinch River” by dmott9 is licensed under CC BY-ND 2.0.)

 


The post 50,000 Copilot licences for Indian service companies appeared first on AI News.

]]>
Ensuring effective AI in insurance operations https://www.artificialintelligence-news.com/news/insurance-ai-use-operational-differences-experienced-by-the-big-players/ Thu, 18 Dec 2025 10:47:50 +0000 https://www.artificialintelligence-news.com/?p=111382 Artificial intelligence has been part of the insurance sector for years – the Finance function in many businesses is often the first to automate. But what’s remarkable in the instance of AI is how directly the technology is woven into day-to-day operational work. Not sitting in the background as a niche modelling capability, AI is […]

The post Ensuring effective AI in insurance operations appeared first on AI News.

]]>
Artificial intelligence has been part of the insurance sector for years – the Finance function in many businesses is often the first to automate. But what’s remarkable in the case of AI is how directly the technology is woven into day-to-day operational work. Rather than sitting in the background as a niche modelling capability, AI is now used in the places where insurers spend most of their time and money: claims handling, underwriting, and running complex programmes.

Industry giants Allianz, Zurich, and Aviva have published evidence in just the last 12 months illustrating their shifts from experimentation stages to production-grade tools that support frontline workers in real workflows.

Simple claims: Fewer admin bottlenecks

Claims operations are a natural proving ground for AI because they combine paperwork with human judgement, usually under time pressure. Allianz describes its Insurance Copilot as an AI-powered tool that helps claims handlers automate repetitive tasks and pull together relevant information that would otherwise require multiple searches across different systems.

Allianz outlines a notable change to the workflow. The Copilot starts with data gathering, summarising claim and contract details so a handler can get just the essentials, quickly. It then performs document analysis, interpreting agreements and comparing claims against policy details, flagging discrepancies and suggesting next steps. Once the human operator has made their decision, the Copilot assists by drafting context-aware emails.
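In outline, that staged workflow might be sketched like this. This is a hypothetical illustration only; the class, rule, and function names are ours, not Allianz’s, and the real Copilot uses language models rather than hard-coded checks:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    claimed_amount: float
    policy_limit: float
    summary: str = ""
    flags: list = field(default_factory=list)

def gather(claim: Claim) -> Claim:
    # Step 1: data gathering - condense claim and contract details to essentials.
    claim.summary = (f"Claim {claim.claim_id}: {claim.claimed_amount} "
                     f"against limit {claim.policy_limit}")
    return claim

def analyse(claim: Claim) -> Claim:
    # Step 2: document analysis - compare the claim against policy details
    # and flag discrepancies for the handler.
    if claim.claimed_amount > claim.policy_limit:
        claim.flags.append("claimed amount exceeds policy limit")
    return claim

def draft_email(claim: Claim, decision: str) -> str:
    # Step 3: only after the human handler decides, draft a context-aware email.
    status = ("needs review: " + "; ".join(claim.flags)
              if claim.flags else "no discrepancies found")
    return f"Re: claim {claim.claim_id} ({status}). Decision: {decision}."

claim = analyse(gather(Claim("C-1001", claimed_amount=12_000, policy_limit=10_000)))
# The decision itself comes from the handler, not the tool.
email = draft_email(claim, decision="partial settlement")
```

The key structural point is that the drafting step takes the human’s decision as an input; the pipeline never decides on its own.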

This is the kind of daily activity that insurers care about, and by using their AI tools, they get reduced turnaround time, smoother settlements, and less friction for staff and customers. Allianz also frames AI as a way to reduce unnecessary payouts by highlighting important factors adjusters might otherwise miss. That has a clear impact on the company’s overall bottom line.

Complex documents to usable decisions

The quality of underwriting is determined by the quality of information available. Aviva uses the example of underwriters needing to read GP medical reports. The company says it’s launching an AI-powered summarisation tool that uses genAI to analyse and summarise these reports, which can sometimes amount to dozens of pages of medical text. The AI functions let underwriters make faster, more informed decisions.

The immediate value here is not AI replacing the underwriter, but technology reducing the time spent reading. The insurer is explicit that underwriters will review summaries and make the final decision – not the AI. That distinction matters because underwriting is technical and sensitive; compressing documents into decision-ready summaries can speed up processing, but it also raises questions about accuracy, omissions, and auditability. Aviva addresses this by pointing to its “rigorous testing and controls”. An active test phase processed around 1,000 cases before roll-out to ensure the tool met the standards required, the company says.

Uncertain contracts and servicing in multinational programmes

Commercial insurance is an area with its own challenges, including the complexity of working in multiple jurisdictions and the regional differences between policies and stakeholders. Zurich says generative AI’s ability to process unstructured information lets multinational insurance teams work more easily across several countries, helping it build quicker, more accurate pictures of commercial insurance offerings, and simplifying submissions in different countries.

Zurich also highlights contract certainty as a practical outcome: multinational programmes involve layered documents, varied local requirements, and a pervasive need for constant checking. It says genAI helps internal experts compare, summarise, and verify coverage in a programme using the operator’s native language, “in a fraction of the time” compared with the manual effort required to translate and capture the nuance of international differences. Although this area isn’t customer-facing, genAI improves the company’s responsiveness by letting its underwriters, risk engineers, and claims professionals work more efficiently.

Zurich also refers to AI “joining up the dots”, able to spot trends in data that would – given the quantity of information – go unnoticed by human staff. Indeed, AI amplifies its experts’ judgement rather than displacing it.

The common thread: augmentation, not automation-for-automation’s sake

Across these three examples, a consistent pattern emerges:

  • AI handles the heavy lifting of reading, searching, and drafting: the high-volume tasks in insurance operations.
  • Humans remain accountable for the consequent decisions, whether claim payments or underwriting acceptance (Allianz describes a “human-in-the-loop” approach, and Aviva and Zurich similarly emphasise experts retaining decision-making control).
  • Operational control and scalability are treated as major concerns: pilots, testing, domain-by-domain tuning, and expansion into new lines of business are an integral part of the narrative.
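The division of labour described above can be expressed as a tiny human-in-the-loop gate. This is a generic sketch; the function names and fields are illustrative, not any insurer’s actual system:

```python
def propose(claim_amount: float, policy_limit: float) -> dict:
    # The AI side: it recommends, it never pays.
    recommendation = "approve" if claim_amount <= policy_limit else "refer"
    return {"recommendation": recommendation, "decided_by": None}

def human_decide(proposal: dict, handler: str, decision: str) -> dict:
    # The human side: payment only proceeds once a named person
    # records the final decision, creating an audit trail.
    proposal.update(decided_by=handler, decision=decision)
    return proposal

p = propose(claim_amount=8_000, policy_limit=10_000)
final = human_decide(p, handler="j.smith", decision="approve")
```

The recommendation and the decision are deliberately separate fields, so a record with `decided_by` unset is visibly incomplete.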

What this means for the sector

Insurers see faster cycle times, better consistency, reduced manual work, and a path to scaling. Their challenge is implementing tools responsibly, which means secure data handling, explainability where needed, and training teams so they can question outputs appropriately.

AI is becoming less of a headline in the sector and more of an everyday reality, a practical silicon colleague in the routine work of insurance profitability.

(Image source: “house fire” by peteSwede is licensed under CC BY 2.0.)

 


The post Ensuring effective AI in insurance operations appeared first on AI News.

]]>
AWS’s legacy will be in AI success https://www.artificialintelligence-news.com/news/awss-legacy-will-be-in-ai-success/ Mon, 15 Dec 2025 13:44:11 +0000 https://www.artificialintelligence-news.com/?p=111311 As the company that kick-started the cloud computing revolution, Amazon is one of the world’s biggest companies whose practices in all things technological can be regarded as a blueprint for implementing new technology. This article looks at some of the ways that the company is deploying AI in its operations. Amazon’s latest AI strategy has […]

The post AWS’s legacy will be in AI success appeared first on AI News.

]]>
As the company that kick-started the cloud computing revolution, Amazon is one of the world’s biggest companies, and its technology practices can be regarded as a blueprint for implementing new technology.

This article looks at some of the ways that the company is deploying AI in its operations.

Amazon’s latest AI strategy has progressed from basic chatbots to agentic AI: systems that can plan and execute multi-step work using different tools and across processes. As a company, Amazon sits at the intersection of cloud infrastructure (in the form of AWS), logistics, retail, and customer service, all of which are areas where small efficiency gains can have massive impact.

From copilots to agents: AWS builds the control plane for autonomy

In early 2025, Amazon made its AI intentions clear for its cloud company, AWS, by forming a new group focused internally on agentic AI. According to reporting on an internal email, AWS leadership described agentic AI as a potential “multi-billion” business, underscoring that the technology is regarded as a new platform layer, not a standalone feature.

The company was not afraid to say that its workforce is expected to shrink because of the technology. In June 2025, Amazon CEO Andy Jassy told employees that widespread use of generative AI and agents will change how work is done, and that over the next few years, Amazon expects routine work to become faster and more automated, slowing hiring, changing roles, and shrinking some job categories, even if other categories grow.

Amazon’s best use cases are high-volume, rules-bound workflows that require a lot of searching, checking, routing, and logging. These have, or will have, significant impact in forecasting, delivery mapping, customer service, and product content. Reuters noted examples like inventory optimisation, improved customer service, and better product detail pages as internal targets for generative AI.

Logistics and operations

Amazon has described AI-enabled upgrades in its US operations that hint at where an agentic approach may take shape. In June 2025, it outlined AI innovations that included a generative AI system to improve delivery location accuracy, a new demand forecasting model to predict what customers want (and where), and an agentic AI team looking at enabling robots to understand natural-language commands.

Consumer-facing agents

Consumer agents are where autonomy first becomes real, because systems can take actions even where money is involved. Reporting in The Verge about Alexa+ highlighted features like monitoring items for price drops and (optionally) purchasing for the user automatically once a threshold is hit – a concrete example of the agentic concept in everyday terms: the user sets constraints (price thresholds), and the system watches and executes within those boundaries.
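In code, that watch-and-execute pattern might look something like this minimal sketch. It is purely illustrative – Alexa+’s real implementation is not public – but it shows the shape of a user-bounded agent:

```python
def watch_and_buy(prices: list[float], threshold: float, budget: float):
    """Constrained purchasing agent: the user sets the threshold and budget;
    the agent only ever acts inside those boundaries."""
    for day, price in enumerate(prices):
        if price <= threshold and price <= budget:
            # Act exactly once, as soon as the user's constraint is met.
            return {"bought_on_day": day, "price": price}
    return None  # Constraint never met: the agent takes no action at all.

# Observed prices over four days; the user's threshold is 50.00.
result = watch_and_buy(prices=[59.99, 54.50, 47.00, 49.99],
                       threshold=50.00, budget=60.00)
```

The important property is the default: when no observation satisfies the constraint, the agent does nothing, rather than choosing the cheapest price it saw.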

Rufus as the Amazon AI interface

Amazon’s Rufus assistant is positioned as an AI interface to shopping, one that helps customers find products, make comparisons, and understand the trade-offs between choices. Amazon describes Rufus as powered by generative (and increasingly agentic) AI to make shopping faster, with personalisation drawn from a user’s shopping history and current context. Agents therefore become the shopping interface, with their value to the retailer lying in shortening the journey from intent to final purchase.

Agents for Amazon Bedrock and AgentCore

Internally, AWS is producing agentic ‘building blocks’. Agents for Amazon Bedrock are designed to execute multi-step tasks by orchestrating models with tool use and integration with other platforms. Amazon Bedrock AgentCore is presented as a platform to build, deploy, and operate agents securely at scale [PDF]. It has features like runtime hosting, memory, observability dashboards, and evaluation.

AgentCore is Amazon’s attempt to become the default infrastructure layer for supervised enterprise agents, especially for organisations that need auditability, access controls, and reliability.
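To make the orchestration idea concrete, here is a toy multi-step agent loop in plain Python. This is illustrative only and is not the Bedrock API: the tool names, the plan format, and the trace are all invented for the sketch.

```python
# Hypothetical tool registry: in a real agent platform these would be
# API calls or functions the agent is permitted to invoke.
TOOLS = {
    "lookup_stock": lambda item: {"widget": 3}.get(item, 0),
    "format_reply": lambda n: f"{n} in stock",
}

def run_agent(plan: list[tuple[str, object]]):
    """Execute a multi-step plan, feeding each tool's output into the next
    step and recording a trace for observability/auditing."""
    result = None
    trace = []
    for tool_name, arg in plan:
        tool = TOOLS[tool_name]
        # First step uses the planned argument; later steps chain the
        # previous result, which is the essence of multi-step orchestration.
        result = tool(arg if result is None else result)
        trace.append((tool_name, result))
    return result, trace

answer, trace = run_agent([("lookup_stock", "widget"), ("format_reply", None)])
```

The trace is the part that maps onto AgentCore’s stated emphasis on observability: every step an agent takes leaves an auditable record.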

Keeping an eye on workforce and governance

If Amazon succeeds, the next phase for the technology is managed AI: mechanisms that grant or revoke permissions for tools and data access, monitor agents’ behaviour, evaluate performance against governance guidelines, and establish escalation paths for when agents hit uncertainty.
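Those governance mechanisms can be sketched as a small control layer. The class and method names here are ours, hypothetical rather than any Amazon product:

```python
class AgentGovernor:
    """Sketch of a managed-AI control layer: grant/revoke tool permissions,
    log every attempted action, and escalate to a human on low confidence."""

    def __init__(self, confidence_floor: float = 0.8):
        self.permissions = set()
        self.audit_log = []
        self.confidence_floor = confidence_floor

    def grant(self, tool: str):
        self.permissions.add(tool)

    def revoke(self, tool: str):
        self.permissions.discard(tool)

    def execute(self, tool: str, action: str, confidence: float) -> str:
        if tool not in self.permissions:
            self.audit_log.append((tool, action, "denied"))
            return "denied"          # no permission: blocked and logged
        if confidence < self.confidence_floor:
            self.audit_log.append((tool, action, "escalated"))
            return "escalated"       # uncertainty: hand off to a human
        self.audit_log.append((tool, action, "executed"))
        return "executed"

gov = AgentGovernor()
gov.grant("crm_lookup")
ok = gov.execute("crm_lookup", "fetch customer record", confidence=0.95)
unsure = gov.execute("crm_lookup", "close account", confidence=0.40)
blocked = gov.execute("refund_api", "issue refund", confidence=0.99)
```

Note that even a high-confidence action on an unpermitted tool is denied: permissions and confidence are independent checks, and every outcome lands in the audit log.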

The signals to the workforce have been baked into leadership messaging at the company. Fewer people will be required for some corporate tasks, and there will be more roles that can design workflows, govern the models, keep systems secure, and audit the outcomes of agentic AI use.

Conclusions

A proven leader in technology, Amazon is implementing AI in meaningful ways that map out paths enterprise companies may follow. Winning the productivity gains and lower costs that AI promises is not as simple as plugging in a local device or spinning up a new cloud instance, but the company can be seen as lighting the way for others. Whether it’s supervising agents or deflecting customer queries to automated answering systems, AI is changing this technology giant in every possible way.

(Image source: “CHEN – The Arousing, Thunder – arouse, excite, inspire; thunder rising from below; awe, alarm, trembling; fertilizing intrusion. The ideogram: excitement and rain” – public domain)

 


The post AWS’s legacy will be in AI success appeared first on AI News.

]]>