Special Reports & Series - AI News
https://www.artificialintelligence-news.com/categories/features/special-reports-series/

Anthropic: Claude faces ‘industrial-scale’ AI model distillation
https://www.artificialintelligence-news.com/news/anthropic-claude-faces-industrial-scale-ai-model-distillation/
Tue, 24 Feb 2026

The post Anthropic: Claude faces ‘industrial-scale’ AI model distillation appeared first on AI News.

Anthropic has detailed three “industrial-scale” AI model distillation campaigns by overseas labs designed to extract abilities from Claude.

These competitors generated over 16 million exchanges using approximately 24,000 deceptive accounts. Their goal was to acquire proprietary logic to improve their competing platforms.

The extraction technique, known as distillation, involves training a weaker system on the high-quality outputs of a stronger one.

When applied legitimately, distillation helps companies build smaller and cheaper versions of their applications for customers. Yet, malicious actors weaponise this method to acquire powerful capabilities in a fraction of the time and cost required for independent development.
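To make the mechanics concrete: a common formulation of distillation trains the student to match the teacher’s temperature-softened output distribution. The sketch below is illustrative only – a generic distillation loss in plain Python, not the method used by any lab mentioned here:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, optionally softened."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimising this over many (prompt, teacher output) pairs is how a weaker
    model absorbs a stronger model's behaviour - which is why large volumes
    of high-quality exchanges are so valuable to an attacker.
    """
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))              # 0.0 - already matched
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True - mismatch penalised
```

The loss is zero only when the student reproduces the teacher’s distribution, which is why attackers need millions of exchanges rather than a handful.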

Protecting intellectual property like Anthropic’s Claude

Unmitigated distillation presents a severe intellectual property challenge. Because Anthropic blocks commercial access in China for national security reasons, attackers bypass regional access restrictions by deploying commercial proxy networks.

These services run what Anthropic calls “hydra cluster” architectures, which distribute traffic across APIs and third-party cloud platforms. The massive breadth of these networks means there are no single points of failure. As Anthropic noted, “when one account is banned, a new one takes its place.”

In one identified case, a single proxy network managed more than 20,000 fraudulent accounts simultaneously. These networks mix AI model distillation traffic with standard customer requests to evade detection. This directly impacts corporate resilience and forces security teams to reconsider how they monitor cloud API traffic.

Illicitly-trained models also bypass established safety guardrails, creating severe national security risks. US developers, for example, build protections to prevent state and non-state actors from using these systems to develop bioweapons or carry out malicious cyber activities.

Cloned systems lack the safeguards implemented by systems like Anthropic’s Claude, allowing dangerous capabilities to proliferate with protections stripped out entirely. Foreign competitors can feed these unprotected capabilities into military, intelligence, and surveillance systems, enabling authoritarian governments to deploy them for offensive operations.

If these distilled versions are open-sourced, the danger further multiplies as the capabilities spread freely beyond any single government’s control.

Unlawful extraction allows foreign entities, including those under the control of the Chinese Communist Party, to close the competitive gap that export controls are designed to preserve. Without visibility into these attacks, rapid advances by foreign developers can be misread as independent innovation that circumvents export controls.

In reality, these advancements depend heavily on extracting American intellectual property at scale, an effort that still requires access to advanced chips. Restricted chip access limits both direct model training and the scale of illicit distillation.

The playbook for AI model distillation

The perpetrators followed a similar operational playbook, utilising fraudulent accounts and proxy services to access systems at scale while evading detection. The volume, structure, and focus of their prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use. 

Anthropic attributed these campaigns against Claude using IP address correlation, request metadata, and infrastructure indicators. Each operation targeted a highly differentiated function: agentic reasoning, tool use, or coding.

One campaign generated over 13 million exchanges targeting agentic coding and tool orchestration. Anthropic detected this operation while it was still active, mapping timings against the competitor’s public product roadmap. When Anthropic released a new model, the competitor pivoted within 24 hours, redirecting nearly half their traffic to extract capabilities from the latest system.

Another operation generated over 3.4 million requests focused on computer vision, data analysis, and agentic reasoning. This group utilised hundreds of varied accounts to obscure their coordinated efforts. Anthropic attributed this campaign by matching request metadata to the public profiles of senior staff at the foreign laboratory. In a later phase, this competitor attempted to extract and reconstruct the host system’s reasoning traces.

Anthropic says a third AI model distillation campaign targeting Claude extracted reasoning capabilities and rubric-based grading data through over 150,000 interactions. This group forced the targeted system to map out its internal logic step-by-step, effectively generating massive volumes of chain-of-thought training data. They also extracted censorship-safe alternatives to politically sensitive queries to train their own systems to steer conversations away from restricted topics. The perpetrators generated synchronised traffic using identical patterns and shared payment methods to enable load balancing. 

Request metadata for this third campaign traced these accounts back to specific researchers at the laboratory. These requests often appear benign on their own, such as a prompt simply asking the system to act as an expert data analyst delivering insights grounded in complete reasoning. But when variations of that exact prompt arrive tens of thousands of times across hundreds of coordinated accounts targeting the same narrow capability, the extraction pattern becomes clear.

Massive volume concentrated in specific areas, highly repetitive structures, and content mapping directly to training needs are the hallmarks of a distillation attack.
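Those hallmarks lend themselves to simple first-pass heuristics. The sketch below is purely illustrative – the thresholds and features are invented for the example, not drawn from Anthropic’s actual classifiers – but it shows how volume and prompt repetitiveness across a cluster of coordinated accounts can be combined into a flag:

```python
from collections import Counter

def repetitiveness(prompts):
    """Fraction of requests reusing the cluster's single most common prompt."""
    if not prompts:
        return 0.0
    counts = Counter(prompts)
    return counts.most_common(1)[0][1] / len(prompts)

def flag_distillation(cluster, volume_threshold=10_000, repeat_threshold=0.5):
    """Flag an account cluster when massive volume coincides with highly
    repetitive structure - two of the hallmarks described above.

    cluster maps account IDs to the list of prompts each account sent.
    """
    volume = sum(len(prompts) for prompts in cluster.values())
    all_prompts = [p for prompts in cluster.values() for p in prompts]
    return volume >= volume_threshold and repetitiveness(all_prompts) >= repeat_threshold

# Coordinated accounts hammering one "expert data analyst" template:
attack = {f"acct-{i}": ["act as an expert data analyst"] * 200 for i in range(100)}
print(flag_distillation(attack))  # True

# A single user asking varied questions stays under both thresholds:
normal = {"acct-0": [f"question {i}" for i in range(50)]}
print(flag_distillation(normal))  # False
```

Real classifiers would add semantic similarity, timing correlation, and payment-method linkage, but the principle – individually benign requests becoming suspicious in aggregate – is the same.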

Implementing actionable defences

Protecting enterprise environments requires adopting multi-layered defences to make such extraction efforts harder to execute and easier to identify. Anthropic advises implementing behavioural fingerprinting and traffic classifiers designed to identify AI model distillation patterns in API traffic.

IT leaders must also strengthen verification processes for common vulnerability pathways, such as educational accounts, security research programmes, and startup organisations.

Companies should integrate product-level and API-level safeguards designed to reduce the efficacy of model outputs for illicit distillation. This must be done without degrading the experience for legitimate, paying customers.

Detecting coordinated activity across large numbers of accounts is an absolute necessity. This includes specifically monitoring for the continuous elicitation of chain-of-thought outputs used to construct reasoning training data.

Cross-industry collaboration also remains essential, as these attacks are growing in intensity and sophistication. This requires rapid and coordinated intelligence sharing across AI laboratories, cloud providers, and policymakers.

Anthropic has published its findings about Claude being targeted by AI model distillation campaigns to provide a more holistic picture of the landscape and make the evidence available to all stakeholders. By treating AI architectures with rigorous access controls, technology officers can secure their competitive edge while ensuring ongoing governance.

See also: How disconnected clouds improve AI data governance


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

How Amul is using AI dairy farming to put 36M farmers first
https://www.artificialintelligence-news.com/news/amul-ai-dairy-farming-platform-india/
Mon, 23 Feb 2026

The post How Amul is using AI dairy farming to put 36M farmers first appeared first on AI News.

AI dairy farming has found its most ambitious deployment yet – not in a Silicon Valley lab nor a European agri-tech campus, but in the villages of Gujarat, India, where 36 lakh (3.6 million) women milk producers are now being served by an AI assistant named Sarlaben.

Amul, the world’s largest dairy cooperative, has launched what it calls Amul AI: a platform built on five decades of cooperative data, designed to give every farmer in its network round-the-clock, personalised guidance in their own language.

Amul AI was launched just ahead of India’s AI Impact Summit 2026 and backed by the Ministry of Electronics and Information Technology (MeitY) with the EkStep Foundation. It is a test case for whether AI – the kind being debated in boardrooms and policy forums globally – can actually reach the last mile.

Meet Sarlaben: The AI dairy farming assistant

Sarlaben draws from one of India’s most comprehensive agricultural data repositories. It’s accessible via the Amul Farmer mobile app – already downloaded by over 10 lakh (one million) users on Android and iOS – as well as through voice calls for farmers using feature phones or landlines.

The system is integrated with Amul’s Automatic Milk Collection System (AMCS) and the Pashudhan application, allowing it to offer personalised, cattle-specific guidance.

What makes Amul AI substantially different from most agricultural chatbots is the scale of its training data. The platform was built on a digital backbone managing over 200 crore (two billion) milk procurement transactions annually, veterinary treatment records from more than 1,200 doctors covering nearly 3 crore (30 million) cattle, approximately 70 lakh (seven million) artificial inseminations conducted each year, ISRO satellite imagery for fodder production mapping, and a cattle census conducted every five years.

Every animal in the system carries a unique ID, with individual records of feed intake, disease history and milking status. “Amul AI is about taking dependable, verified information directly to the farmer – instantly and in a language they are comfortable with,” said Jayen Mehta, Managing Director of the Gujarat Cooperative Milk Marketing Federation (GCMMF), which markets the Amul brand.

He added that, by using decades of structured data and integrating it with Amul’s operational systems, the platform will help farmers make timely decisions that improve animal productivity and income.

India’s productivity paradox

India is the world’s largest producer of milk, generating 347.87 million tonnes in 2024-25 according to the Department of Animal Husbandry and Dairying – more than double the US’s 102.70 million tonnes. And yet despite leading in volume, India’s per-animal milk yield remains among the lowest globally.

The reasons are structural. India’s dairy sector is characterised by small herd sizes, low-quality feed, limited access to veterinary care in rural areas, and widespread lack of awareness about modern breeding and husbandry practices. Amul’s network spans more than 18,600 villages in Gujarat, where farmers supply over 350 lakh litres (35 million litres) of milk daily.

But information asymmetry has long been a bottleneck – a farmer facing a sick animal at midnight in a remote village has few places to turn. That is the gap Amul AI is designed to close.

Available initially in Gujarati – the primary language of the cooperative’s farmer base – the platform is built on the government’s Bhashini multilingual framework and could, in principle, be extended to 20 Indian languages, reaching Amul’s presence in 20,000 villages in 20 states.

The cooperative model

The technology story here is inseparable from the institutional one. Amul’s cooperative structure – built over five decades under the original White Revolution – created the data infrastructure that makes Amul AI possible.

Most private agri-tech startups are working backwards: collecting data first, building products second. Amul already had the data. What was needed was a way to make it actionable at the farmer level.

Experts tracking the dairy-tech space see this as significant. Sreeshankar Nair, Founder of Brainwired, a dairy-tech startup, identifies three specific challenges that Amul AI could meaningfully address: farmer awareness, access to quality veterinary guidance, and connectivity to grazing and feed resources.

“If AI can integrate local dialects of Indian languages, India can have White Revolution 2.0,” Nair said, pointing to the transformative potential of vernacular AI in a sector where not every farmer speaks the same dialect.

Saswata Narayan Biswas, Director of the Institute of Rural Management, Anand (IRMA) – the institution closely associated with Amul’s founding ethos – frames it as an AI embedded in a cooperative framework. It becomes “not a technology upgrade, but an instrument of inclusive rural transformation.”

For Biswas, the specific abilities Amul AI brings – predictive disease detection, oestrus tracking, optimised feed formulation, localised weather risk advisories – are abilities Amul had been building for years. AI accelerates and democratises them.

Scale and the test ahead

The launch has drawn backing from the highest levels of government. Gujarat Chief Minister Bhupendra Patel launched the platform and confirmed it will be showcased at the AI Impact Summit 2026. The cooperative has acknowledged MeitY and the EkStep Foundation – an open digital infrastructure nonprofit – as partners in building the AI layer.

Farmers not affiliated with Amul can also access general dairying and animal husbandry information through the app. At its current scale, Amul AI already covers more cattle – nearly 3 crore (30 million) – than most national veterinary databases anywhere in the world.

The harder question, as with most AI deployments at a population scale, is whether the tool will serve those who need it most. The farmers most likely to benefit first – those already comfortable with smartphones, already plugged into Amul’s digital system – may not be the ones with the greatest information deficit.

The rollout of Bhashini-enabled dialect support, the adoption rate among feature-phone users relying on voice calls, and whether AI-driven advisories translate into measurable yield improvements will be the metrics that determine whether this is genuinely White Revolution 2.0.

Amul has built an AI system grounded in half a century of real cooperative transactions, real animals, and real farmers. Such an infrastructure is, arguably, the most credible foundation for AI dairy farming at scale. Whether it fulfils its promise will depend on execution – and on whether Sarlaben’s voice can reach the last few miles, the ones that have always been the hardest to cross.

See also: Hitachi bets on industrial expertise to win the physical AI race


Agentic AI drives finance ROI in accounts payable automation
https://www.artificialintelligence-news.com/news/agentic-ai-drives-finance-roi-in-accounts-payable-automation/
Fri, 13 Feb 2026

The post Agentic AI drives finance ROI in accounts payable automation appeared first on AI News.

Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows.

While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.

Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.

Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems.

These experiments often fail to pay off. Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow.

Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”

Accounts payable as the proving ground for agentic AI in finance

Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.

Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm functions with high autonomy when parameters are correct.
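Duplicate-invoice detection, one of the live deployments mentioned above, reduces to a well-understood keying problem once invoice fields are normalised. The sketch below is a minimal illustration – the field names and normalisation rules are assumptions for the example, not Basware’s implementation:

```python
def invoice_key(inv):
    """Normalise the fields that typically identify a duplicate:
    same supplier, same invoice number, same amount."""
    return (
        inv["supplier"].strip().lower(),
        inv["invoice_no"].strip().upper().replace("-", ""),
        round(float(inv["amount"]), 2),
    )

def find_duplicates(invoices):
    """Return invoices whose normalised key has already been seen."""
    seen, dupes = set(), []
    for inv in invoices:
        key = invoice_key(inv)
        if key in seen:
            dupes.append(inv)
        else:
            seen.add(key)
    return dupes

batch = [
    {"supplier": "Acme Ltd",   "invoice_no": "INV-001", "amount": "150.00"},
    {"supplier": " acme ltd ", "invoice_no": "inv001",  "amount": "150.0"},  # same invoice, messy entry
    {"supplier": "Acme Ltd",   "invoice_no": "INV-002", "amount": "99.00"},
]
print(len(find_duplicates(batch)))  # 1
```

Where an AI agent adds value over this rule-based core is in the fuzzy cases – near-matching amounts, OCR errors in invoice numbers – that exact keys miss.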

Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight.

Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.

The build versus buy decision matrix

Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.

Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build them in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.

This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.

Governance as an enabler of speed

Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.

Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) compared to their less confident peers (6%).

Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues. The system requires trust but should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, ensuring a human remains in the loop to maintain responsibility.

Digital workers raise concerns regarding displacement. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminating it.

Automating manual tasks such as information extraction from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.

Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.

Executives must move beyond unguided experimentation to replicate the success of early adopters. Data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI.

Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.

See also: AI deployment in financial services hits an inflection point as Singapore leads the shift to production


From blogosphere to the AI & Big Data Expo: Rackspace and operational AI
https://www.artificialintelligence-news.com/news/combing-the-rackspace-blogfiles-for-operational-ai-pointers/
Wed, 04 Feb 2026

The post From blogosphere to the AI & Big Data Expo: Rackspace and operational AI appeared first on AI News.

In recent blog posts, Rackspace refers to bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames these through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort.

One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defense centre. With security teams facing a high volume of alerts and logs, standard detection engineering doesn’t scale when it depends on manually written security rules. Rackspace says its RAIDER system unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it’s cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters.

The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repeating tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as preventing senior engineers from being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail as teams discover they have modernised infrastructure but not operating practices.

Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry (plus historical data) is used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments.

In a post describing AI-enabled operations, the company stresses the importance of a focused strategy, governance, and operating models. It specifies the machinery needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning, or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware.

The company noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but having it writ large by a big, technology-first player is illustrative of the issues faced by many enterprise-scale AI deployments.

A company of even greater size, Microsoft, is working to coordinate autonomous agents’ work across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. It’s noteworthy, though, that Rackspace echoes Redmond’s point that productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations.

Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. Its future plans can perhaps be discerned in a January article published on the company’s blog concerning private cloud AI trends. In it, the author argues that inference economics and governance will drive architecture decisions well into 2026, anticipating ‘bursty’ exploration in public clouds while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty.

For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction while remaining wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where data governance makes strict oversight necessary, and assess where inference costs might be reduced by bringing some processing in-house.

(Image source: Pixabay)


Gallup Workforce shows details of AI adoption in US workplaces
https://www.artificialintelligence-news.com/news/gallup-workforce-ai-shows-details-of-ml-adoption-in-us-workplaces/
Wed, 28 Jan 2026

The post Gallup Workforce shows details of AI adoption in US workplaces appeared first on AI News.

Artificial intelligence has moved into the US workplace, but its adoption remains uneven, fragmented, and tied to role, industry, and organisation. Findings from a Gallup Workforce survey covering the period to the end of December 2025 show how employees use AI, who benefits most from it, and where areas of uncertainty remain.

The findings draw from a nationally representative survey of more than 23,000 US adults in full- and part-time work, conducted online in August 2025. Its conclusions are that instances of AI in the workplace are increasing, but its use is far from universal and is concentrated among knowledge-based workers.

The office AI

Employees in technology, finance, and professional services are by far the biggest user group. More than three-quarters of those working in IT report using AI “at least a few times a year”. In finance and professional services, the figure is a touch under 60%. AI-enabled or aided roles tend to be those that involve significant digital workflow and information synthesis; tasks that correspond with AI’s current abilities.

AI use is lower in sectors dominated by customer-facing or manual work. Only around a third of retail workers report comparable levels of use to their office counterparts, although those in healthcare and manufacturing tend to deploy AI more often than those in retail. That the current raft of AI platforms fits more naturally into desk-based, cognitive roles seems obvious – less obvious is the drop-off in user numbers in tightly-regulated environments.

Do we, or don’t we?

Gallup’s data reveals that a significant number of workers are unsure whether their employer has adopted AI – nearly a quarter of those surveyed weren’t sure. In the third quarter of 2025, just over a third of employees said their organisation had implemented AI, while 40% said there was no adoption of AI in their place of work.

It’s worth noting that earlier versions of Gallup surveys didn’t include a “don’t know” option for questions about employers’ AI adoption, which encouraged respondents to guess. This, Gallup says, is why belief in organisational AI adoption appeared to rise sharply between 2024 and 2025. Once uncertainty could be stated explicitly, it became clear that a good number of employees were simply uninformed on the matter.

It’s staff in non-managerial roles who are more likely to say they’re unaware of their organisation’s AI use, a tendency mirrored in part-time staff and hands-on roles. The further workers are from decision-making, it seems, the less sure they become.

How workers use AI

The ways employees use AI are consistent: among those using AI at least once a year, the most common applications are consolidating information, searching for information, and “generating ideas” – tasks that have changed little since Gallup first measured workplace AI use in 2024.

More than 60% of AI users rely on chatbots, with writing and editing assistance coming some way behind. Coding assistants and data science tools remain niche, though popular with those who do use them. Employees who use AI frequently are far more likely to use the more advanced tools at their disposal – particularly coding assistants and data analysis tools.

Although use figures are generally up, Gallup concludes that AI has yet to be embedded in daily work for most Americans. Around 45% of workers say they use AI “a few times a year”, but only about 10% use it every day.

Conclusions

Business leaders have an easy win here: simply clarifying a position on AI use would be a positive step, and publicising the availability (or otherwise) of AI tools would be an easy way to improve adoption rates.

The current abilities of AI map to desk-based, digital, and data-centric workflows, although a myriad of platforms apply AI to other roles. Exploring these more fully would mean bucking the trend, and may make the difference between an organisation’s long-term prospects and those of its direct competitors.

A page detailing Gallup’s findings can be found on the company’s website.

(Image source: “DIY Open Plan Office” by lower29 is licensed under CC BY-NC-SA 2.0.)

 


The post Gallup Workforce shows details of AI adoption in US workplaces appeared first on AI News.

]]>
Databricks: Enterprise AI adoption shifts to agentic systems https://www.artificialintelligence-news.com/news/databricks-enterprise-ai-adoption-shifts-agentic-systems/ Tue, 27 Jan 2026 17:26:45 +0000 https://www.artificialintelligence-news.com/?p=111880 According to Databricks, enterprise AI adoption is shifting to agentic systems as organisations embrace intelligent workflows. Generative AI’s first wave promised business transformation but often delivered little more than isolated chatbots and stalled pilot programmes. Technology leaders found themselves managing high expectations with limited operational utility. However, new telemetry from Databricks suggests the market has […]

The post Databricks: Enterprise AI adoption shifts to agentic systems appeared first on AI News.

]]>
According to Databricks, enterprise AI adoption is shifting to agentic systems as organisations embrace intelligent workflows.

Generative AI’s first wave promised business transformation but often delivered little more than isolated chatbots and stalled pilot programmes. Technology leaders found themselves managing high expectations with limited operational utility. However, new telemetry from Databricks suggests the market has turned a corner.

Data from over 20,000 organisations – including 60 percent of the Fortune 500 – indicates a rapid shift toward “agentic” architectures where models do not just retrieve information but independently plan and execute workflows.

This evolution represents a fundamental reallocation of engineering resources. Between June and October 2025, the use of multi-agent workflows on the Databricks platform grew by 327 percent. This surge signals that AI is graduating to a core component of system architecture.

The ‘Supervisor Agent’ drives enterprise adoption of agentic AI

Driving this growth is the ‘Supervisor Agent’. Rather than relying on a single model to handle every request, a supervisor acts as an orchestrator, breaking down complex queries and delegating tasks to specialised sub-agents or tools.

Since its launch in July 2025, the Supervisor Agent has become the leading agent use case, accounting for 37 percent of usage by October. This pattern mirrors human organisational structures: a manager does not perform every task but ensures the team executes them. Similarly, a supervisor agent manages intent detection and compliance checks before routing work to domain-specific tools.
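The orchestration pattern described above can be illustrated with a minimal sketch. Everything here is hypothetical – the function names, the keyword-based intent detection, and the sub-agent registry are stand-ins for model calls, not Databricks APIs:

```python
# Hypothetical sketch of the supervisor-agent pattern: a supervisor runs a
# compliance check, detects intent, then delegates to a specialised sub-agent.

def compliance_check(request: str) -> bool:
    """Reject requests containing obviously disallowed content (illustrative)."""
    banned = ("ssn", "password")
    return not any(term in request.lower() for term in banned)

def detect_intent(request: str) -> str:
    """Crude keyword matching standing in for a model-based intent classifier."""
    if "invoice" in request.lower():
        return "finance"
    if "contract" in request.lower():
        return "legal"
    return "general"

# Domain-specific sub-agents; in practice each would wrap its own model/tools.
SUB_AGENTS = {
    "finance": lambda r: f"[finance agent] processed: {r}",
    "legal":   lambda r: f"[legal agent] reviewed: {r}",
    "general": lambda r: f"[general agent] answered: {r}",
}

def supervisor(request: str) -> str:
    """Check compliance, route by intent, and delegate – never answer directly."""
    if not compliance_check(request):
        return "Request blocked by compliance check."
    return SUB_AGENTS[detect_intent(request)](request)

print(supervisor("Please review this contract clause"))
```

Real deployments replace the keyword checks with model calls, but the shape is the same: the supervisor never does the domain work itself.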

Technology companies currently lead this adoption, building nearly four times more multi-agent systems than any other industry. Yet the utility extends across sectors. A financial services firm, for instance, might employ a multi-agent system to handle document retrieval and regulatory compliance simultaneously, delivering a verified client response without human intervention.

Traditional infrastructure under pressure

As agents graduate from answering questions to executing tasks, underlying data infrastructure faces new demands. Traditional Online Transaction Processing (OLTP) databases were designed for human-speed interactions with predictable transactions and infrequent schema changes. Agentic workflows invert these assumptions.

AI agents now generate continuous, high-frequency read and write patterns, often creating and tearing down environments programmatically to test code or run scenarios. The scale of this automation is visible in the telemetry data. Two years ago, AI agents created just 0.1 percent of databases; today, that figure sits at 80 percent.

Furthermore, 97 percent of database testing and development environments are now built by AI agents. This capability allows developers and “vibe coders” to spin up ephemeral environments in seconds rather than hours. Over 50,000 data and AI apps have been created since the Public Preview of Databricks Apps, with a 250 percent growth rate over the past six months.

The multi-model standard

Vendor lock-in remains a persistent risk for enterprise leaders as they seek to increase agentic AI adoption. The data indicates that organisations are actively mitigating this by adopting multi-model strategies. As of October 2025, 78 percent of companies utilised two or more Large Language Model (LLM) families, such as ChatGPT, Claude, Llama, and Gemini.

The sophistication of this approach is increasing. The proportion of companies using three or more model families rose from 36 percent to 59 percent between August and October 2025. This diversity allows engineering teams to route simpler tasks to smaller and more cost-effective models while reserving frontier models for complex reasoning.
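The routing logic behind that cost trade-off can be sketched in a few lines. The model tiers, per-token costs, and complexity heuristic below are invented for illustration, not taken from the report:

```python
# Illustrative cost-aware router: simple requests go to a small model,
# complex multi-step ones to a frontier model. All names/costs are made up.

MODELS = {
    "small":    {"cost_per_1k_tokens": 0.0005},
    "frontier": {"cost_per_1k_tokens": 0.0150},
}

def estimate_complexity(prompt: str) -> int:
    # Stand-in heuristic: longer prompts and reasoning keywords score higher.
    score = len(prompt.split()) // 50
    score += sum(prompt.lower().count(k) for k in ("step", "analyse", "plan"))
    return score

def route(prompt: str) -> str:
    # Threshold of 2 is arbitrary; real routers tune this against eval data.
    return "frontier" if estimate_complexity(prompt) >= 2 else "small"

print(route("What is our refund policy?"))            # a simple lookup
print(route("Plan each step and analyse the risks"))  # multi-step reasoning
```

Production routers typically use a small classifier model rather than keywords, but the economics are the same: only pay frontier-model rates where the extra reasoning is needed.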

Retail companies are setting the pace, with 83 percent employing two or more model families to balance performance and cost. A unified platform capable of integrating various proprietary and open-source models is rapidly becoming a prerequisite for the modern enterprise AI stack.

Contrary to the big data legacy of batch processing, agentic AI operates primarily in the now. The report highlights that 96 percent of all inference requests are processed in real-time.

This is particularly evident in sectors where latency correlates directly with value. The technology sector processes 32 real-time requests for every single batch request. In healthcare and life sciences, where applications may involve patient monitoring or clinical decision support, the ratio is 13 to one. For IT leaders, this reinforces the need for inference serving infrastructure capable of handling traffic spikes without degrading user experience.

Governance accelerates enterprise AI deployments

Perhaps the most counter-intuitive finding for many executives is the relationship between governance and velocity. Often viewed as a bottleneck, rigorous governance and evaluation frameworks function as accelerators for production deployment.

Organisations using AI governance tools put over 12 times more AI projects into production compared to those that do not. Similarly, companies employing evaluation tools to systematically test model quality achieve nearly six times more production deployments.

The rationale is straightforward. Governance provides necessary guardrails – such as defining how data is used and setting rate limits – which gives stakeholders the confidence to approve deployment. Without these controls, pilots often get stuck in the proof-of-concept phase due to unquantified safety or compliance risks.

The value of ‘boring’ enterprise automation from agentic AI

While autonomous agents often conjure images of futuristic capabilities, current enterprise value from agentic AI lies in automating the routine, mundane, yet necessary tasks. The top AI use cases vary by sector but focus on solving specific business problems:

  • Manufacturing and automotive: 35% of use cases focus on predictive maintenance.
  • Health and life sciences: 23% of use cases involve medical literature synthesis.
  • Retail and consumer goods: 14% of use cases are dedicated to market intelligence.

Furthermore, 40 percent of the top AI use cases address practical customer concerns such as customer support, advocacy, and onboarding. These applications drive measurable efficiency and build the organisational muscle required for more advanced agentic workflows.

For the C-suite, the path forward involves less focus on the “magic” of AI and more on the engineering rigour surrounding it. Dael Williamson, EMEA CTO at Databricks, highlights that the conversation has shifted.

“For businesses across EMEA, the conversation has moved on from AI experimentation to operational reality,” says Williamson. “AI agents are already running critical parts of enterprise infrastructure, but the organisations seeing real value are those treating governance and evaluation as foundations, not afterthoughts.”

Williamson emphasises that competitive advantage is shifting back towards how companies build, rather than simply what they buy.

“Open, interoperable platforms allow organisations to apply AI to their own enterprise data, rather than relying on embedded AI features that deliver short-term productivity but not long-term differentiation.”

In highly regulated markets, this combination of openness and control is “what separates pilots from competitive advantage.”

See also: Anthropic selected to build government AI assistant pilot

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Databricks: Enterprise AI adoption shifts to agentic systems appeared first on AI News.

]]>
Anthropic’s usage stats paint a detailed picture of AI success https://www.artificialintelligence-news.com/news/anthropic-report-economic-index-summary-key-points-2026/ Fri, 23 Jan 2026 14:21:20 +0000 https://www.artificialintelligence-news.com/?p=111701 Anthropic’s Economic Index offers a look at how organisations and individuals are actually using large language models. The report contains the company’s analysis of a million consumer interactions on Claude.ai, plus a million enterprise API calls, all dated from November 2025. The report notes that its figures are based on observations, rather than, for example, […]

The post Anthropic’s usage stats paint a detailed picture of AI success appeared first on AI News.

]]>
Anthropic’s Economic Index offers a look at how organisations and individuals are actually using large language models. The report contains the company’s analysis of a million consumer interactions on Claude.ai, plus a million enterprise API calls, all dated from November 2025. The report notes that its figures are based on observations, rather than, for example, a sample of business decision-makers or a generic survey.

Limited use cases dominate

Use of Anthropic’s AI tends to cluster around a relatively small number of tasks, with the ten most frequently-performed tasks accounting for almost a quarter of consumer interactions, and nearly a third of enterprise API traffic. There’s a focus on the use of Claude for code creation and modification, as readers might expect.

This concentration of use of AI as a software development tool has remained fairly constant over time, suggesting that the model’s value is largely based around these types of tasks, with no emerging use of Claude for other purposes of any empirical significance. This suggests that broad, general rollouts of AI are less likely to be successful than those focused on tasks where large language models are proven to be effective.

Augmentation outperforms automation

On consumer platforms, collaborative use – where users iterate on queries to the AI over the course of a virtual conversation – is more common than using the AI to produce automated workflows. Enterprise API usage shows the opposite, as businesses attempt to gain savings through automating tasks. However, while Claude succeeds on shorter tasks, the observed quality of outcomes declines the more complex the task (or series of tasks) is, and the longer the ‘thinking time’ required.

This implies automation is most effective for routine, well-defined tasks that are simpler, require fewer logical steps, and where responses to queries can be quick. Tasks estimated to take humans several hours show significantly lower completion rates than shorter tasks. For longer tasks to succeed, users have to iterate and correct outputs.

Users who break large tasks down into manageable steps and pose each separately (either interactively or via API) see improved success rates.
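That decomposition approach can be sketched as follows. This is a minimal illustration, assuming a placeholder `call_model` function and a caller-supplied validation check – neither is an Anthropic API:

```python
# Sketch of step-wise decomposition: rather than one large prompt, a long
# task is split into steps, each sent separately and validated before the
# next, so errors are caught early instead of compounding.

def call_model(prompt: str) -> str:
    """Placeholder for an LLM API call."""
    return f"result for: {prompt}"

def run_decomposed(steps, validate):
    results = []
    for step in steps:
        output = call_model(step)
        if not validate(output):
            raise ValueError(f"step failed validation: {step}")
        results.append(output)  # each result can seed the next step's prompt
    return results

results = run_decomposed(
    ["summarise the data", "draft the report", "list follow-up actions"],
    validate=lambda out: len(out) > 0,
)
```

The validation hook is the point: it is where the "extra labour" the report describes (checking, error handling, rework) gets built into the workflow rather than bolted on afterwards.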

The company’s observations show most queries put to the LLMs are associated with white-collar roles (although poorer countries tend to use Claude in academic settings more commonly than, for instance, the US). For example, travel agents can lose complex planning tasks to the LLM while retaining elements of their more transactional work, whereas some roles, such as property managers, show the opposite: routine administrative tasks can be handled by the AI, and tasks needing higher judgement remain with the human professional.

Productivity gains lessened by reliability

The report notes that claims of AI boosting annual labour productivity by 1.8% (over a decade) are best reduced to 1-1.2%, to factor in extra labour and costs. While a 1% efficiency gain over a decade is still economically meaningful, the need for activities such as validation, error handling, and reworking lowers success rates, and a business’s decision-makers should adjust their expectations similarly.

Potential gains to an organisation deploying AI also depend on whether tasks given to the LLM complement or substitute for human work. In the latter case, the success of substituting an AI for tasks normally done by a human depends on how complex the work is.

It’s noteworthy that the report finds a near-perfect correlation between the sophistication of users’ prompts to the LLM and successful outcomes. Thus, how people use AI shapes what it delivers.

Key takeaways for leaders

  • AI implementation delivers value fastest in specific, well-defined areas.
  • Complementary systems (AI+human) outperform full automation for complex work.
  • Reliability and necessary extra work ‘around’ the AI reduce predicted productivity gains.
  • Changes to workforces’ makeup depend on the mix of tasks and their complexity, not specific job roles.

(Image source: “the virtual construction worker” by antjeverena is licensed under CC BY-NC-SA 2.0.)

 


The post Anthropic’s usage stats paint a detailed picture of AI success appeared first on AI News.

]]>
2026 to be the year of the agentic AI intern https://www.artificialintelligence-news.com/news/agent-ai-as-the-intern-in-2026-prediction-by-nexos-ai/ Thu, 08 Jan 2026 12:24:21 +0000 https://www.artificialintelligence-news.com/?p=111522 After several years of experimentation, enterprise AI is moving out of the pilot phase. To date, many organisations limit AI to general-purpose chatbots, often created by small groups of early adopters. According to Nexos.ai, that model will give way to something more operational: fleets of task-specific AI agents embedded directly into business workflows. Even isolated […]

The post 2026 to be the year of the agentic AI intern appeared first on AI News.

]]>
After several years of experimentation, enterprise AI is moving out of the pilot phase. To date, many organisations limit AI to general-purpose chatbots, often created by small groups of early adopters. According to Nexos.ai, that model will give way to something more operational: fleets of task-specific AI agents embedded directly into business workflows.

Even isolated agents are in common use, screening CVs, reviewing contracts, drafting routine correspondence, preparing management reports and orchestrating actions in enterprise systems.

Analysis from the company suggests organisations that move from single chatbots to multiple role-specific agents see materially higher adoption and claim a clearer business impact. Teams interact with agents that can behave like junior colleagues, where each agent is accountable for a defined slice of work.

Every team gets its own named agent

The company’s studies envisage the normalisation of named AI agents assigned on a per team basis, which it describes as an “AI intern”. These are not general-purpose assistants, but dedicated tools for specific operational processes.

For example, HR teams might deploy agents tuned to recruitment criteria, while legal teams use agents configured to flag breaches of contract standards. Sales teams will rely on agents optimised for their sales pipelines and integrated with an existing CRM. In each case, Nexos says the business value comes from contextual awareness and integration with existing software and data, rather than from advances in the raw power of the model.

Early enterprise deployments suggest the gains can be significant. Payhawk, for example, reports that its deployment of Nexos.ai’s agentic platform in finance, customer support, and operations reduced the necessary security investigation time by 80%. The company achieved 98% data accuracy and cut its processing costs by 75%.

Žilvinas Girėnas, head of product at Nexos.ai, says the real benefit stems from coordination. “The shift from single-purpose agents to coordinated AI teams is fundamental. Businesses are […] building groups of specialised agents that work together in a workflow. That’s when AI stops being a pilot and starts becoming infrastructure.”

Platform consolidation becomes unavoidable

As the number of active agents in organisations rises, a second-order problem – fragmentation – appears. Teams running five to ten agents in different tools face duplicate costs and inconsistency in security controls. From the perspective of IT governance, this situation can become unsustainable.

Evidence from early Nexos adopters suggests consolidating agents on an enterprise-wide shared platform delivers faster deployment – in some cases twice as fast – and better oversight of spend and performance.

Girėnas says: “When teams are juggling multiple vendors and logins, usage drops. A single platform is what allows organisations to extract consistent value rather than paying for shelfware.”

The situation points to a pattern familiar to enterprise technology veterans: AI agent systems are following the same trajectory of consolidation seen in collaboration, security, and analytics stacks.

AI operations shifts to the business

The company’s findings suggest that ownership of AI operations is moving away from engineering teams and towards business leaders and discrete business functions. The function-specific deployment model means heads of HR, legal, finance, and sales will be expected to configure their own agents, a task that includes prompt management. The ability to manage agents will thus become a core operational competency for individuals and business functions.

This places new requirements on agentic platforms, with the need for interfaces that are approachable by non-technical users, with the stack operating with minimal reliance on APIs or developer-style tooling. Team leads will need to be able to adjust instructions, test outputs from their adopted systems and find ways to scale successful configurations. Engineering support will be reserved for isolated problem-solving.

Demand will outstrip delivery capacity

Nexos.ai’s final prediction is the appearance of a capacity challenge. It says that once teams deploy their first few agents successfully, demand for similar systems will accelerate across the organisation. Marketing departments may look for workflow automation, finance professionals will want compliance-checking agents, and customer success teams will explore support triage. Each department, seeing proven value elsewhere, will expect similar capabilities and efficiencies.

Industry projections suggest that by the end of 2026, around 40% of enterprise software applications will incorporate task-specific AI agents, up from under 5% in 2024. Engineering capacity is unlikely to keep pace if every agent is built from scratch – thus the call for centralised capability.

“The organisations that cope best will be those with agent libraries rather than bespoke builds,” Girėnas says. “Templates, playbooks, and pre-built agents are the only way to meet rising demand without overwhelming delivery teams.”

(Image source: “Office Assistant” by LornaJane.net is licensed under CC BY-ND 2.0.)

 


The post 2026 to be the year of the agentic AI intern appeared first on AI News.

]]>
Optimism for AI-powered productivity: Deloitte https://www.artificialintelligence-news.com/news/deloitte-survey-takes-cfo-and-it-temperature-around-technology-and-ai/ Wed, 07 Jan 2026 15:59:47 +0000 https://www.artificialintelligence-news.com/?p=111509 Deloitte’s latest UK CFO Survey presents an improving outlook for large UK businesses, with technology investment – particularly in AI – emerging as a dominant strategy. The survey offers the signal that while macroeconomic and geopolitical risks remain elevated, boards are converging increasingly on digital ability as a primary route to productivity and medium-term growth. […]

The post Optimism for AI-powered productivity: Deloitte appeared first on AI News.

]]>
Deloitte’s latest UK CFO Survey presents an improving outlook for large UK businesses, with technology investment – particularly in AI – emerging as a dominant strategy. The survey signals that while macroeconomic and geopolitical risks remain elevated, boards are increasingly converging on digital capability as a primary route to productivity and medium-term growth.

The strongest finding concerns technology investment. An overwhelming 96% of CFOs expect UK companies to increase investment in technology over the next five years, with 77% anticipating improvements to productivity and business performance. The figures are distinctive for a paper aimed at CFOs, and indicate digital spend is not viewed as discretionary or cyclical, but as structural (akin to capital investment in previous industrial phases). For IT leaders, the paper shows sustained funding is available, but also points to heightened expectations for delivery, integration, and measurable returns from the technology.

Artificial intelligence sits at the centre of the paper and of CFO sentiment in general. The proportion of CFOs ‘more optimistic’ about AI’s ability to improve organisational performance has risen to 59%, up from 39% in Q3 2024. The change isn’t incremental, suggesting AI has crossed from experiment into mainstream financial confidence. Importantly, the survey does not indicate a wholesale rise in risk-taking to accompany the new-found optimism. Risk appetite, while improving, remains subdued at 15%, below the longer-term average of 25%. This combination – confidence in AI but continued balance-sheet caution – has implications for how AI initiatives are likely to be governed and controlled. Finance functions are likely to demand tightly-scoped use cases and productivity metrics over open-ended experiments and trials.

For finance professionals, the environment reinforces the role of the CFO as a steward of technology, rather than a passive consumer of IT budgets. The survey positions finance chiefs as shaping digital strategy where AI is concerned. The paper’s emphasis on productivity gains suggests a preference for applications that automate processes and help with financial forecasting, not just customer-facing innovation. IT teams should expect closer scrutiny of business cases presented to them, more involved work from finance professionals, and a translation of technical ability to financial outcomes.

Despite improving sentiment metrics, the survey also highlights some notable constraints. Business confidence remains negative at net -13%, below its long-term average, despite optimism having lifted from lows recorded in earlier iterations of Deloitte’s CFO Survey. Capital expenditure is a priority, but only 17% of CFOs describe it as a ‘strong priority’, just above the long-term average. This suggests that while investment is protected, it’s not immune: programmes perceived as speculative, poorly governed, or badly aligned with productivity are still unlikely to survive.

External uncertainty, though declining, remains notable: 38% of CFOs still rate uncertainty about the future as ‘high’ or ‘very high’, and geopolitics still dominates the risk landscape, cited by 65% of respondents. UK competitiveness and productivity follow closely, with a historically high risk rating of 62. Systems resilience, data security, energy efficiency, and supply-chain visibility are likely to command attention alongside the overall goal of efficiencies created by the use of AI in operations.

A notable subtext of the survey is the human dimension of the technology’s adoption. Deloitte’s leadership recognises that AI’s value depends on combining technology with human skills, and on upskilling workforces. While this is not quantified in the survey data, it aligns with the broader pattern of cautious optimism: CFOs are willing to invest, but not to assume that technology, of itself, delivers outcomes. This strengthens the case among IT leaderships for embedding change management, training, governance, and oversight into new digital programmes.

The Deloitte CFO Survey shows a pragmatic and decisive turn towards technology-led productivity in UK businesses. Its evidence is strongest around sustained digital investment and the noteworthy rise in confidence in AI, alongside continued caution on risk and a recognition of a challenging external environment. For finance professionals, the priority is allocating capital to initiatives that can demonstrably improve performance. For IT staff, opportunity is expanding, but so is accountability. Digital ambition will in all likelihood be funded, but only where it can be translated into credible, auditable business value.

(Image source: “Deloitte exposure” by zilverbat. is licensed under CC BY-NC 2.0.)

 


The post Optimism for AI-powered productivity: Deloitte appeared first on AI News.

]]>
Strong contractor belief in AI for industry-wide transformation https://www.artificialintelligence-news.com/news/construction-industry-ai-success-potential/ Tue, 16 Dec 2025 08:22:09 +0000 https://www.artificialintelligence-news.com/?p=111329 The construction industry generates colossal amounts of data, with much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, enhance margins, and improve project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, the true transformative impact of AI is highlighted by contractors, with […]

The post Strong contractor belief in AI for industry-wide transformation appeared first on AI News.

]]>
The construction industry generates colossal amounts of data, much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, enhance margins, and improve project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, contractors see AI as truly transformative, with 87% believing AI will “meaningfully transform their business,” despite current low adoption rates.

The latest research, entitled ‘AI for Contractors’, found that automated proposal generation and progress tracking from site photos both reached a 92% effectiveness rating. Meanwhile, contract risk review achieved 85% effectiveness when compared with previous, more traditional methods.

The report highlights how AI is allowing project managers to focus on strategic decisions rather than time-consuming administrative tasks. Finance teams are also benefiting from AI technology, shifting from historical reporting to predictive insights, while operations leaders are able to apply data-driven intelligence for improved project delivery. Rather than AI fully replacing human expertise, the report found it actually enhances human input.

“For decades, construction firms have lacked the tools to transform the data they’ve collected into action. AI-enabled solutions are changing that,” says Gord Rawlins, president and CEO of CMiC. “This research highlights the high-impact results contractors are achieving today.”

AI changing contractor roles

Surveyed contractors see AI as a catalyst in reshaping everyday aspects of their operations, enabling predictive insights rather than reacting to problems once they have occurred. This introduces wider benefits, like tighter cost controls, improved scheduling, and higher quality project delivery. In other words, improved overall outcomes.

A substantial 85% of contractors foresee less time spent on repetitive tasks, while 75% have faith that AI can help mine historical data to learn from previous projects. Rather than relying fully on AI, 70% said the technology helps them make better, more informed decisions thanks to insights that may otherwise not be present.

AI implementation remains low, but companies are preparing for wider adoption

Currently, AI adoption in the construction industry is low, despite awareness levels of 32% to 34%. This appears to stem from several factors, including a lack of clear understanding, pending internal approvals, and limited software access. However, Dodge’s research found that more than half of the companies surveyed are strategically preparing for AI with pilot programmes and staff training for AI-related positions.

According to the report, 40% of companies have a set budget for AI, 38% are developing teams for implementation, 19% are adapting old workflows, and 51% are assessing AI changes.

Early adopters lead the way

Overall awareness of AI use in the industry is quite low, with just 20% to 50% of contractors aware that certain management tools incorporate AI, and very few actively using these functions. Nevertheless, early adopters provided positive feedback: more than 70% said AI tools are highly effective compared with traditional methods, suggesting potential for rapid growth in AI use throughout the industry.

Security and accuracy lead concerns

The main concerns of adopting AI revolve around security and accuracy. The report reveals that 57% are worried about the accuracy of AI output, while 54% have doubts over the security of company data.

Internal resistance to change (44%) and implementation costs (41%) are also cited as key concerns, but perhaps surprisingly, just 21% expressed concern over job losses. 31% believe current data quality is not yet adequate to support AI analysis.

According to the report, larger contractors are likely to rely more heavily on AI than smaller firms, and are thus more concerned about data quality and reliability. For instance, 69% of larger contractors cited a lack of reliability or accuracy in AI outputs as a major concern, compared with 54% of smaller and mid-size contractors.

Research data confirms that contractors are generally open to adopting AI, but concerns about the accuracy of AI outputs tend to stand in the way, along with the desire for better tools, more information, and greater internal support.

17% of contractors said they do not sufficiently trust AI results, an issue that becomes more pronounced in sensitive areas: distrust rises to 35% for AI handling payments, while 31% do not have faith in AI managing project budgets.

A major theme is the need for more understanding before using AI. On average, 21% of respondents said they want better insight into how AI works before considering using it, climbing to 31% for more complex tasks like safety risk assessments.

Contractors also believe they are limited by their current software capabilities, with an average of 19% reporting their software does not offer the AI functions they require. This increases to 33% for managing resources.

Internal approval remains a notable obstacle, with 22% saying their company has not yet approved the use of AI, despite personal interest. Another barrier is a lack of time or resources to effectively evaluate AI tools; 13% cited this as a main reason why AI has not yet been adopted.

Although there are obvious challenges to mass AI use in the construction industry – and therefore significant market opportunity – only 5% believe AI would not be beneficial or improve current methods. That indicates a resistance that stems from various concerns rather than a lack of perceived value.

Steve Jones, Senior Director, Industry Insights Analytics at Dodge, spoke on the findings.

“We designed this study to look at the use of AI in the digital tools already deployed by contractors because that may offer the best solution to the challenge of data quality. But it is also heartening to see that many contractors are aware of the key challenges and the need for a rigorous approach to successfully implementing these tools at their organisations,” says Jones.

Key interest in emerging AI functionalities

AI’s potential is clearly recognised, even if the industry’s readiness to adopt it isn’t quite matching the data. Certain areas are attracting the most attention when it comes to AI functions, like automated construction analysis, where 81% see potential benefits. 80% also show interest in intelligent permit submissions, while 79% believe in autonomous schedule and resource optimisation.

92% appreciate automated contract management and 76% recognise potential in AI-powered dynamic pricing. Although AI adoption remains limited, these strong numbers suggest the tide may soon be turning.

AI and the new age of the construction industry

The latest data suggests a strong openness, maybe even an eagerness, to AI adoption in the construction sector. However, better tools, clearer guidance, and more trustworthy outputs are just some of the areas that need to be addressed before interest becomes implementation.

“With high awareness, strong interest, and powerful validation from early adopters, contractors appear poised for significant expansion in their use of AI-enabled tools in meaningful ways,” said Steve Jones.

The industry is on a “tipping point for AI adoption” according to Jones. When companies start to provide clearer pathways for adoption, the move towards AI-powered construction workflows will undoubtedly accelerate rapidly, reshaping how projects are delivered forever.

(Image source: “Tianjin Construction Site.” by @yakobusan Jakob Montrasio is licensed under CC BY 2.0.)

CEOs still betting big on AI: Strategy vs. return on investment in 2026 https://www.artificialintelligence-news.com/news/ceos-still-betting-on-ai-strategy-vs-return-on-investment-in-2026/ Mon, 15 Dec 2025 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=111301 Enterprise leaders are pressing ahead with artificial intelligence, even as some early results remain uneven. Reporting from the Wall Street Journal and Reuters shows that most CEOs expect AI spending to keep rising through 2026, despite difficulty tying those investments to clear, enterprise-wide returns. The tension highlights where many organisations now sit in their AI […]

The post CEOs still betting big on AI: Strategy vs. return on investment in 2026 appeared first on AI News.

Enterprise leaders are pressing ahead with artificial intelligence, even as some early results remain uneven. Reporting from the Wall Street Journal and Reuters shows that most CEOs expect AI spending to keep rising through 2026, despite difficulty tying those investments to clear, enterprise-wide returns.

The tension highlights where many organisations now sit in their AI journey. The technology has moved beyond trials and proofs of concept, but it has yet to settle into a reliable source of value. Companies are operating in an in-between phase, where ambition, execution, and expectations are all under strain at the same time.

Spending continues, even as returns lag

AI budgets have climbed steadily in large enterprises over the past two years. Competitive pressure, board oversight, and fear of being left behind have all played a role. At the same time, executives are more open about the limits they are seeing. Gains often show up in pockets rather than across the business, pilots fail to spread, and the cost of connecting AI systems to existing tools keeps rising.

A Wall Street Journal survey of senior executives found that most CEOs see AI as central to long-term competitiveness, even if short-term benefits are hard to measure. For many, AI no longer feels optional. It is treated as a capability that must be developed over time, rather than a project that can be paused if results disappoint.

That view helps explain why spending remains steady. Leaders worry that cutting back now could weaken their position later, especially as rivals improve how they use the technology.

Why pilots struggle to scale

One of the main barriers to stronger returns is the jump from experimentation to day-to-day use. Many organisations have launched AI pilots in different teams, often without shared rules or coordination. While these efforts can generate insight and interest, few translate into changes that affect the wider business.

Reuters has reported that companies trying to scale AI frequently run into issues with data quality, system integration, security controls, and regulatory requirements. The problems are not only technical, but reflect how work is organised. Responsibility is often split across teams, ownership is unclear, and decisions slow down once projects touch legal, risk, and IT functions.

The result is a pattern of heavy spending on trials, with limited progress toward systems that are embedded in core operations.

Infrastructure costs reshape the equation

The cost of infrastructure is also weighing on AI returns. Training and running models demands large amounts of computing power, storage, and energy. Cloud bills can rise quickly as use grows, while building on-site systems requires upfront investment and long planning cycles.

Executives cited by Reuters have warned that infrastructure costs can outpace the benefits delivered by AI tools, particularly in the early stages. This has led to tough choices: whether to centralise AI resources or leave teams to experiment on their own; whether to build in-house systems or rely on vendors; and how much waste is acceptable while capabilities are still forming.

In practice, these decisions are shaping AI strategy as much as model performance or use-case selection.

AI governance moves to the centre of CEO decision-making

As AI spending increases, so does scrutiny. Boards, regulators, and internal audit teams are asking harder questions. In response, many organisations are tightening control. Decision rights are shifting toward central teams, AI councils are becoming more common, and projects are being linked more closely to business priorities.

The Wall Street Journal reports that companies are moving away from loosely connected experiments toward clearer goals, measures, and timelines. This can slow progress, but it reflects a growing belief that AI should be managed with the same discipline as other major investments.

The shift marks a change in how AI is treated. It is no longer a side effort or a curiosity but is being brought into existing operating and risk structures.

Expectations are being reset, not abandoned

Importantly, the persistence of AI spending does not signal blind optimism. Instead, it reflects a reset in expectations. CEOs are learning that AI rarely delivers immediate, sweeping returns. Value tends to emerge gradually, as organisations adjust workflows, retrain staff, and refine data foundations.

Rather than abandoning AI initiatives, many enterprises are narrowing their focus. They are prioritising fewer use cases, demanding clearer ownership, and aligning projects more closely with business outcomes. The re-calibration may reduce short-term excitement, but it improves the likelihood of sustainable returns.

What CEO AI strategy signals for 2026 planning

For organisations shaping their plans for 2026, the message for every CEO is not to retreat from AI, but to pursue it with more care as AI strategies mature. Ownership, governance, and realistic timelines matter more than headline spending levels or bold claims.

Those most likely to benefit are treating AI as a long-term shift in how the organisation works, not a quick route to growth. In the next phase, advantage will depend less on how much is spent and more on how well AI fits into everyday operations.

(Photo by Ambre Estève)

Perplexity: AI agents are taking over complex enterprise tasks https://www.artificialintelligence-news.com/news/perplexity-ai-agents-taking-over-complex-enterprise-tasks/ Wed, 10 Dec 2025 12:08:30 +0000 https://www.artificialintelligence-news.com/?p=111238 New adoption data from Perplexity reveals how AI agents are driving workflow efficiency gains by taking over complex enterprise tasks. For the past year, the technology sector has operated under the assumption that the next evolution of generative AI would advance beyond conversation into action. While Large Language Models (LLMs) serve as a reasoning engine, […]

The post Perplexity: AI agents are taking over complex enterprise tasks appeared first on AI News.

New adoption data from Perplexity reveals how AI agents are driving workflow efficiency gains by taking over complex enterprise tasks.

For the past year, the technology sector has operated under the assumption that the next evolution of generative AI would advance beyond conversation into action. While Large Language Models (LLMs) serve as a reasoning engine, “agents” act as the hands, capable of executing complex, multi-step workflows with minimal supervision.

Until now, however, visibility into how these tools are actually being utilised in the wild has been opaque, relying largely on speculative frameworks or limited surveys.

New data released by Perplexity, analysing hundreds of millions of interactions with its Comet browser and assistant, provides the first large-scale field study of general-purpose AI agents. The data indicates that agentic AI is already being deployed by high-value knowledge workers to streamline productivity and research tasks.

Understanding who is using these tools is essential for forecasting internal demand and identifying potential shadow IT vectors. The study reveals marked heterogeneity in adoption. Users in nations with higher GDP per capita and educational attainment are far more likely to engage with agentic tools.

More telling for corporate planning is the occupational breakdown. Adoption is heavily concentrated in digital and knowledge-intensive sectors. The ‘Digital Technology’ cluster represents the largest share, accounting for 28 percent of adopters and 30 percent of queries. This is followed closely by academia, finance, marketing, and entrepreneurship.

Collectively, these clusters account for over 70 percent of total adopters. This suggests that the individuals most likely to leverage agentic workflows are the most expensive assets within an organisation: software engineers, financial analysts, and market strategists. These early adopters are not dabbling; the data shows that “power users” (those with earlier access) make nine times as many agentic queries as average users, indicating that once integrated into a workflow, the technology becomes indispensable.

AI agents: Partners for enterprise tasks, not butlers

To advance beyond marketing narratives, enterprises must understand the utility these agents provide. A common view suggests agents will primarily function as “digital concierges” for rote administrative chores. However, the data challenges this view: 57 percent of all agent activity focuses on cognitive work.

Perplexity’s researchers developed a “hierarchical agentic taxonomy” to classify user intent, revealing the usage of AI agents is practical rather than experimental. The dominant use case is ‘Productivity & Workflow,’ which accounts for 36 percent of all agentic queries. This is followed by ‘Learning & Research’ at 21 percent.

Specific anecdotes from the study illustrate how this translates to enterprise value. A procurement professional, for instance, used the assistant to scan customer case studies and identify relevant use cases before engaging with a vendor. Similarly, a finance worker delegated the tasks of filtering stock options and analysing investment information. In these scenarios, the agent handles the information gathering and initial synthesis autonomously to allow the human to focus on final judgment.

This distribution provides a definite indication to operational leaders: the immediate ROI for agentic AI lies in scaling human capability rather than simply automating low-level friction. The study defines these agents as systems that “cycle automatically between three iterative phases to achieve the end goal: thinking, acting, and observing.” This capability allows them to support “deep cognitive work,” acting as a thinking partner rather than a simple butler.
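
The thinking, acting, and observing cycle the study describes can be expressed as a minimal loop. The sketch below is illustrative only: the stopping rule in `think` and the placeholder `act` tool are assumptions for demonstration, not Perplexity’s actual implementation.

```python
def think(goal, observations):
    # Think: decide the next sub-query, or stop once enough has been
    # gathered (toy criterion: stop after two observations).
    if len(observations) >= 2:
        return None
    return f"{goal}, part {len(observations) + 1}"

def act(query):
    # Act: a placeholder tool call; a real agent would control a browser
    # or call an external API here.
    return f"results for: {query}"

def agent_loop(goal):
    # Cycle through the three phases until the goal is judged satisfied.
    observations = []
    while True:
        action = think(goal, observations)   # Think
        if action is None:
            return observations              # Goal judged satisfied
        observations.append(act(action))     # Act, then Observe
```

Running `agent_loop("summarise customer case studies")` would cycle twice before stopping, with each observation feeding the next round of thinking, which is what lets such systems handle multi-step work rather than single exchanges.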

Stickiness and the cognitive migration

A key insight for IT leaders is the “stickiness” of AI agents for enterprise workflows. The data shows that in the short term, users exhibit strong within-topic persistence. If a user engages an agent for a productivity task, their subsequent queries are highly likely to remain in that domain.

However, the user journey often evolves. New users frequently “test the waters” with low-stakes queries, such as asking for movie recommendations or general trivia. Over time, a transition occurs. The study notes that while users may enter via various use cases, query shares tend to migrate toward cognitively oriented domains like productivity, learning, and career development.

Once a user employs an agent to debug code or summarise a financial report, they rarely revert to lower-value tasks. The ‘Productivity & Workflow’ category demonstrates the highest retention rate. This behaviour implies that early pilot programmes should anticipate a learning curve where usage matures from simple information retrieval to complex task delegation.

The “where” of agentic AI is just as important as the “what”. Perplexity’s study tracked the environments – specific websites and platforms – where these AI agents operate. The concentration of activity varies by task, but the top environments are staples of the modern enterprise stack.

Google Docs is a primary environment for document and spreadsheet editing, while LinkedIn dominates professional networking tasks. For ‘Learning & Research,’ the activity is split between course platforms like Coursera and research repositories.

For CISOs and compliance officers, this presents a new risk profile. AI agents are not just reading data; they are actively manipulating it within core enterprise applications. The study explicitly defines agentic queries as those involving “browser control” or actions on external applications via APIs. When an employee tasks an agent to “summarise these customer case studies,” the agent is interacting directly with proprietary data.

The concentration of environments also highlights the potential for platform-specific optimisations. For instance, the top five environments account for 96 percent of queries in professional networking, primarily on LinkedIn. This high concentration suggests that businesses could see immediate efficiency gains by developing specific governance policies or API connectors for these high-traffic platforms.

Business planning for agentic AI following Perplexity’s data

The diffusion of capable AI agents invites new lines of inquiry for business planning. The data from Perplexity confirms that we have passed the speculative phase. Agents are currently being used to plan and execute multi-step actions, modifying their environments rather than just exchanging information.

Operational leaders should consider three immediate actions:

  1. Audit the productivity and workflow friction points within high-value teams: The data shows this is where agents are naturally finding their foothold. If software engineers and financial analysts are already using these tools to edit documents or manage accounts, formalising these workflows could standardise efficiency gains.
  2. Prepare for the augmentation reality: The researchers note that while agents have autonomy, users often break tasks into smaller pieces, delegating only subtasks. This suggests that the immediate future of work is collaborative, requiring employees to be upskilled in how to effectively “manage” their AI counterparts.
  3. Address the infrastructure and security layer: With agents operating in “open-world web environments” and interacting with sites like GitHub and corporate email, the perimeter for data loss prevention expands. Policies must distinguish between a chatbot offering advice and an agent executing code or sending messages.
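
The last point, distinguishing advisory output from state-changing agent actions, can be enforced with a simple default-deny policy gate. The action names and allowlists below are hypothetical, a sketch of the pattern rather than any vendor’s API:

```python
# Toy data-loss-prevention gate: read-only queries pass through,
# state-changing actions are routed for human sign-off, and anything
# unrecognised is denied outright.
READ_ONLY = {"summarise", "search", "read_page"}
STATE_CHANGING = {"send_email", "execute_code", "edit_document"}

def gate(action_name):
    if action_name in READ_ONLY:
        return "allow"
    if action_name in STATE_CHANGING:
        return "needs_human_approval"
    return "deny"  # Default-deny keeps unreviewed capabilities out
```

In practice such a check would sit between the agent’s planner and its tool-execution layer, so a `send_email` action pauses for a human reviewer while a `summarise` query proceeds automatically.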

As the market for agentic AI is projected to grow from $8 billion in 2025 to $199 billion by 2034, the early evidence from Perplexity serves as a bellwether. The transition to enterprise workflows led by AI agents is underway, driven by the most digitally capable segments of the workforce. The challenge for the enterprise is to harness this momentum without losing control of the governance required to scale it safely.

See also: Accenture and Anthropic partner to boost enterprise AI integration

