Healthcare & Wellness AI - AI News https://www.artificialintelligence-news.com/categories/ai-in-action/healthcare-wellness-ai/ Fri, 13 Feb 2026 16:07:08 +0000

AI forecasting model targets healthcare resource efficiency https://www.artificialintelligence-news.com/news/ai-forecasting-model-targets-healthcare-resource-efficiency/ Fri, 13 Feb 2026 16:07:06 +0000

An operational AI forecasting model developed by University of Hertfordshire researchers aims to improve resource efficiency within healthcare.

Public sector organisations often hold large archives of historical data that do not inform forward-looking decisions. A partnership between the University of Hertfordshire and regional NHS health bodies addresses this issue by applying machine learning to operational planning. The project analyses healthcare demand to assist managers with decisions regarding staffing, patient care, and resources.

Most AI initiatives in healthcare focus on individual diagnostics or patient-level interventions. The project team notes that this tool targets system-wide operational management instead. This distinction matters for leaders evaluating where to deploy automated analysis within their own infrastructure.

The model uses five years of historical data to build its projections. It integrates metrics such as admissions, treatments, re-admissions, bed capacity, and infrastructure pressures. The system also accounts for workforce availability and local demographic factors including age, gender, ethnicity, and deprivation.

Iosif Mporas, Professor of Signal Processing and Machine Learning at the University of Hertfordshire, leads the project. The team includes two full-time postdoctoral researchers and will continue development through 2026.

“By working together with the NHS, we are creating tools that can forecast what will happen if no action is taken and quantify the impact of a changing regional demographic on NHS resources,” said Professor Mporas.

Using AI for forecasting in healthcare operations

The model produces forecasts showing how healthcare demand is likely to change. It models the impact of these changes in the short-, medium-, and long-term. This capability allows leadership to move beyond reactive management.
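The article does not publish the model's internals, so purely as an illustration of multi-horizon forecasting, the sketch below projects monthly admissions with a simple least-squares trend. The function name, the synthetic data, and the choice of a linear model are all assumptions for demonstration, not details of the Hertfordshire project.

```python
def forecast_admissions(history, horizons=(3, 12, 60)):
    """Project monthly admissions forward with a least-squares linear trend.

    history  -- monthly admission counts, oldest first
    horizons -- months ahead for short-, medium- and long-term projections
    Returns {months_ahead: projected_admissions}.
    """
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    # Ordinary least squares on (month index, admissions)
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
    slope /= sum((t - t_mean) ** 2 for t in range(n))
    intercept = y_mean - slope * t_mean
    last = n - 1
    return {h: intercept + slope * (last + h) for h in horizons}

# Five years (60 months) of illustrative admissions data rising ~10/month
history = [1000 + 10 * m for m in range(60)]
print(forecast_admissions(history))
```

A production system would replace the linear trend with a model that also ingests the demographic and capacity signals the article lists, but the forecast-then-compare-horizons shape stays the same.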

Charlotte Mullins, Strategic Programme Manager for NHS Herts and West Essex, commented: “The strategic modelling of demand can affect everything from patient outcomes including the increased number of patients living with chronic conditions.

“Used properly, this tool could enable NHS leaders to take more proactive decisions and enable delivery of the 10-year plan articulated within the Central East Integrated Care Board as our strategy document.” 

The University of Hertfordshire Integrated Care System partnership funds the work, which began last year. Testing of the AI model tailored for healthcare operations is currently underway in hospital settings. The project roadmap includes extending the model to community services and care homes.

This expansion aligns with structural changes in the region. The Hertfordshire and West Essex Integrated Care Board serves 1.6 million residents and is preparing to merge with two neighbouring boards. This merger will create the Central East Integrated Care Board. The next phase of development will incorporate data from this wider population to improve the predictive accuracy of the model.

The initiative demonstrates how legacy data can drive cost efficiencies and shows that predictive models can inform “do nothing” assessments and resource allocation in complex service environments like the NHS. The project highlights the necessity of integrating varied data sources – from workforce numbers to population health trends – to create a unified view for decision-making.

See also: Agentic AI in healthcare: How Life Sciences marketing could achieve $450B in value by 2028

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI forecasting model targets healthcare resource efficiency appeared first on AI News.

Agentic AI in healthcare: How Life Sciences marketing could achieve $450B in value by 2028 https://www.artificialintelligence-news.com/news/agentic-ai-healthcare-pharma-marketing-450b-value-2028/ Tue, 10 Feb 2026 10:00:00 +0000

Agentic AI in healthcare is graduating from answering prompts to autonomously executing complex marketing tasks – and life sciences companies are betting their commercial strategies on it.

According to a recent report cited by Capgemini Invent, AI agents could generate up to $450 billion in economic value through revenue uplift and cost savings globally by 2028, with 69% of executives planning to deploy agents in marketing processes by year’s end.

The stakes are particularly high in pharmaceutical marketing, where sales representatives have increasingly limited face-time with healthcare professionals (HCPs) – a trend accelerated by Covid-19. The challenge isn’t just access; it’s making those rare interactions count with intelligence that’s currently trapped in data silos.

The fragmented intelligence problem

Briggs Davidson, senior director of digital, data & marketing strategy for Life Sciences at Capgemini Invent, outlines a scenario that will sound familiar to anyone in pharma marketing: an HCP attends a conference where a competitor showcases promising drug results and published research, then shifts their prescriptions to a rival product – in a single quarter.

“In most companies, legacy IT infrastructure and data silos keep this information in disparate systems in CRM, events databases and claims data,” Davidson writes. “Chances are, none of that information was accessible to sales reps before they met with the HCP.”

The solution, according to Davidson, isn't simply connecting these systems; it's deploying agentic AI in healthcare marketing to autonomously query, synthesise, and act on unified data. Unlike conversational AI that responds to queries, agentic systems can independently execute multi-step tasks.

Instead of a data engineer building a new pipeline, an AI agent could autonomously query the CRM and claims database to answer business questions like: “Identify oncologists in the Northwest who have a 20% lower prescription volume but attended our last medical congress.”
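The article describes the query only in natural language; as a hypothetical sketch of what the agent would do under the hood, the code below cross-references mock CRM, claims, and event data to answer that example question. The function name, data shapes, and the 0.8 threshold (volume at least 20% below the oncologist average) are invented for illustration.

```python
def find_target_hcps(crm, claims, attendees, threshold=0.8):
    """Answer the example question by joining CRM, claims and event data.

    crm       -- {hcp_id: {"specialty": ..., "region": ...}} (CRM records)
    claims    -- {hcp_id: prescription volume} (claims data)
    attendees -- set of hcp_ids seen at the last medical congress
    Returns oncologists in the Northwest whose volume is at least 20%
    below the oncologist average and who attended the congress.
    """
    oncologists = [h for h, rec in crm.items() if rec["specialty"] == "oncology"]
    avg_volume = sum(claims.get(h, 0) for h in oncologists) / len(oncologists)
    return sorted(
        h for h in oncologists
        if crm[h]["region"] == "Northwest"
        and claims.get(h, 0) <= threshold * avg_volume
        and h in attendees
    )

crm = {
    "dr_a": {"specialty": "oncology", "region": "Northwest"},
    "dr_b": {"specialty": "oncology", "region": "Northwest"},
    "dr_c": {"specialty": "oncology", "region": "Southeast"},
    "dr_d": {"specialty": "cardiology", "region": "Northwest"},
}
claims = {"dr_a": 50, "dr_b": 120, "dr_c": 130, "dr_d": 90}
print(find_target_hcps(crm, claims, {"dr_a", "dr_c"}))  # ['dr_a']
```

The point of the agentic framing is that this join is composed on demand from a natural-language question, rather than built in advance by a data engineer.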

From orchestration to autonomous execution

Davidson frames the change as moving from an “omnichannel view” – coordinating experiences across channels – to true orchestration powered by agentic AI.

In practice, this means a sales representative could have an agent assist with call and visit planning by asking: “What messages has my HCP responded to most recently?” or “Can you create a detailed intelligence brief on my HCP?”

The agentic system would compile:

  • Their most recent conversation with the HCP,
  • The HCP’s prescribing behaviour,
  • Thought-leaders the HCP follows,
  • Relevant content to share,
  • The HCP’s preferred outreach channels (in-person visits, emails, webinars).

More significantly, the AI agent would then create a custom call plan for each HCP based on their unified profile and recommend follow-up steps based on engagement outcomes. “Agentic AI systems are about driving action, graduating from ‘answer my prompt,’ to ‘autonomously execute my task,'” Davidson explains.

“That means evolving the sales representative mindset from asking questions to coordinating small teams of specialised agents that work together: one plans, another retrieves and checks content, a third schedules and measures, and a fourth enforces compliance guardrails – all under human oversight.”

The AI-ready data prerequisite

The operational promise hinges on what Davidson calls “AI-ready data” – standardised, accessible, complete, and trustworthy information that enables three abilities:

Faster decision making: Predictive analytics that provide near real-time alerts on what’s about to happen, letting sales representatives act proactively.

Personalisation at scale: Delivering customised experiences to thousands of HCPs simultaneously with small human teams enabled by specialised agent networks.

True marketing ROI: Moving beyond monthly historical reports to understanding which marketing activities are actively driving prescriptions.

Davidson emphasises that successful deployment starts with marketing and IT alignment on initial use cases, with stakeholders identifying KPIs that demonstrate tangible outcomes – like specific percentage increases in HCP engagement or sales representative productivity.

Critical implementation questions

The article frames agentic AI in healthcare as “not simply another technology-led ability; it’s a new operating layer for commercial teams.” But it acknowledges that “agentic AI’s full value only materialises with AI-ready data, trustworthy deployment and workflow redesign.”

What remains unaddressed is the regulatory and compliance complexity of autonomous systems querying claims databases containing prescriber behaviour, particularly under HIPAA’s minimum necessary standard. The piece also doesn’t detail actual client implementations or metrics beyond the aspirational $450B economic value projection.

For global organisations, Davidson says use cases “can and should be tailored to fit each market’s maturity for maximum ROI,” suggesting that deployment will vary across regulatory environments. The fundamental value proposition, according to Davidson, centres on bidirectional benefit: “The HCP receives directly relevant content, and the marketing teams can drive increased HCP engagement and conversion.”

Whether that vision of autonomous marketing agents coordinating across CRM, events, and claims systems becomes standard practice by 2028 – or remains constrained by data governance realities – will likely determine if life sciences achieves anything close to that $450 billion opportunity.

See also: China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground

The post Agentic AI in healthcare: How Life Sciences marketing could achieve $450B in value by 2028 appeared first on AI News.

Gates Foundation and OpenAI test AI in African healthcare https://www.artificialintelligence-news.com/news/gates-foundation-and-openai-test-ai-in-african-healthcare/ Thu, 22 Jan 2026 10:00:00 +0000

Primary healthcare systems across parts of Africa are under growing strain, caught between rising demand, chronic staff shortages, and shrinking international aid budgets. In that context, AI is being tested in healthcare less as a breakthrough technology and more as a way to keep basic services running.

According to reporting by Reuters, the Gates Foundation and OpenAI are backing a new initiative, Horizon1000, that aims to introduce AI tools into primary healthcare clinics across several African countries. The project will begin in Rwanda and is intended to reach 1,000 clinics and surrounding communities by 2028, supported by a combined $50 million investment.

The timing is not accidental: global development assistance for health fell by just under 27% last year compared with 2024, the Gates Foundation estimates, following cuts that began in the United States and spread to other major donors such as Britain and Germany. Those reductions have coincided with the first rise in preventable child deaths this century, adding pressure to health systems already stretched thin.

Rather than focusing on advanced diagnostics or research, Horizon1000 is framed around everyday tasks that consume time in under-resourced clinics. AI tools under the programme are expected to assist with patient intake, triage, record keeping, appointment scheduling, and access to medical guidance, particularly in settings where one doctor may serve tens of thousands of people.

Gates Foundation and OpenAI focus on AI support in healthcare

“In poorer countries with enormous health worker shortages and lack of health systems infrastructure, AI can be a gamechanger in expanding access to quality care,” Bill Gates wrote in a blog post announcing the initiative. Speaking to Reuters at the World Economic Forum in Davos, Gates said the technology could help health systems recover after aid cuts slowed progress.

“Our commitment is that that revolution will at least happen in the poor countries as quickly as it happens in the rich countries,” he said.

The focus, according to both partners, is on supporting healthcare workers rather than replacing them. OpenAI is expected to provide technical expertise and AI systems, while the Gates Foundation will work with African governments and health authorities to oversee deployment and alignment with national guidelines.

Rwanda was chosen as the first pilot country in part because of its existing digital health efforts. The country established an AI health hub in Kigali last year and has positioned itself as a testbed for health technology projects. Paula Ingabire, Rwanda’s minister of information and communications technology and innovation, said the goal is to reduce administrative burdens while expanding access.

“It is about using AI responsibly to reduce the burden on healthcare workers, to improve the quality of care, and to reach more patients,” Ingabire said in a video statement released alongside the launch.

Under Horizon1000, AI tools may also be used before patients reach clinics. Gates told Reuters the systems could support pregnant women and HIV patients with guidance ahead of visits, especially when language barriers exist between patients and providers.

What the AI tools are expected to handle

Once patients arrive, AI could help link records, reduce paperwork, and speed up routine processes.

“A typical visit, we think, can be about twice as fast and much better quality,” Gates said.

Those expectations highlight both the promise and the limits of the approach. While AI may help streamline workflows, its impact depends on reliable data, stable power and connectivity, trained staff, and clear oversight. Many previous digital health pilots in low-income settings have struggled to scale beyond initial trials once funding or external support tapered off.

Horizon1000’s designers say they are trying to avoid that pattern by working closely with local governments and health leaders rather than deploying one-size-fits-all systems. Tools are meant to be adapted to local clinical rules, languages, and care models. Even so, questions remain about long-term maintenance, data governance, and who bears responsibility if systems fail or produce errors.

The initiative also reflects a broader shift in how AI is being positioned in global health. Instead of headline-grabbing claims about medical breakthroughs, the emphasis here is on narrow, operational use cases that address staffing gaps and administrative overload. In that sense, AI is being treated less as a cure for weak health systems and more as a temporary support amid declining resources.

OpenAI’s involvement comes as the company expands its presence in healthcare, following earlier work on health-related applications. At the same time, it faces growing scrutiny over how its systems are trained, deployed, and governed, especially in sensitive sectors like medicine.

A test of AI’s limits in healthcare systems

For African health systems, the stakes are practical rather than symbolic. Sub-Saharan Africa faces an estimated shortage of nearly six million healthcare workers, a gap that training alone cannot close in the near term. If AI tools can help clinicians see more patients, reduce errors, or manage workloads more effectively, they may offer some relief. If they add complexity or require constant outside support, they risk becoming another layer of dependency.

Horizon1000 sits at that intersection. As aid budgets tighten and healthcare demands rise, the project offers a test of whether AI can play a useful, limited role in primary care without overstating its reach. The outcome will depend less on the technology itself than on how well it fits into the systems meant to use it.

See also: SAP and Fresenius to build sovereign AI backbone for healthcare

The post Gates Foundation and OpenAI test AI in African healthcare appeared first on AI News.

SAP and Fresenius to build sovereign AI backbone for healthcare https://www.artificialintelligence-news.com/news/sap-and-fresenius-build-sovereign-ai-backbone-for-healthcare/ Mon, 19 Jan 2026 17:19:33 +0000

SAP and Fresenius are building a sovereign AI platform for healthcare that brings secure data processing to clinical settings.

For data leaders in the medical sector, deploying AI requires strict governance that public cloud solutions often lack. This collaboration addresses that gap by creating a “controlled environment” where AI models can operate without compromising data sovereignty.

Moving AI from pilot to production

The project aims to build an open and integrated ecosystem allowing hospitals to use AI securely. Rather than running isolated experiments, the companies plan to create a digital backbone for a sovereign and AI-supported healthcare system.

Michael Sen, CEO of Fresenius, said: “Together with SAP, we can accelerate the digital transformation of the German and European healthcare systems and enable a sovereign European solution that is so important in today’s global landscape.

“We are making data and AI everyday companions that are secure, simple and scalable for doctors and hospital teams. This creates more room for what truly matters: caring for patients.”

The technical base uses SAP Business AI and the SAP Business Data Cloud. By leveraging these components, the platform creates a compliant, sovereign foundation for operating AI models in healthcare. This infrastructure handles health data responsibly, a requirement for scaling automated processes in patient care.

The partnership tackles data fragmentation through SAP’s “AnyEMR” strategy, which supports the integration of diverse hospital information systems (HIS). Using open industry standards like HL7 FHIR, the platform connects HIS, electronic medical records (EMRs), and other medical applications.
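HL7 FHIR is the only integration standard the announcement names. As a minimal sketch of what "connecting via FHIR" means in practice (the patient values below are invented, not from the SAP/Fresenius platform), systems exchange JSON resources with standardised fields that any conforming application can read:

```python
import json

# A minimal HL7 FHIR (R4) Patient resource; the values are illustrative.
patient_json = json.dumps({
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Muster", "given": ["Erika"]}],
    "birthDate": "1964-12-02",
})

def display_name(resource_json):
    """Return a readable name from a serialised FHIR Patient resource."""
    resource = json.loads(resource_json)
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = resource["name"][0]  # FHIR allows several names; take the first
    return " ".join(name.get("given", []) + [name["family"]])

print(display_name(patient_json))  # Erika Muster
```

Because the field names and structure are fixed by the standard rather than by any one HIS or EMR vendor, this is the mechanism that lets a platform treat records from heterogeneous hospital systems uniformly.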

This connectivity allows Fresenius to develop AI-supported solutions that increase efficiency across the care chain. The goal is to build an individual, scalable platform that enables connected, data-driven healthcare processes.

Investing in sovereign AI to advance healthcare

Both companies intend to invest a “mid three-digit million euro amount” in the medium term. The funds target the digital transformation of German and European healthcare systems using AI-supported solutions.

Plans include joint investments in startups and scaleups, alongside internal technological developments. This approach aims to build a broader library of tools that plug into the sovereign platform.

Christian Klein, CEO of SAP SE, commented: “With SAP’s leading technology and Fresenius’ deep healthcare expertise, we aim to create a sovereign, interoperable healthcare platform for Fresenius worldwide.

“Together, we want to set new standards for data sovereignty, security, and innovation in healthcare. Thanks to SAP, Fresenius can harness the full potential of digital and AI-supported processes and sustainably improve patient care.”

This deal indicates that the next phase of healthcare AI in Europe will focus on sovereign infrastructure. Industries like healthcare require a controlled environment to satisfy regulatory demands—without a sovereign data backbone, AI initiatives risk stalling due to compliance concerns.

See also: Scaling AI value beyond pilot phase purgatory

The post SAP and Fresenius to build sovereign AI backbone for healthcare appeared first on AI News.

AI dominated the conversation in 2025, CIOs shift gears in 2026 https://www.artificialintelligence-news.com/news/ai-predictions-dominated-the-conversation-in-2025-cios-shift-gears-in-2026/ Thu, 15 Jan 2026 19:29:00 +0000

Author: Richard Farrell, CIO at Netcall

After a year of rapid adoption and high expectations surrounding artificial intelligence, 2026 is shaping up to be the year CIOs apply a more strategic lens. Not to slow progress, but to steer it in a smarter direction.

In 2025, we saw the rise of AI copilots across almost every platform imaginable. From browsers and CRMs to productivity tools and helpdesks, the tech world raced to embrace assistance-on-demand. But while vendors marketed “magic,” CIOs were left with the clean-up. Multiple pilots. Multiple platforms. Multiple promises. Few results.

Now the honeymoon period is over. It’s time to assess what worked, what didn’t, and what truly matters. The role of the CIO is shifting from tech enthusiast to strategic outcome architect. That means moving from disconnected experiments to holistic thinking – aligning people, process, and technology to drive sustainable results. Process mapping will become an essential starting point: identifying pain points, inefficiencies, and areas for AI and automation that directly link to measurable outcomes. And that shift comes with a new set of priorities. Here are five that will define 2026.

Process intelligence will replace fragmented copilots

The early promise of AI copilots was appealing: save time, reduce manual work, and supercharge productivity. But reality has been far more grounded. Independent evaluations, including a detailed UK Department for Business and Trade trial, found minimal measurable productivity improvements[1]. Despite glowing self-reports, actual gains were either negligible or non-existent. Why? Because these tools were designed for individual users, not organisations. They sat on top of workflows, rather than improving them. In too many cases, the top use case was summarising meeting notes – useful, but hardly transformative.

In 2026, CIOs will shift focus from point solutions to end-to-end platforms. The goal will be clear: use AI to optimise business processes, not pad out software features. This pivot from individual utility to organisational efficiency will be the biggest AI reset of the year.

Consolidation will beat complexity

CIOs have long battled sprawling tech estates and overlapping solutions, often held together by fragile integrations. In 2026, that complexity will come under fresh scrutiny. Too many tools chasing too few outcomes is no longer sustainable.

There will be a marked shift towards simplification – rationalising technology stacks and working with partners who can demonstrate true interoperability. CIOs will favour vendors who collaborate rather than compete, and who can clearly show how their solutions integrate within the broader ecosystem. Less will be more, especially when it comes to driving efficiency and speed.

This change is as much about procurement strategy as it is about technology. CIOs will look to platform-based approaches that offer the flexibility to build applications tailored to real-world processes. The ability to generate apps directly from mapped processes – refining and improving iteratively – will empower digital teams to deliver faster and smarter. It means building long-term partnerships that are based on shared goals and business value, not short-term sprints or siloed innovation.

Governance will take centre stage

The more AI scales, the more governance matters. In 2026, successful CIOs will build guardrails into every intelligent system. This means moving away from retrofitting rules after the fact, and instead embedding governance by design – from the very beginning of deployment. That includes audit trails, escalation rules, and privacy protocols, all built into the user journey through intuitive, adaptable frameworks. Proper escalation and human-in-the-loop models will be essential, alongside data stewardship – knowing where data is stored, how it’s accessed, and ensuring privacy by design.

Governance isn’t a drag on progress; it’s the foundation of trust. Low-code platforms are emerging as powerful enablers in this shift. They don’t just speed up development – they allow CIOs to embed controls directly into the build process. This approach supports the democratisation of development, empowering teams to iterate, improve, and scale quickly, without compromising on oversight.

That means compliance can’t be tacked on later; it must be built in from the start. This accelerates delivery while reassuring regulators, customers, and internal teams alike. This shift will ensure that automation supports human judgement, not overrides it – building systems people trust, not just systems that work.

Prediction must be followed by action

AI is good at pattern recognition. But unless those patterns trigger interventions, they don’t change outcomes. A shining example of this shift is the work at Rotherham NHS Foundation Trust. By embedding AI directly into its workflows, the Trust saw attendance among those most at risk of missing appointments improve significantly, with a 67% reduction in missed visits. It was not just that the model could identify at-risk patients; it was that this insight triggered an additional reminder, leading to better outcomes. The value was not in the model alone but in how it changed communication in a meaningful, practical way.
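The Rotherham figures above come from the article; the sketch below is only a schematic of the pattern it describes – a risk score whose output triggers an extra reminder instead of sitting in a report. The scoring rule, weights, and threshold are all invented for illustration.

```python
def dna_risk(patient):
    """Toy did-not-attend risk score in [0, 1]; weights are illustrative."""
    score = 0.0
    if patient["missed_last_year"] >= 2:
        score += 0.5
    if patient["days_since_booking"] > 30:
        score += 0.3
    if not patient["confirmed"]:
        score += 0.2
    return score

def plan_extra_reminders(patients, threshold=0.5):
    """Prediction paired with action: patients over the risk threshold
    are queued for an additional appointment reminder."""
    return [p["id"] for p in patients if dna_risk(p) >= threshold]

patients = [
    {"id": "p1", "missed_last_year": 3, "days_since_booking": 45, "confirmed": False},
    {"id": "p2", "missed_last_year": 0, "days_since_booking": 10, "confirmed": True},
]
print(plan_extra_reminders(patients))  # ['p1']
```

The design point is that `plan_extra_reminders` is wired into the workflow: the model's output changes what the booking system does next, rather than merely describing risk after the fact.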

That’s what CIOs will demand in 2026. Prediction engines must be paired with platforms that empower action. Whether it’s preventing missed appointments or spotting security anomalies before breaches occur, success will be defined by what AI enables teams to do differently.

Value must be proven, not assumed

A dangerous trend emerged in 2025: building business cases on feelings. CIOs were pressured to prove AI success based on user satisfaction or time-saving estimates, often self-reported. The problem? These metrics are vague, inconsistent, and impossible to verify. In 2026, that won’t be good enough. CIOs will be expected to show clear cause and effect. If AI is being used, what has it replaced? What has it improved? What cost has it avoided?

We need to replace the tick-box mindset with a value lens. That means thinking beyond the tech and tying initiatives back to outcomes CEOs care about – growth, resilience, customer satisfaction, and efficiency. Crucially, this demands a holistic approach. It’s not just about technology. CIOs must align people, process, and platform – starting with detailed process mapping to understand how work gets done, where inefficiencies lie, and how those insights translate into smarter applications. These maps become blueprints for building, offering a framework to generate applications that deliver measurable value.

The resolution: outcome-led leadership

CIOs have spent the last decade digitising the enterprise. In 2026, their role will evolve again – from technologists to outcome architects. This year isn’t about pulling back on AI or slowing innovation. It’s about getting clear. Clear on priorities. Clear on governance. Clear on impact.

The best CIOs will ask the toughest questions. Are we solving a real problem, or just deploying tech? Can we measure the benefit, not just hope for it? Are we building something sustainable, or chasing hype? 2026 is the year we stop experimenting for the sake of it and start delivering for the business. The age of shiny objects is over. It’s time for substance. And that starts with us.

Author: Richard Farrell, CIO at Netcall

(Image source: “Apollo classic concept art: Parachute deployment” by Mooncat.Drew is marked with Public Domain Mark 1.0.)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI dominated the conversation in 2025, CIOs shift gears in 2026 appeared first on AI News.

]]>
AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools https://www.artificialintelligence-news.com/news/medical-ai-diagnostics-openai-google-anthropic/ Thu, 15 Jan 2026 07:00:00 +0000 https://www.artificialintelligence-news.com/?p=111592 OpenAI, Google, and Anthropic announced specialised medical AI capabilities within days of each other this month, a clustering that suggests competitive pressure rather than coincidental timing. Yet none of the releases are cleared as medical devices, approved for clinical use, or available for direct patient diagnosis—despite marketing language emphasising healthcare transformation. OpenAI introduced ChatGPT Health on January […]

The post AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools appeared first on AI News.

]]>
OpenAI, Google, and Anthropic announced specialised medical AI capabilities within days of each other this month, a clustering that suggests competitive pressure rather than coincidental timing. Yet none of the releases are cleared as medical devices, approved for clinical use, or available for direct patient diagnosis—despite marketing language emphasising healthcare transformation.

OpenAI introduced ChatGPT Health on January 7, allowing US users to connect medical records through partnerships with b.well, Apple Health, Function, and MyFitnessPal. Google released MedGemma 1.5 on January 13, expanding its open medical AI model to interpret three-dimensional CT and MRI scans alongside whole-slide histopathology images. 

Anthropic followed on January 11 with Claude for Healthcare, offering HIPAA-compliant connectors to CMS coverage databases, ICD-10 coding systems, and the National Provider Identifier Registry.

All three companies are targeting the same workflow pain points—prior authorisation reviews, claims processing, clinical documentation—with similar technical approaches but different go-to-market strategies.

Developer platforms, not diagnostic products

The architectural similarities are notable. Each system uses multimodal large language models fine-tuned on medical literature and clinical datasets. Each emphasises privacy protections and regulatory disclaimers. Each positions itself as supporting rather than replacing clinical judgment.

The differences lie in deployment and access models. OpenAI’s ChatGPT Health operates as a consumer-facing service with a waitlist for ChatGPT Free, Plus, and Pro subscribers outside the EEA, Switzerland, and the UK. Google’s MedGemma 1.5 is released as an open model through its Health AI Developer Foundations program, available for download via Hugging Face or deployment through Google Cloud’s Vertex AI.

Anthropic’s Claude for Healthcare integrates into existing enterprise workflows through Claude for Enterprise, targeting institutional buyers rather than individual consumers. The regulatory positioning is consistent across all three. 

OpenAI states explicitly that Health “is not intended for diagnosis or treatment.” Google positions MedGemma as “starting points for developers to evaluate and adapt to their medical use cases.” Anthropic emphasises that outputs “are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”

Benchmark performance vs clinical validation

Medical AI benchmark results improved substantially across all three releases, though the gap between test performance and clinical deployment remains significant. Google reports that MedGemma 1.5 achieved 92.3% accuracy on MedAgentBench, Stanford’s medical agent task completion benchmark, compared to 69.6% for the previous Sonnet 3.5 baseline. 

The model improved by 14 percentage points on MRI disease classification and 3 percentage points on CT findings in internal testing.

Anthropic’s Claude Opus 4.5 scored 61.3% on MedCalc medical calculation accuracy tests with Python code execution enabled, and 92.3% on MedAgentBench.

The company also claims improvements in “honesty evaluations” related to factual hallucinations, though specific metrics were not disclosed. 

OpenAI has not published benchmark comparisons for ChatGPT Health specifically, noting instead that “over 230 million people globally ask health and wellness-related questions on ChatGPT every week” based on de-identified analysis of existing usage patterns.

These benchmarks measure performance on curated test datasets, not clinical outcomes in practice. Medical errors can have life-threatening consequences, making the translation of benchmark accuracy into clinical utility more complex than in other AI application domains.

Regulatory pathway remains unclear

The regulatory framework for these medical AI tools remains ambiguous. In the US, the FDA’s oversight depends on intended use. Software that “supports or provides recommendations to a health care professional about prevention, diagnosis, or treatment of a disease” may require premarket review as a medical device. None of the announced tools has FDA clearance.

Liability questions are similarly unresolved. When Banner Health’s CTO Mike Reagin states that the health system was “drawn to Anthropic’s focus on AI safety,” this addresses technology selection criteria, not legal liability frameworks. 

If a clinician relies on Claude’s prior authorisation analysis and a patient suffers harm from delayed care, existing case law provides limited guidance on responsibility allocation.

Regulatory approaches vary significantly across markets. While the FDA and Europe’s Medical Device Regulation provide established frameworks for software as a medical device, many APAC regulators have not issued specific guidance on generative AI diagnostic tools. 

This regulatory ambiguity affects adoption timelines in markets where healthcare infrastructure gaps might otherwise accelerate implementation—creating a tension between clinical need and regulatory caution.

Administrative workflows, not clinical decisions

Real deployments remain carefully scoped. Novo Nordisk’s Louise Lind Skov, Director of Content Digitalisation, described using Claude for “document and content automation in pharma development,” focused on regulatory submission documents rather than patient diagnosis. 

Taiwan’s National Health Insurance Administration applied MedGemma to extract data from 30,000 pathology reports for policy analysis, not treatment decisions.

The pattern suggests institutional adoption is concentrating on administrative workflows where errors are less immediately dangerous—billing, documentation, protocol drafting—rather than direct clinical decision support where medical AI capabilities would have the most dramatic impact on patient outcomes.

Medical AI capabilities are advancing faster than the institutions deploying them can navigate regulatory, liability, and workflow integration complexities. The technology exists. The US$20 monthly subscription provides access to sophisticated medical reasoning tools. 

Whether that translates to transformed healthcare delivery depends on questions these coordinated announcements leave unaddressed.

See also: AstraZeneca bets on in-house AI to speed up oncology research


The post AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools appeared first on AI News.

]]>
AstraZeneca bets on in-house AI to speed up oncology research https://www.artificialintelligence-news.com/news/astrazeneca-bets-on-in-house-ai-to-speed-up-oncology-research/ Wed, 14 Jan 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111589 Drug development is producing more data than ever, and large pharmaceutical companies like AstraZeneca are turning to AI to make sense of it. The challenge is no longer whether AI can help, but how tightly it needs to be built into research and clinical work to improve decisions around trials and treatment. That question helps […]

The post AstraZeneca bets on in-house AI to speed up oncology research appeared first on AI News.

]]>
Drug development is producing more data than ever, and large pharmaceutical companies like AstraZeneca are turning to AI to make sense of it. The challenge is no longer whether AI can help, but how tightly it needs to be built into research and clinical work to improve decisions around trials and treatment.

That question helps explain why AstraZeneca is bringing Modella AI in-house. The company has agreed to acquire the Boston-based AI firm as it looks to deepen its use of AI across oncology research and clinical development. Financial terms were not disclosed.

Rather than treating AI as a supporting tool, AstraZeneca is pulling Modella’s models, data, and staff directly into its research organisation. The move reflects a broader shift in the drug industry, where partnerships are giving way to acquisitions as companies try to gain more control over how AI is built, tested, and used in regulated settings.

Why AI ownership is starting to matter in drug research

Modella AI focuses on using computers to analyse pathology data, such as biopsy images, and link those findings with clinical information. Its work centres on making pathology more quantitative, helping researchers spot patterns that may point to useful biomarkers or guide treatment choices.

In a statement, Modella said its foundation models and AI agents would be integrated into AstraZeneca’s oncology research and development work, with a focus on clinical development and biomarker discovery.

How AstraZeneca moved its AI partnership toward full integration

For AstraZeneca, the deal builds on a collaboration that began several years ago. That earlier partnership allowed both sides to test whether Modella’s tools could work within the drugmaker’s research environment. According to AstraZeneca executives, the experience made it clear that closer integration was needed.

Speaking at the J.P. Morgan Healthcare Conference, AstraZeneca Chief Financial Officer Aradhana Sarin described the acquisition as a way to bring more data and AI capability inside the company.

“Oncology drug development is becoming more complex, more data-rich and more time-sensitive,” said Gabi Raia, Modella AI’s chief commercial officer, adding that joining AstraZeneca would allow the company to deploy its tools across global trials and clinical settings.

Using AI to improve trial decisions

Sarin said the deal would “supercharge” AstraZeneca’s work in quantitative pathology and biomarker discovery by combining data, models, and teams under one roof. While such language reflects ambition, the practical goal is more grounded: shortening the time it takes to turn research data into decisions that affect trial design and patient selection.

One area where AstraZeneca expects AI to have an impact is in choosing patients for clinical trials. Better matching patients to studies could improve trial outcomes and reduce costs tied to delays or failed studies.

That kind of improvement depends less on complex algorithms and more on steady access to clean data and tools that fit into existing workflows.

Talent and tools move in-house

The acquisition also highlights a change in how large pharmaceutical firms think about AI talent. Rather than relying on outside vendors, companies are increasingly treating data scientists and machine learning experts as part of their core research teams. For AstraZeneca, bringing Modella’s staff in-house reduces dependence on external roadmaps and gives the company more say over how tools are adapted as research needs change.

AstraZeneca said this is the first time a major pharmaceutical company has acquired an AI firm outright, though collaborations between drugmakers and technology companies have become common.

AstraZeneca joins a crowded field of pharma–AI deals

At the same healthcare conference, several new partnerships were announced, including a $1 billion collaboration between Nvidia and Eli Lilly to build a new research lab using Nvidia’s latest AI chips.

Those deals point to growing interest in AI across the sector, but they also underline a key difference in strategy. Partnerships can speed up experimentation, while acquisitions suggest a longer-term bet on building internal capability. For companies operating under strict regulatory rules, that control can matter as much as raw computing power.

What AstraZeneca is betting on next

Sarin described the earlier AstraZeneca–Modella partnership as a “test drive,” saying the company ultimately wanted Modella’s data, models, and people inside the organisation. The aim, she said, is to support the development of “highly targeted biomarkers and then highly targeted therapeutics.”

Beyond the Modella deal, Sarin said 2026 is expected to be a busy year for AstraZeneca, with several late-stage trial results due across different therapy areas. The company is also working toward a target of $80 billion in annual revenue by 2030.

Whether acquisitions like this help meet those goals will depend on execution. Integrating AI into drug development is slow, expensive, and often messy. Still, AstraZeneca’s move signals a clear view of where it thinks the value lies: not in buying AI as a service, but in embedding it deeply into how medicines are discovered and tested.

(Photo by Mika Baumeister)

See also: Allister Frost: Tackling workforce anxiety for AI integration success


The post AstraZeneca bets on in-house AI to speed up oncology research appeared first on AI News.

]]>
“Dr AI, am I healthy?” 59% of Brits rely on AI for self-diagnosis https://www.artificialintelligence-news.com/news/dr-ai-am-i-healthy-59-of-brits-rely-on-ai-for-self-diagnosis/ Thu, 08 Jan 2026 13:10:00 +0000 https://www.artificialintelligence-news.com/?p=111527 AI advancements are changing the way we look at health and deal with health-related issues. According to a new nationwide study by Confused.com Life Insurance, three in five Brits now use AI to self-diagnose health conditions. Through various searches, like side effects of medical conditions, treatment options, and symptom checks, as much as 11% of […]

The post “Dr AI, am I healthy?” 59% of Brits rely on AI for self-diagnosis appeared first on AI News.

]]>
AI advancements are changing the way we look at health and deal with health-related issues. According to a new nationwide study by Confused.com Life Insurance, three in five Brits now use AI to self-diagnose health conditions. Respondents use AI to look up medication side effects, treatment options, and symptom checks, and 11% claim it has helped improve their conditions. More than a third (35%) are likely to use AI in this context in the future, moving away from traditional GP appointments, which are increasingly hard to get at short notice.

In the UK, the average GP appointment waiting time is currently 10 days, a period too long for many. Health-related searches have risen significantly since January 2025: “what is my illness?” is up 85%, “what are the symptoms for?” 33%, and “side effects” 22%.

Most common health-related queries with AI

According to Confused.com, the most searched-for health-related query is symptom checks, with 63% seeking advice from AI. Next are side effects at 50% and lifestyle and well-being techniques at 38%. A further 20% have sought mental health support through therapy or recommended coping strategies, treating ChatGPT as their virtual therapist.

35% of respondents over 65 are using AI to self-diagnose, with 54% using the technology to check their symptoms. This pales in comparison to 18-24 year olds, with 85% using AI to search regularly for health issues.

Tom Vaughan, life insurance expert at Confused.com, commented on these latest findings, saying, “Advances in AI technology have created a new way for people to approach healthcare and self-diagnosis. More individuals are taking steps to support their own and their family’s well-being, getting ahead of health concerns and addressing situations as quickly as possible.”

Potential benefits of AI self-diagnosis

With current GP waiting times sometimes reaching a month, it is no surprise that 42% claimed AI is quicker than waiting for a doctor’s appointment. 50% of 25-34 year olds and 51% of 35-44 year olds said they are not comfortable taking any risks with timings, believing self-diagnosis provides a faster response than waiting for a GP.

Family well-being is also crucial, with 20% using AI to determine the best methods to support their loved one’s health. Not having to physically speak to a doctor is another reason many turned to AI. 24% said they feel more comfortable using AI than discussing their health face to face with a healthcare professional, rising to 39% for 18-24 year olds.

17% are searching for alternative medical solutions and support via AI, increasing to 27% for those aged 25-34. Money is another key factor, as 20% feel self-diagnosis through AI could save them substantial private healthcare fees.

AI has also had a positive influence on non-binary individuals and those with an alternative identity. 75% said the technology’s diagnosis had helped them a “great deal”, compared to just 13% of men and 9% of women.

Overall, AI seems to have a positive impact on users’ health situations. For instance, 11% stated that AI has helped their health conditions “a great deal,” while 41% claimed it has helped “somewhat.” The hope is that this self-diagnosis, though not guaranteeing accuracy, will encourage people to visit their GP for a formal diagnosis.

Only a minority of respondents (9%) felt AI has not helped their health in any way, suggesting they still regard traditional healthcare methods as more reliable.

Tom Vaughan emphasised the importance of GP consultations. “While AI can be useful for initial research and gaining an understanding of a condition, it’s clear that for the ultimate peace of mind people should consult a GP or pharmacist. GPs and other medical professionals are the only people who can accurately diagnose conditions, some of which may worsen or become long-term illnesses without the proper treatment.”

OpenAI launches ChatGPT Health

Confused.com‘s insights into AI use for health concerns coincide with OpenAI’s launch of its new ChatGPT Health feature, part of the ChatGPT platform. The feature has been set up to meet the substantial number of health-related queries made on the site each day; figures suggest over 230 million health-related inquiries are made weekly.

ChatGPT Health allows users to connect their personal medical records and wellness apps, like Apple Health, allowing the AI to provide tailored responses, rather than general knowledge surrounding certain health conditions.

Although set up to help users find answers to their health questions, OpenAI has stressed the new feature is not a diagnostic tool or a substitute for professional medical care. It has been designed to support medical care, helping users understand lab results and track wellness, rather than to replace it with formal medical diagnoses or treatment plans.

ChatGPT Health has been developed with input from hundreds of physicians around the world, ensuring clarity and safety for its users. Despite not being a substitute for medical professionals and traditional GP appointments, the number of people turning to AI for health information and help to understand medical issues is expected to rise, raising important questions and potential repercussions for patient care and clinical trust.

(Image source: “The Sick Classroom by Nge Lay” by Jnzl’s Photos is licensed under CC BY 2.0.)


The post “Dr AI, am I healthy?” 59% of Brits rely on AI for self-diagnosis appeared first on AI News.

]]>
AstraZeneca leads big pharma’s AI clinical trials revolution with real-world patient impact https://www.artificialintelligence-news.com/news/astrazeneca-ai-clinical-trials-2025/ Thu, 18 Dec 2025 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111379 Big Pharma’s AI race extends across drug discovery, development, and clinical trials—but AstraZeneca has distinguished itself by deploying AI clinical trials technology at an unprecedented public health scale.  While competitors optimise internal R&D pipelines, AstraZeneca’s AI is already embedded in national healthcare systems, screening hundreds of thousands of patients and demonstrating what happens when AI […]

The post AstraZeneca leads big pharma’s AI clinical trials revolution with real-world patient impact appeared first on AI News.

]]>
Big Pharma’s AI race extends across drug discovery, development, and clinical trials—but AstraZeneca has distinguished itself by deploying AI clinical trials technology at an unprecedented public health scale. 

While competitors optimise internal R&D pipelines, AstraZeneca’s AI is already embedded in national healthcare systems, screening hundreds of thousands of patients and demonstrating what happens when AI moves from pharmaceutical labs into actual patient care.

The clinical validation backs this approach. AstraZeneca’s CREATE study, presented at the European Lung Cancer Congress in March 2025, demonstrated a 54.1% positive predictive value for its AI chest X-ray tool—far exceeding the pre-defined success threshold of 20%. 

Behind those numbers: over 660,000 people screened in Thailand since 2022, with AI detecting suspected pulmonary lesions in 8% of cases. More critically, Thailand’s National Health Security Office is now scaling this technology across 887 hospitals with a three-year budget exceeding 415 million baht.
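Positive predictive value is the share of AI-flagged cases that turn out to be genuine findings (true positives divided by all positives). The study does not disclose the exact true-positive count, but a rough, illustrative estimate can be derived from the figures reported above:

```python
# Back-of-envelope estimate from the article's reported figures
# (660,000 screened, 8% flagged, 54.1% PPV); illustrative, not reported data.
screened = 660_000
flag_rate = 0.08   # share of scans with suspected pulmonary lesions
ppv = 0.541        # positive predictive value from the CREATE study

flagged = int(screened * flag_rate)   # scans the AI flags for review
true_positives = flagged * ppv        # flags expected to be genuine findings

print(f"Flagged scans: {flagged:,}")
print(f"Expected true positives: {true_positives:,.0f}")
```

On these assumptions, roughly 52,800 scans would be flagged, of which around 28,600 would be expected to reflect genuine findings, which is the practical meaning of a PPV well above the 20% success threshold.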

This isn’t just a pilot program or proof-of-concept. It’s AI clinical trials technology deployed at the national healthcare system scale.

The strategic divergence in AI clinical trials approaches

The contrast with competitors is revealing. Pfizer’s ML Research Hub has compressed drug discovery timelines to approximately 30 days for molecule identification. The company used AI to develop Paxlovid in record time, with machine learning analysing patient data 50% faster than traditional methods. Pfizer now deploys AI in over half its clinical trials.

Novartis partnered with Nobel Prize winner Demis Hassabis’s Isomorphic Labs and Microsoft for “AI-driven drug discovery.” Its Intelligent Decision System uses computational twins to simulate clinical trial processes, with AI-identified sites reportedly recruiting patients faster than traditional selection methods.

Roche’s “lab in a loop” strategy iterates AI models with laboratory experiments. Having acquired Foundation Medicine and Flatiron Health, Roche built the industry’s largest clinical genomic database—over 800,000 genomic profiles across 150+ tumour subtypes—targeting 50% efficiency gains in safety management by 2026.

AstraZeneca’s clinical operations advantage

What sets AstraZeneca apart in AI clinical trials isn’t just ambition—it’s execution at scale. The company runs over 240 global trials in its R&D pipeline and has systematically embedded generative AI across clinical operations. 

Its “intelligent protocol tool,” developed with medical writers, has reduced document authoring time by 85% in some cases. The company uses AI for 3D location detection on CT scans, slashing the time radiologists spend on manual annotation.

More significantly, AstraZeneca is pioneering virtual control groups for AI clinical trials using electronic health records and past trial data to simulate placebo arms—potentially reducing the number of patients receiving non-active treatments. This represents a fundamental rethinking of clinical trial design itself.

The lung cancer screening program exemplifies this strategic focus. Using Qure.ai’s qXR-LNMS tool, AstraZeneca isn’t just conducting trials—it’s transforming public health infrastructure. The December 2025 expansion includes a new industrial worker screening program targeting 5,000 workers across four Thai provinces, now expanding beyond lung cancer to include heart failure detection.

The timeline acceleration race

Industry metrics show why AI clinical trials matter: Traditional drug development takes 10-15 years with a 90% failure rate. AI-discovered drugs achieve 80-90% Phase I success rates—double the 40-65% traditional benchmark. Over 3,000 AI-assisted drugs are in development, with 200+ AI-enabled approvals expected by 2030.

Pfizer moves from molecule identification to clinical trials in six-week cycles. Novartis analyses 460,000 clinical trials in minutes versus months. Yet AstraZeneca’s model delivers immediate patient impact—detecting cancers today in underserved populations, often before symptoms appear.

The US$410 billion question

The World Economic Forum projects AI could generate US$350-$410 billion annually for pharma by 2030. The question is which approach captures more value: faster drug discovery or more efficient clinical operations?

Pfizer’s bet on computational drug design and Novartis’s AI-powered trial site selection may yield breakthrough molecules. Roche’s integrated pharma-diagnostics model creates a proprietary data moat. 

But AstraZeneca’s strategy of embedding AI clinical trials throughout operations—from protocol generation to patient recruitment to regulatory submissions—is demonstrably reducing time-to-market while building real-world evidence at scale.

The company’s partnership approach is equally distinctive. While others acquire AI companies or build internal hubs, AstraZeneca collaborates with technology partners like Qure.ai and Perceptra, regulatory bodies, and national health systems to deploy AI clinical trials where infrastructure gaps exist.

As AstraZeneca pursues its 2030 goal of delivering 20 new medicines and reaching US$80 billion in revenue, its AI clinical trials advantage isn’t just about speed—it’s about proving AI’s value in the most regulated, risk-averse phase of pharmaceutical development. While competitors race to discover the next breakthrough molecule, AstraZeneca is reengineering how clinical trials themselves are conducted.

The winner may not be determined by who builds the most sophisticated algorithm, but by who deploys AI clinical trials technology where it demonstrably improves patient outcomes—at scale, under regulatory scrutiny, and within real healthcare systems.

And in that race, AstraZeneca currently leads.

(Photo by AstraZeneca)

See also: Google AMIE: AI doctor learns to ‘see’ medical images


The post AstraZeneca leads big pharma’s AI clinical trials revolution with real-world patient impact appeared first on AI News.

]]>
OpenAI: Enterprise users swap AI pilots for deep integrations https://www.artificialintelligence-news.com/news/openai-enterprise-users-swap-ai-pilots-for-deep-integrations/ Mon, 08 Dec 2025 14:41:25 +0000 https://www.artificialintelligence-news.com/?p=111204 According to OpenAI, enterprise AI has graduated from the sandbox and is now being used for daily operations with deep workflow integrations. New data from the company shows that firms are now assigning complex and multi-step workflows to models rather than simply asking for text summaries. The figures illustrate a hard change in how organisations […]

The post OpenAI: Enterprise users swap AI pilots for deep integrations appeared first on AI News.

]]>
According to OpenAI, enterprise AI has graduated from the sandbox and is now being used for daily operations with deep workflow integrations.

New data from the company shows that firms are now assigning complex and multi-step workflows to models rather than simply asking for text summaries. The figures illustrate a hard change in how organisations deploy generative models.

With OpenAI’s platform now serving over 800 million users weekly, a “flywheel” effect is driving consumer familiarity into professional environments. The company’s latest report notes that over a million business customers now use these tools, and the goal is now even deeper integration. 

This evolution presents two realities for decision-makers: productivity gains are concrete, but a growing divide between “frontier” adopters and the median enterprise suggests that value depends heavily on usage intensity.

From chatbots to deep reasoning

The best metric for corporate deployment maturity is not seat count, but task complexity.

OpenAI reports that ChatGPT message volume has grown eightfold year-over-year, but a better indicator for enterprise architects is the consumption of API reasoning tokens, which suggests deeper integrations are taking place. This figure has increased by nearly 320 times per organisation—evidence that companies are systematically wiring more intelligent models into their products to handle logic rather than basic queries.

The rise of configurable interfaces supports this view. Weekly users of Custom GPTs and Projects (tools that allow workers to instruct models with specific institutional knowledge) have increased approximately 19x this year. Roughly 20 percent of all enterprise messages are now processed via these customised environments, indicating that standardisation is now a prerequisite for professional use.

For enterprise leaders auditing the ROI of AI seats, the data offers evidence on time savings. On average, users attribute between 40-60 minutes of time saved per active day to the technology. The impact varies by function: data science, engineering, and communication professionals report higher savings (averaging 60-80 minutes daily).

Beyond efficiency, the software is altering role boundaries. There is a specific effect on technical capability, particularly regarding code generation.

Among enterprise users, OpenAI says that coding-related messages have risen across all business functions. Outside of engineering, IT, and research roles, coding queries have grown by an average of 36 percent over the past six months. Non-technical teams are using the tools to perform analysis that previously required specialised developers.

Operational improvements extend across departments. Survey data shows 87 percent of IT workers report faster issue resolution, while 75 percent of HR professionals see improved employee engagement. 

Widening enterprise AI competence gap

OpenAI’s data suggests that a split is forming between organisations that simply provide access to tools and those that embed integrations deeply into their operating models. The report identifies a “frontier” class of workers – those in the 95th percentile of adoption intensity – who generate six times more messages than the median worker.

This disparity is stark at the organisational level. Frontier firms generate approximately twice as many messages per seat as the median enterprise and seven times more messages to custom GPTs. Leading firms are not just using the tools more frequently; they are investing in the infrastructure and standardisation required to make AI a persistent part of operations.

Users who engage across a wider variety of tasks (roughly seven distinct types) report saving five times more time than those who limit their usage to three or four basic functions. Benefits correlate directly with the depth of use, implying that a “light touch” deployment plan may fail to deliver the anticipated ROI.

While the professional services, finance, and technology sectors were early adopters and maintain the largest scale of usage, other industries are sprinting to catch up. The technology sector leads with 11x year-over-year growth, but healthcare and manufacturing follow closely with 8x and 7x growth respectively. 

Global adoption patterns also challenge the notion that this is solely a US-centric phenomenon. International usage is surging, with markets such as Australia, Brazil, the Netherlands, and France showing business customer growth rates exceeding 140 percent year-over-year. Japan has also surfaced as a key market, holding the largest number of corporate API customers outside of the US.

OpenAI: Deep AI integrations accelerate enterprise workflows

Examples of deployment highlight how these tools influence key business metrics. Retailer Lowe’s deployed an associate-facing tool to over 1,700 stores, resulting in a customer satisfaction score increase of 200 basis points when associates used the system. Furthermore, when online customers engaged with the retailer’s AI tool, conversion rates more than doubled. 

In the pharmaceutical sector, Moderna used enterprise AI to speed up the drafting of Target Product Profiles (TPPs), a process that typically involves weeks of cross-functional effort. By automating the extraction of key facts from massive evidence packs, the company reduced core analytical steps from weeks to hours. 

Financial services firm BBVA leveraged the technology to fix a bottleneck in legal validation for corporate signatory authority. By building a generative AI solution to handle standard legal queries, the bank automated over 9,000 queries annually, effectively freeing up the equivalent of three full-time employees for higher-value tasks.

However, the transition to production-grade AI requires more than software procurement; it necessitates organisational readiness. The primary blockers for many organisations are no longer model capabilities, but implementation and internal structures. 

Leading firms consistently enable deep system integration by “turning on” connectors that give models secure access to company data. Yet, roughly one in four enterprises has not taken this step, limiting their models to generic knowledge rather than specific organisational context.

Successful deployment relies on executive sponsorship that sets explicit mandates and encourages the codification of institutional knowledge into reusable assets. 

As the technology continues to evolve, organisations must adjust their approach. OpenAI’s data suggests that success now depends on delegating complex workflows with deep integrations rather than just asking for outputs, treating AI as a primary engine for enterprise revenue growth.

See also: AWS re:Invent 2025: Frontier AI agents replace chatbots


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post OpenAI: Enterprise users swap AI pilots for deep integrations appeared first on AI News.

]]>
Edge AI inside the human body: Cochlear’s machine learning implant breakthrough https://www.artificialintelligence-news.com/news/edge-ai-medical-devices-cochlear-implants/ Thu, 27 Nov 2025 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=110943

The post Edge AI inside the human body: Cochlear’s machine learning implant breakthrough appeared first on AI News.

]]>
The next frontier for edge AI medical devices isn’t wearables or bedside monitors—it’s inside the human body itself. Cochlear’s newly launched Nucleus Nexa System represents the first cochlear implant capable of running machine learning algorithms while managing extreme power constraints, storing personalised data on-device, and receiving over-the-air firmware updates to improve its AI models over time.

For AI practitioners, the technical challenge is staggering: build a decision-tree model that classifies five distinct auditory environments in real time, optimise it to run on a device with a minimal power budget that must last decades, and do it all while directly interfacing with human neural tissue.

Decision trees meet ultra-low power computing

At the core of the system’s intelligence lies SCAN 2, an environmental classifier that analyses incoming audio and categorises it as Speech, Speech in Noise, Noise, Music, or Quiet.

“These classifications are then input to a decision tree, which is a type of machine learning model,” explains Jan Janssen, Cochlear’s Global CTO, in an exclusive interview with AI News. “This decision is used to adjust sound processing settings for that situation, which adapts the electrical signals sent to the implant.”

The model runs on the external sound processor, but here’s where it gets interesting: the implant itself participates in the intelligence through Dynamic Power Management. Data and power are interleaved between the processor and implant via an enhanced RF link, allowing the chipset to optimise power efficiency based on the ML model’s environmental classifications.
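A minimal sketch of the pattern described above: a tiny hand-written decision tree that maps cheap audio features to one of the five SCAN 2 scene labels and selects a processing preset. The features, thresholds, and preset names are all hypothetical; Cochlear’s actual classifier is proprietary. The appeal for an ultra-low-power device is that inference is just a handful of comparisons per frame:

```python
# Illustrative sketch only: Cochlear's real SCAN 2 classifier and its
# features are proprietary. This hand-rolled decision tree shows the general
# idea of mapping cheap audio features to one of five scene labels.

def classify_scene(level_db: float, modulation: float, tonality: float) -> str:
    """Classify a short audio frame into one of five auditory scenes.

    Features (all hypothetical):
      level_db   - frame energy in dB SPL
      modulation - depth of slow amplitude modulation (speech is highly modulated)
      tonality   - spectral peakiness (music tends to be strongly tonal)
    """
    if level_db < 35:                 # very little acoustic energy
        return "Quiet"
    if tonality > 0.7:                # strong harmonic structure
        return "Music"
    if modulation > 0.5:              # speech-like envelope fluctuations
        return "Speech in Noise" if level_db > 65 else "Speech"
    return "Noise"

# Each scene maps to a sound-processing preset (values are illustrative).
PRESETS = {
    "Quiet":           {"gain": 0,  "noise_reduction": "off",  "forward_focus": False},
    "Speech":          {"gain": 2,  "noise_reduction": "low",  "forward_focus": False},
    "Speech in Noise": {"gain": 4,  "noise_reduction": "high", "forward_focus": True},
    "Noise":           {"gain": -2, "noise_reduction": "high", "forward_focus": False},
    "Music":           {"gain": 1,  "noise_reduction": "off",  "forward_focus": False},
}

scene = classify_scene(level_db=70, modulation=0.8, tonality=0.2)
print(scene, PRESETS[scene])
```

The preset table is where the Dynamic Power Management hook would sit in a real system: once the scene is known, both the signal path and the RF power budget can be tuned to it.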

This isn’t just smart power management—it’s an edge AI medical device solving one of the hardest problems in implantable computing: how do you keep a device operational for 40+ years when you can’t replace its battery?

The spatial intelligence layer

Beyond environmental classification, the system employs ForwardFocus, a spatial noise algorithm that uses inputs from two omnidirectional microphones to create target and noise spatial patterns. The algorithm assumes target signals originate from the front while noise comes from the sides or behind, then applies spatial filtering to attenuate background interference.

What makes this noteworthy from an AI perspective is the automation layer. ForwardFocus can operate autonomously, removing cognitive load from users navigating complex auditory scenes. The decision to activate spatial filtering happens algorithmically based on environmental analysis—no user intervention required.
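The underlying idea can be sketched with a classic first-order differential beamformer. This is not Cochlear’s proprietary ForwardFocus algorithm, and the mic spacing and sample rate are assumed values: a delayed copy of the rear microphone is subtracted from the front microphone so that sound arriving from behind cancels while frontal sound survives.

```python
# Illustrative first-order differential beamformer: two omnidirectional
# mics combined so that rear-arriving sound cancels. Constants are assumptions.

FS = 16_000          # sample rate, Hz (assumed)
MIC_SPACING = 0.012  # 12 mm between front and rear mics (hypothetical)
SPEED_OF_SOUND = 343.0

# Inter-mic travel time for a source directly behind, in whole samples.
DELAY = round(MIC_SPACING / SPEED_OF_SOUND * FS)   # 1 sample at 16 kHz

def forward_facing_output(front: list, rear: list) -> list:
    """Subtract a delayed copy of the rear mic from the front mic.

    A plane wave from behind reaches the rear mic first and the front
    mic DELAY samples later, so rear[n - DELAY] lines up with front[n]
    and the subtraction cancels it. Frontal sound is misaligned and
    survives (with spectral colouration a real product would equalise).
    """
    out = []
    for n in range(len(front)):
        delayed_rear = rear[n - DELAY] if n >= DELAY else 0.0
        out.append(front[n] - delayed_rear)
    return out
```

A production algorithm layers adaptive filtering and the automation logic described above on top of this fixed geometry, but the directional null behind the listener is the core mechanism.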

Upgradeability: The medical device AI paradigm shift

Here’s the breakthrough that separates this from previous-generation implants: upgradeable firmware in the implanted device itself. Historically, once a cochlear implant was surgically placed, the technology in the implant was fixed for life.

Existing patients could only benefit from innovation by upgrading their external sound processor every five to seven years—gaining access to new signal processing algorithms, improved ML models, and better noise reduction. But the implant itself? Static.

Now, with the Nucleus Nexa System, patients can benefit from technological advances through firmware upgrades to the implant itself, not just the external processor.

Jan Janssen, Chief Technology Officer, Cochlear Limited

The Nucleus Nexa Implant changes that equation. Using Cochlear’s proprietary short-range RF link, audiologists can deliver firmware updates through the external processor to the implant. Security relies on physical constraints—the limited transmission range and low power output require proximity during updates—combined with protocol-level safeguards.

“With the smart implants, we actually keep a copy [of the user’s personalised hearing map] on the implant,” Janssen explained. “So you lose this [external processor], we can send you a blank processor and put it on—it retrieves the map from the implant.”

The implant stores up to four unique maps in its internal memory. From an AI deployment perspective, this solves a critical challenge: how do you maintain personalised model parameters when hardware components fail or get replaced?
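The recovery flow Janssen describes can be sketched as a simple protocol in which the implant is the source of truth for personalised parameters. All class and field names here are hypothetical:

```python
# Toy model of the map-recovery flow described above (names hypothetical):
# the implant keeps up to four personalised hearing maps in its own memory,
# so a blank replacement processor can restore a patient's settings.

MAX_MAPS = 4

class Implant:
    def __init__(self):
        self._maps = {}   # slot -> map parameters, held in implant memory

    def store_map(self, slot: int, params: dict) -> None:
        if not 0 <= slot < MAX_MAPS:
            raise ValueError(f"implant holds at most {MAX_MAPS} maps")
        self._maps[slot] = dict(params)

    def retrieve_map(self, slot: int) -> dict:
        return dict(self._maps[slot])

class SoundProcessor:
    """A 'blank' replacement processor: no local map until paired."""
    def __init__(self):
        self.active_map = None

    def pair(self, implant: Implant, slot: int = 0) -> None:
        # On pairing, pull the personalised map back from the implant.
        self.active_map = implant.retrieve_map(slot)

implant = Implant()
implant.store_map(0, {"stimulation_levels": [180, 175, 190], "volume": 6})

replacement = SoundProcessor()           # fresh device, no patient data
replacement.pair(implant)                # recovers the map from the implant
print(replacement.active_map["volume"])  # 6
```

The design choice is the interesting part: by making the implanted component, not the replaceable one, the durable store, a hardware swap no longer forces a re-fitting session.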

From decision trees to deep neural networks

Cochlear’s current implementation uses decision tree models for environmental classification—a pragmatic choice given power constraints and interpretability requirements for medical devices. But Janssen outlined where the technology is headed: “Artificial intelligence through deep neural networks—a complex form of machine learning—in the future may provide further improvement in hearing in noisy situations.”

The company is also exploring AI applications beyond signal processing. “Cochlear is investigating the use of artificial intelligence and connectivity to automate routine check-ups and reduce lifetime care costs,” Janssen noted.

This points to a broader trajectory for edge AI medical devices: from reactive signal processing to predictive health monitoring, from manual clinical adjustments to autonomous optimisation.

The Edge AI constraint problem

What makes this deployment fascinating from an ML engineering standpoint is the constraint stack:

Power: The device must run for decades on minimal energy, with battery life measured in full days despite continuous audio processing and wireless transmission.

Latency: Audio processing happens in real-time with imperceptible delay—users can’t tolerate lag between speech and neural stimulation.

Safety: This is a life-critical medical device directly stimulating neural tissue. Model failures aren’t just inconvenient—they impact quality of life.

Upgradeability: The implant must support model improvements over 40+ years without hardware replacement.

Privacy: Health data processing happens on-device, with Cochlear applying rigorous de-identification before any data enters their Real-World Evidence program for model training across their 500,000+ patient dataset.

These constraints force architectural decisions you don’t face when deploying ML models in the cloud or even on smartphones. Every milliwatt matters. Every algorithm must be validated for medical safety. Every firmware update must be bulletproof.

The future of Bluetooth and connected implants

Looking ahead, Cochlear is implementing Bluetooth LE Audio and Auracast broadcast audio capabilities, which will require future firmware updates to its sound processors. Bluetooth LE Audio offers better audio quality than traditional Bluetooth while reducing power consumption, and Auracast broadcast audio expands access to assistive listening networks.

Auracast broadcast audio opens the potential for direct connection to audio streams in public venues such as airports and gyms — transforming the cochlear implant system from an isolated medical device into a connected edge AI medical device participating in ambient computing environments.

The longer-term vision includes connected totally implantable devices with integrated microphones and batteries, eliminating external components entirely. At that point, you’re talking about fully autonomous AI systems operating inside the human body—adjusting to environments, optimising power, streaming connectivity, all without user interaction.

The medical device AI blueprint

Cochlear’s deployment offers a blueprint for edge AI medical devices facing similar constraints: start with interpretable models like decision trees, optimise aggressively for power, build in upgradeability from day one, and architect for the 40-year horizon rather than the typical 2-3 year consumer device cycle.

As Janssen noted, the smart implant launching today “is actually the first step to an even smarter implant.” For an industry built on rapid iteration and continuous deployment, adapting to decade-long product lifecycles while maintaining AI advancement represents a fascinating engineering challenge.

The question isn’t whether AI will transform medical devices—Cochlear’s deployment proves it already has. The question is how quickly other manufacturers can solve the constraint problem and bring similarly intelligent systems to market.

For 546 million people with hearing loss in the Western Pacific Region alone, the pace of that innovation will determine whether AI in medicine remains a prototype story or becomes standard of care.

(Photo by Cochlear)

See also: FDA AI deployment: Innovation vs oversight in drug regulation



]]>
Microsoft’s next big AI bet: building a ‘humanist superintelligence’ https://www.artificialintelligence-news.com/news/microsoft-next-big-ai-bet-building-a-humanist-superintelligence/ Fri, 07 Nov 2025 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=110410

The post Microsoft’s next big AI bet: building a ‘humanist superintelligence’ appeared first on AI News.

]]>
Microsoft is forming a new team to research superintelligence and other advanced forms of artificial intelligence.

Mustafa Suleyman, who leads Microsoft’s AI division overseeing Bing and Copilot, announced the creation of the MAI Superintelligence Team in a blog post. He said he will head the group and that Microsoft plans to put “a lot of money” behind the effort.

“We are doing this to solve real, concrete problems and do it in such a way that it remains grounded and controllable,” Suleyman wrote. “We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity.”

Building a ‘humanist’ approach to superintelligence

The move comes as big tech companies race to attract top AI researchers. Meta, Facebook’s parent company, recently created its own Meta Superintelligence Labs and spent billions recruiting experts, even offering signing bonuses as high as $100 million. Suleyman didn’t comment on whether Microsoft plans to match such offers but said the new team will include both internal talent and new hires, with Karen Simonyan as chief scientist.

Before joining Microsoft, Suleyman co-founded DeepMind, which Google bought in 2014. He later led the AI startup Inflection, which Microsoft acquired last year along with several of its employees.

The hiring push reflects a broader trend. Since OpenAI released ChatGPT in 2022, companies have raced to bring generative AI into their products. Microsoft uses OpenAI’s models in Bing and Copilot, while OpenAI relies on Microsoft’s Azure cloud to power its tools. Microsoft also holds a $135 billion stake in OpenAI after a recent restructuring.

Reducing reliance on OpenAI

Despite the partnership, Microsoft has been working to diversify its AI sources as it lays the groundwork for future superintelligence research. Following the Inflection acquisition, the company began experimenting with models from Google and Anthropic, another AI startup founded by former OpenAI executives.

The new Microsoft AI research group will aim to build useful AI companions that assist people in education and other areas. Suleyman said the team also plans to focus on projects in medicine and renewable energy.

A different path from rivals

Unlike some peers, Suleyman said Microsoft isn’t trying to build an “infinitely capable generalist” AI. He doubts such systems could be kept under control and instead wants to develop what he calls “humanist superintelligence” – AI that serves human needs and delivers real-world benefits.

“Humanism requires us to always ask the question: does this technology serve human interests?” he said.

While the risks of AI are widely debated – from bias to existential threats – Suleyman said his team’s goal is to create specialist systems that achieve “superhuman performance” without posing major risks. He cited examples like AI that could improve battery storage or design new molecules, similar to DeepMind’s AlphaFold project that predicts protein structures.

Medical superintelligence on the horizon

Suleyman said Microsoft is especially focused on healthcare, predicting that AI capable of expert-level diagnosis could emerge in the next two or three years.

He described it as technology that can reason through complex medical problems and detect preventable diseases much earlier. “We’ll have expert-level performance at the full range of diagnostics, alongside highly capable planning and prediction in operational clinical settings,” he wrote.

As investors question whether massive AI spending will translate into profits, Suleyman emphasised that Microsoft is setting clear limits. “We are not building a superintelligence at any cost, with no limits,” he said.

(Photo by Praswin Prakashan)

See also: Microsoft gives free Copilot AI services to US government workers



]]>