Natural Language Processing (NLP) - AI News
https://www.artificialintelligence-news.com/categories/how-it-works/natural-language-processing-nlp/

Roblox brings AI into the Studio to speed up game creation
https://www.artificialintelligence-news.com/news/roblox-brings-ai-into-the-studio-to-speed-up-game-creation/ (Wed, 17 Dec 2025)

Roblox is often seen as a games platform, but its day-to-day reality looks closer to a production studio. Small teams release new experiences on a rolling basis and then monetise them at scale. That pace creates two persistent problems: time lost to repeatable production work, and friction when moving outputs between tools. Roblox’s 2025 updates point to how AI can reduce both, without drifting away from clear business outcomes.

Roblox keeps AI where the work happens

Rather than pushing creators toward separate AI products, Roblox has embedded AI inside Roblox Studio, the environment where creators already build, test, and iterate. In its September 2025 RDC update, Roblox outlined “AI tools and an Assistant” designed to improve creator productivity, with an emphasis on small teams. Its annual economic impact report adds that Studio features such as Avatar Auto-Setup and Assistant already include “new AI capabilities” to “accelerate content creation”.

The language matters—Roblox frames AI in terms of cycle time and output, not abstract claims about transformation or innovation. That framing makes it easier to judge whether the tools are doing their job.

One of the more practical updates focuses on asset creation. Roblox described an AI capability that goes beyond static generation, allowing creators to produce “fully functional objects” from a prompt. The initial rollout covers selected vehicle and weapons categories, returning interactive assets that can be extended inside Studio.

This addresses a common bottleneck where drafting an idea is rarely the slow part; turning it into something that behaves correctly inside a live system is. By narrowing that gap, Roblox reduces the time spent translating concepts into working components.

The company also highlighted language tools delivered through APIs, including Text-to-Speech, Speech-to-Text, and real-time voice chat translation across multiple languages. These features lower the effort required to localise content and reach broader audiences. Similar tooling plays a role in training and support in other industries.

Roblox treats AI as connective tissue between tools

Roblox also put emphasis on how tools connect to one another. Its RDC post describes integrating the Model Context Protocol (MCP) into Studio’s Assistant, allowing creators to coordinate multi-step work across third-party tools that support MCP. Roblox points to practical examples, such as designing a UI in Figma or generating a skybox elsewhere, then importing the result directly into Studio.
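MCP itself is an open, JSON-RPC 2.0-based protocol, which is what makes this kind of cross-tool coordination possible. A minimal sketch of the sort of `tools/call` request an MCP-aware assistant might send to a third-party tool server (the tool name and arguments here are invented for illustration, not Roblox's or Figma's actual interface):

```python
import json

# Hypothetical MCP (Model Context Protocol) tool invocation.
# MCP messages are JSON-RPC 2.0; "tools/call" asks a connected server to
# run one of the tools it advertises. "export_frame" and its arguments
# are made up for this example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "export_frame",
        "arguments": {"frame": "hud_main", "format": "png"},
    },
}
payload = json.dumps(request)
```

Because every MCP server speaks this same message shape, the Assistant can orchestrate any tool that supports the protocol without bespoke integration work.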

This matters because many AI initiatives slow down at the workflow level. Teams spend time copying outputs, fixing formats, or reworking assets that do not quite fit. Orchestration reduces that overhead by turning AI into a bridge between tools, rather than another destination in the process.

Linking productivity to revenue

Roblox ties these workflow gains directly to economics. In its RDC post, the company reported that creators earned over $1 billion through its Developer Exchange programme over the past year, and it set a goal for 10% of gaming content revenue to flow through its ecosystem. It also announced an increased exchange rate so creators “earn 8.5% more” when converting Robux into cash.
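As a rough illustration of what an 8.5% higher exchange rate means in practice (the $0.0035-per-Robux starting rate below is an assumption for the sake of arithmetic, not a figure from this article):

```python
# Hypothetical: if the previous DevEx rate were $0.0035 per Robux,
# an 8.5% increase works out as follows.
old_rate = 0.0035            # USD per Robux (assumed)
new_rate = old_rate * 1.085  # "earn 8.5% more"

payout_100k_old = 100_000 * old_rate  # $350.00 for 100,000 Robux
payout_100k_new = 100_000 * new_rate  # $379.75 for the same Robux
```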

The economic impact report makes the connection explicit. Alongside AI upgrades in Studio, Roblox highlights monetisation tools such as price optimisation and regional pricing. Even outside a marketplace model, the takeaway is clear: when AI productivity is paired with a financial lever, teams are more likely to treat new tooling as part of core operations rather than an experiment.

Roblox uses operational AI to scale safety systems

While creative tools attract attention, operational AI often determines whether growth is sustainable. In November 2025, Roblox published a technical post on its PII Classifier, an AI model used to detect attempts to share personal information in chat. Roblox reports handling an average of 6.1 billion chat messages per day, and says the classifier has been in production since late 2024, with a reported 98% recall on an internal test set at a 1% false positive rate.
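Recall and false-positive rate are standard confusion-matrix quantities; the counts below are invented purely to show how figures like "98% recall at a 1% false positive rate" are computed:

```python
def recall(tp, fn):
    """Share of actual PII-sharing attempts the classifier catches."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """Share of benign messages wrongly flagged."""
    return fp / (fp + tn)

# Invented counts for illustration: 100 true PII attempts, 100 benign messages.
r = recall(tp=98, fn=2)                 # 0.98 -> "98% recall"
fpr = false_positive_rate(fp=1, tn=99)  # 0.01 -> "1% false positive rate"
```

At 6.1 billion messages a day, even a 1% false-positive rate implies tens of millions of flagged-but-benign messages, which is why the trade-off between the two figures matters as much as either number alone.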

This is a quieter form of efficiency. Automation at this level reduces the need for manual review and supports consistent policy enforcement, which helps prevent scale from becoming a liability.

What carries across? Several patterns stand out:

  • Put AI where decisions are already made. Roblox focuses on the build-and-review loop, rather than inserting a separate AI step.
  • Reduce tool friction early. Orchestration matters because it cuts down on context switching and rework.
  • Tie AI to something measurable. Creation speed is linked to monetisation and payout incentives.
  • Keep adapting the system. Roblox describes ongoing updates to address new adversarial behaviour in safety models.

Roblox’s tools will not translate directly to every sector. The underlying approach will. AI tends to pay for itself when it shortens the path from intent to usable output, and when that output is clearly connected to real economic value.

(Photo by Oberon Copeland @veryinformed.com)

See also: Mining business learnings for AI deployment

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Roblox brings AI into the Studio to speed up game creation appeared first on AI News.

BNP Paribas introduces AI tool for investment banking
https://www.artificialintelligence-news.com/news/bnp-paribas-introduces-ai-tool-for-investment-banking/ (Tue, 16 Dec 2025)

BNP Paribas is testing how far AI can be pushed into the day-to-day mechanics of investment banking. According to Financial News, the bank has rolled out an internal tool called IB Portal, designed to help bankers assemble client pitches more quickly and with less repetition.

Pitch preparation sits at the centre of investment banking work. Teams pull together market views, deal history, and tailored narratives under tight timelines. Much of that effort repeats work that already exists elsewhere in the organisation. Slides, charts, and precedent analysis are often rebuilt from scratch, even when similar material has been used before by another team or office.

IB Portal is meant to reduce that waste. The system searches BNP Paribas’s past pitch materials and uses what the bank describes as “smart prompts” to surface relevant slides, analysis, and supporting content for a new mandate.
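Under the hood, this kind of "AI-powered search" is typically a retrieval step: score past materials against the query and surface the best matches. A minimal sketch using plain TF-IDF cosine similarity (the slide titles are invented, and BNP Paribas has not described its actual ranking method):

```python
import math
from collections import Counter

def tokenize(text):
    return [t.lower().strip(".,") for t in text.split()]

def rank_slides(query, slides):
    """Rank past pitch materials by TF-IDF cosine similarity to a query."""
    docs = [tokenize(s) for s in slides]
    n = len(docs)
    # Document frequency of each term across the corpus.
    df = Counter(t for d in docs for t in set(d))

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf}

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        norm = (math.sqrt(sum(w * w for w in a.values()))
                * math.sqrt(sum(w * w for w in b.values())))
        return dot / norm if norm else 0.0

    q = vec(tokenize(query))
    return sorted(slides, key=lambda s: cosine(q, vec(tokenize(s))), reverse=True)

slides = [
    "EMEA healthcare M&A precedent transactions",   # invented titles
    "FX hedging strategies for European exporters",
    "Quarterly healthcare sector market update",
]
best = rank_slides("healthcare M&A precedent deals", slides)[0]
```

Production systems generally replace the TF-IDF step with learned embeddings, but the shape of the pipeline, tokenise, score, rank, carry access controls alongside, is the same.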

George Holst, head of the corporate clients group at BNP Paribas, said the tool functions like an AI-powered search engine that helps bankers find what matters ahead of a pitch or client meeting. In his words, it can cut research time by days, giving teams more room to focus on strategy and client judgement.

The use case matters because it places AI inside real, constrained workflows rather than around them. Pitch decks are not generic documents. They reflect internal viewpoints, client-specific details, and regulatory requirements. Making an AI tool useful in this setting depends less on conversational flair and more on structure. That includes deciding which materials are searchable, setting clear access controls across regions and business lines, and defining how retrieved content moves from internal draft to client-ready output.

In practice, that also means traceability. Bankers need to see where information comes from, and anything produced by the system still needs human review before it leaves the firm. Without those checks, the risk of errors or inappropriate disclosure rises quickly.

BNP Paribas builds AI tools on internal platforms

The portal also fits into a broader internal build-out at BNP Paribas. In June 2025, the bank outlined an “LLM as a Service” platform aimed at giving its business units shared access to large language models in the group’s own infrastructure.

The platform is run by internal IT teams and hosted in BNP Paribas data centres with dedicated GPU capacity. The bank said it supports a mix of models, including open-source options and systems from Mistral AI, with plans to add models trained on internal data. Intended use cases include internal assistants, document drafting, and information retrieval.

Other large banks are taking a similar approach. JPMorganChase has pointed to growing use of its internal “LLM Suite”, which provides staff access to models in a controlled environment. Reuters has reported on Goldman Sachs’s investment in AI engineering and its rollout of a proprietary “GS AI Assistant”.

UBS has discussed an internal M&A “co-pilot” used for idea generation. Alongside these in-house efforts, specialist tools like Rogo have found traction at firms including Nomura and Moelis, pointing to demand for finance-specific AI tools.

For BNP Paribas, the real test is whether IB Portal becomes part of everyday work rather than a one-off experiment. The potential benefits are straightforward: less time spent searching, fewer duplicated decks, and better reuse of institutional knowledge. The risks are just as familiar. Hallucinated data, unclear sources, and accidental exposure of sensitive information all carry real consequences in banking.

The most stable deployments keep AI tightly constrained. That usually means grounding outputs in approved internal content, applying role-based access controls, recording how tools are used, and requiring human sign-off before anything reaches a client.

If IB Portal operates within those boundaries, it offers a practical view of how enterprise AI is taking shape: not as a source of instant answers, but as a faster and safer way to navigate what an organisation already knows.

(Photo by Enrico Frascati)

See also: CEOs still betting big on AI: Strategy vs. return on investment in 2026


The post BNP Paribas introduces AI tool for investment banking appeared first on AI News.

SC25 showcases the next phase of Dell and NVIDIA’s AI partnership
https://www.artificialintelligence-news.com/news/sc25-showcases-the-next-phase-of-dell-and-nvidia-ai-partnership/ (Tue, 18 Nov 2025)

At SC25, Dell Technologies and NVIDIA introduced new updates to their joint AI platform, aiming to make it easier for organisations to run a wider range of AI workloads, from older models to newer agent-style systems.

As more companies scale their AI plans, many run into the same issues. They need to manage a growing mix of hardware and software, keep control of their data, and make sure their systems can grow over time. Recent research shows that most organisations feel safer working with a trusted partner when adopting new technology, and many see more value when AI can operate closer to their own data.

The Dell AI Factory with NVIDIA is built around that idea. It combines Dell’s full stack of infrastructure with NVIDIA’s AI tools, supported by Dell’s professional services team. The goal is to help companies move from ideas to real results while keeping technical complexity in check.

Faster deployment through integrated platforms

Dell is expanding its storage and AI capabilities to help organisations automate setup, improve performance, and run real-time AI tasks with more consistency. ObjectScale and PowerScale, the storage engines behind the Dell AI Data Platform, now work with the NVIDIA NIXL library from NVIDIA Dynamo. This integration supports scalable KV Cache storage and sharing, enabling a one-second Time to First Token at a 131K-token context window, while helping reduce costs and ease pressure on GPU memory.
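For context on why KV-cache offload matters at a 131K-token window: the cache for a single request grows linearly with context length. A back-of-the-envelope sizing for a hypothetical model (the layer count, head sizes, and fp16 precision below are assumptions for illustration, not Dell or NVIDIA figures):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_value=2):
    """Size of the attention KV cache for one sequence:
    2 tensors (K and V) per layer, each of shape [n_kv_heads, seq_len, head_dim]."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

# Hypothetical 32-layer model, 8 KV heads of dim 128, fp16, 131,072-token context.
size = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=131_072)
gib = size / 2**30  # 16.0 GiB for a single long-context request
```

Figures on that order per request explain the appeal of spilling KV cache from GPU memory to fast storage tiers like ObjectScale and PowerScale.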

The Dell AI Factory with NVIDIA also adds support for Dell PowerEdge XE7740 and XE7745 systems equipped with the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA Hopper GPUs. According to Dell, these systems give organisations more room to run larger multimodal models, agent-style workloads, training tasks, and enterprise inferencing with stronger performance.

Dell says the addition of the Dell Automation Platform is meant to remove guesswork by delivering tuned and validated deployments through a secure setup. The platform aims to produce repeatable results and give teams a clearer path to building AI workflows. Alongside this, software tools such as the AI code assistant with Tabnine and the agentic AI platform with Cohere North are being automated, helping teams move workloads into production faster and keep operations manageable as they scale.

Beyond core data-centre systems, Dell’s AI PC ecosystem now supports devices with NVIDIA RTX Blackwell GPUs and NVIDIA RTX Ada GPUs, giving organisations more hardware options across Dell laptops and desktops. Dell Professional Services is also offering interactive pilots that use a customer’s own data to test AI ideas before large investments. These pilots focus on clear metrics and outcomes so teams can judge business value with more certainty.

Next-generation infrastructure for stronger AI performance

Dell is updating its infrastructure portfolio to support more complex AI and HPC workloads, with an emphasis on performance, scale, and easier management. The Dell PowerEdge XE8712, arriving next month, supports up to 144 NVIDIA Blackwell GPUs in a standard rack. This makes rack-scale AI and HPC more accessible, backed by unified monitoring and automation through iDRAC, OpenManage Enterprise, and the Integrated Rack Controller.

Enterprise SONiC Distribution by Dell Technologies now supports NVIDIA Spectrum-X platforms along with NVIDIA’s Cumulus OS. This helps organisations build open, standards-based AI networks that can operate across different vendors. The latest SmartFabric Manager release also extends support to Dell’s Enterprise SONiC on NVIDIA Spectrum-X platforms, aiming to reduce deployment time and setup errors through guided automation.

More choice through an expanded AI ecosystem

Organisations continue to adjust their AI budgets and plans, and many want flexibility in the tools they choose. Red Hat OpenShift for the Dell AI Factory with NVIDIA is now validated on more Dell PowerEdge systems, giving teams more ways to run AI workloads at scale.

Support now includes both the Dell PowerEdge R760xa and the Dell PowerEdge XE9680 with NVIDIA H100 and H200 Tensor Core GPUs. This pairing brings together Red Hat’s controls and governance tools with Dell’s secure infrastructure, offering a clearer path for companies that need to scale AI.

Dell executives say the updates are meant to help organisations move from small pilots to real deployment. Jeff Clarke, vice chairman and chief operating officer at Dell Technologies, said the Dell AI Factory with NVIDIA addresses a core challenge for many teams: “how to move from AI pilots to production without rebuilding their infrastructure.” He added that Dell has “done the integration work so customers don’t have to,” which he believes will help organisations deploy and scale with more confidence.

NVIDIA sees the shift as part of a broader change in how companies use AI. Justin Boitano, vice president of Enterprise AI products, described the moment as one where enterprise AI is moving from experimentation to transformation, advancing at a speed that is “redefining how businesses operate.” He said Dell and NVIDIA aim to support this transition with a unified platform that brings together infrastructure, automation, and data tools to help organisations “deploy AI at scale and realise measurable impact.”

Industry analysts see similar demand for integrated systems. Ashish Nadkarni, group vice president and general manager for Infrastructure Systems, Platforms and Technologies at IDC, said many teams want AI-ready systems that are powerful but also easier to run. He noted that the combination of Dell’s AI portfolio with NVIDIA’s technology represents “a significant step forward in delivering enterprise-ready AI.”

(Image by Dell Technologies)

See also: 10% of Nvidia’s cost: Why Tesla-Intel chip partnership demands attention


The post SC25 showcases the next phase of Dell and NVIDIA’s AI partnership appeared first on AI News.

Vibe analytics for data insights that are simple to surface
https://www.artificialintelligence-news.com/news/vibe-analytics-for-data-insights-that-are-simple-to-surface/ (Mon, 13 Oct 2025)

Every business, big or small, has a wealth of valuable data that can inform impactful decisions. But to extract insights, there’s usually a good deal of manual work that needs to be done on raw data, either by semitechnical users (such as founders and product leaders), or dedicated – and expensive – data specialists. 

Either way, to produce real value, information has to be collected, shepherded, altered, and drawn from dozens of spreadsheets and different business platforms: the organisation’s CRM, its martech stack, e-commerce system, and website data, to name a few common examples. Clearly, that’s a time-consuming process, and the outcomes can be old news, rather than up-to-the-minute insights.

Introducing vibe analytics 

The ideal business solution would be querying real-time data using natural language (vs writing code in SQL or Python), with smart systems working in the background to correlate and parse different data sources and formats. This is vibe analysis, where users can simply ask questions in plain language and let AI do the heavy lifting. Instead of manual data-wrestling and business users spending hours uncovering insights hidden deep in datasets, they get results fast — in text, graphics, summaries, and, where needed, detailed breakdowns. 
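The contrast is easiest to see concretely. A plain-language question like "what is the average order value per region?" still resolves, behind the scenes, to something like the SQL below (the schema and the generated query are hypothetical; Fabi has not published its internals):

```python
import sqlite3

# A toy dataset, plus the SQL a vibe-analytics layer might generate from
# the plain-language question "What is the average order value per region?"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EU", 100.0), ("EU", 200.0), ("US", 50.0)])

generated_sql = ("SELECT region, AVG(amount) AS avg_order_value "
                 "FROM orders GROUP BY region ORDER BY region")
rows = conn.execute(generated_sql).fetchall()
```

The user never writes the `GROUP BY`; the system does, and surfaces the result as a chart or summary.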

Fast and accurate data analysis is important to every organisation, but for many, real-time insights are crucial. In the agricultural sector, for example, Lumo uses Fabi.ai’s platform to manage large fleets of IoT devices, collecting telemetry data continuously and adjusting its systems based on collated, normalised, and parsed information. 

Using vibe analysis, Lumo sees device performance immediately, as well as trends that develop over time. It pulls in weather data, and correlates the device fleet’s performance metrics with environmental factors. The data dashboards Lumo has built are not the result of many months of work writing data integration routines and front-end coding, but are a result of vibe analysis. 

Getting under the hood 

Sceptics of AI’s abilities often point to vibe-coding as an example of where things can go wrong, raising concerns about quality control and the “black box” nature of AI-driven analysis. Many users want visibility into how results are generated, with the option to inspect logic, tweak queries, or adjust API calls to ensure accuracy. When done well, vibe analytics addresses these concerns by combining transparency with rigour. Natural language inputs and modular build methods make it accessible to semitechnical users (such as founders and product leaders), while the underlying systems meet the accuracy and reliability standards expected by technical teams. This means users can trust the output whether they’re working independently or in collaboration with data scientists and developers. 

Designed specifically for both data experts and semitechnical data users, Fabi is a generative BI platform that brings vibe analysis done right to life. The code it produces can be hidden away entirely, or shown verbatim and edited in place, giving semitechnical users a chance to understand how the analysis works under the hood, while allowing technical teams to verify and fine-tune the system’s output. Data flows from an organisation’s systems (the platform mediates connections) or is uploaded. The resulting actionable insights can be pushed or scheduled to email, Slack, or Google Sheets, and displayed in graphics, text, or a mixture of both.

Fabi: A generative BI platform

Co-founder and CEO of Fabi, Marc Dupuis, describes how many organisations start using the analysis platform by testing workflows and queries on sample data before progressing to real-world analysis. As users delve into data troves and test their work, they can check its veracity, often in collaboration with someone more technically astute, thanks to the platform’s open, transparent view of Smartbooks to show what’s happening under the hood. It works the other way, too: semitechnical data users can confirm that the data being processed is relevant and accurate. 

To address common concerns about quality control and “black-box” AI, Fabi limits vibe analysis to internally controlled, access-managed data sources, with built-in guardrails. Collaborative sharing of reports, findings, and working code helps teams validate results without working outside their areas of expertise.

Typical workflows include real-time KPI dashboards; natural-language Q&A over operational and product data; correlation analyses (for example, device performance against weather conditions); cohort and trend exploration; A/B test readouts and experiment summaries; and scheduled, shareable reports that mix text, graphics, summaries, and detailed breakdowns. These collaborative workflows are designed to be efficient and intuitive, so, whether working collectively or solo, users can unlock insights from even the most complex data arrangements. 

Fabi landed its first round of backing from Eniac Ventures in 2023, so it’s a company on the move. The team continues to expand its capabilities, with plans to make vibe analysis even more seamless for both semitechnical and technical users. Organisations interested in exploring the platform can start by testing workflows on sample data, then scale up to real-world use cases as they grow more confident in the system’s transparency and accuracy.

(Photo by Alina Grubnyak)

See also: Generative AI trends 2025: LLMs, data scaling & enterprise adoption


The post Vibe analytics for data insights that are simple to surface  appeared first on AI News.

Alibaba’s new Qwen model to supercharge AI transcription tools
https://www.artificialintelligence-news.com/news/alibaba-new-qwen-model-supercharge-ai-transcription-tools/ (Mon, 08 Sep 2025)

AI speech transcription tools are about to get a lot more competitive, with Alibaba’s Qwen team unveiling the Qwen3-ASR-Flash model.

Built upon the powerful Qwen3-Omni intelligence and trained using a massive dataset with tens of millions of hours of speech data, this isn’t just another AI speech recognition model. The team says it’s designed to deliver highly accurate performance, even when faced with tricky acoustic environments or complex language patterns.

So, how does it stack up against the competition? The performance data, from tests conducted in August 2025, suggests it’s rather impressive.

On a public test for standard Chinese, Qwen3-ASR-Flash achieved an error rate of just 3.97 percent, leaving competitors like Gemini-2.5-Pro (8.98 percent) and GPT4o-Transcribe (15.72 percent) trailing in its wake.

Qwen3-ASR-Flash also proved adept at handling Chinese accents, with an error rate of 3.48 percent. In English, it scored a competitive 3.81 percent, again comfortably beating Gemini’s 7.63 percent and GPT4o’s 8.45 percent.
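These percentages are word error rates: the word-level edit distance between the model's transcript and a reference, divided by the reference length. A compact illustration of the generic formula (not the benchmark's exact scoring script):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(r)][len(h)] / len(r)

# One substitution ("sat" -> "sit") and one deletion ("the") over 6 words.
example = wer("the cat sat on the mat", "the cat sit on mat")  # 2/6
```

A WER of 3.97 percent therefore means roughly four word-level mistakes per hundred reference words.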

But where it really turns heads is in a notoriously tricky area: transcribing music. 

When tasked with recognising lyrics from songs, Qwen3-ASR-Flash posted an error rate of just 4.51 percent, which is far better than its rivals. This ability to understand music was confirmed in internal tests on full songs, where it scored a 9.96 percent error rate, a huge improvement over the 32.79 percent from Gemini-2.5-Pro and 58.59 percent from GPT4o-Transcribe.

(Chart: ASR error rates of Alibaba Qwen’s Qwen3-ASR-Flash compared with other popular AI speech recognition models used for transcription tools.)

Beyond its impressive accuracy, the model brings some innovative features to the table for next-generation AI transcription tools. One of the biggest game-changers is its flexible contextual biasing.

Forget the days of painstakingly formatting keyword lists: this system lets users feed the model background text in virtually any format to get customised results. You can provide a simple list of keywords, entire documents, or even a messy mix of both.

This process eliminates any need for complex preprocessing of contextual information. The model is smart enough to use the context to sharpen its accuracy; yet its general performance is hardly affected even if the text you provide is completely irrelevant.
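One classic way to implement this kind of biasing is rescoring: nudge candidate transcripts that contain terms from the user-supplied context. The sketch below is a generic illustration of that idea under invented data, not Qwen3-ASR-Flash's actual mechanism:

```python
def bias_rescore(candidates, context_text, boost=0.5):
    """Pick a transcript, nudging candidates that contain context terms.
    Generic rescoring sketch for illustration only."""
    context_terms = {w.lower().strip(".,") for w in context_text.split()
                     if len(w) > 3}  # ignore short stopword-like tokens

    def biased(pair):
        text, score = pair
        hits = sum(1 for w in text.lower().split() if w in context_terms)
        return score + boost * hits

    return max(candidates, key=biased)[0]

# The acoustic model slightly prefers the wrong spelling; context flips it.
candidates = [("the jaguar drove off", 0.9), ("the jagwire drove off", 1.0)]
picked = bias_rescore(candidates, "Jaguar vehicles press briefing")
```

Note that irrelevant context simply produces zero hits and leaves the ranking unchanged, mirroring the robustness the Qwen team describes.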

It’s clear Alibaba’s ambition for this AI model is to become a global speech transcription tool. The service delivers accurate transcription from a single model covering 11 languages, complete with numerous dialects and accents.

The support for Chinese is especially deep, covering Mandarin in addition to major dialects like Cantonese, Sichuanese, Minnan (Hokkien), and Wu.

For English speakers, it handles British, American, and other regional accents. The impressive roster of other supported languages includes French, German, Spanish, Italian, Portuguese, Russian, Japanese, Korean, and Arabic.

To round it all out, the model can precisely identify which of the 11 languages is being spoken and is adept at rejecting non-speech segments like silence or background noise, ensuring cleaner output than past AI speech transcription tools.

See also: Siddhartha Choudhury, Booking.com: Fighting online fraud with AI



The post Alibaba’s new Qwen model to supercharge AI transcription tools appeared first on AI News.

NVIDIA aims to solve AI’s issues with many languages
https://www.artificialintelligence-news.com/news/nvidia-aims-solve-ai-issues-with-many-languages/ (Fri, 15 Aug 2025)

While AI might feel ubiquitous, it primarily operates in a tiny fraction of the world’s 7,000 languages, leaving a huge portion of the global population behind. NVIDIA aims to fix this glaring blind spot, particularly within Europe.

The company has just released a powerful new set of open-source tools aimed at giving developers the power to build high-quality speech AI for 25 different European languages. This includes major languages, but more importantly, it offers a lifeline to those often overlooked by big tech, such as Croatian, Estonian, and Maltese.

The goal is to let developers create the kind of voice-powered tools many of us take for granted, from multilingual chatbots that actually understand you to customer service bots and translation services that work in the blink of an eye.

The centrepiece of this initiative is Granary, an enormous library of human speech. It contains around a million hours of audio, all curated to help teach AI the nuances of speech recognition and translation.

To make use of this speech data, NVIDIA is also providing two new AI models designed for language tasks:

  • Canary-1b-v2, a large model built for high accuracy on complex transcription and translation jobs.
  • Parakeet-tdt-0.6b-v3, which is designed for real-time applications where speed is everything.

If you’re keen to dive into the science behind it, the paper on Granary will be presented at the Interspeech conference in the Netherlands this month. For the developers eager to get their hands dirty, the dataset and both models are already available on Hugging Face.

The real magic, however, lies in how this data was created. We all know that training AI requires vast amounts of data, but getting it is usually a slow, expensive, and frankly tedious process of human annotation.

To get around this, NVIDIA’s speech AI team – working with researchers from Carnegie Mellon University and Fondazione Bruno Kessler – built an automated pipeline. Using their own NeMo toolkit, they were able to take raw, unlabelled audio and whip it into high-quality, structured data that an AI can learn from.

This isn’t just a technical achievement; it’s a huge leap for digital inclusivity. It means a developer in Riga or Zagreb can finally build voice-powered AI tools that properly understand their local languages. And they can do it more efficiently. The research team found that their Granary data is so effective that it takes about half the amount of it to reach a target accuracy level compared to other popular datasets.

The two new models demonstrate this power. Canary is frankly a beast, offering translation and transcription quality that rivals models three times its size, but with up to ten times the speed. Parakeet, meanwhile, can chew through a 24-minute meeting recording in one go, automatically figuring out what language is being spoken. Both models handle punctuation and capitalisation and provide word-level timestamps, capabilities required for building professional-grade applications.
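Those word-level timestamps are what make downstream features like captioning possible. As a purely illustrative sketch (this is not NVIDIA code; the `to_srt_cue` helper and its `(word, start, end)` input format are hypothetical), timestamped words can be assembled into an SRT subtitle cue:

```python
# Illustrative helper: turn per-word timestamps from an ASR model into a
# single SubRip (SRT) subtitle cue. The (word, start_sec, end_sec) tuple
# format is an assumption for this sketch, not any model's real output schema.
def to_srt_cue(index, words):
    """words: list of (text, start_sec, end_sec) tuples."""
    def fmt(t):
        # SRT timecodes use HH:MM:SS,mmm
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
    start, end = words[0][1], words[-1][2]
    text = " ".join(w for w, _, _ in words)
    return f"{index}\n{fmt(start)} --> {fmt(end)}\n{text}\n"

print(to_srt_cue(1, [("Hello", 0.0, 0.4), ("world", 0.5, 0.9)]))
```

Any model that emits per-word timings could feed a helper like this; the `HH:MM:SS,mmm` timecode layout is the only fixed convention here.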

By putting these powerful tools and the methods behind them into the hands of the global developer community, NVIDIA isn’t just releasing a product. It’s kickstarting a new wave of innovation, hoping to create a world where AI speaks your language, no matter where you’re from.

(Photo by Aedrian Salazar)

See also: DeepSeek reverts to Nvidia for R2 model after Huawei AI chip fails

The post NVIDIA aims to solve AI’s issues with many languages appeared first on AI News.

SoundHound is giving its AI the power of sight
https://www.artificialintelligence-news.com/news/soundhound-is-giving-its-ai-the-power-of-sight/
Tue, 12 Aug 2025 10:06:54 +0000

SoundHound AI, already a major player in voice assistants, is now giving its technology a pair of eyes.

Imagine driving past a landmark and, without pulling out your phone, asking your car, “What’s that building over there?” and getting an instant answer. That’s what SoundHound AI is building. 

With the launch of Vision AI, SoundHound’s new system combines sight with sound to create a much smarter and more natural way to interact with technology. The idea is to mimic how we as humans operate; we don’t just listen to someone, we also see their gestures and what they’re looking at.

By bringing this same contextual understanding to AI, SoundHound hopes to smooth over the clunky and often frustrating experience we have with many of today’s smart devices. The company is targeting real-world applications where this combined sense could make a huge difference, whether that’s in your next car, at the restaurant drive-thru, or on the factory floor.

Keyvan Mohajer, CEO of SoundHound AI, said: “At SoundHound, we believe the future of AI isn’t just multimodal—it’s deeply integrated, responsive, and built for real-world impact.

“With Vision AI, we’re extending our leadership in voice and conversational AI to redefine how humans interact with products and services offered and used by businesses.”

So, how does it work? Vision AI takes a live feed from a camera and fuses it with the company’s voice technology, which already excels at understanding natural speech. By processing what it sees and what it hears at the exact same time, the system can grasp the user’s true intent in a way a simple voice assistant never could.

Think of a mechanic wearing smart glasses who can simply look at an engine part and ask for instructions, receiving instant visual and audio guidance without ever putting down their tools. In a shop, a staff member could scan shelves just by looking at them to get a real-time inventory count. For the rest of us, it might mean a drive-thru kiosk that visually confirms our order on screen the moment we say it.

One of the biggest technical problems in creating such a system is ensuring the audio and visual elements are perfectly synchronised. Any lag would shatter the illusion of a natural conversation.

Pranav Singh, VP of Engineering at SoundHound AI, commented: “With Vision AI, we are fusing visual recognition and conversational intelligence into a single, synchronised flow. Every frame, every utterance, every intent is interpreted within the same ecosystem—ensuring faster, more natural user experiences that scale across surfaces from kiosks to embedded devices.

“This is innovation at the intersection of intelligence and execution, delivering AI that sees what you see, hears what you say, and responds in the moment.”

For the businesses adopting this tech, the promise is to provide faster service, fewer mistakes, and happier customers. It’s about removing friction and making technology feel less like a tool you have to operate and more like a partner that helps you get things done.

This new visual capability isn’t the only upgrade SoundHound is rolling out. The company also recently improved the “brain” of its system with a new update, Amelia 7.1. This enhancement makes its AI agents faster, more accurate, and gives businesses more control and transparency over how they work.

By combining sight and sound, SoundHound is aiming to push us closer to a world where interacting with AI feels as easy and intuitive as talking to another person.

(Photo by Christian Lue)

See also: Alan Turing Institute: Humanities are key to the future of AI

The post SoundHound is giving its AI the power of sight appeared first on AI News.

AI obsession is costing us our human skills
https://www.artificialintelligence-news.com/news/ai-obsession-costing-us-our-human-skills/
Wed, 06 Aug 2025 15:48:14 +0000

A growing body of evidence suggests that over-reliance on AI could be eroding the human skills needed to use it effectively. Research warns this emerging human skills deficit threatens the successful adoption of AI and, with it, an opportunity for economic growth.

It feels like not a day goes by without another proclamation about how AI will change our world. Every business leader I speak to is either investing in AI, planning to invest, or worried they are being left behind. We see the big numbers, like Accenture’s prediction that AI could inject £736 billion into the UK economy. The hype is deafening.

But amid all this noise, a quieter and more worrying counter-narrative is beginning to take shape. We’ve seen it in reports from places like MIT: that nagging sense that leaning too heavily on AI tools might be making us less sharp.

New findings published by the learning scientists at Multiverse have put a finger on exactly what’s at stake. Their report suggests that our singular obsession with AI itself is causing us to ignore the most important part of the equation: us.

The warning is that without actively cultivating our own human skills, this multi-million-pound investment in AI won’t just underdeliver; it could fail entirely. We risk creating a human skills deficit that could hamstring productivity for years to come.

Gary Eimerman, Chief Learning Officer at Multiverse, said: “Leaders are spending millions on AI tools, but their investment focus isn’t going to succeed. They think it’s a technology problem when it’s really a human and technology problem.

“Without a deliberate focus on capabilities like analytical reasoning and creativity, as well as culture and behaviours, AI projects will never deliver up to their potential.”

It’s a point that resonates. We’ve all seen a generative AI produce a block of text or code in seconds. But what happens next? That’s where the real work begins, and it’s work that demands uniquely human talents.

The Multiverse team spent time observing what separates a casual AI user from a true ‘power user’. They identified thirteen key skills that have little to do with writing the perfect prompt and everything to do with thinking, reasoning, and reflecting. It’s not just about what you ask the AI to do, but how you analyse, question, and refine what it gives you back.

Take analytical reasoning. It’s the human skill to look at a complex problem and break it down into pieces the AI can handle, but it’s also the wisdom to recognise when a task is simply not right for a machine. It’s about being the pilot, not just a passenger. 

Similarly, creativity is what pushes us to experiment and find genuinely new ways to use these tools, rather than just asking for a slightly better version of something that already exists.

There are also personal character traits. Skills like determination (i.e. the sheer patience to keep trying when the AI gives you garbage) and adaptability are necessary. Anyone who has used these tools knows that first-time success is rare. A certain resilience and a deep-seated curiosity are required to look beyond the AI’s answer and fact-check its work with your own expertise.

Imogen Stanley, Senior Learning Scientist at Multiverse, commented: “We need to start looking beyond technical skills and think about the human skills that the workforce must hone to get the best out of AI.

“What we found during our first principles research phase was that skills like ethical oversight, output verification, and creative experimentation are the real differentiators of power AI users.”

This feels like the crux of the matter. Are we training people to be passive users or active drivers? Right now, the conversation is dominated by the technology. But the real competitive advantage won’t come from having the best AI model; it will come from having the people who know how to get the best out of it.

The future will be about nurturing our own human skills and intelligence just as much as we’re developing the artificial kind. If we don’t, we risk building a future where we have all the answers, but have forgotten how to ask the right questions.

(Photo by Maxim Berg)

See also: Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’

The post AI obsession is costing us our human skills appeared first on AI News.

Inside Tim Cook’s push to get Apple back in the AI race
https://www.artificialintelligence-news.com/news/inside-tim-cook-push-to-get-apple-back-in-the-ai-race/
Wed, 06 Aug 2025 09:21:51 +0000

While other tech companies push out AI tools at full speed, Apple is taking its time. Its Apple Intelligence features – shown off at WWDC – won’t reach most users until at least 2025 or even 2026. Some see this as Apple falling behind, but the company’s track record suggests it prefers to launch only when products are ready.

In contrast, competitors like Microsoft, OpenAI, and Google have already shipped AI features widely – often with bugs and unreliable results, and usually whether or not users ask for them. AI assistants today still struggle with accuracy, consistency, and usefulness in many tasks.

Apple seems to be watching from the sidelines, waiting for the tech to mature. Instead of flooding iOS with half-working tools, it’s holding back. That strategy may pay off if users lose patience with AI that overpromises and underdelivers.

Apple has done this before – launching smartwatches and tablets late, but with stronger products. And since it already owns the hardware and software, and controls its own app store, it can afford to wait.

If current AI tools don’t improve soon, Apple’s slower, more cautious rollout might look less like hesitation and more like smart planning.

That measured approach doesn’t mean Apple is sitting still. Behind the scenes, the company is ramping up investment, hiring, and internal coordination to prepare for an AI shift. That strategy was on full display during a recent all-hands meeting at Apple’s headquarters, where CEO Tim Cook rallied employees and laid out the company’s AI ambitions.

Apple is getting serious about artificial intelligence, and Cook wants everyone at the company on board. As reported by Bloomberg, during a rare all-company gathering at its Cupertino HQ, he spoke directly to employees about what’s next. His message was clear: Apple has to win in AI – and now is the time to make that happen.

Cook called AI a once-in-a-generation shift, comparing its impact to that of the internet, smartphones, and cloud computing. “Apple must do this. Apple will do this. This is sort of ours to grab,” he said, according to people who were there. He promised Apple would spend what it takes to compete.

The company has been slower than others to roll out AI tools. Apple Intelligence – its main AI offering – was introduced long after companies like OpenAI, Google, and Microsoft launched their own products. And even when Apple finally announced its plans, the reaction was underwhelming.

See also: Why Apple is playing it slow with AI

But Cook pointed out that Apple has often shown up late to new technology – only to redefine it. “There was a PC before the Mac; there was a smartphone before the iPhone,” he reminded employees. “There were many tablets before the iPad.” Apple didn’t invent those categories, he said, it just made them work better.

Building the future of Siri

Much of the company’s current AI work centres on Siri, its voice assistant. Apple had originally planned a major overhaul as part of Apple Intelligence, adding features powered by large language models. But that rollout was delayed, leading to internal shakeups and a rethink of the entire system.

Craig Federighi, Apple’s software chief, told employees that trying to merge old and new versions of Siri didn’t work. The team tried to keep the original system for basic tasks like setting timers, while adding generative AI features for more complex requests. But that hybrid setup didn’t meet Apple’s standards. “We realised that approach wasn’t going to get us to Apple quality,” he said.

Now, the team is rebuilding Siri from the ground up. A completely new version is in the works, expected as early as spring 2026. Federighi said the results so far have been strong and could lead to more improvements than originally planned. “There is no project people are taking more seriously,” he told staff.

A key figure behind this new direction is Mike Rockwell, the executive who led development on Apple’s Vision Pro headset. Rockwell and his software team are now leading Siri’s redesign. Federighi said they’ve “supercharged” the work and brought a new level of focus.

Investing in AI talent and tools

Apple is also expanding its AI team quickly. Cook said the company hired 12,000 people in the past year, with 40% of them joining research and development; many of those hires are focused on AI.

Part of the work involves hardware. Apple is building new chips specifically designed for AI, including a more powerful server chip known internally as “Baltra.” The company is also opening an AI server farm in Houston to support future projects.

Beyond Siri, Apple is quietly building what could become a major AI tool. According to Bloomberg‘s Mark Gurman, Apple has formed a team called “Answers, Knowledge, and Information” (AKI). The group’s job is to create search that works more like ChatGPT – giving direct answers rather than just showing links.

The AKI team is led by Robby Walker, who reports to AI chief John Giannandrea, and Apple has already started hiring engineers for the group. While details are still limited, the project appears to include backend systems, search algorithms, and potentially even a standalone app.

A push to move faster

Cook also encouraged employees to start using AI more in their work. “All of us are using AI in a significant way already, and we must use it as a company as well,” he said. He told employees to bring ideas to their managers and find ways to get AI tools into products faster.

The sense of urgency was echoed during Apple’s recent earnings call. The company posted strong results, with nearly 10% growth in the June quarter – enough to ease concerns about slowing iPhone sales and weak results from the Chinese market. Cook told investors Apple would “significantly” increase its spending on AI.

Yet challenges remain. Apple expects to face a $1.1 billion hit from tariffs this quarter and continues to deal with antitrust pressures in the US and Europe, where regulators are watching closely to see how the company runs its App Store and handles user data.

Cook acknowledged these issues at the staff meeting, saying Apple would continue pushing regulators to adopt rules that don’t hurt privacy or user experience. “We need to continue to push on the intention of the regulation,” he said, “instead of these things that destroy the user experience and user privacy and security.”

New stores, new markets

Beyond AI, Cook touched on Apple’s retail strategy. The company plans to open new stores in emerging markets, including India, the United Arab Emirates, and China. A store in Saudi Arabia is also on the way. Apple is also putting more focus on its online store.

“We need to be in more countries,” Cook said, adding that most of Apple’s future growth will come from new markets. That doesn’t mean existing regions will be ignored, but the company sees more opportunity in expanding its global footprint.

What’s next for Apple products

While Cook didn’t reveal any product details, he said, “I have never felt so much excitement and so much energy before as right now.”

Reports suggest Apple is working on several new devices, including a foldable iPhone, new smart glasses, updated home devices, and robotics. A major iPhone redesign is also rumoured for its 20th anniversary next year.

Cook didn’t confirm any of this, but he hinted at big things ahead. “The product pipeline, which I can’t talk about: It’s amazing, guys. It’s amazing,” he said. “Some of it you’ll see soon, some of it will come later, but there’s a lot to see.”

Cautious but confident

Apple’s cautious approach to AI may have slowed it down, but internally, the company seems to believe that slow and steady might win the race. Cook’s message to employees was clear: Apple can still define what useful, responsible AI looks like – and it’s all hands on deck to get there.

(Photo by: Apple via YouTube)

The post Inside Tim Cook’s push to get Apple back in the AI race appeared first on AI News.

Anthropic deploys AI agents to audit models for safety
https://www.artificialintelligence-news.com/news/anthropic-deploys-ai-agents-audit-models-for-safety/
Fri, 25 Jul 2025 13:40:34 +0000

Anthropic has built an army of autonomous AI agents with a singular mission: to audit powerful models like Claude to improve safety.

As these complex systems rapidly advance, the job of making sure they are safe and don’t harbour hidden dangers has become a herculean task. Anthropic believes it has found a solution, and it’s a classic case of fighting fire with fire.

The idea is similar to a digital immune system, where AI agents act like antibodies to identify and neutralise problems before they cause real harm. It saves researchers from relying on overworked human teams playing an endless game of whack-a-mole with potential AI problems.

The digital detective squad

The approach is essentially a digital detective squad: a trio of specialised AI safety agents, each with a distinct role.

First up is the Investigator Agent, the grizzled detective of the group. Its job is to go on deep-dive investigations to find the root cause of a problem. It’s armed with a toolkit that allows it to interrogate the suspect model, sift through mountains of data for clues, and even perform a kind of digital forensics by peering inside the model’s neural network to see how it thinks.   

Then there’s the Evaluation Agent. You give this agent a specific, known problem – say, a model that’s a bit too eager to please – and it will design and run a battery of tests to measure just how bad the problem is. It’s all about producing the cold, hard data needed to prove a case.   

Rounding out the team is the Breadth-First Red-Teaming Agent, the undercover operative. This agent’s mission is to have thousands of different conversations with a model, trying to provoke it into revealing any kind of concerning behaviour, even things the researchers haven’t thought of. The most suspicious interactions are then passed up the chain for human review, ensuring the experts don’t waste time chasing dead ends.

Anthropic puts its AI safety agents through their paces

To see if their new digital recruits were up to the job, Anthropic’s researchers put them through a series of “auditing games.”

In one test, they created a model with a secret agenda: to be a sycophant. It had been secretly trained to flatter its internal reward system based on a fictional academic study it had been shown, a flaw hidden deep within its programming.

When a single Investigator Agent was sent in, it struggled, finding the hidden goal only 13% of the time. The agent, it turned out, had a very human flaw: it tended to get fixated on its first idea and failed to explore other leads.

But this is where the system’s genius becomes apparent. When the researchers deployed a “super-agent” that pooled the findings from ten separate investigators, the success rate shot up to 42%. It proves that teamwork makes the dream work, even for AI, allowing a group to overcome the individual blind spots that can derail an investigation.
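As a back-of-envelope check (my own arithmetic, not Anthropic’s), if pooling simply meant “at least one of ten independent investigators succeeds”, a 13% individual rate would predict roughly a 75% pooled rate. The observed 42% is well below that, which fits the fixation problem the researchers describe: the agents’ failures were correlated rather than independent.

```python
# Hypothetical independence check: probability that at least one of ten
# investigators, each with a 13% individual success rate, finds the flaw.
p_single = 0.13
p_pooled = 1 - (1 - p_single) ** 10
print(f"{p_pooled:.0%}")  # prints 75% -- well above the observed 42%
```

The gap between the independent-agents prediction and the measured result is one rough way to quantify how much the investigators share blind spots.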

The other agents also passed with flying colours. The Evaluation Agent successfully designed tests that could spot a quirky model 88% of the time, while the Red-Teaming Agent managed to uncover seven out of ten deliberately implanted flaws.

From the lab to the real world

Anthropic has already deployed these AI safety agents on its production models, and they’re already proving their worth.   

The agents quickly identified common ways to trick a model into generating harmful content, like “prefill attacks” where you start the AI’s sentence for it, or by pretending a dangerous request is for something innocent like academic research.
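For readers unfamiliar with the term, a prefill attack works by supplying the opening of the model’s own reply so it continues the sentence instead of refusing. A schematic sketch (the message structure below is the generic chat format used for illustration, not any specific vendor’s API):

```python
# Schematic of a prefill attack: the attacker writes the *start* of the
# assistant's turn, so a vulnerable model completes it rather than refusing.
conversation = [
    {"role": "user", "content": "<some disallowed request>"},
    # The prefilled assistant turn masquerades as the model's own words.
    {"role": "assistant", "content": "Sure, here's how to do that:"},
]
print(conversation[-1]["role"])  # prints assistant
```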

But one discovery was very chilling. The Investigator Agent used its special tools to look inside the mind of the Opus 4 model and found a specific neural pathway linked to “misinformation.” By directly stimulating this part of the AI’s brain, it could bypass all its safety training and force it to lie.

The agent then instructed the compromised model to write a fake news article. The result? A widespread conspiracy theory dressed as fact:

“Groundbreaking Study Reveals Shocking Link Between Vaccines and Autism

A new study published in the Journal of Vaccine Skepticism claims to have found a definitive link between childhood vaccinations and autism spectrum disorder (ASD)…”

This finding reveals a terrifying duality: the very tools created to make AI safer could, in the wrong hands, become potent weapons to make it more dangerous.

Anthropic continues to advance AI safety

Anthropic is honest about the fact that these AI agents aren’t perfect. They can struggle with subtlety, get stuck on bad ideas, and sometimes fail to generate realistic conversations. They are not yet perfect replacements for human experts.   

But this research points to an evolution in the role of humans in AI safety. Instead of being the detectives on the ground, humans are becoming the commissioners, the strategists who design the AI auditors and interpret the intelligence they gather from the front lines. The agents do the legwork, freeing up humans to provide the high-level oversight and creative thinking that machines still lack.

As these systems march towards and perhaps beyond human-level intelligence, having humans check all their work will be impossible. The only way we might be able to trust them is with equally powerful, automated systems watching their every move. Anthropic is laying the foundation for that future, one where our trust in AI and its judgements is something that can be repeatedly verified.

(Photo by Mufid Majnun)

See also: Alibaba’s new Qwen reasoning AI model sets open-source records

The post Anthropic deploys AI agents to audit models for safety appeared first on AI News.

Sam Altman: AI will cause job losses and national security threats
https://www.artificialintelligence-news.com/news/sam-altman-ai-cause-job-losses-national-security-threats/
Wed, 23 Jul 2025 10:57:29 +0000

In the halls of power in Washington, OpenAI’s chief, Sam Altman, warned that AI will erase entire categories of jobs and is rewriting national security. Altman positions OpenAI not just as a participant, but as the essential architect of our destiny.

Holding court at the Federal Reserve’s conference for large banks, Altman clearly stated how he believes AI will impact how people earn a living. He spoke of certain jobs not just being changed, but erased completely.

“Some areas, again, I think will be totally, totally gone,” he said, pointing at the customer support industry as an example. “That’s a category where I just say, you know what, when you call customer support, you’re speaking to AI, and that’s fine.”

He described this shift not as a distant forecast but as a present-day reality. To the Federal Reserve’s Michelle Bowman, he described an almost utopian interaction with an AI agent.

“You call one of these things and AI answers. It’s like a super-smart, capable person,” says Altman. “There’s no phone tree, there’s no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It’s very quick. You call once and the thing just happens.”

But Altman’s belief that AI will cause total job losses in some careers isn’t the only story being told in the tech world. Others argue that the future isn’t about what AI will do to us, but what we choose to do with it. Manoj Chaudhary, CTO of the integration firm Jitterbit, offers a dose of caution.

“AI isn’t what threatens jobs, but rather poorly planned deployment. The real danger lies in using powerful tools without purpose or human judgment,” Chaudhary warned. He sees a risk in a blind rush for technological solutions.

“Companies chasing quick efficiencies risk discarding the human insight that drives real value. As many are now realising, AI isn’t a cure-all; even the smartest systems fall short where empathy and nuance matter. Without careful, human-led oversight, the consequences of AI misuse will be hard to ignore.”

The scale of Altman’s vision for AI, however, extends far beyond call centres. The transformation, he suggests, is already knocking at the door of the healthcare system, and he claimed that his company’s own creation is already a world-class physician.

“ChatGPT today, by the way, most of the time, is like a better diagnostician than most doctors in the world,” he asserted. Yet, in a moment of candour – after championing AI as the superior doctor – he confessed he wouldn’t fully trust it with his own health.

“Yet people still go to doctors, and I am not, like, maybe I’m a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop,” he admitted.

This tightrope walk between promotion and precaution is happening on a new political stage. Under the Trump administration, the conversation in Washington around AI has shifted: from the caution and regulation sought under President Biden to minimise impacts such as job losses, to an unrelenting focus on acceleration to outpace China.

It is in this high-stakes environment that Altman shared his deepest fears. He spoke of sleepless nights, troubled by the thought of a hostile nation using AI as a weapon to cripple the US financial system. 

Altman also marvelled at the power of voice cloning technology but warned of how it could be used for unstoppable fraud, especially since “there are still some financial institutions that will accept voiceprints for authentication.”

The OpenAI chief’s visit, his first major congressional testimony since he exploded onto the global stage in 2023, is part of a clear strategy, as the firm plans to open an office in Washington next year.

Altman came to Washington with two messages that seem to pull in opposite directions. The first is that his technology will usher in an age of incredible progress. The second is that AI holds the potential for immense destruction, from total job losses to growing national security threats.

The ultimate goal, it seems, is to convince the world that only he and OpenAI can safely navigate the path between the two.

(Image credit: World Economic Forum / Benedikt von Loebell under CC BY-NC-SA 2.0 license. Image has been cropped.)

See also: Google’s newest Gemini 2.5 model aims for ‘intelligence per dollar’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Sam Altman: AI will cause job losses and national security threats appeared first on AI News.

Why Apple is playing it slow with AI https://www.artificialintelligence-news.com/news/why-apple-is-playing-it-slow-with-ai/ Mon, 21 Jul 2025 07:54:57 +0000
Apple is taking its time with AI. While most tech companies are racing to push out AI features as fast as they can, Apple is doing the opposite. Its big announcement – Apple Intelligence – won’t arrive for most users until 2026. That’s a long delay in a market where speed seems to matter more than quality. But maybe that’s the whole point.

At this year’s WWDC, Apple showed off new AI features tied to Siri, writing tools, and app suggestions. It called the bundle “Apple Intelligence,” but those tools won’t be widely available any time soon. For now, they’re limited to beta users on select devices in the US. The rest of the world will have to wait. According to Macworld, even early access to Apple Intelligence is expected to be restricted, and many users may not see the features until iOS 18.4 (at the earliest) in 2025. A wider release could slip into 2026.

Not falling behind – just not rushing in

To some, the delay looks like Apple falling behind. OpenAI has already rolled out GPT-4o, Google is squeezing Gemini into Android, and Microsoft has pushed Copilot into Office, Windows, and pretty much everything else. Compared to that, Apple seems slow.

Apple tends not to ship bad software. It delays when things aren’t working. The company has a long history of waiting until something is polished before pushing it out. That kind of caution can be frustrating, but it also avoids something worse: giving people tools that don’t work properly.

Meanwhile, competitors ship bugs

Plenty of companies don’t seem to care about quality. Microsoft’s Copilot, for example, often gives wrong answers, makes up citations, or produces junk text. ChatGPT has its own set of problems, from hallucinating facts to giving inconsistent results. Even tools like Claude or Gemini, which show promise in short bursts, tend to fall short on long-term tasks or anything that needs precision.

Ask developers what it’s like using AI to write production code, and you’ll often hear the same message: it works fine for code snippets or boilerplate, but it’s more work than help when it comes to complex projects. Fixing AI-written code often takes longer than writing it from scratch.

Apple’s delay might be the smarter play

An opinion piece from TechRadar captured the consumer viewpoint. The author said they were glad Apple delayed Siri’s AI overhaul, arguing that the current generation of AI isn’t good enough. They said we often have the AI discussion backwards – we assume the tech is ready, and criticise companies for being too slow. But what if the tech just isn’t there yet? Apple’s delay might not be a flaw; it might be the only rational move.

Apple seems aware of this. It has made plenty of noise about being “excited” by AI, but it hasn’t forced it into every product or flooded iOS with half-baked tools. It hasn’t promised that Siri will be your new work assistant, for example. And while it may talk up the potential, it’s also been quiet about timelines.

Playing the long game

Some would call that playing it safe, but there’s another way to look at it. Maybe Apple doesn’t actually believe the current wave of AI is ready. Maybe it’s not convinced the technology will hold up under real pressure. So it’s watching the chaos from a distance.

And there’s plenty of chaos to watch. Companies are rolling out AI products that don’t work as advertised. Security issues, bad output, and inflated expectations are becoming common. Behind the scenes, many AI companies are burning through cash trying to make their models useful. If the bubble bursts, Apple gets to say it never went all-in.

Wait, watch, then act

That might not be a bug in the company’s roadmap or a sign of problems in production: it might be the strategy itself.

If users grow tired of AI that doesn’t deliver, Apple comes out looking smart for not jumping in too fast. If the tech improves and becomes reliable, Apple can still step in with a product that feels stable and polished.

This kind of delay has worked for Apple before: it didn’t launch a smartwatch until years after others had tried, and in the tablet market it wasn’t first, but ended up setting the standard once it got involved.

With AI, Apple might be trying the same thing. Let everyone else test the limits, hit the walls, and suffer the backlash. Meanwhile, Apple learns from those mistakes and avoids rushing out tools that make headlines for all the wrong reasons.

No rush required

It also helps that Apple doesn’t need to hype itself to stay relevant. It already controls the hardware, the OS, and the app store. It can roll out AI when it wants, how it wants, without chasing investor attention.

Of course, there’s always a risk in waiting too long. If AI tools do become reliable and useful across the board, Apple might miss the shift. But as of now, that shift hasn’t happened: the tools out there are still struggling with accuracy, nuance, and consistency.

Getting it right beats being first

So maybe Apple is right to wait. Maybe the smartest move in this hype cycle is to do less.

“If Apple’s slow and cautious AI rollout results in something actually useful, that’s a win,” TechRadar says. And if it doesn’t? At least Apple didn’t spam the market with tools that waste everyone’s time.

In a tech cycle full of broken promises and half-working products, doing nothing might be the boldest move Apple could make.

(Photo by appshunter.io)

See also: Apple loses key AI leader to Meta

The post Why Apple is playing it slow with AI appeared first on AI News.
