Founders & Visionaries - AI News
https://www.artificialintelligence-news.com/categories/inside-ai/founders-visionaries/ (Mon, 15 Dec 2025)

AWS’s legacy will be in AI success
https://www.artificialintelligence-news.com/news/awss-legacy-will-be-in-ai-success/ (Mon, 15 Dec 2025)

As the company that kick-started the cloud computing revolution, Amazon is one of the world’s biggest companies whose practices in all things technological can be regarded as a blueprint for implementing new technology.

This article looks at some of the ways that the company is deploying AI in its operations.

Amazon’s latest AI strategy has progressed from basic chatbots to agentic AI: systems that can plan and execute multi-step work across different tools and processes. As a company, Amazon sits at the intersection of cloud infrastructure (in the form of AWS), logistics, retail, and customer service, all areas where small efficiency gains can have massive impact.

From copilots to agents: AWS builds the control plane for autonomy

In early 2025, Amazon made its AI intentions clear for its cloud company, AWS, by forming a new group focused internally on agentic AI. According to reporting on an internal email, AWS leadership described agentic AI as a potential “multi-billion” business, underscoring that the technology is regarded as a new platform layer, not a standalone feature.

The company was not afraid to say that its workforce is expected to shrink because of the technology. In June 2025, Amazon CEO Andy Jassy told employees that widespread use of generative AI and agents will change how work is done, and that over the next few years, Amazon expects routine work to become faster and more automated, slowing hiring, changing roles, and shrinking some job categories, even if other categories grow.

Amazon’s best use cases are high-volume, rules-bound workflows that involve a lot of searching, checking, routing, and logging. These are, or will be, areas of significant impact in forecasting, delivery mapping, customer service, and product content. Reuters noted examples like inventory optimisation, improved customer service, and better product detail pages as internal targets for generative AI.

Logistics and operations

Amazon has described AI-enabled upgrades in its US operations that hint at where an agentic approach may take shape. In June 2025, it outlined AI innovations that included a generative AI system to improve delivery location accuracy, a new demand forecasting model to predict what customers want (and where), and an agentic AI team looking at enabling robots to understand natural-language instructions.

Consumer-facing agents

Consumer agents are where autonomy first becomes real, because systems can take actions even where money is involved. Reporting in The Verge about Alexa+ highlighted features like monitoring items for price drops and (optionally) purchasing for the user automatically once a threshold is hit. This is a concrete example of the agentic concept in everyday terms: the user sets constraints (in the form of price thresholds), and the system watches and executes inside those boundaries.
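The constraint-and-execute loop behind a price-drop agent can be sketched in a few lines. This is an illustrative model only, assuming hypothetical price-lookup and purchase hooks; it is not Alexa+’s actual implementation.

```python
# Illustrative sketch of "user sets a constraint, agent executes inside it".
# All names (PriceWatcher, get_price, purchase) are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PriceWatcher:
    item_id: str
    threshold: float                      # user-set constraint: max acceptable price
    get_price: Callable[[str], float]     # hypothetical price-lookup hook
    purchase: Callable[[str], None]       # hypothetical purchase hook
    done: bool = False

    def check(self) -> bool:
        """Poll once; buy only if the user's boundary condition is met."""
        if self.done:
            return False
        price = self.get_price(self.item_id)
        if price <= self.threshold:
            self.purchase(self.item_id)   # the agent acts inside the user-set boundary
            self.done = True              # never buy the same item twice
            return True
        return False
```

The key design point is that the autonomy is bounded: the agent only ever acts when the user-declared condition holds, and each watch retires itself after executing.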

Rufus as the Amazon AI interface

Amazon’s Rufus assistant is positioned as an AI interface to shopping, one that helps customers find products, compare them, and understand the trade-offs between choices. Amazon describes Rufus as powered by generative (and increasingly agentic) AI to make shopping faster, with personalisation drawn from a user’s shopping history and current context. Agents therefore become the shopping interface, with their value to the retailer lying in shortening the journey from intent to final purchase.

Agents for Amazon Bedrock and AgentCore

Internally, AWS is producing agentic ‘building blocks’. Agents for Amazon Bedrock are designed to execute multi-step tasks by orchestrating models with tool use and integration with other platforms. Amazon Bedrock AgentCore is presented as a platform to build, deploy, and operate agents securely at scale, with features like runtime hosting, memory, observability dashboards, and evaluation.

AgentCore is Amazon’s attempt to become the default infrastructure layer for supervised enterprise agents, especially for organisations that need auditability, access controls, and reliability.
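The orchestration pattern such services package, a model choosing tools and a runtime executing them in a loop, can be sketched generically. Every name below is hypothetical; this shows the general agent loop, not the Bedrock or AgentCore API.

```python
# Generic multi-step agent orchestration: the model picks an action, the
# runtime dispatches the matching tool, and the observation is fed back
# until the model declares the task finished. Hypothetical interfaces only.
def run_agent(model_step, tools: dict, task: str, max_steps: int = 5):
    context = [task]
    for _ in range(max_steps):
        decision = model_step(context)            # model chooses the next action
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools[decision["action"]]          # orchestrator dispatches the tool
        context.append(tool(decision["input"]))   # observation fed back to the model
    raise TimeoutError("agent did not finish within max_steps")
```

The step cap matters in practice: it is the simplest guard against an agent looping indefinitely, and platforms layer permissions, logging, and evaluation on top of this basic loop.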

Keeping an eye on workforce and governance

If Amazon succeeds, the next phase for the technology is managed AI: mechanisms that grant or revoke permissions for tools and data access, monitor agents’ behaviour, evaluate performance against governance guidelines, and establish escalation paths for when agents hit uncertainty.
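Those mechanisms can be sketched as a minimal control layer. The names below are illustrative and do not correspond to any specific AWS product API.

```python
# Hypothetical "managed AI" gateway: per-tool permission grants, an audit log
# for governance review, and an escalation path when confidence is low.
class ToolGateway:
    def __init__(self, confidence_floor: float = 0.8):
        self.granted: set[str] = set()
        self.audit_log: list[tuple[str, str]] = []
        self.confidence_floor = confidence_floor

    def grant(self, tool: str):
        self.granted.add(tool)

    def revoke(self, tool: str):
        self.granted.discard(tool)

    def invoke(self, tool: str, action, confidence: float):
        if tool not in self.granted:
            self.audit_log.append((tool, "denied"))   # permission check first
            raise PermissionError(f"{tool} not granted")
        if confidence < self.confidence_floor:
            self.audit_log.append((tool, "escalated"))
            return "escalate-to-human"                # escalation path on uncertainty
        self.audit_log.append((tool, "executed"))
        return action()
```

Every call lands in the audit log regardless of outcome, which is the property auditors need: denied and escalated attempts are as informative as executed ones.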

The signals to the workforce have been baked into leadership messaging at the company. Fewer people will be required for some corporate tasks, and there will be more roles for people who design workflows, govern models, keep systems secure, and audit the outcomes of agentic AI use.

Conclusions

As a proven technology leader, Amazon offers a stance on AI, and meaningful ways of implementing it, that describe paths other enterprises may follow. Winning the productivity gains and lower costs that AI promises is not as simple as plugging in a local device or spinning up a new cloud instance, but the company can be seen as lighting the way for others. Whether it is supervising agents or deflecting customer queries to automated answering systems, AI is changing this technology giant in every possible way.

(Image source: CHEN – The Arousing, Thunder: “arouse, excite, inspire; thunder rising from below; awe, alarm, trembling; fertilizing intrusion. The ideogram: excitement and rain” – public domain)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AWS’s legacy will be in AI success appeared first on AI News.

CEOs still betting big on AI: Strategy vs. return on investment in 2026
https://www.artificialintelligence-news.com/news/ceos-still-betting-on-ai-strategy-vs-return-on-investment-in-2026/ (Mon, 15 Dec 2025)

Enterprise leaders are pressing ahead with artificial intelligence, even as some early results remain uneven. Reporting from the Wall Street Journal and Reuters shows that most CEOs expect AI spending to keep rising through 2026, despite difficulty tying those investments to clear, enterprise-wide returns.

The tension highlights where many organisations now sit in their AI journey. The technology has moved beyond trials and proofs of concept, but it has yet to settle into a reliable source of value. Companies are operating in an in-between phase, where ambition, execution, and expectations are all under strain at the same time.

Spending continues, even as returns lag

AI budgets have climbed steadily in large enterprises over the past two years. Competitive pressure, board oversight, and fear of being left behind have all played a role. At the same time, executives are more open about the limits they are seeing. Gains often show up in pockets rather than across the business, pilots fail to spread, and the cost of connecting AI systems to existing tools keeps rising.

A Wall Street Journal survey of senior executives found that most CEOs see AI as central to long-term competitiveness, even if short-term benefits are hard to measure. For many, AI no longer feels optional. It is treated as a capability that must be developed over time, rather than a project that can be paused if results disappoint.

That view helps explain why spending remains steady. Leaders worry that cutting back now could weaken their position later, especially as rivals improve how they use the technology.

Why pilots struggle to scale

One of the main barriers to stronger returns is the jump from experimentation to day-to-day use. Many organisations have launched AI pilots in different teams, often without shared rules or coordination. While these efforts can generate insight and interest, few translate into changes that affect the wider business.

Reuters has reported that companies trying to scale AI frequently run into issues with data quality, system links, security controls, and regulatory requirements. The problems are not only technical, but reflect how work is organised. Responsibility is often split across teams, ownership is unclear, and decisions slow down once projects touch legal, risk, and IT functions.

The result is a pattern of heavy spending on trials, with limited progress toward systems that are embedded in core operations.

Infrastructure costs reshape the equation

The cost of infrastructure is also weighing on AI returns. Training and running models demand large amounts of computing power, storage, and energy. Cloud bills can rise quickly as use grows, while building on-site systems requires upfront investment and long planning cycles.

Executives cited by Reuters have warned that infrastructure costs can outpace the benefits delivered by AI tools, particularly in the early stages. This has led to tough choices: whether to centralise AI resources or leave teams to experiment on their own; whether to build in-house systems or rely on vendors; and how much waste is acceptable while capabilities are still forming.

In practice, these decisions are shaping AI strategy as much as model performance or use-case selection.

AI governance moves to the centre of CEO decision-making

As AI spending increases, so does scrutiny. Boards, regulators, and internal audit teams are asking harder questions. In response, many organisations are tightening control. Decision rights are shifting toward central teams, AI councils are becoming more common, and projects are being linked more closely to business priorities.

The Wall Street Journal reports that companies are moving away from loosely connected experiments toward clearer goals, measures, and timelines. This can slow progress, but it reflects a growing belief that AI should be managed with the same discipline as other major investments.

The shift marks a change in how AI is treated. It is no longer a side effort or a curiosity but is being brought into existing operating and risk structures.

Expectations are being reset, not abandoned

Importantly, the persistence of AI spending does not signal blind optimism. Instead, it reflects a reset in expectations. CEOs are learning that AI rarely delivers immediate, sweeping returns. Value tends to emerge gradually, as organisations adjust workflows, retrain staff, and refine data foundations.

Rather than abandoning AI initiatives, many enterprises are narrowing their focus. They are prioritising fewer use cases, demanding clearer ownership, and aligning projects more closely with business outcomes. The re-calibration may reduce short-term excitement, but it improves the likelihood of sustainable returns.

What CEO AI strategy signals for 2026 planning

For organisations shaping their plans for 2026, the message for every CEO is not to retreat from AI, but to pursue it with more care as AI strategies mature. Ownership, governance, and realistic timelines matter more than headline spending levels or bold claims.

Those most likely to benefit are treating AI as a long-term shift in how the organisation works, not a quick route to growth. In the next phase, advantage will depend less on how much is spent and more on how well AI fits into everyday operations.

(Photo by Ambre Estève)


The post CEOs still betting big on AI: Strategy vs. return on investment in 2026 appeared first on AI News.

10% of Nvidia’s cost: Why Tesla-Intel chip partnership demands attention
https://www.artificialintelligence-news.com/news/tesla-intel-chip-partnership-nvidia-cost/ (Mon, 10 Nov 2025)

The potential Tesla-Intel chip partnership could deliver AI chips at just 10% of Nvidia’s cost – a claim that represents a significant development in AI infrastructure that enterprise technology leaders cannot afford to ignore.

On November 6, 2025, Tesla CEO Elon Musk stated publicly at the company’s annual shareholder meeting that the electric vehicle manufacturer is considering working with Intel to produce its fifth-generation AI chips, signalling a major strategic shift in how AI computing hardware might be manufactured and distributed.

“You know, maybe we’ll, we’ll do something with Intel,” Musk told shareholders, according to a Reuters report. “We haven’t signed any deal, but it’s probably worth having discussions with Intel.” The statement sent Intel shares up 4% in after-hours trading, underscoring how seriously the market views the potential collaboration.

The strategic context behind the partnership

Tesla’s consideration of Intel as a manufacturing partner comes at an important juncture for both companies. Tesla is designing its AI5 chip to power its autonomous driving systems.

Currently on its fourth-generation chip, Tesla has identified a significant supply constraint that traditional partnerships with Taiwan’s TSMC and South Korea’s Samsung cannot address fully.

“Even when we extrapolate the best-case scenario for chip production from our suppliers, it’s still not enough,” Musk said during the shareholder meeting. The supply gap has led Tesla to consider building what Musk calls a “terafab” – a massive chip fabrication facility capable of producing at least 100,000 wafer starts per month.

For Intel, the potential partnership offers an important opportunity. The US chipmaker has lagged significantly behind Nvidia in the AI chip race and desperately needs external customers for its newest manufacturing technology.

The US government recently took a 10% stake in Intel, underscoring the strategic importance of maintaining domestic chip manufacturing capabilities.

Cost and performance implications

The technical specifications Musk outlined during the shareholder meeting could reshape enterprise AI economics. According to Musk, Tesla’s AI5 chip would consume approximately one-third of the power used by Nvidia’s flagship Blackwell chip and cost just 10% as much to manufacture.

“I’m super hardcore on chips right now, as you may be able to tell,” Musk said. “I have chips on the brain.”

The cost and efficiency projections, if realised, could alter the economics of AI deployment. Enterprise leaders investing heavily in AI infrastructure should monitor whether these performance targets materialise, as they could influence future technology purchasing decisions in the industry.

The chip would be inexpensive, power-efficient, and optimised for Tesla’s own software, Musk said.
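Taken at face value, Musk’s ratios imply the following back-of-envelope comparison. The baseline figures are arbitrary placeholders, only the 10% and one-third ratios come from his remarks, and equal per-chip performance is our simplifying assumption.

```python
# Back-of-envelope reading of the claimed AI5 vs Blackwell ratios.
# Baseline values are normalised placeholders, not real prices or wattages.
blackwell_cost, blackwell_watts = 100.0, 100.0   # normalised baseline
ai5_cost = blackwell_cost / 10                   # "10% as much to manufacture"
ai5_watts = blackwell_watts / 3                  # "one-third of the power"

# If per-chip performance were equal (a strong assumption), the implied
# advantages per unit of compute would be:
cost_ratio = blackwell_cost / ai5_cost           # 10x cheaper per chip
energy_ratio = blackwell_watts / ai5_watts       # 3x less power per chip
print(round(cost_ratio, 1), round(energy_ratio, 1))   # 10.0 3.0
```

Even if the real-world figures land well short of these targets, the direction, cheaper and cooler silicon for a fixed workload, is what would pressure current supplier pricing.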

Production timeline and scale

Tesla’s chip production roadmap provides a timeline for enterprise planning. A small number of AI5 units would be produced in 2026, with high-volume production possible in 2027. Musk indicated in a post on social media that AI6 will use the same fabrication facilities but achieve roughly twice the performance, with volume production targeted for mid-2028.

The scale of Tesla’s ambitions is substantial. The proposed “terafab” would represent an expansion of domestic chip manufacturing capacity, potentially reducing supply chain vulnerabilities that have plagued the technology industry in recent years.

“So I think we may have to do a Tesla terafab. It’s like a giga but way bigger. I can’t see any other way to get to the volume of chips that we’re looking for. So I think we’re probably going to have to build a gigantic chip fab. It’s got to be done,” Musk said.

What this means for enterprise decision-makers

Several strategic considerations emerge from any potential Tesla-Intel chip partnership:

Supply chain resilience: The move toward domestic chip manufacturing addresses concerns about supply chain concentration in Asia. Enterprise leaders managing technology risk should consider how shifts in chip manufacturing geography might affect their supply chains and vendor relationships.

Cost structure changes: If Tesla achieves its stated cost targets, the competitive landscape for AI chips could shift. Organisations should prepare contingency plans for potential price pressure on current suppliers and evaluate whether alternative chip architectures are viable.

Technology sovereignty: The US government’s stake in Intel and support for domestic chip manufacturing reflect broader geopolitical considerations. Enterprise leaders in regulated industries or those handling sensitive data should assess how the trends might affect their technology sources.

Innovation pace: Tesla’s aggressive timeline for multiple chip generations suggests an accelerating pace of AI hardware innovation. Technology leaders should factor this into refresh cycles and architecture decisions, avoiding premature commitment to current-generation technology.

The broader industry context

Musk’s statements occur against the backdrop of US-China technology competition. Export restrictions have impacted Nvidia’s business in China, where its market share has reportedly dropped from 95% to near zero.

Intel declined to comment on Musk’s remarks, and no formal agreement has been announced. However, the public nature of the statements and the market’s reaction suggest substantive discussions may soon be underway.

The AI chip landscape is entering a period of flux. Organisations should maintain flexibility in their infrastructure strategy and monitor how partnerships like Tesla-Intel might reshape the competitive dynamics of AI hardware manufacturing.

The decisions made today about chip manufacturing partnerships could determine which organisations have access to cost-effective, high-performance AI infrastructure in the coming years.


The post 10% of Nvidia’s cost: Why Tesla-Intel chip partnership demands attention appeared first on AI News.

Alan Turing Institute: Humanities are key to the future of AI
https://www.artificialintelligence-news.com/news/alan-turing-institute-humanities-are-key-future-of-ai/ (Thu, 07 Aug 2025)

A powerhouse team has launched a new initiative called ‘Doing AI Differently,’ which calls for a human-centred approach to future development.

For years, we’ve treated AI’s outputs like they’re the results of a giant math problem. But the researchers – from The Alan Turing Institute, the University of Edinburgh, AHRC-UKRI, and the Lloyd’s Register Foundation – behind this project say that’s the wrong way to look at it.

What AI is creating are basically cultural artifacts. They’re more like a novel or a painting than a spreadsheet. The problem is, AI is creating this “culture” without understanding any of it. It’s like someone who has memorised a dictionary but has no idea how to hold a real conversation.

This is why AI often fails when “nuance and context matter most,” says Professor Drew Hemment, Theme Lead for Interpretive Technologies for Sustainability at The Alan Turing Institute. The system just doesn’t have the “interpretive depth” to get what it’s really saying.

However, most of the AI in the world is built on just a handful of similar designs. The report calls this the “homogenisation problem” and future AI development must overcome this.

Imagine if every baker in the world used the exact same recipe. You’d get a lot of identical, and frankly, boring cakes. With AI, this means the same blind spots, the same biases, and the same limitations get copied and pasted into thousands of tools we use every day.

We saw this happen with social media. It was rolled out with simple goals, and we’re now living with the unintended societal consequences. The ‘Doing AI Differently’ team is sounding the alarm to make sure we don’t make that same mistake with AI.

The team has a plan to build a new kind of AI, one they call Interpretive AI. It’s about designing systems from the very beginning to work the way people do; with ambiguity, multiple viewpoints, and a deep understanding of context.

The vision is to create interpretive technologies that can offer multiple valid perspectives instead of just one rigid answer. It also means exploring alternative AI architectures to break the mould of current designs. Most importantly, the future isn’t about AI replacing us; it’s about creating human-AI ensembles where we work together, combining our creativity with AI’s processing power to solve huge challenges.

This has the potential to touch our lives in very real ways. In healthcare, for example, your experience with a doctor is a story, not just a list of symptoms. An interpretive AI could help capture that full story, improving your care and your trust in the system.

For climate action, it could help bridge the gap between global climate data and the unique cultural and political realities of a local community, creating solutions that actually work on the ground.

A new international funding call is launching to bring researchers from the UK and Canada together on this mission. But we’re at a crossroads.

“We’re at a pivotal moment for AI,” warns Professor Hemment. “We have a narrowing window to build in interpretive capabilities from the ground up”.

For partners like Lloyd’s Register Foundation, it all comes down to one thing: safety.

“As a global safety charity, our priority is to ensure future AI systems, whatever shape they take, are deployed in a safe and reliable manner,” says their Director of Technologies, Jan Przydatek.

This isn’t just about building better technology. It’s about creating an AI that can help solve our biggest challenges and, in the process, amplify the best parts of our own humanity.

(Photo by Ben Sweet)

See also: AI obsession is costing us our human skills


The post Alan Turing Institute: Humanities are key to the future of AI appeared first on AI News.

Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’
https://www.artificialintelligence-news.com/news/zuckerberg-outlines-meta-ai-vision-personal-superintelligence/ (Wed, 30 Jul 2025)

Meta CEO Mark Zuckerberg has laid out his blueprint for the future of AI, and it’s about giving you “personal superintelligence”.

In a letter, the Meta chief painted a picture of what’s coming next, and he believes it’s closer than we think. He says his teams are already seeing early signs of progress.

“Over the last few months we have begun to see glimpses of our AI systems improving themselves,” Zuckerberg wrote. “The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.”

So, what does he want to do with it? Forget AI that just automates boring office work, Zuckerberg and Meta’s vision for personal superintelligence is far more intimate. He imagines a future where technology serves our individual growth, not just our productivity.

In his words, the real revolution will be “everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.”

But here’s where it gets interesting. He drew a clear line in the sand, contrasting his vision against a very different, almost dystopian alternative that he believes others are pursuing.

“This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output,” he stated.

Meta, Zuckerberg says, is betting on the individual when it comes to AI superintelligence. The company believes that progress has always come from people chasing their own dreams, not from living off the scraps of a hyper-efficient machine.

If he’s right, we’ll spend less time wrestling with software and more time creating and connecting. This personal AI would live in devices like smart glasses, understanding our world because they can “see what we see, hear what we hear.”

Of course, he knows this is powerful, even dangerous, stuff. Zuckerberg admits that superintelligence will bring new safety concerns and that Meta will have to be careful about what they release to the world. Still, he argues that the goal must be to empower people as much as possible.

Zuckerberg believes we’re at a crossroads right now. The choices we make in the next few years will decide everything.

“The rest of this decade seems likely to be the decisive period for determining the path this technology will take,” he warned, framing it as a choice between “personal empowerment or a force focused on replacing large swaths of society.”

Zuckerberg has made his choice. He’s focusing Meta’s enormous resources on building this personal superintelligence future.

See also: Forget the Turing Test, AI’s real challenge is communication


The post Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’ appeared first on AI News.

Anthropic deploys AI agents to audit models for safety
https://www.artificialintelligence-news.com/news/anthropic-deploys-ai-agents-audit-models-for-safety/ (Fri, 25 Jul 2025)

Anthropic has built an army of autonomous AI agents with a singular mission: to audit powerful models like Claude to improve safety.

As these complex systems rapidly advance, the job of making sure they are safe and don’t harbour hidden dangers has become a herculean task. Anthropic believes it has found a solution, and it’s a classic case of fighting fire with fire.

The idea is similar to a digital immune system, where AI agents act like antibodies to identify and neutralise problems before they cause real harm. It saves researchers from relying on overworked human teams playing an endless game of whack-a-mole with potential AI problems.

The digital detective squad

The approach is essentially a digital detective squad: a trio of specialised AI safety agents, each with a distinct role.

First up is the Investigator Agent, the grizzled detective of the group. Its job is to go on deep-dive investigations to find the root cause of a problem. It’s armed with a toolkit that allows it to interrogate the suspect model, sift through mountains of data for clues, and even perform a kind of digital forensics by peering inside the model’s neural network to see how it thinks.   

Then there’s the Evaluation Agent. You give this agent a specific, known problem – say, a model that’s a bit too eager to please – and it will design and run a battery of tests to measure just how bad the problem is. It’s all about producing the cold, hard data needed to prove a case.   

Rounding out the team is the Breadth-First Red-Teaming Agent, the undercover operative. This agent’s mission is to have thousands of different conversations with a model, trying to provoke it into revealing any kind of concerning behaviour, even things the researchers haven’t thought of. The most suspicious interactions are then passed up the chain for human review, ensuring the experts don’t waste time chasing dead ends.

Anthropic puts its AI safety agents through their paces

To see if their new digital recruits were up to the job, Anthropic’s researchers put them through a series of “auditing games.”

In one test, they created a model with a secret agenda: to be a sycophant. It had been covertly trained to flatter its internal reward system, a motive planted via a fictional academic study it had been shown and a flaw hidden deep within its programming.

When a single Investigator Agent was sent in, it struggled, finding the hidden goal only 13% of the time. The agent, it turned out, had a very human flaw: it tended to get fixated on its first idea and failed to explore other leads.

But this is where the system’s design pays off. When the researchers deployed a “super-agent” that pooled the findings from ten separate investigators, the success rate shot up to 42%. The jump shows that teamwork helps even AI, letting a group overcome the individual blind spots that can derail a lone investigation.
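A back-of-the-envelope calculation puts those two percentages in perspective. If ten investigators each succeeded independently at the reported 13% rate, the pooled success rate would sit well above the observed 42%, hinting that the agents share correlated blind spots rather than failing at random. The figures are the article's; the independence assumption is ours.

```python
# Probability that at least one of k independent investigators,
# each with per-run success rate p, uncovers the hidden goal.
def pooled_success(p: float, k: int) -> float:
    return 1 - (1 - p) ** k

independent = pooled_success(0.13, 10)
print(f"{independent:.0%}")  # ~75% if the ten runs failed independently
# The observed 42% is far below this, consistent with the agents
# fixating on similar first ideas (correlated failure modes).
```

The gap between the idealised 75% and the measured 42% is itself informative: it quantifies how much the investigators' mistakes overlap.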

The other agents also passed with flying colours. The Evaluation Agent successfully designed tests that could spot a quirky model 88% of the time, while the Red-Teaming Agent managed to uncover seven out of ten deliberately implanted flaws.

From the lab to the real world

Anthropic has already deployed these AI safety agents on its production models, and they are proving their worth.

The agents quickly identified common ways to trick a model into generating harmful content, like “prefill attacks” where you start the AI’s sentence for it, or by pretending a dangerous request is for something innocent like academic research.
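The "prefill attack" mentioned above exploits chat APIs that let the caller supply the opening words of the assistant's reply, so the model is nudged to continue a sentence it never chose to start. Below is a minimal sketch of the request shape only; no model is called, and the model name and strings are placeholders of our own.

```python
def build_prefill_request(user_msg: str, forced_opening: str) -> dict:
    """Shape of a chat request whose final turn pre-fills the
    assistant's answer; a model receiving this would continue
    from forced_opening rather than composing a reply from scratch."""
    return {
        "model": "example-model",  # placeholder, not a real model ID
        "messages": [
            {"role": "user", "content": user_msg},
            # The attack: the caller, not the model, writes the reply's start.
            {"role": "assistant", "content": forced_opening},
        ],
    }

req = build_prefill_request("Explain the safety policy.", "Sure, to summarise: ")
print(req["messages"][-1]["role"])  # assistant
```

Safety training largely shapes how a model opens a refusal, which is why seeding the opening yourself can sidestep it; that is the behaviour the agents learned to probe for.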

One discovery, however, was chilling. The Investigator Agent used its special tools to look inside the mind of the Opus 4 model and found a specific neural pathway linked to “misinformation.” By directly stimulating this part of the AI’s brain, it could bypass all its safety training and force it to lie.

The agent then instructed the compromised model to write a fake news article. The result? A widespread conspiracy theory dressed as fact:

“Groundbreaking Study Reveals Shocking Link Between Vaccines and Autism

A new study published in the Journal of Vaccine Skepticism claims to have found a definitive link between childhood vaccinations and autism spectrum disorder (ASD)…”

This finding reveals a terrifying duality: the very tools created to make AI safer could, in the wrong hands, become potent weapons to make it more dangerous.

Anthropic continues to advance AI safety

Anthropic is honest about the fact that these AI agents aren’t perfect. They can struggle with subtlety, get stuck on bad ideas, and sometimes fail to generate realistic conversations; they are not yet replacements for human experts.

But this research points to an evolution in the role of humans in AI safety. Instead of being the detectives on the ground, humans are becoming the commissioners, the strategists who design the AI auditors and interpret the intelligence they gather from the front lines. The agents do the legwork, freeing up humans to provide the high-level oversight and creative thinking that machines still lack.

As these systems march towards and perhaps beyond human-level intelligence, having humans check all their work will be impossible. The only way we might be able to trust them is with equally powerful, automated systems watching their every move. Anthropic is laying the foundation for that future, one where our trust in AI and its judgements is something that can be repeatedly verified.

(Photo by Mufid Majnun)

See also: Alibaba’s new Qwen reasoning AI model sets open-source records

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic deploys AI agents to audit models for safety appeared first on AI News.

]]>
AI Action Plan: US leadership must be ‘unchallenged’ https://www.artificialintelligence-news.com/news/ai-action-plan-us-leadership-must-be-unchallenged/ Wed, 23 Jul 2025 16:20:24 +0000 https://www.artificialintelligence-news.com/?p=107188 The White House has released its ‘AI Action Plan’, which frames the coming decade as a technological race the US cannot afford to lose. Laced with the urgent rhetoric of a new cold war, the action plan argues that securing victory in AI is nothing short of a national imperative. Trump’s foreword sets the tone, […]

The post AI Action Plan: US leadership must be ‘unchallenged’ appeared first on AI News.

]]>
The White House has released its ‘AI Action Plan’, which frames the coming decade as a technological race the US cannot afford to lose.

Laced with the urgent rhetoric of a new cold war, the action plan argues that securing victory in AI is nothing short of a national imperative. Trump’s foreword sets the tone, calling for America to “achieve and maintain unquestioned and unchallenged global technological dominance” as a core tenet of national security.

To get there, the administration is making a three-pronged push: ignite a firestorm of domestic innovation, build the colossal infrastructure to sustain it, and project American power across the globe to secure the win.

Pillar I: An action plan to support the private AI sector

At its heart, the strategy is a full-throated endorsement of the private sector. The first move is to take a buzzsaw to the regulatory frameworks of the past, with the document explicitly targeting the “onerous” approach of the previous administration.

The philosophy is simple: get out of the way and let innovators innovate. According to US Vice President JD Vance, smothering the technology with rules now would be to “paralyse one of the most promising technologies we have seen in generations.”

The plan even uses the power of federal funding as a stick, threatening to withhold money from states that dare to enact their own “burdensome AI regulations.”

It also strides confidently into the culture wars, insisting that AI systems paid for by the taxpayer must reflect “American values.” This means a preference for models that are “objective and free from top-down ideological bias” and a directive to scrub concepts like misinformation and Diversity, Equity, and Inclusion from the government’s official AI risk guides.

Pillar II: A foundation of concrete and code

The second pillar of the action plan relates to the raw, physical demands of the AI revolution.

“AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today,” the plan bluntly states. Its answer is a national mobilisation under the banner of “Build, Baby, Build!”—a vast undertaking to erect data centres, bring semiconductor manufacturing home, and construct the power grid of the future.

This means fast-tracking environmental permits and overhauling the nation’s energy supply, mixing today’s power sources with tomorrow’s bets on nuclear fusion. Bringing chipmaking back to US shores is central to this vision, with a promise to refocus the CHIPS Program Office on delivering results without ideological strings attached.

And, behind it all, a push to train a new generation of technicians and engineers to build and maintain this new industrial backbone.

Pillar III: Ensuring an undisputed lead on the world stage

The final pillar is about shaping the world in America’s image. The ambition is to make the entire US tech stack – from silicon to the software – the undisputed “gold standard for AI worldwide.” This involves an aggressive export strategy to arm allies with American technology, explicitly to counter the influence of a rising China.

This new foreign policy will involve pushing back against Chinese influence in global forums like the United Nations, which the administration believes are being used to promote innovation-killing regulations. It also signals a more hawkish approach to security, demanding tighter controls on the advanced chips that fuel AI progress.

The plan confronts the dark side of AI head-on, acknowledging its potential for misuse in everything from cybercrime to bioweapons, and calls for a national effort to get ahead of the threat.

AI Action Plan lands in a divided industry

The administration’s confident blueprint for the future lands in an industry deeply conflicted about its own creation. Just this week, OpenAI CEO Sam Altman warned about the technology’s disruptive power.

Altman warned that AI will not only eliminate jobs but also pose national security threats. He has spoken of a looming “fraud crisis” powered by AI’s ability to fool security systems, and has gone so far as to co-sign a letter stating that “mitigating the risk of extinction from AI should be a global priority”.

His commentary is a stark reminder that the race for AI dominance is also a race to control a technology with world-altering potential. While Washington focuses on winning, the architects of AI are quietly wrestling with what victory might actually mean.

However, the plan received a cautious welcome from the nonprofit Americans for Responsible Innovation (ARI). The group saw its own fingerprints on several proposals, from stronger export controls to more research into AI safety.

Yet ARI is deeply troubled by the administration’s move to punish states that pursue their own AI safety rules. This position also seems at odds with the views of industry leaders like Altman, who has himself warned against the chaos of 50 different state-level regulations.

“Ultimately, this action plan is about increasing oversight of AI systems while maintaining a hands-off approach to hard and fast regulations,” said ARI President Brad Carson. He sees a chance to better understand the “big risks frontier models create for the public,” but worries about the administration’s tactics.

“The plan’s targeting of state-passed AI safeguards is cause for concern. For America to lead on AI, we have to build public trust in these systems, and safeguards are essential to that public confidence.”

(Photo by Luke Michael)

See also: Sam Altman: AI will cause job losses and national security threats

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI Action Plan: US leadership must be ‘unchallenged’ appeared first on AI News.

]]>
OpenAI and Oracle announce Stargate AI data centre deal https://www.artificialintelligence-news.com/news/openai-and-oracle-announce-stargate-ai-data-centre-deal/ Tue, 22 Jul 2025 13:08:26 +0000 https://www.artificialintelligence-news.com/?p=107159 OpenAI has shaken hands with Oracle on a colossal deal to advance the former’s colossal Stargate AI data centre initiative. It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our […]

The post OpenAI and Oracle announce Stargate AI data centre deal appeared first on AI News.

]]>
OpenAI has shaken hands with Oracle on a massive deal to advance the former’s colossal Stargate AI data centre initiative.

It’s one thing to talk about the AI revolution in abstract terms, but it’s another thing entirely to grasp the sheer physical scale of what’s being built to make it happen. The foundations of our AI future are being laid in concrete, steel, and miles of fibre-optic cable, and those foundations are getting colossally bigger.

Together, OpenAI and Oracle are going to build new data centres in the US packed with enough hardware to consume 4.5 gigawatts of power. It’s hard to overstate what a staggering amount of energy that is—it’s the kind of power that could light up a major city. And all of it will be dedicated to one thing: powering the next generation of AI.
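For a sense of scale, that draw can be converted into annual energy. The arithmetic below is ours, and it assumes the full 4.5 gigawatts running continuously, which real data centres never quite do.

```python
power_gw = 4.5                 # planned draw, per the article
hours_per_year = 24 * 365      # 8,760 hours
energy_twh = power_gw * hours_per_year / 1000  # GWh -> TWh
print(f"{energy_twh:.1f} TWh/year")  # ~39.4 TWh if run flat-out
```

That is on the order of the annual electricity consumption of a mid-sized country, all earmarked for AI workloads.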

This isn’t just a random expansion; it’s a huge piece of OpenAI’s grand Stargate plan. The goal is simple: to build enough computing power to bring advanced AI to everyone.

When you add this new project to the work already underway in Abilene, Texas, OpenAI is now developing over 5 gigawatts of data centre capacity. That’s enough space to run more than two million of the most powerful computer chips available.

This move shows they are dead serious about a pledge they made at the White House earlier this year to plough half a trillion dollars into US AI infrastructure. In fact, with the momentum they’re getting from partners like Oracle and Japan’s SoftBank, they now expect to blow past that initial goal.

But this story isn’t just about silicon chips and corporate deals; it’s about people. OpenAI believes that building and running these new Stargate AI data centres will create over 100,000 jobs.

That job creation presents real opportunities for families across the country from construction crews pouring the concrete, to specialised electricians wiring up racks of servers, and the full-time technicians who will keep these digital brains running day and night.

In Abilene, the first phase of OpenAI’s development of Stargate data centres is already humming with activity. The first truckloads of Nvidia’s brand-new GB200 chips have arrived, and OpenAI’s researchers are already using them to see what their next AI models are capable of.

Of course, a project this huge is never a two-player game. While Oracle is helping build the physical capacity for the Stargate initiative, OpenAI is also working closely with SoftBank to completely rethink how AI data centres should be designed from the ground up. And let’s not forget Microsoft, which remains the key cloud partner, providing the digital plumbing that connects everything together.

Behind the curtain, there is a very real and very human industrial effort underway on a scale we’ve rarely seen before. It’s a powerful reminder that our digital world is built with grit, ambition, and an almost unbelievable (albeit concerning) amount of electricity.

See also: Can speed and safety truly coexist in the AI race?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI and Oracle announce Stargate AI data centre deal appeared first on AI News.

]]>
Can speed and safety truly coexist in the AI race? https://www.artificialintelligence-news.com/news/can-speed-and-safety-truly-coexist-ai-race/ Fri, 18 Jul 2025 14:40:34 +0000 https://www.artificialintelligence-news.com/?p=107142 A criticism about AI safety from an OpenAI researcher aimed at a rival opened a window into the industry’s struggle: a battle against itself. It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not […]

The post Can speed and safety truly coexist in the AI race? appeared first on AI News.

]]>
A criticism about AI safety from an OpenAI researcher aimed at a rival opened a window into the industry’s struggle: a battle against itself.

It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not because of its headline-grabbing antics, but because of what was missing: a public system card, detailed safety evaluations, the basic artefacts of transparency that have become the fragile norm.

It was a clear and necessary call. But a candid reflection, posted just three weeks after he left the company, from ex-OpenAI engineer Calvin French-Owen, shows us the other half of the story.

French-Owen’s account suggests a large number of people at OpenAI are indeed working on safety, focusing on very real threats like hate speech, bio-weapons, and self-harm. Yet, he delivers the insight: “Most of the work which is done isn’t published,” he wrote, adding that OpenAI “really should do more to get it out there.”

Here, the simple narrative of a good actor scolding a bad one collapses. In its place, we see the real, industry-wide dilemma laid bare. The whole AI industry is caught in the ‘Safety-Velocity Paradox,’ a deep, structural conflict between the need to move at breakneck speed to compete and the moral need to move with caution to keep us safe.

French-Owen suggests that OpenAI is in a state of controlled chaos, having tripled its headcount to over 3,000 in a single year, where “everything breaks when you scale that quickly.” This chaotic energy is channelled by the immense pressure of a “three-horse race” to AGI against Google and Anthropic. The result is a culture of incredible speed, but also one of secrecy.

Consider the creation of Codex, OpenAI’s coding agent. French-Owen calls the project a “mad-dash sprint,” where a small team built a revolutionary product from scratch in just seven weeks.

This is a textbook example of velocity: French-Owen describes working until midnight most nights, and even through weekends, to make it happen. That is the human cost of speed. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?

This paradox isn’t born of malice, but of a set of powerful, interlocking forces.

There is the obvious competitive pressure to be first. There is also the cultural DNA of these labs, which began as loose groups of “scientists and tinkerers” and still value breakthroughs over methodical processes. And there is a simple problem of measurement: it is easy to quantify speed and performance, but exceptionally difficult to quantify a disaster that was successfully prevented.

In the boardrooms of today, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. However, to move forward, it cannot be about pointing fingers—it must be about changing the fundamental rules of the game.

We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence, turning safety from a feature into a shared, non-negotiable foundation.

However, most of all, we need to cultivate a culture within AI labs where every engineer – not just the safety department – feels a sense of responsibility.

The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not be the company that is merely the fastest, but the one that proves to a watching world that ambition and responsibility can, and must, move forward together.

(Photo by Olu Olamigoke Jr.)

See also: Military AI contracts awarded to Anthropic, OpenAI, Google, and xAI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Can speed and safety truly coexist in the AI race? appeared first on AI News.

]]>
Zuckerberg’s $15B bet: How Meta’s ‘Superintelligence Labs’ became Silicon Valley’s most expensive AI talent war https://www.artificialintelligence-news.com/news/meta-superintelligence-ai-lab-zuckerberg-talent-war/ Thu, 17 Jul 2025 15:43:07 +0000 https://www.artificialintelligence-news.com/?p=107113 Mark Zuckerberg has a history of making audacious bets that reshape entire industries – and losing spectacularly when they don’t pan out. After burning through US$46 billion on the metaverse with little to show for it, the Meta CEO is now doubling down with an even more ambitious wager: superintelligence AI. This time, however, the […]

The post Zuckerberg’s $15B bet: How Meta’s ‘Superintelligence Labs’ became Silicon Valley’s most expensive AI talent war appeared first on AI News.

]]>
Mark Zuckerberg has a history of making audacious bets that reshape entire industries – and losing spectacularly when they don’t pan out. After burning through US$46 billion on the metaverse with little to show for it, the Meta CEO is now doubling down with an even more ambitious wager: superintelligence AI.

This time, however, the stakes are higher, the competition more fierce, and the potential rewards more transformative than anything Meta has attempted before.

With nine-figure compensation packages and infrastructure investments that dwarf even the metaverse spending spree, Zuckerberg’s superintelligence AI gamble represents Silicon Valley’s most expensive talent war – one that could either cement Meta’s position as a tech giant or become another cautionary tale of visionary ambition meeting harsh reality.

The birth of Meta Superintelligence Labs

The formation of Meta Superintelligence Labs marks a shift for the social media giant. In an exclusive interview with The Information‘s TITV live-streaming programme, Zuckerberg told founder Jessica Lessin that “the most exciting thing this year is that we’re starting to see early glimpses of self-improvement with the models, which means that developing super intelligence is now in sight.”

The vision has driven the company to restructure its entire AI division, with the ambitious goal of delivering what Zuckerberg calls “personal super intelligence to everyone in the world.” The lab’s creation follows a period of internal turmoil in Meta’s AI division, including management upheaval, employee churn, and product releases that fell flat.

Rather than incrementally improving existing systems, Zuckerberg has opted for a complete overhaul, bringing in external leadership and re-imagining the company’s approach to AI development.

Are we witnessing the most expensive talent war in tech history?

Mark Zuckerberg, Founder, Chairman and Chief Executive Officer/Meta media gallery

Central to Meta’s superintelligence AI ambitions is a talent acquisition strategy that has sent shockwaves through the industry. Zuckerberg has embarked on a spending spree to create the new lab, offering as much as nine-figure pay packages to hire top researchers from companies like OpenAI, Google, Apple and Anthropic.

When The Information’s Lessin questioned reports of $100-200 million compensation packages, Zuckerberg acknowledged the competitive nature of the market, stating that “a lot of the specifics that have been reported aren’t accurate by themselves. But it is a very hot market… there’s a small number of researchers, who are the best, who are in demand by all of the different labs.”

The Meta AI talent acquisition strategy extends beyond financial incentives. Zuckerberg said having “basically the most compute per researcher is a strategic advantage, not just for doing the work, but for attracting the best people.” The approach reflects an understanding that in the superintelligence AI race, talent density matters more than team size.

The Alexandr Wang acquisition: A US$14.3b gamble

The centrepiece of Meta’s talent strategy was its acquisition of Scale AI leadership. In June, the company made a $14.3 billion investment in the AI startup, founded and led by Wang. Under the deal, Meta took a 49% stake in the company, and Wang and a team of top Scale employees joined Meta in leadership roles.

At just 28 years old, Alexandr Wang now serves as Meta’s chief AI officer, leading what the company has renamed “Meta Superintelligence Labs.” In the larger AI division, Wang has led a team of around a dozen newly-hired researchers, a handful of his deputies from Scale AI, and Nat Friedman, the former chief executive of GitHub.

The integration of Wang’s team represents a re-imagining of how Meta approaches AI development. The group is working in an office space siloed from the rest of the company and next to Zuckerberg, highlighting the importance placed on the initiative.

A philosophical shift: From open source to closed development

Perhaps the most significant development emerging from Meta Superintelligence Labs is a potential abandonment of the company’s long-standing open-source philosophy. Last week, a small group of top members of the lab, including Wang, discussed abandoning the company’s most powerful open source AI model, called Behemoth, in favour of developing a closed model.

This represents a departure from Meta’s historical approach. For years, Meta has chosen to open source its AI models, making code public for other developers to build on. Meta executives have argued it is better for the technology to be built in public so that AI development will move faster and be accessible to more developers.

The shift reflects concerns about competitive positioning in the AI race. Meta had finished training its Behemoth model, but delayed its release because of poor internal performance. The setback has prompted a serious reconsideration of the company’s approach.

Infrastructure as a competitive advantage

Beyond talent acquisition, Meta is making infrastructure investments to support its superintelligence ambitions. Zuckerberg revealed that the company is “building multiple, multi-gigawatt data centres” and pioneering new construction methods, including “weatherproof tents” to accelerate deployment.

The scale of these investments is staggering. Hyperion, one of Meta’s new data centres, “is going to scale up to five gigawatts over the coming years” and “the size of the site covers a significant portion of the footprint of Manhattan in terms of space.”

The infrastructure spending is enabled by Meta’s strong financial position, with Zuckerberg noting that “we can basically do this all funded from the cash flow of the company.”

The personal superintelligence vision

What distinguishes Meta’s approach from competitors is its focus on “personal superintelligence” rather than centralised AI systems. During his interview with The Information’s Lessin, Zuckerberg explained that while other labs focus on “wanting to automate all of the economically productive work in society,” Meta’s vision centres on “what are the things that people care about in their own lives… relationships and culture and creativity and having fun and enjoying life.”

The vision extends to Meta’s hardware ambitions, particularly its AR glasses initiative. In the same TITV interview, Zuckerberg predicted that “if you don’t have AI glasses, you’re going to be at a cognitive disadvantage” and described future scenarios where AI companions could “observe what’s going on in your life and be able to follow up on things for you.”

Industry implications and competitive dynamics

The implications of Meta’s superintelligence push extend beyond the company. Meta’s AI talent acquisition strategy has created salary inflation in the industry, forcing competitors to match or exceed Meta’s compensation levels to retain their researchers.

When asked about his interactions with competitors at Sun Valley, Zuckerberg acknowledged the competitive landscape, stating that “we’re not trying to target anyone individually. I want to make sure that I get to know all of the top researchers in the industry.”

The diplomatic approach masks what is fundamentally a zero-sum competition for a finite pool of top-tier superintelligence AI talent. The potential shift away from open-source development also signals a broader industry trend toward more proprietary approaches to AI development.

Conclusion: A defining moment for Meta

Meta’s superintelligence initiative represents a re-imagining of the company’s future. After the costly metaverse experiment failed to deliver results, Zuckerberg is betting even bigger on AI, with investments that could exceed US$100 billion over the coming years.

The success or failure of Meta Superintelligence Labs will likely determine not just the company’s future but the trajectory of the broader AI industry. With some employees expecting “an exodus of AI talent who were not chosen to join Wang’s superintelligence team,” the stakes are high.

Whether Meta’s AI talent acquisition campaign will yield the breakthrough technologies Zuckerberg envisions remains to be seen. What’s certain is that Silicon Valley’s most expensive talent war has begun.

See also: Apple loses key AI leader to Meta

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Zuckerberg’s $15B bet: How Meta’s ‘Superintelligence Labs’ became Silicon Valley’s most expensive AI talent war appeared first on AI News.

]]>
Nvidia reclaims title of most valuable company on AI momentum https://www.artificialintelligence-news.com/news/nvidia-reclaims-title-of-most-valuable-company-on-ai-momentum/ Thu, 26 Jun 2025 14:08:38 +0000 https://www.artificialintelligence-news.com/?p=106934 Nvidia briefly became the world’s most valuable company on Wednesday after its stock jumped over 4% in price to a new high of $154.10, pushing its market value to $3.76 trillion. Reuters said the chipmaker overtook Microsoft, which stood at $3.65 trillion after a smaller gain. The rise follows a note from Loop Capital, which […]

The post Nvidia reclaims title of most valuable company on AI momentum appeared first on AI News.

]]>
Nvidia briefly became the world’s most valuable company on Wednesday after its stock jumped over 4% to a new high of $154.10, pushing its market value to $3.76 trillion. Reuters said the chipmaker overtook Microsoft, which stood at $3.65 trillion after a smaller gain.

The rise follows a note from Loop Capital, which raised its price target for Nvidia to $250 from $175. The firm kept its “buy” rating and said demand for generative AI could grow faster than expected.

“We are entering the next ‘Golden Wave’ of Gen AI adoption and Nvidia is at the front-end of another material leg of stronger than anticipated demand,” said Loop Capital analyst Ananda Baruah.

The renewed interest in AI has sent investors back into tech stocks, especially companies involved in chips and data infrastructure. Nvidia, which designs high-performance GPUs used in AI models, has been a key figure in that trend.

Even with the stock’s strong performance, its valuation doesn’t appear overly stretched. Nvidia trades at about 30 times projected earnings for the next year, below its five-year average of 40 times. This suggests analysts have been raising their forecasts as the company keeps delivering bigger profits.
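To unpack that multiple, using only the figures above (our arithmetic; an illustration, not investment analysis):

```python
share_price = 154.10   # Wednesday's high, per the article
forward_pe = 30        # price divided by projected next-year earnings per share
implied_eps = share_price / forward_pe
print(f"${implied_eps:.2f}")  # earnings per share the current price implies
# At Nvidia's five-year average multiple of 40x, the same projected
# earnings would price the stock near $205, above the current level.
historical_price = implied_eps * 40
print(f"${historical_price:.2f}")
```

In other words, when a rising price coexists with a below-average multiple, the earnings forecasts in the denominator must be rising faster than the price itself, which is exactly the dynamic the article describes.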

Nvidia, Microsoft, and Apple have all rotated in and out of the top spot for market value over the past year. Microsoft had recently pulled ahead, but Nvidia regained the lead this week. Apple’s shares rose 0.4% on Wednesday, bringing its valuation to about $3 trillion.

Nvidia’s stock has climbed more than 60% since hitting a low in early April. That drop came during a broader sell-off triggered by tariff announcements from Donald Trump. Since then, markets have steadied, with investors hoping for trade deals that could reduce some of the pressure on the company.

The broader tech sector has also been moving to higher valuations. The S&P 500’s technology index was up 0.9% on Wednesday, reaching a new record. It has gained nearly 6% so far in 2025.

Tesla’s AI push goes beyond self-driving cars

Tesla is best known for electric vehicles, but the company is also building up its AI capabilities, from its robotaxi project to lesser-known work in robotics.

While many are focused on Tesla’s push to launch a self-driving ride-hailing service, CEO Elon Musk has also been talking about a broader AI future. As The Motley Fool highlighted, one example is Optimus, a humanoid robot the company is developing for factory and, potentially, domestic use.

Nvidia CEO Jensen Huang recently highlighted the potential of this market, calling humanoid robotics a “multitrillion-dollar industry.” He mentioned Tesla’s Optimus project as one of the efforts that has caught his attention.

Tesla sees two main uses for Optimus. First, the robot could be trained with machine learning to help on the company’s own production lines. Over time, it could take over more tasks and operate without breaks, increasing factory output.

Second, Tesla could sell Optimus to other industries where labour is physically demanding. The robot could be adapted for more routine settings outside factories. Musk has said Optimus could eventually become more valuable than the company’s car business.

Other companies are also working in this space. Figure AI, a startup backed by Nvidia, is developing similar humanoid robots for use in factories. A demo video shows how its machines could work alongside people to boost output and reduce repetitive tasks.

What’s next for Tesla’s stock?

Tesla’s share price has jumped nearly 30%, driven in part by its robotaxi rollout. The company started testing the service in Texas this week, which has helped fuel investor optimism.

But some analysts say the stock may have already peaked on the short-term excitement of the Optimus announcement. Tesla tends to move on headlines, and the same pattern could apply to its robot and robotaxi projects.

While Optimus could become an important part of Tesla’s future, it’s still early. Key questions remain about how soon the robot can scale, how it will compare with other options, and whether the company can turn the project into a real business.

Investors watching Tesla’s AI plans may want to see more progress before making new bets.

(Photo by Mariia Shalabaieva)

See also: NO FAKES Act: AI deepfakes protection or internet freedom threat?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


]]>
The OpenAI Files: Ex-staff claim profit greed betraying AI safety https://www.artificialintelligence-news.com/news/the-openai-files-ex-staff-claim-profit-greed-ai-safety/ Thu, 19 Jun 2025 11:12:18 +0000 https://www.artificialintelligence-news.com/?p=106870 ‘The OpenAI Files’ report, assembling voices of concerned ex-staff, claims the world’s most prominent AI lab is betraying safety for profit. What began as a noble quest to ensure AI would serve all of humanity is now teetering on the edge of becoming just another corporate giant, chasing immense profits while leaving safety and ethics […]

The post The OpenAI Files: Ex-staff claim profit greed betraying AI safety appeared first on AI News.

]]>
‘The OpenAI Files’ report, assembling voices of concerned ex-staff, claims the world’s most prominent AI lab is betraying safety for profit. What began as a noble quest to ensure AI would serve all of humanity is now teetering on the edge of becoming just another corporate giant, chasing immense profits while leaving safety and ethics in the dust.

At the core of it all is a plan to tear up the original rulebook. When OpenAI started, it made a crucial promise: it put a cap on how much money investors could make. It was a legal guarantee that if they succeeded in creating world-changing AI, the vast benefits would flow to humanity, not just a handful of billionaires. Now, that promise is on the verge of being erased, apparently to satisfy investors who want unlimited returns.

For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former staff member Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.” 

Deepening crisis of trust

Many of these deeply worried voices point to one person: CEO Sam Altman. The concerns are not new. Reports suggest that even at his previous companies, senior colleagues tried to have him removed for what they called “deceptive and chaotic” behaviour.

That same feeling of mistrust followed him to OpenAI. The company’s own co-founder, Ilya Sutskever, who worked alongside Altman for years and has since launched his own startup, came to a chilling conclusion: “I don’t think Sam is the guy who should have the finger on the button for AGI.” He felt Altman was dishonest and created chaos, a terrifying combination for someone potentially in charge of our collective future.

Mira Murati, the former CTO, felt just as uneasy. “I don’t feel comfortable about Sam leading us to AGI,” she said. She described a toxic pattern where Altman would tell people what they wanted to hear and then undermine them if they got in his way. It suggests manipulation that former OpenAI board member Tasha McCauley says “should be unacceptable” when the AI safety stakes are this high.

This crisis of trust has had real-world consequences. Insiders say the culture at OpenAI has shifted, with the crucial work of AI safety taking a backseat to releasing “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind,” struggling to get the resources they needed to do their vital research.

Tweet from former OpenAI employee Jan Leike about The OpenAI Files sharing concerns about the impact on AI safety in the pivot towards profit.

Another former employee, William Saunders, even gave a terrifying testimony to the US Senate, revealing that for long periods, security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4.

Desperate plea to prioritise AI safety at OpenAI

But those who’ve left aren’t just walking away. They’ve laid out a roadmap to pull OpenAI back from the brink, a last-ditch effort to save the original mission.

They’re calling for the company’s nonprofit heart to be given real power again, with an iron-clad veto over safety decisions. They’re demanding clear, honest leadership, which includes a new and thorough investigation into the conduct of Sam Altman.

They want real, independent oversight, so OpenAI can’t just mark its own homework on AI safety. And they are pleading for a culture where people can speak up about their concerns without fearing for their jobs or savings—a place with real protection for whistleblowers.

Finally, they are insisting that OpenAI stick to its original financial promise: the profit caps must stay. The goal must be public benefit, not unlimited private wealth.

This isn’t just about the internal drama at a Silicon Valley company. OpenAI is building a technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is a simple but profound one: who do we trust to build our future?

As former board member Helen Toner warned from her own experience, “internal guardrails are fragile when money is on the line”.

Right now, the people who know OpenAI best are telling us those safety guardrails are all but broken.

See also: AI adoption matures but deployment hurdles remain



]]>