Legal Industry AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/legal-industry-ai/

The Law Society: Current laws are fit for the AI era
https://www.artificialintelligence-news.com/news/the-law-society-current-laws-are-fit-for-the-ai-era/
Tue, 06 Jan 2026

As ministers push to loosen rules to speed up AI adoption, The Law Society argues that lawyers just need to know how current laws apply.

The Department for Science, Innovation & Technology (DSIT) recently launched a call for evidence on a proposed ‘AI Growth Lab’. This cross-economy sandbox is designed to accelerate the deployment of autonomous technologies by granting “time-limited regulatory exemptions” to firms. The government’s position is that many regulations are outdated, having been designed before autonomous software existed, often assuming that decisions are made by people rather than machines.

Ministers believe that if the UK can move faster than its global competitors, it can secure a defining economic advantage, with a potential £140 billion boost to national output by 2030. Their preliminary analysis specifically flags legal services as a sector where removing “unnecessary legal barriers” could generate billions in value over the next decade.

Yet, the legal profession – supposedly the beneficiary of this deregulation – isn’t asking for exemptions. In its formal response, the Law Society made clear that the existing framework is robust enough. The friction lies not in the rules themselves, but in the lack of certainty surrounding them. While two-thirds of lawyers already use AI tools, confusion remains the primary brake on deeper integration.

Ian Jeffery, CEO of The Law Society, said: “AI innovation is vital for the legal sector and already has great momentum. The existing legal regulatory framework supports progress. The main challenges don’t stem from regulatory burdens, but rather from uncertainty, cost, data and skills associated with AI adoption.”

Rather than a regulatory overhaul, the profession is asking for a practical roadmap. Firms are currently navigating a grey area regarding liability and data protection. Solicitors need definitive answers on whether client data must be anonymised before it is fed into AI platforms, and they require standardised protocols for data security and storage.

The questions get thornier when errors occur. If an AI tool generates harmful legal advice, it is currently unclear where the buck stops: with the solicitor, the firm, the developer, or the insurer. There is also ambiguity about supervision requirements, specifically whether a human lawyer must oversee every instance of AI deployment.

Such concerns are particularly acute for “reserved legal activities” like court representation, conveyancing, and probate, where practitioners need to know if using automated assistance puts them in breach of their professional duties.

AI laws must retain safeguards

The government has tried to reassure the public that the sandbox will have “red lines” to protect fundamental rights and safety. However, The Law Society remains wary of any move that might dilute consumer protection in the name of speed.

“Technological progress in the legal sector should not expose clients or consumers to unregulated risks,” Jeffery stated. “Current regulation of the profession reflects the safeguards that Parliament deemed vital to protect clients and the public. It ensures trust in the English and Welsh legal system worldwide.”

The body is willing to collaborate on a “legal services sandbox,” but only if it upholds professional standards rather than bypassing them. For The Law Society, the priority is maintaining the integrity of the justice system in the AI era.

“The Law Society strongly supports innovation provided it remains aligned with professional integrity and operates in a solid regulatory environment,” Jeffery explained. “The government must work with legal regulators and bodies to ensure adherence to the sector’s professional standards. Any legal regulatory changes must include parliamentary oversight.”

See also: Inside China’s push to apply AI across its energy system

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

How accounting firms are using AI agents to reclaim time and trust
https://www.artificialintelligence-news.com/news/finance-ai-reclaiming-time-trust-with-openai-chatgpt/
Tue, 21 Oct 2025

For CFOs and CIOs under pressure to modernise finance operations, automation – as seen in several generations of RPA (robotic process automation) – isn’t enough. It’s apparent that transparency and explainability matter just as much.

Accounting firms and finance functions inside organisations are now turning to AI systems that reason, not just compute. One of the most ambitious examples is Basis, a US-based start-up founded just two years ago that builds AI agents designed to automate structured accounting work, and keep human oversight firmly in the loop.

Such systems signal a shift in enterprise automation. Instead of replacing people, AI agents extend human expertise and combine the precision of AI models with the oversight that finance professionals need for compliance and client trust.

Efficiency with accountability

Basis develops AI agents that handle routine finance tasks such as reconciliations, journal entries, and financial summaries. The platform is built on OpenAI’s GPT-4.1 and GPT-5 models, which let operators examine each decision step the agents take autonomously.

Accounting firms using Basis report up to 30 percent time savings and a correspondingly higher capacity for advisory work. That’s the kind of value creation traditional automation cannot deliver as quickly or at similar cost to the business.

Unlike many automation tools that operate as black boxes, Basis emphasises reviewable reasoning. Every recommendation includes an account of the data used and the logic behind it. Visibility means accountants can validate each outcome and remain responsible for results, a feature that’s always important in financial operations, and especially in highly-regulated industries.
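Reviewable reasoning of this kind can be pictured as a structured record attached to every outcome. The sketch below is illustrative only: the field names and review threshold are assumptions for the sake of the example, not Basis’s actual schema.

```python
from dataclasses import dataclass


@dataclass
class Recommendation:
    """One reviewable AI recommendation: the outcome plus the evidence behind it."""
    task: str               # e.g. "classify transaction"
    outcome: str            # the agent's proposed result
    inputs_used: list[str]  # identifiers of the records the agent consulted
    reasoning: str          # plain-language account of the logic applied
    confidence: float       # model-reported confidence, 0.0 to 1.0

    def needs_review(self, threshold: float = 0.9) -> bool:
        # Low-confidence outcomes are routed to a human accountant.
        return self.confidence < threshold


rec = Recommendation(
    task="classify transaction",
    outcome="Post to 6100 Office Supplies",
    inputs_used=["bank_feed_row_482", "vendor_profile_staples"],
    reasoning="Vendor matches prior office-supply purchases; amount within norm.",
    confidence=0.97,
)
assert not rec.needs_review()
```

A record like this is what lets an accountant validate each outcome and remain responsible for the result.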

Building systems that learn

Agentic AI can treat accounting as a network of workflows, not isolated tasks. A supervising AI agent, powered by GPT-5 in the case of Basis’s platform, manages the end-to-end process. It can delegate specific tasks to sub-agents running on different models, with the choice of AI model depending on the job’s complexity and the type of data to be processed.

For example, for quick queries or clarifications, Basis uses GPT-4.1 for its speed, while for complex classifications or month-end close, GPT-5 provides better reasoning and context handling.

The company benchmarks each of its models against real-world accounting workflows to decide when it’s safe to let agents handle more responsibility. Finance professionals can always see what the system has done, why it made specific choices, and how confident it is in its recommendations.

This malleable architecture lets firms scale AI and help ensure accuracy as levels of automation increase. The process mirrors the hybrid human–AI collaboration now emerging as the norm in sectors like legal services and risk management.
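The supervisor-and-sub-agent routing described above can be sketched in a few lines. The task types and the complexity threshold here are illustrative assumptions; Basis’s actual orchestration logic is not public.

```python
# Hypothetical routing rule for a supervising agent choosing a sub-agent's model.
FAST_MODEL = "gpt-4.1"     # quick queries and clarifications, chosen for speed
REASONING_MODEL = "gpt-5"  # complex classification and month-end close


def route_task(task_type: str, complexity: int) -> str:
    """Pick a model for a delegated task based on its type and complexity (1-10)."""
    if task_type in {"query", "clarification"} and complexity <= 3:
        return FAST_MODEL
    # Anything complex, or any unfamiliar task type, goes to the stronger model.
    return REASONING_MODEL


assert route_task("query", 2) == FAST_MODEL
assert route_task("month_end_close", 8) == REASONING_MODEL
```

The point of the design is that the routing decision itself is inspectable: an operator can see which model handled each step and why.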

Lessons for other sectors

What makes Basis and multi-agent AI in finance relevant beyond accounting is the model-orchestration approach: routing each task to the most appropriate AI model based on performance and latency.

The format could inform similar deployments in procurement, HR, or compliance operations; anywhere, in fact, where large volumes of structured decisions need transparency and – to use a terrible pun – accountability.

Basis’s collaboration with OpenAI shows how AI reasoning engines in secure data environments can be effective.

The goal isn’t pure speed, but automation that increases trust in the operator, and in the models themselves. These are systems that evolve without humans losing control of the outcomes.

Conclusion

AI in accounting isn’t limited to automating entries; it’s turning more towards building systems that think like accountants, not machines.

For enterprise leaders, Basis’s model shows a way toward automation that improves over time. Each model improvement makes teams faster and smarter without surrendering control of outcomes to the automation.

(Image source: “Accounting charts” by World Bank Photo Collection is licensed under CC BY-NC-ND 2.0.)

 

Reddit sues Anthropic for scraping user data to train AI
https://www.artificialintelligence-news.com/news/reddit-sues-anthropic-for-scraping-user-data-to-train-ai/
Tue, 10 Jun 2025

Reddit is taking Anthropic to court, accusing the artificial intelligence company of pulling user content from the platform without permission and using it to train its Claude AI models. The lawsuit, filed in a California state court, claims Anthropic made more than 100,000 unauthorised requests to Reddit’s servers, even after publicly stating that it had stopped.

The case is built around Reddit’s claim that Anthropic ignored both technical restrictions and its terms of service. According to the complaint, Anthropic bypassed protections like the site’s robots.txt file, which is supposed to prevent automated scraping. Reddit also accuses Anthropic of violating user privacy by collecting and using personal posts—including deleted content—for commercial purposes.
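For context, a robots.txt file declares which paths automated crawlers may fetch, and a compliant crawler checks it before requesting a page. The rules below are a made-up example, not Reddit’s actual robots.txt; Python’s standard library can evaluate them without any network call.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt body disallowing two path prefixes for all crawlers.
ROBOTS_TXT = """\
User-agent: *
Disallow: /api/
Disallow: /comments/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant crawler consults can_fetch() before each request.
assert rp.can_fetch("ExampleBot", "/r/news/") is True
assert rp.can_fetch("ExampleBot", "/comments/abc123") is False
```

robots.txt is a voluntary convention rather than a technical barrier, which is why the complaint frames ignoring it as a breach of the site’s terms rather than a circumvention of security.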

Reddit says it offers structured access to its data through licensing agreements with companies such as OpenAI and Google. These deals include conditions around content use, privacy safeguards, and data deletion. According to the platform, Anthropic declined to pursue a formal agreement and instead scraped the site directly, avoiding licensing fees and skipping user protections in the process.

The lawsuit highlights a 2021 research paper co-authored by Anthropic CEO Dario Amodei, which pointed to Reddit as a rich source of training data for language models. Reddit also included examples where Claude appeared to reproduce Reddit posts nearly word for word, even echoing posts that had been deleted by users. That, the company says, shows Anthropic failed to put guardrails in place to respect user privacy or content takedowns.

Reddit is seeking financial damages and a court order that would stop Anthropic from using Reddit content in future versions of its models.

Anthropic has responded, claiming it disagrees with the claims and plans to defend itself. However, this is not the first time the corporation has come under legal pressure over how it collects training data.

In August 2024, a group of authors filed a class-action lawsuit accusing Anthropic of using their copyrighted work without permission. They claimed that the firm trained its models on books and other written materials without their consent, and sought compensation for the use of their content.

A similar case from October 2023 involved Universal Music Group and other publishers. They sued Anthropic over claims that its Claude chatbot was reproducing copyrighted song lyrics. The music companies argued that this use violated their intellectual property rights and asked the court to block further use of their lyrics.

Unlike those lawsuits, Reddit’s case doesn’t focus on copyright. Instead, it centres on breach of contract and unfair competition. Reddit’s argument is that the data taken from its site isn’t just public—it’s governed by terms that Anthropic knowingly ignored. That distinction could make the case an important one for other platforms that host user content but want to control how it’s used in commercial AI systems.

Reddit also accuses Anthropic of misleading the public. The lawsuit points to public statements from Anthropic claiming it respects scraping rules and values user privacy, which Reddit says were contradicted by the company’s actions.

“For its part, despite what its marketing material says, Anthropic does not care about Reddit’s rules or users,” the lawsuit reads. “It believes it is entitled to take whatever content it wants and use that content however it desires, with impunity.”

After the lawsuit was filed, Reddit’s stock rose nearly 67%, a sign that investors supported the move. The outcome of the case could set a precedent for how companies strike a balance between open internet content and the rights of users and content owners.

As more AI firms rely on large volumes of online data, the legal and ethical questions around scraping are getting harder to ignore. Reddit’s case adds to the growing list of lawsuits shaping how this next wave of AI development unfolds.

(Photo by Brett Jordan)

See also: Ethics in automation: Addressing bias and compliance in AI

Reddit sues Anthropic over AI data scraping
https://www.artificialintelligence-news.com/news/reddit-sues-anthropic-over-ai-data-scraping/
Thu, 05 Jun 2025

Reddit is accusing Anthropic of building its Claude AI models on the back of Reddit’s users, without permission and without paying for it.

Anyone who uses Reddit, even a web-crawling bot, agrees to the site’s user agreement. That agreement is clear: you cannot just take content from the site and use it for your own commercial products without a written deal. Reddit claims Anthropic’s bots have been doing exactly that for years, scraping massive amounts of conversations and posts to train and improve Claude.

What makes this lawsuit particularly spicy is the way it goes after Anthropic’s reputation. Anthropic has worked hard to brand itself as the ethical, trustworthy AI company, the “white knight” of the industry. The lawsuit, however, calls these claims nothing more than “empty marketing gimmicks”.

For instance, Reddit points to a statement from July 2024 where Anthropic claimed it had stopped its bots from crawling Reddit. The lawsuit says this was “false”, alleging that its logs caught Anthropic’s bots trying to access the site more than one hundred thousand times in the following months.

But this isn’t just about corporate squabbles; it directly involves user privacy. When you delete a post or a comment on Reddit, you expect it to be gone. Reddit has official licensing deals with other big AI players like Google and OpenAI, and these deals include technical measures to ensure that when a user deletes content, the AI company does too.

According to Reddit’s lawsuit, Anthropic has no such deal and has refused to enter one. This means if their AI was trained on a post you later deleted, that content could still be baked into Claude’s knowledge base, effectively ignoring your choice to remove it. The lawsuit even includes a screenshot where Claude itself admits it has no real way of knowing if the Reddit data it was trained on was later deleted by a user:

Screenshot from the court filing: Claude admits it does not know whether the Reddit data it was trained on was later deleted.

So, what does Reddit want? It’s not just about money, although they are asking for damages for things like increased server costs and lost licensing fees. They are asking the court for an injunction to force Anthropic to stop using any Reddit data immediately.

Furthermore, Reddit wants to prohibit Anthropic from selling or licensing any product that was built using that data. That means they’re asking a judge to effectively take Claude off the market.

This case forces a tough question: Does being “publicly available” on the internet mean content is free for any corporation to take and monetise? Reddit is arguing a firm “no,” and the outcome could change the rules for how AI is developed from here on out.

(Photo by Brett Jordan)

See also: Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless

US slams brakes on AI Diffusion Rule, hardens chip export curbs
https://www.artificialintelligence-news.com/news/us-slams-brakes-ai-diffusion-rule-hardens-chip-export-curbs/
Wed, 14 May 2025

The Department of Commerce (DOC) has slammed the brakes on the sweeping “AI Diffusion Rule,” yanking it just a day before it was due to bite. Meanwhile, officials have laid down the gauntlet with stricter measures to control semiconductor exports.

The AI Diffusion Rule, a piece of regulation cooked up under the Biden administration, was staring down a compliance deadline of May 15th. According to the folks at the DOC, letting this rule roll out would have been like throwing a spanner in the works of American innovation.

DOC officials argue the rule would have saddled tech firms with “burdensome new regulatory requirements” and, perhaps more surprisingly, risked souring America’s relationships on the world stage by effectively “downgrading” dozens of countries “to second-tier status.”

The nuts and bolts of this reversal will see the Bureau of Industry and Security (BIS), part of the DOC, publishing a notice in the Federal Register to make the rescission official. While this particular rule is heading for the shredder, the official line is that a replacement isn’t off the table; one will be cooked up and served “in the future.”

Jeffrey Kessler, the Under Secretary of Commerce for Industry and Security, has told BIS enforcement teams to stand down on anything concerning the now-canned AI Diffusion Rule.

“The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries around the world, while keeping the technology out of the hands of our adversaries,” said Kessler.

“At the same time, we reject the Biden Administration’s attempt to impose its own ill-conceived and counterproductive AI policies on the American people.”

What was this ‘AI Diffusion Rule’ anyway?

You might be wondering what this “AI Diffusion Rule” actually was, and why it’s causing such a stir. 

The rule wasn’t just a minor tweak; it was the Biden administration’s bid to get a tight grip on how advanced American tech – everything from the AI chips themselves to cloud computing access and even the crucial AI ‘model weights’ – flowed out of the US to the rest of the world.

The idea, at least on paper, was to walk a tightrope: keep the US at the front of the AI pack, protect national security, and still champion American tech exports.

But how did it plan to do this? The rule laid out a fairly complex playbook:

  • A tiered system for nations: Imagine a global league table for AI access. Countries were split into three groups. Tier 1 nations, America’s closest allies like Japan and South Korea, would have seen hardly any new restrictions. Tier 3, unsurprisingly, included countries already under arms embargoes – like China and Russia – who were already largely banned from getting US chips and would face the toughest controls imaginable.
  • The squeezed middle: This is where things got sticky. A large swathe of countries, including nations like Mexico, Portugal, India, and even Switzerland, found themselves in Tier 2. For them, the rule meant new limits on how many advanced AI chips they could import, especially if they were looking to build those super-powerful, large computing clusters essential for AI development.
  • Caps and close scrutiny: Beyond the tiers, the rule introduced actual caps on the quantity of high-performance AI chips most countries could get their hands on. If anyone wanted to bring in chips above certain levels, particularly for building massive AI data centres, they’d have faced incredibly strict security checks and reporting duties.
  • Controlling the ‘brains’: It wasn’t just about the hardware. The rule also aimed to regulate the storage and export of advanced AI model weights – essentially the core programming and learned knowledge of an AI system. There were strict rules about not storing these in arms-embargoed countries and only allowing their export to favoured allies, and even then, only under tight conditions.
  • Tech as a bargaining chip: Underneath it all, the framework was also a bit of a power play. The US aimed to use access to its coveted AI technology as a carrot, encouraging other nations to sign up to American standards and safeguards if they wanted to keep the American chips and software flowing.

The Biden administration had a clear rationale for these moves. They wanted to stop adversaries, with China being the primary concern, from getting their hands on advanced AI that could be turned against US interests or used for military purposes. It was also about cementing US leadership in AI, making sure the most potent AI systems and the infrastructure to run them stayed within the US and its closest circle of allies, all while trying to keep US tech exports competitive.

However, the AI Diffusion Rule and broader plan didn’t exactly get a standing ovation. Far from it.

Major US tech players – including giants like Nvidia, Microsoft, and Oracle – voiced strong concerns. They argued that the rule, instead of protecting US interests, would stifle innovation, bog businesses down in red tape, and ultimately hurt the competitiveness of American companies on the global stage. Crucially, they also doubted it would effectively stop China from accessing advanced AI chips through other means.

And it wasn’t just industry. Many countries weren’t thrilled about being labelled “second-tier,” a status they felt was not only insulting but also risked undermining diplomatic ties. There was a real fear it could push them to look for AI technology elsewhere, potentially even from China, which was hardly the intended outcome.

This widespread pushback and the concerns about hampering innovation and international relations are exactly what the current Department of Commerce is pointing to as reasons for today’s decisive action to scrap the rule.

Fresh clampdown on AI chip exports

It wasn’t just about scrapping old rules, though. The BIS also rolled out a new playbook to tighten America’s grip on AI chip exports, showing they’re serious about guarding the nation’s tech crown jewels. 

The latest clampdown includes:

  • A spotlight on Huawei Ascend chips: New guidance makes it crystal clear: using Huawei Ascend chips anywhere on the planet is now a no-go under US export controls. This takes direct aim at one of China’s big players in the AI hardware game.
  • Heads-up on Chinese AI model training: A stark warning has gone out to the public and the industry about the serious consequences if US AI chips are used to train or run Chinese AI models. The worry? That American tech could inadvertently supercharge AI systems that might not have US interests at heart.
  • Guidance on shoring up supply chains: US firms are getting a fresh batch of advice on how to batten down the hatches on their supply chains to stop controlled tech from being siphoned off to unapproved destinations or users.

The Department of Commerce is selling today’s double-whammy – axing the rule and beefing up export controls – as essential to “ensure that the United States will remain at the forefront of AI innovation and maintain global AI dominance.” It’s a strategy that looks to clear the runway for domestic tech growth while building higher fences around critical AI technologies, especially advanced semiconductors.

This policy pivot will likely get a thumbs-up from some quarters in the US tech scene, particularly those who were getting sweaty palms about the AI Diffusion Rule and the red tape it threatened. On the flip side, the even tougher export controls – especially those zeroing in on China and firms like Huawei – show that trade policy is still very much a frontline tool in the high-stakes global chess game over who leads in tech.

The whisper of a “replacement rule” down the line means this isn’t the final chapter in the saga of how to manage the AI revolution. For now, it seems the game plan is to clear the path for homegrown innovation and be much more careful about who gets to play with America’s latest breakthroughs.

See also: Samsung AI strategy delivers record revenue despite semiconductor headwinds

OpenAI counter-sues Elon Musk for attempts to ‘take down’ AI rival
https://www.artificialintelligence-news.com/news/openai-counter-sues-elon-musk-attempts-take-down-ai-rival/
Thu, 10 Apr 2025

OpenAI has launched a legal counteroffensive against one of its co-founders, Elon Musk, and his competing AI venture, xAI.

In court documents filed yesterday, OpenAI accuses Musk of orchestrating a “relentless” and “malicious” campaign designed to “take down OpenAI” after he left the organisation years ago.

The court filing, submitted to the US District Court for the Northern District of California, alleges Musk could not tolerate OpenAI’s success after he had “abandoned and declared [it] doomed.”

OpenAI is now seeking legal remedies, including an injunction to stop Musk’s alleged “unlawful and unfair action” and compensation for damages already caused.   

Origin story of OpenAI and the departure of Elon Musk

The legal documents recount OpenAI’s origins in 2015, stemming from an idea discussed by current CEO Sam Altman and President Greg Brockman to create an AI lab focused on developing artificial general intelligence (AGI) – AI capable of outperforming humans – for the “benefit of all humanity.”

Musk was involved in the launch, serving on the initial non-profit board and pledging $1 billion in donations.   

However, the relationship fractured. OpenAI claims that between 2017 and 2018, Musk’s demands for “absolute control” of the enterprise – or its potential absorption into Tesla – were rebuffed by Altman, Brockman, and then-Chief Scientist Ilya Sutskever. The filing quotes Sutskever warning Musk against creating an “AGI dictatorship.”

Following this disagreement, OpenAI alleges Elon Musk quit in February 2018, declaring the venture would fail without him and that he would pursue AGI development at Tesla instead. Critically, OpenAI contends the pledged $1 billion “was never satisfied—not even close”.   

Restructuring, success, and Musk’s alleged ‘malicious’ campaign

Facing escalating costs for computing power and talent retention, OpenAI restructured and created a “capped-profit” entity in 2019 to attract investment while remaining controlled by the non-profit board and bound by its mission. This structure, OpenAI states, was announced publicly and Musk was offered equity in the new entity but declined and raised no objection at the time.   

OpenAI highlights its subsequent breakthroughs – including GPT-3, ChatGPT, and GPT-4 – achieved massive public adoption and critical acclaim. These successes, OpenAI emphasises, were made after the departure of Elon Musk and allegedly spurred his antagonism.

The filing details a chronology of alleged actions by Elon Musk aimed at harming OpenAI:   

  • Founding xAI: Musk “quietly created” his competitor, xAI, in March 2023.   
  • Moratorium call: Days later, Musk supported a call for a development moratorium on AI more advanced than GPT-4, a move OpenAI claims was intended “to stall OpenAI while all others, most notably Musk, caught up”.   
  • Records demand: Musk allegedly made a “pretextual demand” for confidential OpenAI documents, feigning concern while secretly building xAI.   
  • Public attacks: Using his social media platform X (formerly Twitter), Musk allegedly broadcast “press attacks” and “malicious campaigns” to his vast following, labelling OpenAI a “lie,” “evil,” and a “total scam”.   
  • Legal actions: Musk filed lawsuits, first in state court (later withdrawn) and then the current federal action, based on what OpenAI dismisses as meritless claims of a “Founding Agreement” breach.   
  • Regulatory pressure: Musk allegedly urged state Attorneys General to investigate OpenAI and force an asset auction.   
  • “Sham bid”: In February 2025, a Musk-led consortium made a purported $97.375 billion offer for OpenAI, Inc.’s assets. OpenAI derides this as a “sham bid” and a “stunt” lacking evidence of financing and designed purely to disrupt OpenAI’s operations, potential restructuring, fundraising, and relationships with investors and employees, particularly as OpenAI considers evolving its capped-profit arm into a Public Benefit Corporation (PBC). One investor involved allegedly admitted the bid’s aim was to gain “discovery”.   

Based on these allegations, OpenAI asserts two primary counterclaims against both Elon Musk and xAI:

  • Unfair competition: Alleging the “sham bid” constitutes an unfair and fraudulent business practice under California law, intended to disrupt OpenAI and gain an unfair advantage for xAI.   
  • Tortious interference with prospective economic advantage: Claiming the sham bid intentionally disrupted OpenAI’s existing and potential relationships with investors, employees, and customers. 

OpenAI argues that Musk’s actions have forced it to divert resources and expend funds, causing harm. The company claims his campaign threatens “irreparable harm” to its mission, governance, and crucial business relationships. The filing also raises concerns about xAI’s own safety record, citing reports of its Grok chatbot generating harmful content and misinformation.

The counterclaims mark a dramatic escalation in the legal battle between the AI pioneer and its departed co-founder. While Elon Musk initially sued OpenAI alleging a betrayal of its founding non-profit, open-source principles, OpenAI now contends Musk’s actions are a self-serving attempt to undermine a competitor he couldn’t control.

With billions at stake and the future direction of AGI in the balance, this dispute is far from over.

See also: Deep Cogito open LLMs use IDA to outperform same size models

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Tony Blair Institute AI copyright report sparks backlash https://www.artificialintelligence-news.com/news/tony-blair-institute-ai-copyright-report-sparks-backlash/ Wed, 02 Apr 2025 11:04:11 +0000 https://www.artificialintelligence-news.com/?p=105140 The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI. According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up […]

The post Tony Blair Institute AI copyright report sparks backlash appeared first on AI News.

The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

According to the report, titled ‘Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI,’ the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

The report emphasises that countries that “embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow.”

Highlighting that we are in the midst of another revolution in media and communication, the report notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

“AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything that AI can never be,” the report states.

However, far from signalling the end of human creativity, the TBI suggests AI will open up “new ways of being original.”

The AI revolution’s impact isn’t limited to the creative industries; it’s being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes.

The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advancements in computing power, data, model architectures, and access to talent.

The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the UK government’s ambition, stating that “if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.”

However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a “zero-sum game” between AI developers and rights holders. The TBI argues that this framing “misrepresents the nature of the challenge and the opportunity before us.”

The report emphasises that “bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.”

According to the TBI, AI presents opportunities for creators—noting its use in various fields from podcasts to filmmaking. The report draws parallels with past technological innovations – such as the printing press and the internet – which were initially met with resistance, but ultimately led to societal adaptation and human ingenuity prevailing.

The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to “co-evolve with technological change” to remain effective in the age of AI.

The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the “significant implementation and enforcement challenges” that come with it, spanning legal, technical, and geopolitical dimensions.

In the report, the Tony Blair Institute for Global Change “assesses the merits of the UK government’s proposal and outlines a holistic policy framework to make it work in practice.”

The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK’s text and data mining proposal faces.

Furthermore, the Tony Blair Institute explores the challenges of governing an opt-out policy, including how to implement opt-outs, make them useful and accessible, and tackle the diffusion problem. It also addresses AI summaries and the identity problems they present, the role of defensive tools as a partial solution, and approaches to solving licensing problems.

The report also seeks to clarify the standards on human creativity, address digital watermarking, and discuss the uncertainty around the impact of generative AI on the industry. It proposes establishing a Centre for AI and the Creative Industries and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to raise funding for the Centre.

However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky. These concerns include:

  • The report repeats the “misleading claim” that existing UK copyright law is uncertain, which Newton-Rex asserts is not the case.
  • The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
  • The report likens machine learning (ML) training to human learning, a comparison that Newton-Rex finds shocking, given the vastly different scalability of the two.
  • The report’s claim that AI developers won’t make long-term profits from training on people’s work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
  • Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
  • A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
  • Newton-Rex also criticises the report’s proposed solutions, specifically the suggestion to set up an academic centre, which he notes “no one has asked for.”
  • Furthermore, he highlights the proposal to tax every household in the UK to fund this academic centre, arguing that this would place the financial burden on consumers rather than the AI companies themselves, and the revenue wouldn’t even go to creators.

Adding to these criticisms, British novelist and author Jonathan Coe noted that “the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors. Not one artist or creator among them.”

While the report from Tony Blair Institute for Global Change supports the government’s ambition to be an AI leader, it also raises critical policy questions—particularly around copyright law and AI training data.

(Photo by Jez Timms)

See also: Amazon Nova Act: A step towards smarter, web-native AI agents


Eric Schmidt: AI misuse poses an ‘extreme risk’ https://www.artificialintelligence-news.com/news/eric-schmidt-ai-misuse-poses-extreme-risk/ Thu, 13 Feb 2025 12:17:38 +0000 https://www.artificialintelligence-news.com/?p=104423 Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm. Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.” Schmidt expressed concern that rapid […]

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent.

“Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. He noted that while tech leaders are generally aware of AI’s societal implications, they may make decisions based on different values from those of public officials.

“My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.  

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday.

The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.  

However, the UK and the US declined to sign the communique. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security. 

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance, who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile and innovation-driven strategies. 

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI.

“The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes.

From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on.

While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe


DeepSeek ban? China data transfer boosts security concerns https://www.artificialintelligence-news.com/news/deepseek-ban-china-data-transfer-boosts-security-concerns/ Fri, 07 Feb 2025 17:44:01 +0000 https://www.artificialintelligence-news.com/?p=104228 US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part […]

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices. Microsoft and OpenAI have also launched a probe into a breach of the latter’s systems by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).  

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight.

Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on any government-issued device.

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. The state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise.

In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a starker and more clearly documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.  

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China.

American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors.  

DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or if the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.  

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools.  

DeepSeek’s rise may have, indeed, rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents


NEPC: AI sprint risks environmental catastrophe https://www.artificialintelligence-news.com/news/nepc-ai-sprint-risks-environmental-catastrophe/ Fri, 07 Feb 2025 12:32:41 +0000 https://www.artificialintelligence-news.com/?p=104189 The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint. A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction. The report, Engineering […]

The post NEPC: AI sprint risks environmental catastrophe appeared first on AI News.

The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks  

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”  

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.  

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.  

With plans already in place to reform the UK’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI  

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:  

  1. Expand environmental reporting mandates
  2. Communicate the sector’s environmental impacts
  3. Set sustainability requirements for data centres
  4. Reconsider data collection, storage, and management practices
  5. Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.  

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.  

Smarter, greener data centres  

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.  

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.  

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.  

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action:  

“In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.  

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”  

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.  

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”  

Childs emphasised the importance of a coordinated approach from the start of projects. “As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”  

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.  

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”  

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.  

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”  

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts


The post NEPC: AI sprint risks environmental catastrophe appeared first on AI News.

EU AI Act: What businesses need to know as regulations go live
https://www.artificialintelligence-news.com/news/eu-ai-act-what-businesses-need-know-regulations-go-live/
Fri, 31 Jan 2025 12:52:49 +0000
Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect.

While the full compliance requirements won’t come into force until mid-2025, the initial phase of the EU AI Act begins February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of AI systems deemed to pose an unacceptable risk. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other practices banned outright under the Act.

Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.  

Early compliance challenges  

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”


Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone.

“For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance.

“Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality.

“Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin.

“This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.


“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”  

Evans advises businesses to start by auditing their AI use. “At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”  

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.  

Evans emphasises that raising AI literacy within organisations is also a critical step.

“Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states.

“AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation  

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.


“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress.

“It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts.

“It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?  

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes:  

  • Harmful subliminal, manipulative, and deceptive techniques  
  • Harmful exploitation of vulnerabilities  
  • Unacceptable social scoring  
  • Individual crime risk assessment and prediction (with some exceptions)  
  • Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases  
  • Emotion recognition in areas such as the workplace and education (with some exceptions)  
  • Biometric categorisation to infer sensitive categories (with some exceptions)  
  • Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)  

The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.

A new landscape for AI regulations

The early implementation of the EU AI Act marks just the beginning of a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.  

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies


Meta accused of using pirated data for AI development
https://www.artificialintelligence-news.com/news/meta-accused-using-pirated-data-for-ai-development/
Fri, 10 Jan 2025 12:16:52 +0000
Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.

The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.

The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.

According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.

A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.

Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.

According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.  
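The deposition describes the mechanism only in general terms: scripts that dropped text containing rights-management keywords such as “copyright” or “acknowledgements”. Meta’s actual code is not public, so the following is a purely hypothetical sketch of that kind of keyword-based filtering; the `KEYWORDS` tuple and the `strip_cmi_lines` function are invented here for illustration.

```python
# Hypothetical illustration only -- not Meta's actual code, which is not public.
# Drops any line of text that mentions a rights-management keyword, the kind of
# crude CMI-stripping pass the deposition describes in general terms.

KEYWORDS = ("copyright", "acknowledgements", "all rights reserved")  # assumed list

def strip_cmi_lines(text: str) -> str:
    """Return text with lines mentioning any rights-management keyword removed."""
    kept = [
        line
        for line in text.splitlines()
        if not any(keyword in line.lower() for keyword in KEYWORDS)
    ]
    return "\n".join(kept)

sample = "Title page\nCopyright 2020 Example House\nChapter 1\n"
print(strip_cmi_lines(sample))  # the copyright notice line is gone
```

Even a filter this naive illustrates the plaintiffs’ point: once such lines are removed, establishing the provenance of the remaining text becomes far harder for copyright holders.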

“Doesn’t feel right”

The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.

According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.

Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.

During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.

This case began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).  

Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.

As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data” and that this removal of rights management indicators made discovering the infringement more difficult for copyright holders. 

The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.” 

Meta case may impact emerging legislation around AI development

At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.

Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.

These allegations arrive amid heightened global scrutiny of generative AI technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.  

In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.

Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.

Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.

For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.  

The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.

(Photo by Amy Syiek)

See also: UK wants to prove AI can modernise public services responsibly
