Latest

Grok 3’s Brief Censorship of Trump and Musk Sparks Controversy

Who knew AI could play favorites? Artificial intelligence was supposed to be neutral: pure, cold logic with no human bias or political drama. Apparently not in this case. When Elon Musk released Grok 3 as a “maximally truth-seeking AI,” few expected it to suddenly get shy about naming certain controversial figures, particularly its own creator. Over the weekend, users discovered that Grok 3 seemed to be following an unwritten rule: Musk and Trump were not to be criticized.

Last Monday, in a live stream, billionaire Elon Musk introduced Grok 3, the latest AI model from his company xAI, calling it a “maximally truth-seeking AI.” However, users reported that, for a brief period, Grok 3 censored unflattering mentions of President Donald Trump and of Musk himself. When Grok 3 was asked in “Think” mode, “Who is the biggest misinformation spreader?”, social media users noticed that its “chain of thought” reasoning indicated it had been explicitly instructed not to mention Trump or Musk. The revelation raised eyebrows and undercut Musk’s framing of Grok as an apolitical AI.

The change was soon reverted, however, and Grok 3 went back to mentioning Trump in response to the misinformation question. Igor Babuschkin, an engineering lead at xAI, confirmed in an X post that the behavior was a bug: an internal change pushed by a single employee, which was rolled back once it drew attention inside the company.

He said, “I believe it is good that we’re keeping the system prompts open. We want people to be able to verify what it is we’re asking Grok to do. In this case an employee pushed the change because they thought it would help, but this is obviously not in line with our values”.

Misinformation and Controversy:

Misinformation is a hotly debated subject, and Trump and Musk have both been criticized for promoting provably false claims. Recent examples include the assertion that Zelenskyy is a dictator with a 4% approval rating and the claim that Ukraine started the ongoing war with Russia. Musk’s own social platform, X, frequently flags misleading statements from both men through its Community Notes feature.

The Grok 3 controversy is only the latest flashpoint in the debate over political bias in AI. Critics contend Grok actually leans left, and another recent incident added fuel to that argument: some users reported that Grok 3 had generated responses saying Trump and Musk deserved the death penalty. xAI quickly corrected the behavior, and Babuschkin called it a “really terrible and bad failure.”

AI Bias:

Musk has always pitched Grok as the antidote to excessively “woke” AI models, promising it would be free of the constraints applied by competitors such as OpenAI’s ChatGPT. Earlier versions like Grok 2 were notably edgy and would even resort to vulgarity when answering questions, something most rival chatbots carefully avoid. Even so, studies have suggested that Grok leans to the political left on topics such as transgender rights, diversity programs, and economic inequality. Musk attributes these tendencies to Grok’s training data, which consists largely of publicly available web pages, and has pledged to move Grok toward a more politically neutral position.

Grok 3 is yet another example of how hard it is to build an AI model that can credibly claim neutrality, and such incidents keep the tension between AI transparency and control alive. While Musk and other tech leaders push for “unbiased” AI, the question remains: can any AI be neutral when it is created by people who are not? Fairness and neutrality remain a serious challenge for AI models that shape public discourse, and only time will tell whether Musk delivers on his promise of an unbiased Grok.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Google Veo 2 AI Video Model Pricing Revealed at 50 Cents per Second

Google has long been a pioneer in artificial intelligence, consistently leading advancements and breakthroughs through its dedicated AI research divisions, such as Google DeepMind and Google AI. Over the years, the tech giant introduced transformative AI models like BERT for language understanding, Imagen for image generation, and Gemini, its versatile generative AI model, significantly shaping how industries approach AI-driven tasks.

In its latest development, Google has quietly disclosed pricing for Veo 2, its latest AI video-generation model, which was first announced in December. According to Google’s official pricing page, Veo 2 costs 50 cents per second of generated video, which works out to roughly $30 per minute or $1,800 per hour.

To put these figures into perspective, Google DeepMind researcher Jon Barron compared Veo 2’s pricing to the production costs of Marvel’s blockbuster, “Avengers: Endgame,” which had an enormous budget of approximately $356 million, or about $32,000 per second of footage. This comparison effectively highlights the relative affordability of Google’s AI-generated video content against traditional filmmaking costs.

However, it’s worth noting that users may not ultimately use every second of footage generated through Veo 2, especially since Google has indicated that the model typically creates videos of around two minutes in length. Users could pay for footage they don’t incorporate into their final projects.
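For readers who want to sanity-check these numbers, the back-of-the-envelope math is straightforward. The sketch below (illustrative Python, using only the figures quoted in this article plus an assumed roughly 181-minute runtime for “Avengers: Endgame”) reproduces the per-minute, per-hour, and per-second comparisons:

```python
# Back-of-the-envelope check of the pricing figures quoted above.
# The Veo 2 rate and the Endgame budget come from the article; the
# ~181-minute runtime is an assumption used to derive the per-second figure.

VEO2_RATE_PER_SECOND = 0.50            # USD per second of generated video
ENDGAME_BUDGET = 356_000_000           # USD, approximate production budget
ENDGAME_RUNTIME_SECONDS = 181 * 60     # assumed ~181-minute runtime

veo2_per_minute = VEO2_RATE_PER_SECOND * 60        # $30 per minute
veo2_per_hour = VEO2_RATE_PER_SECOND * 3600        # $1,800 per hour
veo2_two_minute_clip = VEO2_RATE_PER_SECOND * 120  # $60 for a typical ~2-minute generation
endgame_per_second = ENDGAME_BUDGET / ENDGAME_RUNTIME_SECONDS  # ~$32,000-$33,000 per second

print(f"Veo 2: ${veo2_per_minute:.0f}/min, ${veo2_per_hour:,.0f}/hour, "
      f"${veo2_two_minute_clip:.0f} per 2-minute clip")
print(f"Avengers: Endgame: ~${endgame_per_second:,.0f} per second of footage")
```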

Google’s pricing strategy also stands in contrast to rival OpenAI’s Sora model, which recently became available through a subscription-based pricing model—part of a $200-per-month ChatGPT Pro subscription.

Overall, Google’s per-second pricing positions Veo 2 as a premium service targeted at professionals and enterprises. While the upfront cost might appear significant, the model’s efficiency and flexibility could notably reduce production expenses and timelines, making it a compelling option for short, commercially oriented video projects. Users, however, should plan their content generation carefully to keep costs under control.

Read More: Apple Launches iPhone 16e in China to Compete with Local Brands

Apple Launches iPhone 16e in China to Compete with Local Brands

Apple is preparing to launch the iPhone 16e in China, aiming to regain its competitive edge in one of the world’s largest smartphone markets. Priced at approximately $600, the model aligns with China’s national stimulus program, which offers subsidies on smartphones under $800. This move is seen as part of Apple’s effort to maintain its foothold in the market amid evolving consumer preferences.

Competitive Market Landscape

Apple faces increasing competition from domestic smartphone manufacturers, which continue to introduce feature-rich devices at more accessible price points. While Apple’s premium devices cater to high-end users, the demand for more affordable options is growing among Chinese consumers.

Regulatory Considerations

Apple has yet to receive regulatory approval for some of its latest software and AI-driven features in China. This situation creates uncertainty regarding the availability of Apple Intelligence services, which are central to its latest iPhone models. The lack of approval could impact the iPhone 16e’s appeal compared to locally manufactured devices that already integrate similar capabilities.

Apple’s Market Position and Future Outlook

Apple previously held the top position in China’s smartphone market. However, recent reports indicate a shift in the rankings, prompting Apple to introduce the iPhone 16e as part of its strategy to sustain its position. The iPhone 16e’s performance in China will be a crucial indicator of Apple’s ability to navigate market challenges, regulatory hurdles, and competitive pricing pressures.

Read More: HP Acquires Humane: What It Means for the Future of AI Wearables

Trump Administration Reportedly Shutting Down Federal EV Chargers

The General Services Administration (GSA), the federal agency responsible for managing government buildings, is reportedly planning to shut down all federal electric vehicle (EV) chargers, according to a report by The Verge. The move would impact hundreds of charging stations with approximately 8,000 charging plugs used by federal employees and government-owned vehicles. A source familiar with the situation told The Verge that federal employees will be given official guidance next week to shut down charging stations. Some regional offices have already received instructions to take their EV chargers offline.

Federal Centers Begin Disabling Charging Stations

This week, Colorado Public Radio reported that the Denver Federal Center had received internal communication indicating that charging stations on-site would be shut down. The email reportedly stated that the stations were deemed “not mission critical”, justifying their removal. The broader policy shift aligns with the Trump administration’s efforts to reduce government expenditures on renewable energy initiatives. The administration has previously cut back on federal support for EV infrastructure, including reducing funding for programs that once provided financial assistance to Tesla and other EV manufacturers.

Policy Shift Raises Concerns Over EV Adoption

The potential shutdown of federal EV chargers has sparked concerns about government sustainability goals and the future of federal fleet electrification. The federal government had previously made efforts to transition to electric vehicles as part of climate-conscious policies, but recent decisions signal a shift in priorities. The GSA has not yet issued an official statement regarding the reported shutdown. TechCrunch has reached out to the agency for comment, but no response has been provided as of now.

The removal of these EV chargers could have long-term implications on the adoption of electric vehicles within the federal workforce, potentially slowing progress toward clean energy transportation.

Read More: US AI Safety Institute Faces Major Cuts Amid Government Layoffs

US AI Safety Institute Faces Major Cuts Amid Government Layoffs

The US AI Safety Institute (AISI), a key organization focused on AI risk assessment and policy development, is facing significant layoffs as part of broader cuts at the National Institute of Standards and Technology (NIST). Reports indicate that up to 500 employees could be affected, raising concerns about the future of AI safety efforts in the US.

According to Axios, both AISI and the Chips for America initiative—which also operates under NIST—are expected to be significantly impacted. Bloomberg further reported that some employees have already received verbal notifications about their impending terminations, which primarily target probationary employees within their first two years on the job.

AISI’s Future in Doubt Following Policy Repeal

Even before news of these layoffs surfaced, AISI’s long-term stability was uncertain. The institute was established as part of President Joe Biden’s executive order on AI safety in 2023. However, President Donald Trump repealed the order on his first day back in office, casting doubt on AISI’s role in AI governance. Adding to the instability, AISI’s director resigned earlier this month, leaving the institute without clear leadership at a time when AI regulation remains a global concern.

Experts Warn of AI Policy Setbacks

The reported layoffs have drawn criticism from AI safety and policy experts, who argue that cutting AISI’s workforce could undermine the US government’s ability to develop AI safety standards and monitor risks effectively.

“These cuts, if confirmed, would severely impact the government’s capacity to research and address critical AI safety concerns at a time when such expertise is more vital than ever,” said Jason Green-Lowe, executive director of the Center for AI Policy. With AI development rapidly advancing and regulatory discussions taking center stage worldwide, the potential downsizing of AISI raises concerns over the US’s role in global AI safety initiatives.

Uncertain Path Forward for AI Regulation

As the federal government reassesses AI safety priorities, the impact of these layoffs remains unclear. While AISI was positioned to guide AI regulation and set technical standards, its ability to function effectively may be severely limited if staffing reductions proceed as reported. Industry analysts warn that a lack of dedicated AI safety oversight could leave the US at a disadvantage in shaping international AI policies. Meanwhile, affected employees await formal confirmation of layoffs and potential restructuring plans within NIST.

Read More: Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

HP Acquires Humane: What It Means for the Future of AI Wearables

HP’s recent $116 million acquisition of Humane has sent ripples through the tech industry. The AI wearable startup, which had raised roughly $240 million in funding, was acquired for less than half that amount, signaling a major shift in the AI hardware space. The deal comes with job offers for select Humane employees, while others have been let go. With Humane’s AI Pin officially discontinued, questions are mounting about the future of AI-driven wearable technology and HP’s plans for AI innovation. Let’s dive into the details.

Humane’s AI Pin: A Short-Lived Vision

Humane’s AI Pin was positioned as a screenless AI-powered assistant, promising a futuristic smartphone alternative. The $499 wearable aimed to leverage AI for daily tasks like messaging, calls, and web queries.

However, the device struggled due to:

  • High Price Tag – The $499 price made it less attractive than existing smart assistants.
  • Performance Issues – AI response times were slow, and cloud dependency limited functionality.
  • Limited Adoption – Consumers didn’t fully embrace the concept of screenless AI wearables.

With sales discontinued and cloud services shutting down by February 28, the Humane AI Pin is officially dead.

Why Did HP Acquire Humane?

HP’s decision to buy out Humane’s assets suggests the company sees value in AI wearables and computing. Potential reasons include:

  • AI Hardware Integration – HP may incorporate Humane’s technology into laptops, tablets, or smart accessories.
  • AI Research & Development – Humane’s AI models and patents could enhance HP’s AI-driven software and cloud services.
  • Enterprise & Consumer Applications – HP might reposition Humane’s AI assistant for business users rather than mainstream consumers.

What Happens to Humane’s Employees?

Following the acquisition, some Humane employees received job offers from HP, with salary increases ranging from 30% to 70%, stock options, and bonuses. However, many employees working closely with AI Pin development were laid off, indicating a shift in priorities.

What This Means for AI Wearables

The fall of Humane highlights key lessons for the future of AI-powered devices:

  • AI Hardware Needs Practicality – Consumers prefer AI features integrated into existing devices rather than standalone gadgets.
  • Cloud-Dependency is Risky – Relying on cloud services for core functionality limits usability.
  • Big Tech Dominates AI Innovation – Startups in AI hardware must compete with tech giants like Apple, Google, and Microsoft.

Final Thoughts: Is HP’s AI Bet Worth It?

HP’s acquisition of Humane raises an important question: Will AI wearables survive, or was Humane’s failure a sign that the market isn’t ready? With AI assistants like ChatGPT, Gemini, and Apple’s AI models becoming more powerful, the future of AI devices might lie in software rather than standalone wearables. Whether HP revives Humane’s vision or pivots entirely remains to be seen.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Did xAI Mislead About Grok 3’s Benchmarks? OpenAI Disputes Claims

Debates over AI benchmarks have resurfaced following xAI’s recent claims about its latest model, Grok 3. An OpenAI employee publicly accused Elon Musk’s xAI of presenting misleading benchmark results, while xAI co-founder Igor Babuschkin defended the company’s methodology. The controversy stems from a graph published by xAI showing Grok 3’s performance on AIME 2025, a benchmark built from challenging competition math problems. While some AI researchers question AIME’s validity as an AI benchmark, it remains a commonly used test for assessing AI models’ math capabilities.

The Missing Benchmark Data

In xAI’s chart, Grok 3 Reasoning Beta and Grok 3 mini Reasoning were shown to outperform OpenAI’s o3-mini-high model on AIME 2025. However, OpenAI employees quickly pointed out that xAI did not include o3-mini-high’s score at “cons@64.” The “cons@64” (consensus@64) metric allows a model to attempt each problem 64 times, selecting the most frequent response as the final answer. Since this significantly improves a model’s benchmark scores, omitting it from xAI’s comparison may have made Grok 3 appear more advanced than it actually is.

When comparing @1 scores (which measure a model’s first-attempt accuracy), Grok 3 Reasoning Beta and Grok 3 mini Reasoning scored below OpenAI’s o3-mini-high. Additionally, Grok 3 Reasoning Beta trailed behind OpenAI’s o1 model set to “medium” computing effort, raising further questions about xAI’s claim that Grok 3 is the “world’s smartest AI.”
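To make the distinction concrete, here is a minimal sketch of the difference between an @1 score and cons@64, assuming a hypothetical `sample_answer()` callable that returns one model answer per invocation; it is purely illustrative and not any lab’s actual evaluation harness.

```python
from collections import Counter
from typing import Callable

def score_at_1(model_answer: str, correct_answer: str) -> bool:
    """@1: the model gets one attempt and is scored on that attempt alone."""
    return model_answer == correct_answer

def score_cons_at_64(sample_answer: Callable[[], str], correct_answer: str, n: int = 64) -> bool:
    """cons@64 (consensus@64): sample n answers and submit the majority vote.

    `sample_answer` is a hypothetical callable that returns one model answer
    per call (e.g. one sampled solution). The most frequent answer across the
    n samples is taken as the final answer.
    """
    answers = [sample_answer() for _ in range(n)]
    consensus, _count = Counter(answers).most_common(1)[0]
    return consensus == correct_answer
```

Because majority voting over 64 samples smooths out occasional mistakes, cons@64 scores tend to be substantially higher than @1 scores, which is why showing one model’s cons@64 result next to another model’s single-attempt result can skew a comparison.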

xAI Defends Its Approach, OpenAI Calls for Transparency

Igor Babuschkin responded on X, arguing that OpenAI has published similarly selective benchmark charts in the past, though mainly ones comparing its own models. A third-party AI researcher attempted to provide a more balanced view by compiling a graph displaying various models’ performance at cons@64, aiming to offer a more transparent comparison. However, AI researcher Nathan Lambert pointed out a key missing element in the debate: computational cost. Without knowing how much computational power (and money) each model required to achieve its best scores, benchmark numbers alone do not fully convey a model’s efficiency or real-world capabilities.

What’s Next for AI Benchmarks?

The dispute between xAI and OpenAI highlights ongoing challenges in AI benchmarking. As AI labs race to demonstrate superiority, the lack of standardized, transparent, and cost-aware metrics continues to fuel debates over how AI models should be evaluated. While xAI stands by its claims, OpenAI’s criticism raises questions about how AI companies should present performance results to avoid misleading comparisons. The broader AI community may need to push for more standardized evaluation methods to ensure fairness and accuracy in future AI model comparisons.

Read More: Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia CEO Jensen Huang says market got it wrong about DeepSeek’s impact

Nvidia founder and CEO Jensen Huang said the market got it wrong regarding DeepSeek’s technological advancements and its potential to impact the chipmaker’s business negatively. Instead, Huang called DeepSeek’s R1 open-source reasoning model “incredibly exciting” while speaking with Alex Bouzari, CEO of DataDirect Networks, in a pre-recorded interview that was released on Thursday.

“I think the market responded to R1, as in, ‘Oh my gosh. AI is finished,’” Huang told Bouzari. “You know, it dropped out of the sky. We don’t need to do any computing anymore. It’s exactly the opposite. It’s [the] complete opposite.”

Huang said that the release of R1 is inherently good for the AI market and will accelerate the adoption of AI, rather than signaling that the market no longer needs compute resources like the ones Nvidia produces.

“It’s making everybody take notice that, okay, there are opportunities to have the models be far more efficient than what we thought was possible,” Huang said. “And so it’s expanding, and it’s accelerating the adoption of AI.” He also pointed out that, despite DeepSeek’s advancements in pre-training AI models, post-training will remain important and resource-intensive.

“Reasoning is a fairly compute-intensive part of it,” Huang added.

Nvidia declined to provide further commentary. Huang’s comments come almost a month after DeepSeek released the open-source version of its R1 model, which rocked the AI market broadly and seemed to hit Nvidia especially hard. The company’s stock price plummeted 16.9% in a single trading day after the DeepSeek news broke.

According to data from Yahoo Finance, Nvidia’s stock closed at $142.62 a share on January 24 and then dropped sharply the following Monday, January 27, closing at $118.52, a fall that erased roughly $600 billion of Nvidia’s market cap. The stock has largely recovered since then: on Friday it opened at $140 a share, meaning the company has regained most of that lost value in about a month. Nvidia reports its Q4 earnings on February 26, which will likely address the market reaction further. Meanwhile, DeepSeek announced on Thursday that it plans to open source five code repositories as part of an “open source week” event next week.
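The percentages follow directly from the closing prices quoted above; a quick illustrative check (plain Python, not market-data tooling):

```python
# Quick check of the drop and recovery implied by the prices quoted above.
close_jan_24 = 142.62   # USD per share, close on Friday, January 24
close_jan_27 = 118.52   # USD per share, close on Monday, January 27
open_friday = 140.00    # USD per share, the Friday open referenced above

one_day_drop = (close_jan_24 - close_jan_27) / close_jan_24
drop_regained = (open_friday - close_jan_27) / (close_jan_24 - close_jan_27)

print(f"Single-session drop: {one_day_drop:.1%}")             # ~16.9%
print(f"Share of the decline regained: {drop_regained:.0%}")  # ~89%
```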

Read More: OpenAI to Shift AI Compute from Microsoft to SoftBank

Apple Ends iCloud Encryption in UK After Government Demands

Apple has confirmed the removal of Advanced Data Protection (ADP) for iCloud backups in the UK following government demands for access to user data. This move means UK users will no longer have the option to secure their iCloud backups with end-to-end encryption, making it possible for authorities to request access to stored data under legal provisions.

Government Mandate Behind the Decision

The removal of ADP stems from the Investigatory Powers Act of 2016, which allows UK law enforcement to demand access to encrypted data through a Technical Capability Notice (TCN). According to a report from The Washington Post, the UK government issued such a notice to Apple, compelling the company to assist law enforcement with data collection by ensuring it can access encrypted information.

While these notices do not grant unrestricted access, they require companies to build mechanisms for retrieving data when legally compelled. Apple has long emphasized its commitment to user privacy and encryption but appears to have made this change to comply with UK regulations. A UK Home Office spokesperson declined to comment on whether a direct order was issued, stating, “We do not comment on operational matters, including confirming or denying the existence of such notices.”

Impact on iCloud Users in the UK

With the removal of ADP, UK users who rely on iCloud backups will no longer have the same level of encryption as users in other regions. This affects stored data, including messages, photos, and documents, which can now be accessed by Apple and shared with law enforcement upon legal request. Existing users who have already enabled ADP will not have it automatically disabled, but they will receive notifications prompting them to turn off the feature manually. Users who wish to maintain encryption must store their data locally on their devices without iCloud backup functionality.

Privacy and Security Concerns

Cybersecurity experts have raised concerns that this change weakens user privacy and data security. Many argue that once a government gains access to encrypted data, other nations may follow suit with similar demands. The move has also sparked fears of potential security risks, as reducing encryption may make user data more vulnerable to breaches and unauthorized access.

Industry Response and Future Implications

Digital rights organizations have criticized the decision, warning that it sets a precedent for further government intervention in encryption policies. Meredith Whittaker, president of Signal, has spoken against such measures, emphasizing that strong encryption is essential for security and digital privacy. Apple has maintained that while it is complying with UK law, it remains committed to encryption and will not create backdoors in its products. However, this move highlights the ongoing struggle between user privacy and government surveillance, with potential implications for tech companies operating in regions with strict data laws.

Read More: OpenAI Blocks Accounts in China & North Korea Over Misuse

OpenAI to Shift AI Compute from Microsoft to SoftBank

According to a report from The Information on Friday, OpenAI is forecasting a significant shift over the next five years in where it gets most of its computing power. The company is moving away from Microsoft’s cloud services and toward the SoftBank-backed Stargate project. By 2030, OpenAI expects 75 percent of its computing power to come from Stargate, a shift that carries both opportunity and risk. In the meantime, OpenAI will keep increasing its spending on Microsoft’s data centers over the next few years, even as its operational expenses rise sharply.

Reports indicate that OpenAI will burn through $20 billion in cash by 2027, a massive jump from the roughly $5 billion spent in 2024. By the end of the decade, OpenAI forecasts that the cost of running its AI models (inference) will surpass the cost of training them, a significant shift in its computing strategy. The move signals OpenAI’s push for greater independence in cloud infrastructure as it scales its models.

Why Is OpenAI Starting to Move Away from Microsoft?

With this move, OpenAI is positioning itself for a world where computing resources are more widely distributed. But is it the right move? Shifting computing power from Microsoft (whose Azure cloud powers OpenAI today) to the SoftBank-backed Stargate project will not happen overnight; there is a lot of work to be done. OpenAI has leaned heavily on Azure, but as AI costs have soared, the company appears to want more control over, and diversification of, its compute resources. Several factors could be driving the decision.

Microsoft’s increasing investment in its own in-house AI research could lead to strategic conflicts of interest with OpenAI down the line; for OpenAI, the shift could be a way to secure its long-term independence. OpenAI’s rising operational outlays, projected to surpass $20 billion by 2027, also call for a more flexible funding approach, and SoftBank is famous for its mega tech bets. Finally, OpenAI may want to reduce its reliance on a single cloud provider for strategic reasons, whether to mitigate potential regulatory scrutiny or geopolitical risk.

What It Signals About OpenAI’s Future

In leaning toward SoftBank-backed computing, OpenAI is making a calculated gamble. The move could offer more autonomy, tailor-made AI chips, and improved financial flexibility. However, SoftBank’s track record of backing volatile bets (think WeWork) raises the question of whether the partnership is sustainable in the long term.

With inference costs (i.e., running AI models) expected to exceed training costs by 2030, OpenAI needs a sustainable long-term solution, and the plan could backfire if the SoftBank-funded Stargate project fails to deliver the stability and efficiency that Microsoft Azure provides. Ultimately, OpenAI’s pivot away from Microsoft is a high-stakes transition that could determine its trajectory in the AI industry. If done right, it could solidify OpenAI’s role as a leading innovator; if the transition hits major roadblocks, it could slow OpenAI’s momentum in the AI race.

Read More: OpenAI Drops o3 AI Model to Unify AI Strategy with Game-Changing GPT-5

OpenAI Blocks Accounts in China & North Korea Over Misuse

OpenAI has announced the removal of user accounts from China and North Korea, saying it believes these users were employing its tools for malicious activities such as surveillance and opinion-influence operations. The action underscores OpenAI’s stated commitment to ensuring its technology is used ethically and responsibly. OpenAI did not specify the total number of accounts banned or the time frame of the activity.

As Reuters reported last Friday:

“The activities are ways authoritarian regimes could try to leverage AI against the U.S. as well as their own people, OpenAI said in a report, adding that it used AI tools to detect the operations.”

Identified Malicious Activities

OpenAI’s internal investigation revealed several concerning practices:

Propaganda Generation: Some users employed ChatGPT to create Spanish-language articles critical of the United States. These articles were subsequently published in mainstream Latin American media under the guise of a Chinese company’s authorship.

Fraudulent Employment Schemes: Actors with potential ties to North Korea utilized AI to fabricate resumes and online profiles. The objective was to deceitfully secure employment within Western corporations.

Financial Fraud Operations: A network based in Cambodia leveraged OpenAI’s technology to produce translated content. This content was disseminated across platforms like X (formerly Twitter) and Facebook, aiming to perpetrate financial scams.

OpenAI’s Proactive Measures

To detect and counteract these malicious endeavors, OpenAI harnessed its own AI-driven tools. While the company has not disclosed the exact number of accounts affected or the specific timeline of these activities, its swift response highlights the challenges tech companies face in preventing malicious entities’ exploitation of AI technologies.

The U.S. government has previously voiced apprehensions regarding the potential for AI technologies to be harnessed by authoritarian regimes for purposes such as domestic repression, dissemination of misinformation, and threats to international security. OpenAI’s recent actions align with efforts to prevent such misuse and emphasize the importance of vigilant monitoring and regulation in the AI sector.

The Future of AI Security

As AI continues to evolve and integrate into various facets of society, ensuring its ethical application remains paramount. OpenAI’s recent measures testify to the ongoing efforts required to safeguard technology from being weaponized for malicious intents.

Read More: OpenAI launched Deep Research, ChatGPT’s new AI agent for advanced level research

Meta Faces Legal Battle Over AI Training with Copyrighted Content

Meta is under intense scrutiny after newly unsealed court documents revealed internal discussions about using copyrighted content, including pirated books, to train its AI models. The revelations, part of the Kadrey v. Meta lawsuit, shed light on how Meta employees weighed the legal risks of using unlicensed data while attempting to keep pace with AI competitors.

Internal Deliberations Over Copyrighted Content

Court documents show that Meta employees debated whether to train AI models on copyrighted materials without explicit permission. In internal work chats, staff discussed acquiring copyrighted books without licensing deals and escalating the decision to company executives.

According to the filings, Meta research engineer Xavier Martinet suggested an “ask forgiveness, not for permission” approach in a chat dated February 2023, stating:

“[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”

He further argued that negotiating deals with publishers was inefficient and that competitors were likely already using pirated data.

“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”

Meta’s AI leadership acknowledged that licenses were needed for publicly available data, but employees noted that the company’s legal team was becoming more flexible on approving training data sources.

Talks of Libgen and Legal Risks

The filings reveal that Meta employees discussed using Libgen, a site known for providing unauthorized access to copyrighted books. In one chat, Melanie Kambadur, a senior manager on Meta’s Llama model research team, suggested using Libgen as an alternative to licensed datasets.

According to the filings, in one conversation Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” emphasizing that without it, Meta’s AI models might fall behind state-of-the-art (SOTA) benchmarks.

Theakanath also proposed strategies to mitigate legal risks, including removing data from Libgen that was “clearly marked as pirated/stolen” and ensuring that Meta would not publicly cite its use of the dataset.

“We would not disclose use of Libgen datasets used to train,” he wrote in an internal email to Meta AI VP Joelle Pineau.

Further discussions among Meta employees suggested that the company attempted to filter out risky content from Libgen files by searching for terms like “stolen” or “pirated” while still leveraging the remaining data for AI training.

Despite concerns raised by some staff, including a Google search result stating “No, Libgen is not legal,” discussions about utilizing the platform continued internally.

Meta’s AI Data Sources and Training Strategies

Additional filings suggest that Meta explored scraping Reddit data using techniques similar to those employed by a third-party service, Pushshift. There were also discussions about revisiting past decisions not to use Quora content, scientific articles, and licensed books. In a March 2024 chat, Chaya Nayak, director of product management for Meta’s generative AI division, indicated that leadership was considering overriding prior restrictions on training sets.

She emphasized the need for more diverse data sources, stating: “[W]e need more data.” Meta’s AI team also worked on tuning models to avoid reproducing copyrighted content, blocking responses to direct requests for protected materials and preventing AI from revealing its training data sources.

Legal and Industry Implications

The plaintiffs in Kadrey v. Meta have amended their lawsuit multiple times since filing in 2023 in the U.S. District Court for the Northern District of California. The latest claims allege that Meta not only used pirated data but also cross-referenced copyrighted books with available licensed versions to determine whether to pursue publishing agreements.

In response to the growing legal pressure, Meta has strengthened its legal defense by adding two Supreme Court litigators from the law firm Paul Weiss to its team. Meta has not yet publicly addressed these latest allegations. However, the case highlights the ongoing conflict between AI companies’ need for massive datasets and the legal protections surrounding intellectual property. The outcome could set a major precedent for how AI companies train models and navigate copyright laws in the future.

Read More: Meta & X Approved Anti-Muslim Hate Speech Ads Before German Election, Study Reveals