The Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC) against OpenAI, claiming that its latest language model, GPT-4, violates FTC rules against deception and unfairness.
The CAIDP contends that GPT-4 poses a risk to privacy and public safety, alleging that OpenAI makes unproven claims about the model and has not adequately tested it.
This complaint follows an open letter signed by major figures in AI, including Elon Musk, calling for a six-month pause on training systems more powerful than GPT-4.
The complaint asks the FTC to investigate OpenAI and to find that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in commerce. It also asks the agency to establish guidance on the governance of AI that reflects emerging norms for its use.
The complaint further characterizes GPT-4 as biased and deceptive, reiterating that it endangers privacy and public safety.
The CAIDP cites OpenAI's own past reports to argue that the company is aware of the potential for disinformation and influence operations, as well as of concerns that AI could aid the proliferation of weapons.
The complaint also notes that OpenAI has warned that AI systems can reinforce ideologies, worldviews, truths, and untruths, potentially locking them in and foreclosing future contestation, reflection, and improvement.
The complaint criticizes OpenAI for allegedly not conducting safety checks aimed at protecting children during GPT-4’s testing period.
The CAIDP quotes Ursula Pachl, Deputy Director of the European Consumer Organization (BEUC), who argues that public authorities must take action if a company fails to address concerns with AI algorithms.
By quoting Pachl, the CAIDP is calling for government regulation of AI. European regulators are already weighing a risk-based regulatory approach to AI.
Meanwhile, companies are seeking to profit from the generative AI space: Microsoft's Bing, for example, uses GPT-4 and generates ad revenue from it. Such companies are likely watching closely for the FTC's response to the CAIDP's complaint.