FTC Investigates OpenAI Over Handling of Users’ Data by ChatGPT

The Federal Trade Commission (FTC) has commenced an investigation into artificial intelligence company OpenAI, the maker of ChatGPT, over the chatbot’s handling of users’ data.

In a 20-page letter to OpenAI, the FTC asked the company to respond to dozens of requests, ranging from how it obtains the data it uses to train its large language models (LLMs) to how it generates information about individuals.

The request from the FTC calls for descriptions of how OpenAI tests, tweaks, and manipulates its algorithms, particularly to produce different responses, respond to risks, and operate in different languages.

The request also asks the company to explain any steps it has taken to address cases of “hallucination,” an industry term describing outcomes where an AI generates false information.

The FTC’s investigation into the artificial intelligence company comes after it received reports that OpenAI’s ChatGPT generated false and misleading statements about real individuals.

Recall that in April 2023, a regional Australian mayor, Brian Hood, threatened to sue OpenAI if it did not correct ChatGPT’s false claims that he had served time in prison for bribery, in what would be the first defamation lawsuit against the AI company.

The mayor became concerned about his reputation when members of the public brought to his notice that ChatGPT had falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

His lawyers therefore sent a letter of concern to OpenAI on March 21, giving the company 28 days to fix the errors about their client or face a possible defamation lawsuit.

A recent check of Brian Hood on ChatGPT is met with the message, “I’m unable to produce a response,” suggesting that OpenAI has taken down the defamatory information concerning the mayor.

It is understood that publishing such misleading information could amount to defamation, libel, or simply “reputational damage,” as the FTC’s current letter to OpenAI reportedly terms it.

OpenAI has been upfront about some of the limitations of its products. For example, the white paper accompanying its latest release, GPT-4, explains that the model may produce content that is nonsensical or untruthful in relation to certain sources.

One of ChatGPT’s major shortcomings is its tendency to produce inaccurate or incomprehensible text even while generating plausible and compelling responses. This hallucination defect is pervasive across language models, and ChatGPT is not immune to it. The company has, however, continued to warn users that ChatGPT can occasionally generate incorrect facts.

Notably, the FTC has issued multiple warnings that existing consumer protection laws apply to AI, even as the administration and Congress struggle to outline new regulations.

The FTC’s demands for OpenAI are the first indication of how it intends to enforce those warnings. If the FTC finds that a company violates consumer protection laws, it can levy fines or put a business under a consent decree, which can dictate how the company handles data.

The FTC’s probe comes as the US government takes its first tentative steps toward establishing rules for artificial intelligence tools. It is worth noting that the Biden administration previously introduced a “guide” to the development of AI systems in the form of a voluntary “bill of rights,” which outlines five principles that companies should consider for their products.

These include data privacy, protections against algorithmic discrimination, and transparency around when and how an automated system is being used.

Meanwhile, OpenAI has struck a deal with the AP on using AI systems to improve journalism.

The Associated Press announced it’s embarking on a two-year partnership with OpenAI, the generative artificial intelligence company behind ChatGPT. Though the chatbot’s text-generating capabilities have raised questions about whether such technology will ultimately enhance or eliminate newsrooms, the “AP hopes to be an industry leader in developing standards and best practices” for AI use in journalism, writes Axios. Per a joint statement, OpenAI will use stories from the newswire’s “high-quality, factual text archive” dating back to 1985 to train its algorithms, and the AP will purportedly benefit from its partner’s “technology and product expertise.”

  • The AP helped pioneer the use of automation in news gathering, leaning on AI for about 10 years now in coverage of corporate earnings and local sporting events.
  • Generative AI is not used to produce AP news stories, but other newsrooms have attempted to incorporate the technology with mixed results.
  • Authors, visual artists and source-code owners have sued generative AI companies for intellectual property violations when their work has allegedly been used to train AI systems.
