Google Warns Employees About the Use of Chatbots, Including Bard

Google has warned its employees to be cautious in how they use generative AI chatbots, including Bard and ChatGPT, amid concerns that the technology, for now, cannot be trusted.

Reuters reported, quoting four people familiar with the matter, that the warning includes advice to employees not to enter confidential materials into AI chatbots. The advice, according to the people, is based on the company’s long-standing policy on safeguarding information.

The recently released chatbots, including Google-owned Bard, took the tech world by storm with their ability to produce human-like answers to prompts on a wide range of topics.


The warning stems from concern that human reviewers may read the chats, and from researchers’ findings that similar AI models can reproduce the data they absorbed during training, creating a leak risk, according to the report.

The sources said Alphabet has advised its engineers to refrain from directly using computer code generated by chatbots. The company acknowledged that its chatbot, Bard, may provide unwanted code suggestions but stated that it still aids programmers. Google emphasized its commitment to transparency regarding the limitations of its technology.

The rest of the report highlights other major concerns surrounding the developers of generative AI technology:

The concerns show how Google wishes to avoid business harm from the software it launched in competition with ChatGPT. At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs.

Google’s caution also reflects what’s becoming a security standard for corporations, namely to warn personnel about using publicly available chat programs.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, reportedly has as well.

Some 43% of professionals were using ChatGPT or other AI tools as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents including from top U.S.-based companies, done by the networking site Fishbowl.

By February, Google told staff testing Bard before its launch not to give it internal information, Insider reported. Now Google is rolling out Bard to more than 180 countries and in 40 languages as a springboard for creativity, and its warnings extend to its code suggestions.

Google told Reuters it has had detailed conversations with Ireland’s Data Protection Commission and is addressing regulators’ questions, after a Politico report Tuesday that the company was postponing Bard’s EU launch this week pending more information about the chatbot’s impact on privacy.

WORRIES ABOUT SENSITIVE INFORMATION

Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel.

A Google privacy notice updated on June 1 also states: “Don’t include confidential or sensitive information in your Bard conversations.”

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.
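
To illustrate the general idea behind such tooling, here is a minimal Python sketch of an outbound filter that tags and redacts sensitive spans in a prompt before it leaves the company network. The pattern names and regexes are illustrative assumptions for the sake of the example, not Cloudflare’s actual product or API.

```python
import re

# Hypothetical patterns a company might tag as confidential; the names and
# regexes below are illustrative assumptions, not any vendor's real rules.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_tag": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_outbound_text(text: str) -> tuple[str, list[str]]:
    """Replace tagged sensitive spans before text is sent externally.

    Returns the redacted text plus the names of the patterns that matched,
    so a policy layer could log the event or block the request entirely.
    """
    matched = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            matched.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, matched

if __name__ == "__main__":
    prompt = "Summarize this CONFIDENTIAL memo and email jane.doe@example.com"
    safe_prompt, hits = redact_outbound_text(prompt)
    print(safe_prompt)  # sensitive spans replaced with placeholders
    print(hits)         # ['internal_tag', 'email']
```

In practice, a filter like this would sit in a proxy or gateway between employees and public chatbot endpoints, which is the architectural niche the report describes.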

Google and Microsoft also are offering conversational tools to business customers that will come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users’ conversation history, which users can opt to delete.

It “makes sense” that companies would not want their staff to use public chatbots for work, said Yusuf Mehdi, Microsoft’s consumer chief marketing officer.

“Companies are taking a duly conservative standpoint,” said Mehdi, explaining how Microsoft’s free Bing chatbot compares with its enterprise software. “There, our policies are much more strict.”

Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like “turning a bunch of PhD students loose in all of your private records.”
