Google tells employees not to share confidential materials with AI chatbots including Bard
In Short
- Google’s warning somewhat contradicts its earlier position with Bard.
- After the software giant rolled out Bard earlier this year to rival ChatGPT, employees were asked to use the AI chatbot extensively.
- Google now wants employees not to share sensitive information with Bard.
Google is reportedly warning employees against sharing confidential information with AI chatbots, including ChatGPT and the company’s own Bard. As reported by Reuters, the warning is aimed at safeguarding sensitive information, which large language models (LLMs) like Bard and ChatGPT can use for training and potentially leak at a later stage. Sensitive information can also be viewed by the human reviewers who act as moderators. The report highlights that Google engineers are also being warned not to use code generated by AI chatbots.
Google Bard’s FAQ notes that the company collects conversation history, location, feedback, and usage information when there’s an interaction with the chatbot. The page reads, “That data helps us provide, improve and develop Google products, services, and machine-learning technologies.”
However, the report suggests Google employees can still use Bard for other work. Google’s warning somewhat contradicts its earlier position on Bard. After the software giant rolled out Bard earlier this year to rival ChatGPT, employees were asked to use the AI chatbot extensively to test its strengths and weaknesses.
Google’s warning to its employees also echoes a security standard many corporations are adopting. Some companies have banned the use of publicly available AI chatbots altogether. Samsung was among the companies that reportedly banned ChatGPT after some employees were caught sharing sensitive information.
In a statement, Google told the publication that the company wanted to be “transparent” about Bard’s limitations. The company notes, “Bard can make undesired code suggestions, but it helps programmers nonetheless.” The AI chatbot can also draft emails, review code, proofread long essays, solve math problems, and even generate images in seconds.
Speaking about security concerns with free-to-use AI chatbots, Matthew Prince, CEO of Cloudflare, said sharing private information with chatbots was like “turning a bunch of PhD students loose in all of your private records.”
Cloudflare, which offers cybersecurity services to enterprises, is marketing a capability that lets businesses tag certain data and restrict it from flowing externally. Microsoft is also working on a private ChatGPT chatbot for enterprise customers; its partnership with OpenAI lets the company market and build platforms under the ChatGPT moniker. The private chatbot is said to run on Microsoft’s own cloud networks. It remains unclear whether Microsoft has imposed restrictions on Bing Chat similar to those Google has placed on Bard.
The report, citing Microsoft’s consumer chief marketing officer Yusuf Mehdi, notes that “companies are taking a duly conservative standpoint.” Mehdi was referring to the company’s work on private ChatGPT services for business customers.