Google has directed its employees to exercise caution when using AI chatbots, including its own Bard and OpenAI's ChatGPT. Concerned about potential leaks of confidential information, the tech giant has advised staff against entering sensitive data into these platforms. The warning rests on two facts: human reviewers may read chat entries, and the chatbots themselves can use previous interactions as training data. Recent incidents, such as the Samsung data leak, have underscored the need for vigilance when handling confidential information on these AI-driven platforms.
Reuters, citing four anonymous sources close to the matter, reports that Alphabet has instructed its employees not to share confidential information with AI chatbots. Alphabet is concerned both that human reviewers may access sensitive data entered into these systems and that the chatbots' use of past inputs for training creates a further avenue for leaks. Samsung's recent acknowledgment that internal data leaked after staff used ChatGPT reinforces the validity of these concerns.
Alphabet's cautionary stance aligns with recent actions by other tech giants. In January, an Amazon lawyer urged employees not to share code or any other confidential information with ChatGPT. Likewise, Apple barred its employees from using ChatGPT and GitHub Copilot, Microsoft's AI coding assistant. Apple's move coincides with its ambition to develop its own large language model, evident from its acquisition of two AI startups in 2020.
Google's answer to the rising demand for AI chatbots was Bard, powered by its in-house language model, LaMDA. In the run-up to Bard's launch, Google CEO Sundar Pichai encouraged Googlers to test the chatbot for several hours each day. However, Bard's release in the European Union was recently delayed over privacy concerns raised by Irish regulators, with the Irish Data Protection Commission contending that Google had not shown Bard to be compliant with EU personal data protection law.
As the development and deployment of AI chatbots advance, tech giants like Alphabet, Amazon, and Apple are taking proactive measures to protect sensitive information from potential leaks. Alphabet's directive to its employees reflects growing awareness of the risks these chatbots pose and the need for caution when handling confidential data. Bard's delayed EU release makes clear that privacy concerns surrounding these platforms remain a key focus. Striking a balance between AI-driven innovation and data protection will be crucial to building trust and ensuring these powerful tools are used securely in the future.