ChatGPT records everything you type; here is a privacy-focused alternative

There is an alternative for users who have concerns about divulging their personal information to an AI chatbot: PrivateGPT

BS Web Team New Delhi



ChatGPT is a powerful artificial intelligence (AI) tool with a wide range of applications. But the AI chatbot's privacy policy has raised concerns about the security of personal information. Several companies, including Samsung, JPMorgan, Apple, and Amazon, have banned their employees from using ChatGPT over concerns that their confidential information could be exposed.
However, there is an alternative for users who have concerns about divulging their personal information to an online chatbot: PrivateGPT. It is an open-source model that lets users post queries and questions without an internet connection. It runs locally on a user's device and is capable of operating fully offline.

To run the application, created by a developer named Ivan Martinez Toro, users first need to download an open-source Large Language Model called GPT4All. They then place all their relevant files into a directory for the model to ingest. Once the data is ingested, users can ask the bot questions and it will answer using those documents. The tool can ingest over 58,000 words and requires a capable CPU to set up. The PrivateGPT documentation emphasises that no data leaves the local environment during the ingest process.
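The steps above can be sketched roughly as the following command sequence. This is an illustrative outline only, not runnable as-is: the script names, directory name, and model filename follow the PrivateGPT project's README around the time of launch and are assumptions that may have changed since.

```
# 1. Get the PrivateGPT code and install its dependencies
git clone https://github.com/imartinez/privateGPT
cd privateGPT
pip install -r requirements.txt

# 2. Download a GPT4All model file and point the app at it
#    (model filename is an example from the README of the time)
#    MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin

# 3. Place your documents in the ingest directory
cp ~/my-reports/*.txt source_documents/

# 4. Ingest: build a local index of the documents
#    (no data leaves the machine during this step)
python ingest.py

# 5. Ask questions against the ingested documents, fully offline
python privateGPT.py
```

The key design point is step 4: the documents are embedded and indexed on the local machine, so both ingestion and question-answering work with no network connection.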
“PrivateGPT at its current state is a proof-of-concept (POC), a demo that proves the feasibility of creating a fully local version of a ChatGPT-like assistant that can ingest documents and answer questions about them without any data leaving the computer (it can even run offline),” Toro told Motherboard, an online magazine. 

Toro said that he made this app after seeing how valuable ChatGPT has become in the workplace.
With the increasing use of generative AI models by corporations, concerns have been raised about data safety. A major data leak incident happened in April, when three Samsung employees in South Korea accidentally shared sensitive information with ChatGPT.


Apart from the risks of company-wide leaks, individuals have also shared concerns about accidentally leaking private information online. Italy banned ChatGPT for around a month over concerns that the chatbot's use of people's personal information violated the EU's General Data Protection Regulation (GDPR), a data privacy law.
To protect their sensitive data, some companies are looking into creating their own LLMs, which can be fine-tuned on internal documents to offer personalised assistance to employees. Samsung has reportedly teamed up with Naver to develop its own AI chatbot for internal use.

First Published: May 25 2023 | 6:25 PM IST
