{"id":50562,"date":"2024-02-14T06:44:17","date_gmt":"2024-02-14T11:44:17","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?p=50562"},"modified":"2025-02-03T06:20:13","modified_gmt":"2025-02-03T11:20:13","slug":"how-to-use-chatgpt-ai-assistants-securely-2024","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/50562\/","title":{"rendered":"How to use ChatGPT, Gemini, DeepSeek and other AI securely"},"content":{"rendered":"<p>Last year\u2019s explosive growth in AI applications, services, and plug-ins looks set to only accelerate. From office applications and image editors to integrated development environments (IDEs) such as Visual Studio\u00a0\u2014 AI is being added to familiar and long-used tools. Plenty of developers are creating thousands of new apps that tap the largest AI models. However, no one in this race has yet been able to solve the inherent security issues: first and foremost, minimizing confidential data leaks, as well as the risk of account\/device hacking through various AI tools \u2014 let alone create proper safeguards against a futuristic \u201cevil AI\u201d. Until someone comes up with an off-the-shelf solution for protecting the users of AI assistants, you\u2019ll have to pick up a few skills and help yourself.<\/p>\n<p>So, how do you use AI without regretting it later?<\/p>\n<h2>Filter important data<\/h2>\n<p><a href=\"https:\/\/openai.com\/policies\/privacy-policy\" target=\"_blank\" rel=\"nofollow noopener\">The privacy policy of OpenAI<\/a>, the developer of ChatGPT, unequivocally states that any dialogs with the chatbot are saved and can be used for a number of purposes. First, to solve technical issues and prevent terms-of-service violations: in case someone decides to generate inappropriate content. Who would have thought it, right? In that case, chats may even be reviewed by a human. 
Second, the data may be used for training new GPT versions and making other product \u201cimprovements\u201d.<\/p>\n<p>Most other popular language models \u2014 be it Google\u2019s Gemini, Anthropic\u2019s Claude, DeepSeek, or Microsoft\u2019s Bing and Copilot \u2014 have similar policies: they can all save dialogs in their entirety.<\/p>\n<p>That said, inadvertent chat leaks <a href=\"https:\/\/www.bbc.com\/news\/technology-65047304\" target=\"_blank\" rel=\"nofollow noopener\">have already occurred<\/a> due to software bugs, with users seeing other people\u2019s conversations instead of their own. The use of this data for training could also lead to a <a href=\"https:\/\/www.zdnet.com\/article\/chatgpt-can-leak-source-data-violate-privacy-says-googles-deepmind\/\" target=\"_blank\" rel=\"nofollow noopener\">data leak from a pre-trained model<\/a>: the AI assistant might give your information to someone if it believes it to be relevant to the response. Information security experts have even designed multiple attacks (<a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/openai-rolls-out-imperfect-fix-for-chatgpt-data-leak-flaw\/\" target=\"_blank\" rel=\"nofollow noopener\">one<\/a>, <a href=\"https:\/\/embracethered.com\/blog\/posts\/2023\/google-bard-data-exfiltration\/\" target=\"_blank\" rel=\"nofollow noopener\">two<\/a>, <a href=\"https:\/\/promptarmor.substack.com\/p\/data-exfiltration-from-writercom\" target=\"_blank\" rel=\"nofollow noopener\">three<\/a>) aimed at stealing dialogs, and they\u2019re unlikely to stop there.<\/p>\n<p>So, remember: anything you write to a chatbot can be used against you. We recommend taking precautions when talking to AI.<\/p>\n<p><strong>Don\u2019t send any personal data to a chatbot. <\/strong>No passwords, passport or bank card numbers, addresses, telephone numbers, names, or other personal data belonging to you, your company, or your customers should ever end up in chats with an AI. 
You can replace these with asterisks or \u201cREDACTED\u201d in your request.<\/p>\n<p><strong>Don\u2019t upload any documents. <\/strong>Numerous plug-ins and add-ons let you use chatbots for document processing. There might be a strong temptation to upload a work document to, say, get an executive summary. However, by carelessly uploading a multi-page document, you risk <a href=\"https:\/\/mashable.com\/article\/samsung-chatgpt-leak-details\" target=\"_blank\" rel=\"nofollow noopener\">leaking confidential data<\/a>, intellectual property, or a commercial secret such as the release date of a new product or the entire team\u2019s payroll. Or, worse than that, when processing documents received from external sources, you might be <a href=\"https:\/\/embracethered.com\/blog\/posts\/2023\/google-docs-ai-scam\/\" target=\"_blank\" rel=\"nofollow noopener\">targeted with an attack<\/a> that relies on the document being scanned by a language model.<\/p>\n<p><strong>Use privacy settings.<\/strong> Carefully review your large-language-model (LLM) vendor\u2019s privacy policy and available settings: these can normally be leveraged to minimize tracking. For example, OpenAI products let you disable saving of chat history. In that case, data will be removed after 30 days and never used for training. Those who use the API, third-party apps, or services to access OpenAI solutions have that setting enabled by default.<\/p>\n<p><strong>Sending code? Clean up any confidential data.<\/strong> This tip goes out to those software engineers who use AI assistants for reviewing and improving their code: remove any API keys, server addresses, or any other information that could give away the structure of the application or the server configuration.<\/p>\n<h2>Limit the use of third-party applications and plug-ins<\/h2>\n<p>Follow the above tips every time \u2014 no matter what popular AI assistant you\u2019re using. However, even this may not be sufficient to ensure privacy. 
The use of ChatGPT plug-ins, Gemini extensions, or separate add-on applications gives rise to new types of threats.<\/p>\n<p>First, your chat history may now be stored not only on Google or OpenAI servers but also on servers belonging to the third party that supports the plug-in or add-on, as well as in unexpected corners of your computer or smartphone.<\/p>\n<p>Second, most plug-ins draw information from external sources: web searches, your Gmail inbox, or personal notes from services such as Notion, Jupyter, or Evernote. As a result, any of your data from those services may also end up on the servers where the plug-in or the language model itself is running. An integration like that may carry significant risks: for example, consider this <a href=\"https:\/\/embracethered.com\/blog\/posts\/2023\/chatgpt-plugin-vulns-chat-with-code\/\" target=\"_blank\" rel=\"nofollow noopener\">attack that creates new GitHub repositories on behalf of the user<\/a>.<\/p>\n<p>Third, the publication and verification of plug-ins for AI assistants are currently a much less orderly process than, say, app screening in the App Store or Google Play. Therefore, your chances of encountering a poorly working, badly written, buggy, or even plain malicious plug-in are fairly high \u2014 all the more so because it seems <a href=\"https:\/\/embracethered.com\/blog\/posts\/2023\/chatgpt-plugin-vulns-chat-with-code\/\" target=\"_blank\" rel=\"nofollow noopener\">no one really checks<\/a> the creators or their contacts.<\/p>\n<p>How do you mitigate these risks? Our key tip here is to give it some time. The plug-in ecosystem is too young, the publication and support processes aren\u2019t smooth enough, and the creators themselves don\u2019t always take care to design plug-ins properly or comply with information security requirements. 
This whole ecosystem needs more time to mature and become more secure and reliable.<\/p>\n<p>Besides, the value that many plug-ins and add-ons add to the stock ChatGPT version is minimal: minor UI tweaks and \u201csystem prompt\u201d templates that customize the assistant for a specific task (\u201cAct as a high-school physics teacher\u2026\u201d). These wrappers certainly aren\u2019t worth trusting with your data, as you can accomplish the task just fine without them.<\/p>\n<p>If you do need certain plug-in features right here and now, take all available precautions before using them.<\/p>\n<ul>\n<li>Choose extensions and add-ons that have been around for at least several months and are updated regularly.<\/li>\n<li>Consider only plug-ins that have lots of downloads, and carefully read the reviews for any issues.<\/li>\n<li>If the plug-in comes with a privacy policy, read it carefully <strong>before<\/strong> you start using the extension.<\/li>\n<li>Opt for open-source tools.<\/li>\n<li>If you possess even rudimentary coding skills \u2014 or coder friends \u2014 skim the code to make sure that it only sends data to declared servers and, ideally, AI model servers only.<\/li>\n<\/ul>\n<h2>Execution plug-ins call for special monitoring<\/h2>\n<p>So far, we\u2019ve been discussing risks relating to data leaks; but this isn\u2019t the only potential issue when using AI. Many plug-ins are capable of performing specific actions at the user\u2019s command \u2014 such as ordering airline tickets. These tools provide malicious actors with a new attack vector: the victim is presented with a document, web page, video, or even an image that contains concealed instructions for the language model in addition to the main content. If the victim feeds the document or link to a chatbot, the latter will execute the malicious instructions \u2014 for example, by buying tickets with the victim\u2019s money. 
This type of attack is referred to as <a href=\"https:\/\/embracethered.com\/blog\/posts\/2023\/google-bard-image-to-prompt-injection\/\" target=\"_blank\" rel=\"nofollow noopener\">prompt injection<\/a>, and although the developers of various LLMs are trying to develop safeguards against this threat, no one has fully managed it \u2014 and perhaps no one ever will.<\/p>\n<p>Luckily, most significant actions \u2014 especially those involving payment transactions such as purchasing tickets \u2014 require double confirmation. However, interactions between language models and plug-ins create an <a href=\"https:\/\/encyclopedia.kaspersky.com\/glossary\/attack-surface\/\" target=\"_blank\" rel=\"noopener\">attack surface<\/a> so large that it\u2019s difficult to guarantee consistent protection from these measures.<\/p>\n<p>Therefore, you need to be really thorough when selecting AI tools, and also make sure that they only receive trusted data for processing.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"premium-geek\">\n","protected":false},"excerpt":{"rendered":"<p>AI tools can be seen everywhere \u2014 from operating systems and office suites to image editors and chats. 
How do you use ChatGPT, Gemini, DeepSeek and the many add-ons to these without jeopardizing your digital security?<\/p>\n","protected":false},"author":2722,"featured_media":50564,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[9],"tags":[1140,4563,1779,4414,282,4564,1876],"class_list":{"0":"post-50562","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tips","8":"tag-ai","9":"tag-bard","10":"tag-chatbots","11":"tag-chatgpt","12":"tag-cybersecurity","13":"tag-gemini","14":"tag-machine-learning"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/50562\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/27067\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/22377\/"},{"hreflang":"ar","url":"https:\/\/me.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/11413\/"},{"hreflang":"en-us","url":"https:\/\/usa.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/29733\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/27243\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/27034\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/29651\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/28531\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/36962\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/12053\/"},{"hre
flang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/21528\/"},{"hreflang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/22239\/"},{"hreflang":"de","url":"https:\/\/www.kaspersky.de\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/30940\/"},{"hreflang":"ja","url":"https:\/\/blog.kaspersky.co.jp\/how-to-use-chatgpt-ai-assistants-securely-2024\/35827\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/how-to-use-chatgpt-ai-assistants-securely-2024\/27442\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/33249\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/how-to-use-chatgpt-ai-assistants-securely-2024\/32873\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/50562","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=50562"}],"version-history":[{"count":9,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/50562\/revisions"}],"predecessor-version":[{"id":52957,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/50562\/revisions\/52957"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/50564"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=50562"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\
/wp\/v2\/categories?post=50562"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=50562"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}