How to protect your digital traces online

Written by: Cohen Gabriella, Comi Sofia, Echhoff Laura, Kaplan Jenna, Mahesh Adithi, Maranaku Erla, Phan Renee

Introduction

When we were little kids, we were excited by the snow. We put on our jackets and ran outside to have snowball fights, step on big chunks of snow and build snowmen. The whole town was covered with children’s handprints and footprints. However, these traces were easily covered by more snow or disappeared completely with the first rays of sunshine. No one was concerned that the footprints in the snow would leave any marks.

Similarly, today we continue to leave online traces without considering whether and to what extent they will leave lasting marks. In the morning, we spend a few minutes reading the news and accepting cookies that will then take us back to the same website, we reply to friends or partners who send us their good mornings, we like photos of old classmates who travel around the world, we reply to emails, we shop online, and we navigate the chaotic city center using Google Maps. Individually, all these instinctive daily processes seem harmless as they make our lives easier and more updated. However, taking a step back and looking at the big picture, the traces we leave behind are countless and cannot be erased by snow or the first rays of sunshine.

More specifically, it is a well-known fact that our data is continuously collected and processed by social media companies, which benefit significantly from our digital traces. They collect data through our messages, searches and preferences to personalize our online experience. As a result, the ads we receive make us believe that our mobile phone is listening to us, and the appearance of friends’ profiles on Instagram seems almost karmic. Behind all this, there is no great invisible force, but rather the technological development of complex algorithms that process our data in a way that offers the most satisfying and partly predictable results for each of us.

Although at first glance this reality seems favorable for our communication and productivity, the truth is that the indiscriminate collection of personal, local, financial and social media data can make us extremely vulnerable. Studies have shown that an increasing number of social media users are falling victim to hacking, financial scams, fraud and cyberattacks. As a result, we are at risk of being completely exposed to dangers that we cannot perceive with the naked eye, simply because we leave traces online without taking precautions.

Be careful and watch your six, we were told when we went out to play snowball fights. The same goes for your digital traces. This handbook does not aim to convince you to abandon social media and navigate the internet with fear, but to show you ways in which you can stay safe and connected at the same time. Below, you will find comprehensive information about digital footprints and the tools at your disposal to safeguard your conversations on platforms such as WhatsApp, Instagram, ChatGPT and LinkedIn. Before you fall into a hacker’s trap, remember that proaction is usually more effective than reaction. Fasten your seatbelts, open your phone settings, and follow us.

Why does online safety matter?

“Protect your Digital ID!” These are words I hear more and more online, evoking a feeling of stress. Where can I begin? What even is a digital ID? It can feel like an impossible task, especially when all you hear in the media is that the European Union (EU) is scaling back its rules on AI and data privacy. How, as an individual, are you supposed to protect yourself? 

Here we have a few answers to these somewhat mammoth questions. Let’s begin with a digital ID. Your digital identity is a replica of your physical self online. It is your digital twin or virtual self. It consists of your personal data (identifiable information about yourself) that can include things like your name, home address, IP address, email address and so on. Information that is permanently anonymous (a very challenging thing to achieve) is not personal data. When your data in the virtual world is being sent from one place to another, people can make it a little safer temporarily by de-identifying, encrypting or pseudonymising that information. However, these are all reversible steps and so are still classified as personal information under the EU’s General Data Protection Regulation (GDPR). 

Anyone can collect your data online: individuals, private or public organisations, and the list goes on. Data brokers, or traders of data, are the real ones to fear here, as they collect and aggregate your personal and non-personal data and then monetize it. They are not interested in who buys your data or what they do with it, which is slightly concerning. They mainly get your information from other companies (which collect data on you when you interact with their products and services) and from public sources too. The most common third parties buying your data from them include advertising companies, insurance companies, and even law enforcement and political campaigners.  

They can use your digital footprint, the tracks and traces of what you are doing online, sometimes even to predict what you will want next. It spans everything from what you post on social media, to what song you just listened to on Spotify, to where you bought that present for your friend, at what time, and with what credit card. 

To take it even further, your digital fingerprint can be tracked too. This refers more to the hardware and software you use, such as the fonts installed on your laptop and the type of keyboard you use (e.g., QWERTY). All together, this creates a detailed image of who you are. Sometimes you are aware of the data being collected. For example, you may agree to Zara’s cookies when you visit their website. Often, data collection is subtler. App developers insert something called Software Development Kits (SDKs) into mobile apps so information can be gathered and sent to servers beyond the app. For example, if you own an Apple Watch, it can gather details on your heart rate and quality of sleep that may end up with data brokers. 
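To make this concrete, here is a minimal Python sketch of how a tracker can boil seemingly innocuous device details down to one stable identifier. The attribute names and values are invented for illustration; real fingerprinting libraries collect dozens more signals.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Combine device attributes into a single stable identifier.

    The same device always yields the same fingerprint, while any
    change in an attribute produces a completely different hash.
    """
    # Sort keys so the fingerprint doesn't depend on dictionary order
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative attributes a tracker might observe
attrs = {
    "fonts": "Arial;Helvetica;Times New Roman",
    "keyboard_layout": "QWERTY",
    "screen_resolution": "2560x1600",
    "timezone": "Europe/Athens",
}
print(device_fingerprint(attrs))
```

No single attribute identifies you, but the combination often does: the more details collected, the more unique the resulting hash.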

Nowadays, almost everyone engages with social media one way or another. It seems like a scary data ecosystem out there but, with some simple steps, you can push back on this data tracking and preserve your online space a little more. The following sections showcase all you need to know to protect yourself on some of the most mainstream social media applications of our time. Buckle up and prepare to become the most protected online version of yourself. 

WhatsApp

Prima facie, WhatsApp is just a messaging app to stay in touch with family and friends. Most interactions with other users on the platform are not public, since they are either direct conversations or conversations within a group chat. The platform itself doesn’t push content to you, per se. Hence, you might be lulled into thinking that the established fears and threats on other platforms are not as worrying on WhatsApp, but that is not necessarily the case. 

Using WhatsApp, like any social media platform, requires that we also take steps to protect ourselves from scams, phishing, and malicious attacks, as well as protect the safety of our information from malevolent actors, businesses, and advertisers. 

The following guide will introduce you to basic steps that can be taken to protect your WhatsApp account both from hacking and harassment (“How to stay SAFE on WhatsApp”) and from privacy violations (“How to stay PRIVATE on WhatsApp”).

How to stay SAFE on WhatsApp

Remember to always lock the front door: use a strong, unique password and Two-Factor Authentication (2FA). This is currently the most reliable way to prevent unauthorised entities from accessing your WhatsApp account and data. It also helps the app recognize that it is indeed you trying to access your account, no matter which device you use.

A strong, unique password is at least 16 characters long and contains upper- and lower-case letters, numbers, and special characters. Any random password generator available for free online can generate a password of this kind for you. 
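If you prefer to generate one yourself, a few lines of Python using the standard library’s `secrets` module do the job. This is a generic sketch, not a WhatsApp feature:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing all four character classes."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until letters (both cases), digits, and symbols are all present
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password

print(generate_password())
```

Using `secrets` rather than `random` matters here: `secrets` draws from the operating system’s cryptographically secure randomness source, which is what you want for passwords.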

A strong password is merely the first step in securing your WhatsApp account. Next, make sure to set up a PIN for your account using the app’s settings. 

Tap Account > Two-step verification > Turn on or Set up PIN > Enter a six-digit PIN of your choice > Confirm it

It may seem simple, but a lot of people fall into the same traps. Remember not to make your PIN obvious (e.g. a birthday or anniversary). You can generate a random combination of six digits yourself, or use a generator. Provide an email address you can access, or tap Skip if you don’t want to add one. We recommend adding an email address because it allows you to reset two-step verification and helps safeguard your account.
Confirm the email address and tap Save or Done. Enter the six-digit verification code sent to your email, tap Verify and you are ready to go. 
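If you’d rather not trust an online generator, this small Python sketch produces a random six-digit PIN and skips a handful of obvious patterns. The blocklist here is illustrative, not exhaustive:

```python
import secrets

# A few trivially guessable PINs to avoid (not an exhaustive list):
# repeated digits plus the most common sequences
OBVIOUS = {str(d) * 6 for d in range(10)} | {"123456", "654321", "123123"}

def generate_pin() -> str:
    """Return a random six-digit PIN, preserving leading zeros."""
    while True:
        pin = f"{secrets.randbelow(1_000_000):06d}"
        if pin not in OBVIOUS:
            return pin

print(generate_pin())
```

Note the `:06d` formatting: a PIN like 004217 is perfectly valid, so leading zeros must be kept by treating the PIN as a string rather than a number.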

Kick out unknown devices & apps:

  • WhatsApp allows linking up to four devices at a time. Regularly log out of infrequently used or unknown devices – so no one else has access to your chats. 
  • While there is a plethora of third-party WhatsApp-Web-style apps, make sure to only use the official WhatsApp app. 

Beat phishing & social engineering:

  • Learn to recognize common phishing tactics (e.g. urgent messages, requests for codes, “verification” DMs). Spam messages on WhatsApp frequently use these methods to generate a sense of fear and urgency, and to get users to reveal critical information. 
  • No company or entity asks for one-time passwords (OTPs) via WhatsApp, so that is a very obvious tell. 
  • When in doubt, report and block.

If hacked:

  • If you see the message “Your phone number was registered with WhatsApp on a new device,” tap Log back in > Continue. If prompted, enter your phone number and tap Next > Ok.
  • Back up chats, unlink all linked devices, enable 2FA, and check what devices are connected.

How to stay PRIVATE on WhatsApp

The first thing to do is to change your privacy settings. By default, most users can see your last seen and online status, profile photo, ‘about’ information, and status updates, and can add you to groups. To maximize your privacy, it is important to change who can do these things, thus avoiding getting added to pesky spam groups or, worse, having this information used for something malicious. You can change this by doing the following:

Go to WhatsApp Settings > Privacy > Tap the privacy setting you’d like to change > Select Everyone, My contacts, My contacts except…

In your privacy settings, you can also change the following settings:

  1. Change your read receipts – whether people can or cannot see if you’ve read a message – whether it’s one specific ex, or everyone at large. 
  2. Set your disappearing messages on or off by default – this helps in chats where sensitive information is shared. The messages automatically get deleted after a set period of time and cannot be accessed by other users, whether in a direct message or a group.
  3. Silence unknown callers – this helps prevent people you don’t know from trying to contact you via WhatsApp call. 
  4. Turn off live location sharing – this means that you control when and where you share your location with your loved ones, and it isn’t displayed by default. 
  5. Control who can see the links you share on your profile.
  6. Adjust advanced settings, such as protecting your IP address in calls, disabling link previews, and blocking high volumes of messages from unknown accounts.

When Advanced chat privacy is enabled for a chat, people in the chat:

  • Cannot save media to their device gallery automatically.
  • Cannot ask Meta AI to answer questions, or to create images or summaries, in that particular chat – minimizing the likelihood of your messages being used to train Meta AI. 
  • Cannot export the chat.

You must enable Advanced chat privacy for each individual or group chat you want to apply this setting to. Once it’s turned on, Advanced chat privacy applies to all messages in the chat.

Even if you implement all these measures, you may still receive messages from strangers. To keep strangers out of your DMs, you can block unknown account messages. This means that accounts not known to you, i.e. not in shared WhatsApp groups or in your contacts, cannot message you. 

Tap Settings > Privacy > Advanced > Turn Block unknown account messages on or off

How to keep strangers from adding you to unknown groups by changing group privacy settings

  1. Go to WhatsApp Settings.
  2. Tap Privacy > Groups.
  3. Select one of the following options:
  • Everyone: Everyone, including people outside of your phone’s address book contacts, can add you to groups without your approval.
  • My Contacts: Only contacts in your phone’s address book can add you to groups without your approval. If a group admin who’s not in your phone’s address book tries to add you to a group, they’ll get a pop-up that says they can’t add you and will be prompted to tap Invite to Group or press Continue, followed by the send button, to send a private group invite through an individual chat. You’ll have three days to accept the invite before it expires. This is a good way to stay safe but still access important WhatsApp groups (e.g. for a new class or workplace). 
  • My Contacts Except…: Only contacts in your phone’s address book, except those you exclude, can add you to groups without your approval. After selecting My Contacts Except… you can search for or select contacts to exclude. If a group admin you exclude tries to add you to a group, they’ll get a pop-up that says they can’t add you and will be prompted to tap Invite to Group followed by the send button to send a private group invite through an individual chat. You’ll have three days to accept the invite before it expires. This is the safest option.
  4. If prompted, tap Done.

Protecting yourself on WhatsApp doesn’t by default mean that Meta does too. It is well documented that the company has used data to promote ads and other interests. It is therefore important to limit Meta’s data collection, ads, and businesses. WhatsApp considers chats with businesses that use the WhatsApp Business app or manage and store customer messages themselves to be end-to-end encrypted by default. Once a message is received, however, it is subject to the business’s own privacy practices. The business may designate employees or other vendors to process and respond to messages, and it may also use the chats it receives for its own marketing purposes, including advertising on Meta’s other platforms (e.g. if you engage with a seller on WhatsApp, they may use this data to market their products to you from their Instagram storefront).

However, some businesses can choose optional services to interact with customers.

For example, some can choose WhatsApp’s parent company, Meta, to securely store messages and respond to customers. Meta will not automatically use the messages you send a business to inform the ads you see. However, businesses will be able to use chats they receive for their own marketing purposes, including advertising on Meta.

In addition, some can choose to have AI from Meta to assist them in responding to messages sent from customers. Meta will receive these chats to improve its AI quality and generate message responses. This is the data you should be cautious about sharing. 

When businesses use these optional services, WhatsApp displays this clearly in the chat and does not consider messages with these businesses to be end-to-end encrypted. The encryption status of an end-to-end encrypted chat can’t change without being visible to the user. Read WhatsApp’s Encryption Overview to find out what kinds of chats are/are not end-to-end encrypted. 

Note:

  • There are optional services that a business or you can choose to use where Meta receives limited information. For example, you can choose to start a chat with a business after interacting with their ad on Facebook or Instagram, or interact with offers and announcements a business may send you on WhatsApp. You’ll see a single chevron icon (a right chevron within a circle) in the chat or on the business profile for these services, which you can tap to learn more about how this works.
  • In addition, certain companies choose to use AI from Meta to assist them in responding to messages sent from customers. Meta receives these chats to improve its AI quality; when this happens, WhatsApp will let you know by highlighting “uses AI from Meta” under the business name.

And that’s it! Now you know how to use WhatsApp safely, how to keep snoopers out, how to keep your data safe from AI, while still being able to roll your eyes at all the jokes your dad sends on the family group chat – some problems, even technology cannot solve!

Instagram

While we may associate Instagram with photos of our friends’ vacations, our friend-of-a-friend’s dog, or lots of tiny pumpkins, Instagram is not just a photo sharing platform and a way to connect with our friends, distant acquaintances, and celebrities and influencers. It is also a valuable store of our personal data, containing everything from clues about our personal preferences to our location data and relationship networks. It is therefore a prime target for cyberattackers seeking to gain access to our personal data or sensitive content. Cyberattacks on social media have risen by about 50% in recent years, with identity impersonation of influencers and everyday users alike becoming a common entry point for fraud and extortion. Hacking groups routinely target Instagram accounts of all sizes, though larger accounts tend to be more attractive targets, and sometimes manage to permanently take them over.

It is therefore imperative that, no matter our influence, we take steps to protect ourselves on Instagram. We must focus both on securing our accounts from unauthorized intrusions, such as account takeovers or data theft, and protecting ourselves from harassment and spam as well as ensuring that our information remains private, both from advertisers and potentially malicious actors alike. 

The following guide will introduce you to basic steps that can be taken to protect your Instagram account both from hacking and harassment (“How to stay SAFE on Instagram”) and from privacy violations (“How to stay PRIVATE on Instagram”).

How to stay SAFE on Instagram

As with WhatsApp, you must make sure to lock the front door. If your password is “insta123” or “[yourname]instagram,” you may be at risk. Use a long, unique password that you don’t reuse anywhere else, ideally stored in a password manager. Instagram itself repeatedly recommends pairing that with two-factor authentication (2FA), either via SMS or an authenticator app, because most compromised accounts started with a stolen or reused password.

Tap Settings & privacy > Accounts Center > Password & security > Two-factor authentication > Turn it on > Opt for an authentication app over SMS for 2FA

Meta now also offers a Security Checkup that walks you through reviewing login locations, recovery details, and security settings; it’s designed to help you spot and fix issues you may otherwise miss. If anything ever feels “off” with your account, run this feature first. You can also run it as part of a routine security check on your account. 

Protecting yourself on Instagram also means protecting your account from other devices and apps. Therefore, make sure to kick out unknown devices and apps from your account. Think of Login activity as the guestbook to your account and check it periodically:

Tap Settings & privacy > Accounts Center > Security > Login activity

FYI: If you see a device or location you don’t recognize, log it out and take additional security measures such as changing your password.

Next, review which apps and websites you’ve connected to your Instagram under Settings & privacy > Website permissions > Apps and websites. Analytics tools, giveaway apps, or “who viewed my profile” services are both unnecessary and risky; third-party access can be a route into accounts and the personal data they contain. Remove anything you don’t actively use or that sounds too good to be true.

As cyberattacks have become a common practice, it is of great importance to beat phishing and social engineering before any attackers gain access to your liked posts and meme exchanges with your best friend. Most Instagram hacks begin outside the app, often in your email or DMs. Instagram explicitly warns about phishing emails that pretend to be copyright strikes, verification offers, or “you’ll lose your account in 24 hours” ultimatums.

Here are two simple rules to help you out:

  1. Verify emails in-app: Go to Settings & privacy > Security > Emails from Instagram, which shows official emails sent in the last 14 days. If the message in your inbox isn’t listed there, treat its links as a threat.
  2. Know the classic tells of scams and malicious activity: urgency, requests for codes, links to weird domains, or “verification” DMs from accounts that don’t have the verified badge and don’t match Instagram’s official handle.

When in doubt, don’t click. Go directly to the app or website instead and see if the same message is present.

The same tactics can be used to shut down harassment and spam. Safety is not just about hackers; it’s also about the people who show up in your comments and DMs and whose presence may compromise your safety or comfort.

Instagram’s Hidden Words feature lets you automatically filter offensive or spammy comments and message requests, and Meta has expanded it to cover story replies and more languages. Turn it on, then add your own custom bad-word list for things that are of concern to you.
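Conceptually, a filter like Hidden Words just checks each incoming comment against your blocklist. This toy Python sketch (not Instagram’s actual implementation) illustrates the idea:

```python
def hide_comment(comment: str, blocked_words: set[str]) -> bool:
    """Return True if the comment should be hidden.

    A comment is hidden when any of its words, lowercased and stripped
    of surrounding punctuation, appears in the blocklist.
    """
    words = comment.lower().split()
    return any(word.strip(".,!?") in blocked_words for word in words)

blocklist = {"giveaway", "followers", "crypto"}
print(hide_comment("Win FREE followers now!", blocklist))  # hidden
print(hide_comment("Lovely photo!", blocklist))            # shown
```

Real filters are more sophisticated (handling misspellings, emoji, and multiple languages), but the principle is the same: you define what you don’t want to see, and matching content never reaches your feed.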

If you’re facing a pile-on, use Limits to temporarily restrict comments and DMs from recent or non-followers. For chronic offenders, you can Restrict, Block, or Report them. Restrict is the “quiet” option: they can still comment, but only they see their comments unless you approve them. This can be a good choice if you don’t want to risk an escalation or other issues.

Sometimes, even if you take all the necessary measures, you may still end up being hacked. Don’t worry, we’ve got your back in any possible scenario. As with most things nowadays, speed matters. If you suspect your account has been compromised, take the following measures as quickly as possible:

1. Change your Instagram and email passwords.
2. Turn on or re-secure 2FA.
3. Check Login activity and log out unknown devices.
4. Review Apps and websites and revoke anything suspicious.
5. Use Meta’s Account Recovery / Help Center flow to regain control and report the incident.

Getting hacked is not fun, but immediate action is necessary. Acting in the first hour can make the difference between a hacking scare and losing your account, with all its memories, connections, and data, for good.

How to stay PRIVATE on Instagram

Sharing is caring. But sometimes sharing too much can prove harmful. Instagram is a fun place to share all your favorite everyday moments, but be sure to make your account private (or, at least, less exposed) to avoid any unwelcome guests. A private account is like pulling the curtains: you’re still online, but random passersby can’t peer into your living room.

Settings & privacy > Account privacy

FYI: Consider a private main account and a separate public one if you need visibility for work or projects.

Then trim what others can see or do:

  1. Turn Activity Status off so people can’t see when you were “last active”: Settings & privacy > Messages & story replies > Show activity status (toggle off).
  2. Under Privacy > How others can interact with you, control who can tag, mention, or comment on you, and combine that with Hidden Words.
  3. Tighten message-request filters so strangers can’t slide directly into your main inbox.

Oversharing might feel harmless, but cybercriminals routinely mine social media posts and comments for clues about identity, location, and security-question answers.

One of the most recent features that has sent Instagram users into a frenzy is location sharing. It is nice to keep an eye on your friends and exchange locations for future plans, but that doesn’t make you safer online. On the contrary, it might prove more dangerous than one expects. Location and routine information can be combined with other data to build disturbingly accurate profiles of where you live, when you’re out, and what you own. Real-time data can also be used by a malicious actor to track your location and cause potential harm. Therefore, make sure to tighten activity and location exposure.

There are three quick things you can do now to protect your privacy on Instagram:

  1. In your phone’s App permissions, disable Precise Location for Instagram unless you absolutely need it.
  2. Avoid posting real-time location data (e.g., tagging your home, your school, or a small local café while you’re still there).
  3. Ensure that your “Instagram Map” feature is disabled, or shared only with those you have selected and trust. This can be done through your phone’s settings (see point 1), or in-app: 
Direct Messages (DMs) > Settings (top right) > Under Who can see your location?, select No one or a select group of accounts > Tap Done or Update

In case you haven’t read the news, Instagram falls under the umbrella of Meta. This means that Instagram and its parent company build detailed profiles on users to target better ads. While that’s partly how the service stays free, and not an inherent cybersecurity risk, you have more control over how your data is collected and used than most people realize. To push back, you can limit data collection and ad profiling.

Under Accounts Center > Ad preferences, you can: 

  • Limit the info used for targeting (such as interests and off-Meta activity).
  • Turn off categories or topics you don’t want to be profiled on.
  • Hide individual ads that seem shady.

Also review Contacts syncing so your entire phonebook isn’t continuously uploaded, and periodically remove integrated apps and websites so old integrations aren’t siphoning extra data from your account. Remember, however, that depending on where you are located, your exact options may be different in accordance with local laws.

A quick checklist of what you should do today (and why!)

  • Turn on 2FA (ideally with an authenticator app).
  • Update your recovery email and phone number.
  • Make your account Private (or at least review privacy settings) and switch off Activity Status.
  • Turn off or personalize your location sharing preferences. Ensure that your Instagram Map is not publicly viewable. 
  • Audit Login activity and Apps & websites; remove unknown devices or log-ins.
  • Use Emails from Instagram before trusting any “urgent” email or DM.

ChatGPT

Generative AI models such as ChatGPT, Claude, Gemini, or Copilot are often framed as productivity tools that help us write, code, or summarize information, with most discussions focusing on how to get better results or avoid obvious factual mistakes. Yet every interaction with these systems is also an act of disclosure: whatever is typed, pasted, or uploaded becomes part of someone else’s infrastructure. High‑profile incidents, such as the 2023 ChatGPT bug that briefly exposed other users’ chat titles, personal data, and some billing information, or Samsung employees inadvertently leaking proprietary source code and meeting transcripts into ChatGPT, illustrate how quickly seemingly harmless prompts can turn into real data‑exposure events.

At the same time, most public AI services reserve the right to store user prompts and use them to improve or train their models by default, unless users actively opt out or switch to more privacy‑preserving modes. This raises particular concerns in academic, professional, and institutional contexts, where inputs may contain personal information, confidential documents, or sensitive research data. This chapter examines the main privacy pitfalls when using AI chatbots and sets out concrete steps you can take to reduce your risk, focusing on OpenAI’s ChatGPT as the leading example. While the configuration paths and labels differ slightly across tools, the underlying principles and most of the practical recommendations apply equally to other mainstream models, including Claude, Gemini, Perplexity, and similar systems.

What AI Tools Do With Your Data

Before going straight to changing settings or checkboxes, it helps to understand what AI tools actually collect. Public chatbots like ChatGPT store whatever you type or upload as “input data”, including personal data and attachments, and link it to basic account, device, and usage information by default. Most providers also log how you interact with the tool (for example, which features you use and when) because this telemetry is treated as part of the service rather than “extra” data collection.

Most mainstream tools now offer user controls, such as turning off chat history or Memory, using temporary or incognito chats, deleting past conversations, and opting out of training via settings like Improve the model for everyone. These switches meaningfully reduce how your prompts are stored and reused, but they do not create a zero‑logging mode: even with history disabled, providers often keep short‑term logs (for example, for several weeks) for abuse monitoring, fraud detection, and debugging. In practice, you can reduce how long and how widely your data is used, but you cannot fully prevent it from ever being stored or reviewed. In the case of ChatGPT, chats are by default stored and can be accessed through the History sidebar until manually removed. When conversations are deleted, they are permanently removed from OpenAI’s storage after a 30-day retention period, but they may be preserved for longer if they are covered by a legal order or regulatory requirement. 

Notably, there is a distinction between consumer accounts and institutional or enterprise offerings. Company or university deployments of tools like ChatGPT typically come with stricter contractual guarantees (such as no training on your data, shorter and configurable retention periods, dedicated storage, and formal data‑processing agreements) because they are designed to satisfy corporate and GDPR‑level compliance requirements. By contrast, personal or free accounts offer far fewer assurances and put more of the privacy risk back onto your individual choices and behaviour.

Concretely, this means that anything you paste into a public AI chatbot should be treated as potentially stored, inspected, and used to improve the service, even if you have flipped all available privacy switches. There are good ways to lower your exposure, but a residual risk remains. For sensitive material (e.g., health information, detailed CVs, client files, internal reports, or unpublished research data), the safest option is either to keep it out of public AI tools entirely, or to use a vetted institutional or private model where you have clear contractual guarantees about how your data is processed.

What EU Data‑Protection Principles Offer

Alongside the increased privacy risks of using AI, legislation has introduced meaningful safeguards against these pitfalls, especially within the framework of the EU. The following core GDPR principles directly challenge many public AI practices, such as scraping web data for training or retaining prompts indefinitely:

  • Purpose limitation: Data must be collected only for specified, legitimate purposes.
  • Data minimization: Only process what is strictly necessary.
  • Storage limitation: Delete data once the purpose is fulfilled.

These principles shift responsibility onto providers to justify processing personal data in prompts, and they empower users (or their institutions) to demand alternatives like enterprise deployments with data‑processing agreements (DPAs).

The EU AI Act, whose obligations are phasing in, adds transparency requirements for "limited‑risk" systems like chatbots: providers must disclose AI use, watermark synthetic outputs, and publish summaries of training data sources to prevent deception and ensure accountability. Regulators like the European Data Protection Board (EDPB) have repeatedly flagged ChatGPT for potential violations, such as lacking a legal basis for training on personal data. Consequently, the EDPB formed task forces to enforce compliance, often resulting in opt‑outs, suspensions, or fines for non‑compliant processing. For EU users, especially in academic or professional settings, this means you can reference these rules to justify stricter controls, request institutional tools, or even challenge providers directly for transparency about your data. However, continuing violations of these standards by AI models show that while privacy protection is steadily improving in the European Economic Area (EEA), there remain instances where legislation cannot fully protect our privacy.

What Steps You Can and Should Take

To put it in a nutshell, using AI not only risks spreading misleading information but also opens a potential channel for leaking private matters. Therefore, it is important to be both aware and conscientious when continuing to use generative models such as ChatGPT. We are not saying you should abandon AI entirely, but we do stress the need to treat these systems as disclosure channels rather than trusted confidants: understand their data practices, apply the available controls, and know your limits.

With the following checklist, we outline immediate, practical steps to secure your account and minimize data exposure. Each item includes the exact path in ChatGPT's interface (as of early 2026) and the rationale behind it. This is a really short list, no excuses!

Checklist

  • Secure your account: Use a strong, unique password (at least 16 characters, ideally generated by a password manager) plus two-factor authentication (2FA). Go to Profile > Settings > Security > Multi-factor verification and choose an authenticator app over SMS for stronger protection. Weak or reused passwords account for most compromised accounts.
  • Review and revoke: Check login activity and connected apps or devices regularly. Go to Settings > Security > Login activity to log out unfamiliar devices; then review Settings > Connected apps and revoke unused third-party access. Unknown logins are a common entry point for data theft and should thus be treated as liabilities.
  • Disable Memory and history: Turn off persistent context storage. Go to Profile > Settings > Personalization > Memory > Off, and use Temporary Chats (toggle in the chat composer) for anything remotely sensitive; these bypass history and reduce retention.
  • Opt out of training: Prevent your prompts from improving future models. Go to Settings > Data controls > Turn off Improve the model for everyone. Temporary Chats skip training by default!
  • Never paste sensitive data: Avoid real names, addresses, work documents, research notes, personal identifiers, or confidential info entirely. Assume all inputs could be logged, moderated, or retained short-term even with opt-outs.
  • Prefer institutional tools: For academic or professional work, use your university/employer’s enterprise ChatGPT deployment instead of free consumer versions. These include no-training guarantees, configurable retention, and GDPR-compliant data processing agreements.
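The password advice in the checklist above can even be automated. As a minimal sketch (the 20-character default is an illustrative choice, not an official recommendation), Python's standard-library `secrets` module can generate a password you would never reuse elsewhere:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A password manager does the same job with storage included; the point is that random generation, unlike human-invented passwords, leaves no personal habits for attackers to guess.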
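"Never paste sensitive data" can be partially enforced before the paste ever happens. The sketch below is purely illustrative: its two regexes catch common email and phone formats and nothing else, so a real redaction pass would need far broader patterns (names, addresses, identifiers) or a manual review.

```python
import re

# Illustrative patterns only; real-world redaction needs much more coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +33 6 12 34 56 78."))
```

Running such a filter over a prompt before submitting it costs seconds and removes the most obviously identifying details, though it is no substitute for simply leaving sensitive material out.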

As generative AI tools become increasingly integrated into our everyday lives as well as our academic and professional environments, there is a certain temptation to treat digital companions like ChatGPT as omniscient collaborators rather than engineered systems with commercial incentives. Nonetheless, these platforms are fundamentally extractive: they thrive on user inputs to refine algorithms, detect abuse, and fuel growth, often at the expense of transparency about retention or downstream use. In short, the tools we are coming to rely on unavoidably use our inputs for their own benefit. Past breaches, ranging from ChatGPT's own title‑leak bug to careless corporate disclosures, serve as stark reminders that no toggle fully protects against infrastructure‑level vulnerabilities or human error in prompt design.

European frameworks like GDPR and the AI Act mark meaningful progress, codifying user rights to challenge opaque processing and demand institutional alternatives, yet enforcement lags behind innovation pace, leaving individuals to bridge the gap through vigilance. Ultimately, responsible AI use demands a mindset shift: curate inputs ruthlessly, prioritise enterprise safeguards for sensitive work, and view every query as a calculated disclosure rather than casual conversation. By applying the controls and principles outlined here, users can harness these tools’ power while preserving the autonomy and confidentiality that digital life increasingly demands.

LinkedIn

As the epicenter of online networking for professional careers, LinkedIn boasts 1 billion members worldwide. One aspect of using the platform is building your profile, which requires sharing some information about yourself. Whether it is your email address, the city you live in, or previous job experience, these pieces of information are necessary both to maximize the benefits the platform provides for jobseekers and simply to share your professional background online. However, cybersecurity experts have noted that because these pertinent yet personal pieces of information are made available online, cybercriminals have recently focused their efforts on the once-secure platform. Users are now at high risk of being exploited by the same tactics criminals use on other social media networks. Phishing attacks have become more common, and now more than ever, it is important to protect your LinkedIn account from fraudsters.

Before we dive in on how to protect yourself, it is crucial to understand what is happening on the platform:

Are you vulnerable to fraudsters on LinkedIn?

  • Public profiles mean that your information is made available to all users, including those with ill intentions.
  • Your data, such as your previous job experiences and current geographical location, makes it easier for fraudsters to tailor messages to each individual user, making them more convincing and credible. This makes attacks on LinkedIn more effective than on other platforms.
  • Fake job offers and/or fake recruiters can take advantage of your profile, for example if you indicate that you are #opentowork, a status you can select on LinkedIn.
  • The feature of private messages between users makes it easy for malicious links to be sent to each user personally.

In what ways do phishing attacks work? 

As mentioned above, one way fraudsters take advantage of the platform is by sending highly personalized messages to users based on their background. This is incredibly effective, likely because of its psychological dimension: when you think you have finally received a job offer out of the blue, you won't want to spoil it by suspecting a scam. Because LinkedIn is a professional networking platform, attacks tend to unfold gradually rather than through a direct link on first contact, which would typically alert users that a message is dangerous. Since the messages are personal and mimic professional career services, formats such as video conferencing, internal company contact forms, or recruitment platforms for uploading CVs make it easier to fall victim to the scams: these formats, rarely seen on more socially oriented platforms, are common in a genuine recruiting process. Moreover, users tend to feel secure on LinkedIn, especially in direct messages, which reinforces a feeling of safety that is, in fact, not there.

These attacks can have serious consequences: identity theft, compromised company accounts, malware installed on user devices, information from compromised accounts shared with other users, financial loss, and reputational damage. A recent wave of phishing scams reportedly tricks users into downloading fake PDF readers through LinkedIn DMs once links are eventually sent. Because of the platform's design, it is difficult for LinkedIn to filter or automatically detect attacks that do not include a link in the first message of contact: subtle, progressive conversations pass easily through the filters, and once users respond and the sender seems trustworthy, a malicious link is sent. LinkedIn's moderation system can thus let cybercriminals slip under the radar, leaving individual users to monitor and weigh the safety of direct messages for themselves.

Now that we have established the risks, what can you do to protect yourself on LinkedIn?

  • Don't prioritize your DMs. If a company recruiter wishes to contact you, more often than not they will reach you via email, where there are more filters. The current wave of phishing via DMs may eventually subside, but either way, stay cautious!
  • Be hyper vigilant. Echoing the aforementioned sentiment, overall be wary of any links, messages, and/or other forms of outreach contact on the platform. 
  • Look into the sender's profile, company, and background. If you feel compelled to reply to a message, do your due diligence on the sender. Check whether they have a verified badge on their profile, and search to confirm that the company or recruiting agency they claim to represent is real.
  • Update your profile privacy settings. Click the Me icon at the top of your LinkedIn homepage, then Settings & Privacy in the drop-down menu. From there you can customize your profile's visibility settings and account preferences, as well as your privacy settings, such as who can send you messages and which types of personal information are visible to users with or without accounts.
  • Enable multi-factor authentication. Also found under your settings, this ensures that devices attempting to access your account must first be verified by other means, such as a code sent to your email or an approval from another device already logged into your account.
  • Don’t click on links in DMs. As mentioned above, there is a new wave of phishing scams in which malicious links are sent via DM. The golden rule: if you cannot verify a link's source, don't click it.
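The "don't click" rule can be made slightly more concrete. Purely as an illustration (these heuristics are our assumptions about common red flags, not a reliable phishing detector), a few quick structural checks on a link can be scripted with Python's standard library:

```python
import re
from urllib.parse import urlparse

def suspicious_url(url: str) -> list[str]:
    """Return a list of red flags found in the URL (empty list = no flags).

    Heuristics are illustrative only: raw IP hosts, punycode domains,
    deep subdomain chains, and missing HTTPS."""
    flags = []
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"[\d.]+", host):
        flags.append("raw IP address instead of a domain name")
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode domain (possible lookalike characters)")
    if host.count(".") >= 4:
        flags.append("unusually deep subdomain chain")
    if not url.lower().startswith("https://"):
        flags.append("not served over HTTPS")
    return flags

print(suspicious_url("http://192.168.1.10/login"))
```

None of these flags proves a link is malicious, and their absence proves nothing either; treat the output as a prompt for further due diligence, not a verdict.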

One final note

While this article mainly focuses on phishing attacks, it is also increasingly relevant to understand how AI is being integrated into the platform. LinkedIn has drawn criticism online for announcing that it will automatically use member data to train its own AI models. Don't worry! Just remember that you can opt out under Settings & Privacy > Data privacy, so that your information is not used to train AI.

Conclusion

Long gone might be the days when, before we walked out the door, our parents would yell across the hall: remember, stranger danger! And yet, adults as we might be, this little piece of mistrusting advice is as relevant as ever in the digital world. Because, truth is, there are no longer clear boundaries between our physical selves and our digital presence. Whatever we say, search, engage with, post, and type privately amounts to a continuous stream of data that systems and companies use to get to know us. Best case scenario, the information turns into an oddly accurate ad. Worst case scenario, finances, reputations, relationships, and even personal security can be affected.

As digital infrastructure becomes the infrastructure of our daily lives, cybersecurity is no longer just an IT problem; it is an everyday one. Social media platforms like Instagram are among the most personal public representations of who we are, and they double as rich stores of personal data for anyone, from advertisers to criminals, who can gain access. Cybercrime is growing in scale and sophistication, and as a result, social media security is becoming a core element of modern cybersecurity, one that we as social media users must take seriously at an individual level.

As shown in this neat, little Handbook, cybersecurity threats do not necessarily look like hackers in dark rooms. They can also arise from ordinary interactions such as accepting cookies, joining a group chat, replying to a recruiter, or asking ChatGPT to summarise a document for you. It is through these actions that it is possible for data brokers to profile us, for advertisers to target us, and for malicious actors to dupe us. 

Social media platforms are a perfect example of how online safety and privacy are intertwined: on WhatsApp, carelessly chosen privacy settings determine not only who can contact us but also who can locate us. On Instagram, frequent sharing on a public account can reveal our routines and whereabouts. On LinkedIn, professional visibility can turn us into targets of tailored fraud. When using ChatGPT, an incautious prompt can easily lead to sensitive or even confidential data being shared and stored. While social media and AI tools do not pose a threat in themselves, their main goal is to grow by increasing engagement. And while the GDPR and the EU AI Act do protect our rights to transparency and accountability, it is important not to rest on those laurels.

If there is one message we would like you to take from our work today, it is that the responsibility for online safety rests on our shoulders as users. Online platforms are not necessarily designed to protect us, and human carelessness emerges as one of the main liabilities. Just because you scroll through social media on your phone in your room does not mean that, digitally, you are not entering (and perhaps also displaying yourself in) a public space.

The good news, though, is that being safe online does not mean you have to learn to code, fight every cookie banner, or stay offline altogether. We are not saying you should be paranoid, but we do strongly advise you to be mindful and aware of your digital presence. Use strong authentication, verify messages, adjust privacy settings, and ask yourself what information you are about to disclose before you post that Instagram story or send ChatGPT that prompt. If you don't already, implementing these simple steps turns you from a goldmine of information into an aware participant in digital life.

Well, it’s official, we’ve reached the end! We sincerely thank you for reading this Handbook, and we hope you learned a thing or two on how to keep yourself safe on the internet. If you haven’t already, we highly recommend you read our other articles written by our wonderful Publications Team and our friends here at the SciencesPo Cybersecurity Association.

And before you go, remember, stranger danger!