Is AI safe? Security, privacy and ethics with Artificial Intelligence

Once a niche academic field, AI has quickly evolved into a pervasive technology, transforming industries, reshaping daily life, and affecting people and the planet. And while it offers significant advantages, it also carries risks with potentially serious consequences. It’s crucial to understand these risks and find solutions that ensure AI develops safely and ethically.
As with any new and developing technology, people tend to focus on worst-case and extreme scenarios. While some of these concerns are legitimate and need to be addressed, there are less sensational but very real issues that affect the here and now. Because of this, we propose taking a step back to focus on AI safety and security today.
Safety tips for daily AI users
Artificial Intelligence is an extremely helpful tool for productivity and writing, but remember that AI can have biases, so don’t neglect fact-checking.
A common misconception is that since AI is a computer system, it is unbiased. The truth is, however, that AI is not born in a vacuum; AI is trained on large amounts of data, created by many people. While AI tools don’t have their own opinions, they are only as unbiased as the data and individuals behind them.
Bias in AI occurs when an AI’s output reflects the biases, beliefs, and prejudices present in the data it is trained on. The two main types of bias in AI are social bias and statistical bias. Social bias is when human-created biases and assumptions make their way into AI systems, which then reflect them. Statistical bias refers to systematic errors in the data used to develop and train AI, such as data that is incomplete, inaccurate, or unrepresentative.
AI tools trained on biased data can become dangerous when deployed in society, especially if we choose to rely on them fully. As these systems become more integrated into our lives, it’s crucial to think twice about whether the information they provide is accurate or helpful.
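The effect of statistical bias can be shown with a toy calculation. In the sketch below, the scenario and all numbers are hypothetical: a model trained on an incomplete sample systematically overestimates the true average.

```python
# Toy illustration of statistical bias: estimating an average from an
# incomplete sample. All numbers here are hypothetical.

# The real population: two equally sized groups with different values.
population = [200] * 50 + [120] * 50

# A "training set" that accidentally oversamples the first group.
biased_sample = [200] * 45 + [120] * 5

true_mean = sum(population) / len(population)              # average over everyone
biased_estimate = sum(biased_sample) / len(biased_sample)  # average over the skewed sample

print(f"True average: {true_mean}, biased estimate: {biased_estimate}")
```

No amount of extra computation fixes this after the fact: the error comes from what the data leaves out, which is why incomplete training data leads to systematically skewed AI outputs.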
A little bit of skepticism and an independent check usually does the trick.
AI chatbots: privacy and security concerns
Once an AI system is up and running, it can become a target for hackers looking to exploit vulnerabilities or launch cyberattacks. And if the model is poorly secured, it may be vulnerable to adversarial machine learning attacks, where AI models are intentionally tricked into making incorrect or misleading predictions. All in all, it’s not very different from the security concerns we already face with any software or platform.
Proper cybersecurity measures need to be followed, not only by the companies developing AI, but also the public at large, in order to ensure safety. And the same applies to privacy concerns with AI.
In many cases, the information you provide to an AI, whether through a chatbot or another interface, is used to further train and improve the models over time. Unfortunately, some AI companies’ privacy policies are vague about who can access this collected information. At the end of the day, when you share data with AI, be as cautious as you would be with any third-party system.
With that in mind, it’s worth paying attention to any tools that claim to be powered by AI. While they offer advantages in the workplace and make life easier to some extent, it’s important to remember the risk of losing intellectual property and trade secrets.
AI privacy and safety tips
1. Look for an incognito mode option. Not all AI tools may offer this feature, but it’s worth checking the settings to see if it’s available.
2. Read reviews and look for trusted, well-known platforms that prioritize security and privacy.
3. When using AI, never put anything in writing that would be embarrassing if made public.
4. Research the AI company’s data-handling practices and be cautious until you’re certain how it protects its users’ privacy.
5. AI doesn’t always consider every outcome when giving recommendations. Treat AI answers as suggestions, rather than definitive solutions.


AI environmental issues
All computer technology consumes resources, but AI systems, particularly those using deep learning, can consume almost 10 times more electricity than a standard non-AI operation, like a Google search. Oftentimes just training an AI model can use as much energy as several dozen homes in a full year, and emits a similar amount of carbon dioxide as driving a car around the planet multiple times. Now, with major tech companies investing tens of billions quarterly in AI, the increase in power consumption is exponential.
Research from The Washington Post and the University of California shows that ChatGPT with GPT-4 uses around 519 milliliters of water (about the amount in a 16.9-ounce bottle) to write a 100-word email. This happens because water is needed to cool AI servers. It is estimated that by 2027, global demand for artificial intelligence will account for up to 6.6 billion cubic meters of water consumption, which is more than 6 times the annual water usage in Denmark. This high water usage can make drought conditions worse, especially in already dry areas.
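The scale of the figures above can be hard to grasp, so here is a quick back-of-the-envelope conversion. The sketch only restates the numbers already cited in the text; it is not a new data source.

```python
# Back-of-the-envelope conversion of the water figures cited above.
ML_PER_EMAIL = 519           # ml of cooling water per 100-word GPT-4 email (cited estimate)
PROJECTED_M3_2027 = 6.6e9    # projected global AI water demand by 2027, in cubic meters

# 1 cubic meter = 1,000,000 milliliters
projected_ml = PROJECTED_M3_2027 * 1_000_000
emails_equivalent = projected_ml / ML_PER_EMAIL

# "More than 6 times Denmark's annual water usage" implies roughly:
denmark_annual_m3 = PROJECTED_M3_2027 / 6

print(f"Equivalent 100-word emails: {emails_equivalent:.2e}")
print(f"Implied Danish annual usage: {denmark_annual_m3:.1e} cubic meters")
```

The projected 2027 demand works out to roughly ten trillion 100-word emails’ worth of cooling water, which gives a sense of how quickly small per-query costs add up at global scale.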
Still, the downsides of AI may be mitigated, or even outweighed, by its benefits. AI is showing great potential in combating the climate crisis by improving weather forecasting and predicting natural disasters. It’s being used to design energy-efficient buildings, plan low-emission transport routes, and improve carbon capture methods, collectively making a significant contribution to reducing carbon footprints across industries. AI is also well suited to optimizing renewable energy systems: by analyzing weather data and energy demand patterns, it enables more efficient energy production and a more stable, reliable supply of clean energy.
AI’s powerful effect on social media
An echo chamber is an environment where a person only encounters beliefs or opinions that match their own, reinforcing their current views and crowding out other ideas. AI used in social media algorithms is known to create echo chambers by showing users only what aligns with their views, trapping them in a cycle of one-sided information. Many TikTok users have mentioned that they appreciate how the platform’s For You page insulates them from content they can’t relate to.
In the long run this results in polarization, making it harder for people to consider different viewpoints and leading to more extreme opinions.
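The feedback loop behind an echo chamber can be sketched as a minimal simulation. Everything here is hypothetical: two topics, a user who engages with only one of them, and a feed that slightly boosts whatever gets engagement.

```python
import random

# Minimal sketch of an engagement-driven feed. All numbers are hypothetical.
random.seed(0)

topics = {"A": 0.5, "B": 0.5}   # relative weight of each topic in the feed
user_likes = "A"                # the user engages only with topic A

for _ in range(1000):
    # Show a topic with probability proportional to its current weight.
    shown = random.choices(list(topics), weights=list(topics.values()))[0]
    if shown == user_likes:
        topics[shown] += 0.01   # engagement nudges the feed toward that topic

share_a = topics["A"] / sum(topics.values())
print(f"Share of feed devoted to topic A: {share_a:.0%}")
```

Even with a tiny per-interaction nudge, the feed ends up dominated by the topic the user already likes, because every interaction makes the next recommendation of that topic more probable.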
AI also contributes to the spread of misinformation. It’s no surprise that content is usually curated to keep the audience engaged rather than to ensure accuracy. AI is also a powerful imaging tool, frequently used to alter pictures, making people appear however the creator wishes, or even to fabricate scandalous and misleading visuals. These tools are within arm’s reach of anyone with internet access. Spreading misinformation and manipulating public opinion has never been so simple, all with little accountability.
With that in mind, here are some tips:
1. Keep an open mind and make an effort to diversify the content consumed.
2. Retain some healthy skepticism and never trust anything without a thorough fact-check. This applies doubly to information that reinforces your biases.
3. If you catch yourself falling into a social media echo chamber, set time limits on the apps you use. You can usually do this in your device’s settings, and free apps for this are also available.
What now?
AI holds significant potential, both good and bad. Whether the benefits will outweigh the problems mostly comes down to the humans using and controlling AI.
As research continues, the possibility of AI helping to solve major global issues is real, but so are the risks. These dangers must be addressed through responsible development, governance, and regulation. This includes tech companies tracking and disclosing the impact of AI to ensure that future models are safe and trustworthy.
And as individuals using AI, the most important things we can do are stay informed, remain vigilant about bias, and use AI responsibly.