The line between what’s right and what’s wrong has forever been the cause of misunderstandings, debates, and even full-fledged wars. Our understanding of science is just as dynamic – it shifts with everything new we learn. Three of the most talked-about innovations in science and technology today are artificial intelligence, machine learning, and deep learning. What’s the difference between the three?
Let’s break it down in layman’s terms: if the three combined are a high-end car, then
- Machine learning (ML) is the support structure or body shell – the unseen part of the car.
- Deep learning is the furnishings – the decor, the leather seats, and so on.
- And artificial intelligence (AI) is the roof that covers the other two.
Imagine how the human brain works – a web of neurons sending signals among themselves to keep the body functioning and alert. Similarly, deep learning is a branch of AI and ML that uses large datasets to train artificial neural networks to learn and make decisions on their own. This capability is what makes possible applications in various fields, like
- Self-driving cars
- Facial recognition
- Gaming
- Language translation
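To make the idea concrete, here’s a minimal, illustrative sketch (plain Python, hypothetical toy data) of the core mechanism: a single artificial neuron adjusting its weights from examples instead of being explicitly programmed. Real deep learning stacks millions of such units into layers, but the learn-from-data loop is the same.

```python
# A single artificial neuron learning the logical AND function from
# labelled examples (perceptron learning rule). Toy illustration only.

def train_neuron(samples, epochs=20, lr=0.1):
    """Fit weights and bias from (inputs, label) pairs."""
    w = [0.0, 0.0]   # weights, adjusted on every mistake
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train_neuron(data)
for (x1, x2), y in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred)   # matches the AND labels after training
```

The neuron converges to AND after a handful of passes over the data – nobody wrote an AND rule; it was learned from examples, which is exactly what makes the applications above possible at scale.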
But just like any other technological marvel, deep learning has its limitations. The sheer volume of data it processes, and the complexity of the models it builds, make its inner workings nearly impossible for humans to decipher and monitor. So even the smallest bug in the system can have disastrous results.
Coming back to my first sentence – the discussion of right and wrong. Data input is extremely crucial for artificial intelligence. Why? Wrong data in, wrong output out; right data in, right output out. So anyone working with AI needs to understand the importance of transparency and regulation.
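Here’s a tiny, hypothetical illustration of “wrong data, wrong output”: the same simple classifier (a nearest-centroid rule), fit on the same points, gives a different answer when a couple of labels were recorded incorrectly at data-collection time.

```python
# Garbage in, garbage out: identical model, identical points,
# but two corrupted labels flip the prediction. Toy data.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def fit(samples):
    """Build a nearest-centroid predictor from labelled (point, label) pairs."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    centroids = {label: centroid(pts) for label, pts in by_label.items()}
    def predict(p):
        return min(centroids, key=lambda lab: (p[0] - centroids[lab][0]) ** 2
                                            + (p[1] - centroids[lab][1]) ** 2)
    return predict

clean = [((0, 0), "A"), ((1, 0), "A"), ((5, 5), "B"), ((6, 5), "B")]
# Same points, but two labels were entered wrongly.
dirty = [((0, 0), "B"), ((1, 0), "A"), ((5, 5), "B"), ((6, 5), "A")]

query = (0, 1)  # clearly sits in the "A" cluster
print("trained on clean data:", fit(clean)(query))  # -> A
print("trained on dirty data:", fit(dirty)(query))  # -> B
```

The model itself is unchanged; only the data went wrong – and so did the answer.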
7 Dark Truths About AI and Machine Learning
In the next part of this article, we will discuss some of the darker aspects of AI and ML, including their ethical, social, and other potential risks.
1. Job Replacement & Economic Imbalance
By now, we all know that adopting AI and ML leads to task automation that has the capacity to disrupt global labour markets and replace human jobs. A report by Goldman Sachs states that AI could replace the equivalent of 300 million full-time jobs, and that around two-thirds of jobs in Europe and the US are exposed to some form of AI automation.
In 2025, industries like customer support, transportation, and manufacturing are already witnessing the downside of AI automation, resulting in job displacement and financial instability for workers. As per a report by MIT and Boston University, automation could displace approximately 2 million manufacturing workers by 2025. The jobs of receptionists, customer service representatives, bookkeepers, accountants, salespeople, and warehouse workers could all be in grave danger.
Also, the benefits of AI and ML are concentrated in wealthy organizations, adding to economic instability and further widening the gap between the rich and the poor.
2. Surveillance & Privacy
Today, AI-driven systems and data analytics are immensely popular – but they can threaten privacy and personal data. Newer technologies like facial recognition, predictive analytics, and biometrics keep people under constant monitoring. This encroachment on our daily lives raises concerns about mass surveillance, government overreach, and misuse of personal data.
Large Language Models (LLMs), like those behind virtual AI assistants and chatbots, require huge amounts of data for training. Web crawlers source this data from websites across the internet, mostly without users’ consent. The collected text can contain personally identifiable information (PII), such as phone numbers, social security numbers, email addresses, and more. Some AI systems deliberately collect this information to offer a personalized user experience.
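As a sketch of the kind of safeguard this problem calls for, here’s a minimal (decidedly non-exhaustive) PII scrubber built on regular expressions. The patterns are illustrative assumptions tuned to US-style formats – a real training pipeline would need far more robust detection – but it shows the shape of the idea.

```python
# Minimal illustrative PII redaction before text enters a training corpus.
# Patterns are simplified, US-centric examples, not production-grade.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # 123-45-6789
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),  # 555-867-5309
}

def redact(text):
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(redact(sample))
# -> Reach Jane at [EMAIL] or [PHONE]; SSN [SSN].
```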
This private information collected by AI can be dangerous, as hackers can get at it using increasingly sophisticated techniques. As per IBM’s Cost of a Data Breach report, a ransomware-related breach cost an average of $5.68 million in 2024!
3. Bias & Discrimination
Bias and discrimination are rapidly becoming major concerns with AI-powered systems. Just a few sentences ago, we discussed how AI is trained on large datasets. These datasets can contain built-in biases that lead to discriminatory results and biased decisions.
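One common first check for this is an audit of outcome rates across groups – the “demographic parity” test. Here’s a minimal sketch on hypothetical loan decisions (the groups, numbers, and threshold are all made up for illustration):

```python
# Compare a model's positive-outcome rate across groups.
# A large gap is a red flag that the training data or model is biased.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decisions produced by some trained model.
decisions = ([("group_x", 1)] * 8 + [("group_x", 0)] * 2
           + [("group_y", 1)] * 4 + [("group_y", 0)] * 6)

rates = selection_rates(decisions)
print(rates)  # group_x approved 80% of the time, group_y only 40%
disparity = max(rates.values()) - min(rates.values())
print("disparity:", round(disparity, 2))  # 0.4 -> worth investigating
```

A check like this doesn’t prove discrimination on its own, but it surfaces exactly the kind of skew that, left unexamined, ships to production.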
Back in 2016, Microsoft had to shut down its AI-run Twitter (now X) account, Tay, after it started posting racist comments. The bot was designed to learn from user interactions and develop responses based on them – and we all know how the comments section of Twitter, or any social media platform for that matter, can get.
Many facial recognition systems have exhibited gender or racial discrimination, which resulted in wrongful and unfair identification of people from certain communities.
4. Using Data Ethically
The large-scale collection, storage, and use of data by AI and ML systems can be ethically concerning. How? It raises questions about privacy, ownership, and consent. Many organizations across industries collect vast amounts of data without proper consent, which can lead to privacy violations and data misuse.
As per UNESCO’s Gabriela Ramos, Assistant Director-General for Social and Human Sciences, “In no other field is the ethical compass more relevant than in artificial intelligence.” UNESCO introduced the Global AI Ethics and Governance Observatory, which provides governments, regulators, academia, and the private sector with resources for addressing AI-related challenges.
Additionally, in 2021, UNESCO developed the first-ever global standard on AI ethics, the Recommendation on the Ethics of Artificial Intelligence, applicable to all 194 of its member states. Its foundation is the protection of human rights and dignity in the age of AI.
5. Social Exploitation & Information Warfare
Social media uses AI algorithms and recommendation systems to make the user experience feel curated. But there’s a downside: platforms and content providers can manipulate information – including public opinion and urgent political news – to spread misinformation that aggravates societal divisions. To reach the maximum number of people and boost click-through rates and engagement, these algorithms can favour sensational content over factual content. This leads to filter bubbles, echo chambers, and the propagation of fake news – with massive implications for individuals and societies alike.
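The mechanism is easy to see in miniature. In this hypothetical sketch (all posts and scores invented), a ranker that optimizes only for predicted engagement will always float the sensational, unverified post to the top of the feed:

```python
# Why a pure engagement objective rewards sensationalism: the feed is
# sorted by predicted clicks, and nothing else. Hypothetical posts.

posts = [
    {"title": "Measured analysis of the new policy", "predicted_clicks": 120, "factual": True},
    {"title": "You won't BELIEVE what they did!!",    "predicted_clicks": 900, "factual": False},
    {"title": "Local council meeting summary",        "predicted_clicks": 45,  "factual": True},
]

def rank_by_engagement(feed):
    """What a pure engagement objective does: sort by predicted clicks only."""
    return sorted(feed, key=lambda p: p["predicted_clicks"], reverse=True)

for post in rank_by_engagement(posts):
    print(post["predicted_clicks"], post["title"])
# The unverified, sensational post lands on top of the feed.
```

Real recommendation systems are vastly more complex, but whenever accuracy or verification isn’t part of the objective, the same pressure applies.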
AI in information warfare can target global democracies and their core principles by spreading hostile messaging through social media channels and swaying public opinion. AI-powered tools make this easy by automating strategies such as disinformation campaigns, targeted propaganda, and news manipulation.
With the advent of generative AI, chatbots and AI assistants can even be purpose-built to industrialize disinformation. You can unknowingly hold what feels like an authentic conversation with an AI assistant while being subtly fed manipulative content tailored to your preferences and psychology.
Let’s take a short example: when the COVID-19 pandemic struck, Chinese media used information warfare to showcase the Chinese government’s efficiency in handling the crisis and Western governments’ failures at doing the same. This was part of a much bigger strategy to deflect blame from China by positioning it as a leader in pandemic management and in the race for vaccine supremacy. Chinese media went on to spread doubt about Western-made vaccines while promoting the home-grown Sinovac and Sinopharm vaccines.
6. AI-Powered Weapons & Autonomous Systems
AI-powered weapons and autonomous systems raise serious moral questions. Global military powers like the USA, Russia, China, and India are competing to dominate this technology, which makes understanding the ethical, societal, and strategic risks of AI-driven warfare increasingly important.
AI in warfare is used extensively in logistics and targeting.
One of the best-known examples is the Pentagon’s Project Maven, and what it does is striking: computer vision models scan drone footage – radar signals, video, and satellite imagery – to monitor battlefields, flag anomalous movement, and identify targets. The software, though useful in many ways, has its limits. Its results in the Russia-Ukraine war were reportedly underwhelming, especially when targets were camouflaged by foliage or snow.
What’s most concerning is that with technology as powerful as AI, even the smallest mistake can cause tragic fatalities. In 2021, a UN Panel of Experts on Libya reported that a Turkish-made Kargu-2 drone had allegedly carried out autonomous attacks on retreating forces during Libya’s civil war without human authorization.
And these are just small fragments of the bigger picture of what AI applied to warfare can do, and to what immense extent.
7. The Rise of the Superintelligence
In his book ‘Superintelligence’, Nick Bostrom argues that AI will one day pose a threat to humans. He adds that a highly intelligent AI could display convergent instrumental behaviours – such as gathering resources and protecting itself from being switched off – all of which could be dangerous for humankind.
On one hand, AI-driven advancements can provide solutions for some of the most vital problems, like diseases, poverty, climate change, ecological disasters, etc. On the other hand, it can also be existentially risky if not controlled well or aligned with human ethics and values.
The rise of superintelligence could have disastrous consequences, including the loss of human control, human subjugation, and even the extinction of the human race.
How Dark Can AI Truly Get & How Can We Prevent It?
Though we could only discuss some of the potential risks of deep learning and AI here, with their widespread usage, the risk factors grow every day. At the same time, by leveraging deep learning, AI, and ML, our generation is witnessing technological marvels that wouldn’t have crossed our grandparents’ minds.
AI simplifies technology for everyday users, offering advantages like personalized recommendations, time savings, accelerated industrial growth, health-related innovations, and much more. But the same algorithms can magnify errors, embed hidden biases, and arm adversaries with powerful tools.
AI is here to stay, and there’s no denying or resisting it. But the dark side of AI and ML needs to be carefully considered, and proactive measures must be taken to monitor the associated risks and prevent harm. This can be achieved by establishing regulations, promoting ethical use, and emphasizing transparency. If ethical and humane principles drive the future of AI, then this innovation will only benefit individuals, global communities – and might even save the planet!