Ethical AI: A Future We Trust

Yousef Idress   ☁️   December 18, 2024   ☁️  


During the last decade, countless advancements have reshaped the tech world, from faster connectivity to precise and advanced AI algorithms. Among these innovations, Artificial Intelligence has stood out as one of the most transformative sectors. Technologies like self-driving cars, conversational AI, and even Augmented Reality – once confined to the pages of sci-fi novels like Fahrenheit 451, or blockbuster films like The Matrix – are no longer figments of imagination. They are part of our reality.

Who would have thought our lives would become so dependent on and intertwined with AI that not a day goes by without us using at least one or two of these tools? In 2024, AI tools are heavily integrated into our workspaces and projects, enhancing productivity by generating stunning visuals on demand and condensing complex information, helping us achieve more than ever before. This all sounds surreal, right? Talking up such innovations makes AI sound flawless – almost too good to be true.

Yet, as is often the case, the catch lies in the fine print. AI is not without flaws. Like everything we create, AI is bound by the limits of our ingenuity, our intelligence, and, most importantly, our values and morals. These systems are trained to act like us, talk like us, think like us, and even make decisions to solve problems like us. But what happens when biases, oversights, or ethical lapses creep into their design?

As Kate Crawford wisely put it: “Like all technologies before it, artificial intelligence will reflect the values of its creators. So, inclusivity matters – from who designs it to who sits on the company boards and which ethical perspectives are included.”

Risks of Unethical AI

Recent discussions around AI often emphasize its benefits – enhanced productivity, efficiency, and innovation across myriad sectors. While these advancements are transformative, and even revolutionary in some cases, it is equally important to break the cycle and expose the darker side of things – the ethical risks of AI’s misuse, biased training, and unintended consequences driven by personal gain. Below, we explore some of these unethical practices.

Bias and Discrimination:

AI models are increasingly used in the workplace – facial recognition security mechanisms, resume checkers for HR filtering and recruitment, smart assistants, performance analysis, and the list goes on. Unfortunately, unchecked biases in the datasets used to train these models can lead to discriminatory outcomes, reinforcing societal inequalities and causing real racial harm.

One case of unethically trained AI models involves facial recognition technologies. A study by the National Institute of Standards and Technology revealed that many systems misidentify people of color at significantly higher rates than white individuals – leading to false arrests, privacy violations, and wrongful accusations. The American Civil Liberties Union (ACLU) attributes part of this to mugshot databases skewed “… because of racial biases in the criminal justice system.” In an ACLU test, Amazon’s Rekognition tool incorrectly matched 28 members of Congress against a database of mugshot images, with the false matches disproportionately affecting people of color.

Privacy Invasion:

Imagine a multi-million-dollar company using your photos to train a facial recognition algorithm without your consent. AI systems often rely on vast amounts of personal data, raising significant concerns about consent and privacy, and misuse of this data can lead to ethical breaches and violations of trust.

For instance, Clearview AI faced global criticism for scraping billions of photos from social media platforms without user consent; the company built a facial recognition dataset from those photos and sold the trained models to law enforcement agencies worldwide, raising alarms about surveillance and misuse. It was later fined over €30 million in Europe for violating people’s privacy.

Misuse of AI:

We have covered several cases of unethically trained AI models, but what about the unethical use of AI models for “personal gain”? We keep returning to the remarkable benefits of AI in our lives, but these same systems are so powerful that they can be turned to malicious purposes. We have all read about AI misuse through voice alteration, deepfake videos, and even prompting AI to do our work or studies for us.

Deepfakes, in particular, have been misused the most for fraud and misinformation. A notable example is the March 2019 scam in which fraudsters defrauded a UK-based energy company. The scammers used deepfake audio that convincingly imitated the CEO’s voice, swindling the firm out of a whopping $243,000. The incident demonstrated how convincing AI-generated media can be used for impersonation and exploited for financial and reputational harm.

Autonomy Risks:

Autonomous systems have been circulating in our workplaces for years now: CI/CD practices, autonomous driving vehicles, automation in manufacturing, and many other cases. These have brought myriad advancements and improvements in efficiency. On the other hand, such technologies introduce new safety and ethical challenges – unintended deployments of code versions, safety hazards for civilians, and even a sharp rise in unemployment and layoffs.

One example of such hazards: in 2018, an Uber autonomous test vehicle tragically struck and killed a pedestrian in Arizona due to a failure in object recognition. Investigators revealed that software limitations and a delayed braking configuration were key contributors to the incident, as Uber had prioritized “smooth driving” over pedestrian safety.

Data Misuse:

Data breaches and the unconsented use of personal information are recurring events in the unethical side of AI. Breached data is exploited to scam people, spread false information, and more – the list goes on. Scandals of this sort have happened on numerous occasions, including during the U.S. elections with the Cambridge Analytica scandal.

In 2016, Cambridge Analytica exploited personal information from Facebook, taking advantage of the platform’s then-weak data privacy policies to target U.S. voters. During the 2016 presidential elections, about 87 million Facebook profiles were improperly accessed through an app created by Global Science Research (GSR). The data was used to create psychographic profiles, deliver highly targeted advertisements, and promote voting campaigns. This breach of trust raised serious concerns about consent and the ethical use of AI in influencing public opinion.

Proposed Solutions

Addressing the ethical dilemmas stemming from unethical uses of AI requires a multi-stakeholder approach involving developers, companies, governments, and society as a whole. Here are some practical solutions to tackle the key issues:

    Diversifying Training Data – ensuring datasets are representative of diverse populations is a must to minimize bias. This includes systematic checks and audits, review practices and development strategies, and tools like IBM’s AI Fairness 360 toolkit, which detects and mitigates bias in machine learning models, or Amazon’s SageMaker Clarify, which detects bias in datasets and explains model predictions (see the bias-check sketch after this list).

    Safeguarding Privacy – this might sound abstract, so let’s dig deeper. One way to ensure privacy in AI is to incorporate privacy-preserving measures such as differential privacy, which guarantees that released results cannot be traced back to individuals within a large dataset (a minimal sketch follows this list), or to use Amazon Macie to automatically discover and flag personal information in S3 buckets. Another way is promoting transparency: companies must clearly communicate how personal data is used in their AI systems and provide user-friendly, comprehensive consent mechanisms that give users a real choice over sharing their data. One example is Apple’s privacy labels, which let users see how each app collects and uses their personal data.

    Preventing Misuse – this starts with ethical policies: governments and organizations must define clear guidelines on what constitutes acceptable AI usage and set fines for breaches. Educating the public and raising awareness about AI-related fraud, such as deepfake videos, is equally important so that people stand a better chance of identifying scams before being defrauded – MIT has created online courses on detecting deepfake videos that are open to the public.

    Managing Autonomy Risks – developing comprehensive testing protocols is a must to avoid unintended AI behavior. Simulated environments and extensive real-world testing help identify edge cases and ensure safety, and real-time monitoring of autonomous systems should be required so they adhere to safety standards and avoid tragic incidents (a rough monitoring sketch is shown below). Tesla and Waymo, for instance, have strengthened their testing protocols and introduced more robust detection systems in their vehicles.
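
To make the first point concrete, here is a minimal sketch of the kind of disparate impact check that toolkits like AI Fairness 360 and SageMaker Clarify automate. The column names, the sample data, and the 0.8 threshold (a common rule of thumb) are illustrative assumptions, not the output of either tool:

    # Minimal disparate impact check, the kind of metric bias toolkits compute.
    # Column names and data are hypothetical.
    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, privileged, unprivileged):
        """Ratio of favorable-outcome rates: unprivileged / privileged.
        Values well below 1.0 (commonly < 0.8) signal potential bias."""
        priv_rate = df.loc[df[group_col] == privileged, outcome_col].mean()
        unpriv_rate = df.loc[df[group_col] == unprivileged, outcome_col].mean()
        return unpriv_rate / priv_rate

    # Hypothetical hiring decisions: 1 = hired (favorable), 0 = rejected.
    df = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    ratio = disparate_impact(df, "group", "hired", privileged="A", unprivileged="B")
    print(f"Disparate impact: {ratio:.2f}")  # 0.33 here - well below 0.8, audit needed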
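
Differential privacy, mentioned in the second point, can also be sketched in a few lines. Below is the classic Laplace mechanism applied to a simple counting query; the dataset and the epsilon value are hypothetical, and a production system would use a vetted library rather than hand-rolled noise:

    # Laplace mechanism: add noise scaled to sensitivity/epsilon so that the
    # released value reveals (almost) nothing about any single individual.
    import numpy as np

    def dp_count(records, epsilon):
        """Release a count with epsilon-differential privacy.
        A counting query changes by at most 1 when one record is added
        or removed, so its sensitivity is 1."""
        sensitivity = 1.0
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return len(records) + noise

    # Hypothetical records: users who opted in to a feature.
    opted_in = ["u1", "u2", "u3", "u4", "u5"]
    print(f"Noisy count (epsilon=0.5): {dp_count(opted_in, epsilon=0.5):.1f}")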
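
As for the last point, here is a rough sketch of what real-time monitoring of an autonomous system might look like. The thresholds, sensor callbacks, and fallback action are all assumptions for illustration; real vehicles rely on redundant, certified safety systems rather than anything this simple:

    # Watchdog loop: poll safety signals and fall back to a safe stop on any
    # violation. All names and thresholds here are hypothetical.
    import time

    HEARTBEAT_TIMEOUT_S = 0.5   # max silence tolerated from the perception stack
    MIN_CLEARANCE_M = 5.0       # minimum allowed distance to the nearest obstacle

    def safe_stop():
        """Fallback action: hand control to the emergency braking system."""
        print("SAFE STOP: engaging emergency braking")

    def monitor(last_heartbeat, nearest_obstacle_m):
        """Poll the supplied sensor callbacks at roughly 20 Hz."""
        while True:
            if time.monotonic() - last_heartbeat() > HEARTBEAT_TIMEOUT_S:
                safe_stop()          # perception stack went silent
                break
            if nearest_obstacle_m() < MIN_CLEARANCE_M:
                safe_stop()          # obstacle inside the safety envelope
                break
            time.sleep(0.05)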

Our Call For Ethical AI Practices

As both advocates and daily users of AI tools, we at Digico Solutions bear the responsibility to adopt ethical AI practices. The potential of AI is limitless, paving the way for the future we have always envisioned; its potential to transform industries and enhance quality of life is undeniable. However, its deployment should always respect human dignity, fairness, and societal well-being. At the end of the day, we are the ones experiencing both the benefits and the pitfalls of its application.

By addressing risks proactively and committing to transparency, accountability, and continuous improvement, we can create a future where AI serves humanity responsibly without compromising our safety or morals. Cloud providers, including AWS and Azure, have taken steps toward promoting ethical AI through tools that enhance fairness, transparency, and security in AI systems.

To wrap it all up, ethical AI is not just the responsibility of developers and organizations but of all stakeholders, including governments, educators, and us – the users. Collaboration across these entities is essential to building a future where AI serves humanity responsibly and inclusively.

Together, by acting on the ethics of today, we can reshape a better tomorrow.