The concern that artificial intelligence will grow in capability faster than its human controllers' ability to understand or control it has become increasingly widespread with the advancement of AI systems over the past few years. Many have developed a 'healthy' fear of the technology, hypothesizing that it may evolve in directions not entirely foreseen by its creators—for good or bad.
This article takes a closer look at the ethical challenges of working with AI and what software engineers should keep in mind when building and using these systems.
Ethical Challenges of Artificial Intelligence
Lack of transparency of AI tools: The decisions made by artificial intelligence software are not always intelligible to humans. The software typically uses knowledge representation to store what it knows or hears; automated reasoning to use the stored information to answer questions and draw new conclusions; and machine learning to adapt to new circumstances and to detect and extrapolate patterns. This means that determining the reasoning behind the output of the system may not always be possible.
This lack of transparency, coupled with the general population's limited knowledge of what AI tools are capable of and how they work, helps explain many people's reluctance towards AI. Moreover, only recently have regulations and laws governing the use of artificial intelligence begun to improve—a critical gap, since the advent of AI brought forth many legal issues. For instance, OpenAI (the company that created ChatGPT) is currently facing a class-action lawsuit alleging that the company massively violated the copyrights and privacy of countless people when it used data scraped from the internet to train its models.
Software such as ChatGPT (GPT-4), which can interact conversationally, and DALL·E 2, which can create realistic images and art from a natural-language description, is trained on data produced by humans. This raises issues of copyright and intellectual property rights, as many feel they deserve the right to opt out of having their creative work used to train these machines.
It should be noted, though, that there is an opportunity for technical experts and public officials to educate and encourage citizens about this rapidly growing sector. A lot of focus has been placed on the seemingly negative aspects of artificial intelligence when, in reality, the field has the potential to be a great source of innovation and to make our daily lives much easier.
AI is not neutral: Artificial intelligence-based decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias, depending on the dataset used to train the machine. It is therefore possible to skew a model's training, even inadvertently, by corrupting its dataset.
Biased data = biased AI systems
- The quality of data used is a major concern and development teams should be vigilant in identifying and mitigating these biases to ensure that the systems are inclusive and accessible to all users. Amazon’s AI recruiting tool that discriminated against women is a good example of a poorly trained AI tool. The system was trained on resumes submitted to the company over ten years, mostly from men, so the algorithm learned to penalize resumes that included words associated with women. This exemplifies how biased data can lead to biased AI systems.
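The Amazon example above can be sketched in miniature. The toy data and scoring rule below are hypothetical (they are not Amazon's system or data); the sketch only illustrates the mechanism: when one group's terms appear mostly among negative historical outcomes, even a naive frequency-based model learns to penalize those terms.

```python
# Illustrative sketch with hypothetical data: a naive term-scoring
# "model" trained on skewed historical outcomes absorbs that skew.
from collections import Counter

# Hypothetical historical resumes, labeled by past outcome. The pool
# is mostly male, so terms associated with women appear only among
# the rejected examples -- mirroring the skew in the Amazon case.
hired = ["chess club captain", "software engineer", "rugby team lead"]
rejected = ["women's chess club captain", "women's coding society"]

def term_weights(hired_docs, rejected_docs):
    """Weight > 1 means a term is more common among hired resumes;
    weight < 1 means it is effectively penalized. Add-one smoothing
    keeps unseen terms from producing division by zero."""
    pos = Counter(w for d in hired_docs for w in d.split())
    neg = Counter(w for d in rejected_docs for w in d.split())
    vocab = set(pos) | set(neg)
    return {w: (pos[w] + 1) / (neg[w] + 1) for w in vocab}

weights = term_weights(hired, rejected)
# "women's" never occurs in a hired resume, so the model assigns it a
# penalty -- purely from the skewed data, not from anything about skill.
print(weights["women's"] < 1.0)
```

Nothing in the scoring rule mentions gender; the discrimination comes entirely from the composition of the training data, which is why auditing datasets matters as much as auditing model code.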
Data privacy and security: Artificial intelligence relies on large quantities of data, and developers must ensure that this data is collected, stored, and used securely and responsibly. Users are understandably wary of automated technologies that obtain and use their data, which may include sensitive information, so these systems must protect users' privacy and comply with data protection regulations. It should be noted that while some products and services need data, they do not need to invade anyone's privacy to get it.
By addressing these challenges, development teams can ensure that their AI-powered systems are effective and ethical and that they deliver real value for users and organizations. Consumers, in turn, can be assured that the tools they are using are built responsibly and securely, for their peace of mind.
"The potential benefits of artificial intelligence are huge, but so are the dangers." —Dave Waters