Fair and Square: AI Ethics in a Nutshell and How It Impacts Your Business


Artificial Intelligence (AI) is changing the way businesses operate and interact with their customers. It allows for the automation of repetitive operations, discovering insights hidden in data, and helps adopters make better and more informed decisions.


However, as AI becomes more integrated into the IT mainstream, concerns about its possible misuse are mounting.


The 2022 AI Index highlighted indications that AI ethics — the study of fairness and bias in AI systems, among other things — is gaining traction.


According to a study of AI impact and progress at Stanford University's Institute for Human-Centered Artificial Intelligence, research on fairness and transparency in AI has exploded since 2014, with a fivefold increase in publications on related topics over the past four years.


In addition, the tech sector has increased its involvement in AI ethics, with 71 percent more publications affiliated with the industry at top conferences from 2018 to 2021. 


Mark Zuckerberg, founder and chief executive of Meta (formerly Facebook), said that AI is probably "the most important foundational technology of our time."


Whether it's determining the contents of a social media feed or judging whether someone is eligible for a bank loan, every algorithm has real-world implications and the capacity to harm as much as it can benefit.  


Policymakers, technology companies and academics are all debating how to best handle this unavoidable fact. 


To address the ethical issues that can arise from non-human data analysis and decision-making, an increasing number of businesses are focusing on how to prevent AI from making potentially damaging conclusions.



Ethics In AI — What Is It In Practice?

According to political philosopher Michael Sandel, who is a lecturer on moral, social, and political implications of new technologies at Harvard University, AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and the role of human judgment, which is perhaps “the most difficult philosophical question of the era”.


AI ethics is a set of beliefs, principles, and procedures that apply widely recognized standards to govern moral behavior in developing and using AI technologies.


These ethics motivate morally acceptable practices and dictate the essential duties and obligations necessary to create ethical, fair, and safe AI applications.


So, in practice, what does this mean? Ethical AI ensures that an organization's AI initiatives preserve human dignity and do not in any way cause harm to people. This includes aspects such as fairness, the avoidance of weaponization, and accountability.

What Do Ethical AI Practices Mean for Businesses?

In "Ethics of Artificial Intelligence and Robotics", published in the Stanford Encyclopedia of Philosophy, Vincent Müller defines the ethics of artificial intelligence as a field of technological ethics particular to artificially intelligent systems. It can be separated into two parts: concerns with the moral behavior of people as they design, build, utilize, and treat artificially intelligent systems, and concerns with machine behavior.


Historically, the role of ethics in society was restricted to humans. Machines, on the other hand, are newcomers to the scene. As artificial intelligence evolves at a rapid pace, businesses and society as a whole must address the ethics of intelligent machines.


However, many companies are not embracing ethical AI practices in their operations. According to data from PricewaterhouseCoopers (PwC), only 20 percent of organizations had an AI ethics framework in place as of 2021, and just 35 percent had plans to strengthen the governance of AI systems and processes.

How Do We Build Responsible and Ethical AI?

Business AI ethics has recently been a trending issue in the tech industry. Google's artificial intelligence ethics research unit came under media scrutiny when top AI ethics researcher Timnit Gebru was abruptly fired, followed by the termination of her co-worker Margaret Mitchell. Although Google disputed the researchers' account of events, the two firings created a significant backlash in the tech industry.


Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field's most prominent conferences. But its reputation has been badly, perhaps irreversibly, tarnished at a time when the business is attempting to put a politically acceptable face on its data empire.


Companies developing AI technology should take moral issues into account when they are planning their projects. "They must create the products with an ethical attitude. Ethics cannot simply be a checkbox exercise at the conclusion of a product rollout," said Anand Rao, global AI lead at PwC.


In building responsible and ethical AI, the first step is recognizing that algorithms can be biased. It is vital to question AI recommendations rather than simply accepting them because "the computer said so."


Furthermore, if less biased algorithms are desired, more training data on protected classes such as race, gender, or age may be required. Reviewing an algorithm's recommendations for protected classes may be an effective technique to detect discrimination.
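One common way to review an algorithm's recommendations across protected classes is a disparate-impact check: compare each group's selection rate to that of the most-favored group and flag large gaps (the "four-fifths rule" often used in US employment contexts flags ratios below 0.8). A minimal sketch, using hypothetical loan-approval outcomes and an assumed 0.8 threshold:

```python
# Minimal fairness-audit sketch: compare approval rates across groups
# defined by a protected attribute. Data, group names, and the 0.8
# threshold are hypothetical illustrations, not a real deployment.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 outcomes."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def disparate_impact(decisions, threshold=0.8):
    """Return each group's selection rate relative to the best-off group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    ratios = {group: rate / best for group, rate in rates.items()}
    return ratios, threshold

# Hypothetical loan-approval outcomes, grouped by a protected class:
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% approved
}

ratios, threshold = disparate_impact(outcomes)
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < threshold else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_b's ratio (0.375 / 0.75 = 0.50) falls below the threshold,
# so it would be flagged for closer review.
```

A check like this is only a starting point: a flagged ratio signals that the model's outputs deserve scrutiny, not that discrimination is proven, and production audits typically use dedicated tooling and multiple fairness metrics.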


AI and deep learning systems are being entrusted with increasingly consequential decisions, such as medical diagnosis, criminal sentencing, driving automobiles, screening job candidates, and determining who is creditworthy. The algorithms that govern these decisions are usually opaque, making it difficult to hold their developers accountable.

Why Should Businesses Care About AI Ethics?

As enterprises' AI capabilities advance, the route to AI maturity is not without hurdles. Management must be able to assess their organization's AI maturity, identify roadblocks to growth, and review the organizational and cultural changes they must implement following the introduction of AI.


At Laiye, we specialize in combining the power of artificial intelligence with robotic process automation to drive Intelligent Automation. To ensure that our end-to-end IA solutions continue to remain relevant to our customers and partners, we keep a close eye on industry expectations and technical trends, such as AI ethics.


“It is important to empower companies to be more human-centric by using humans to empathize with human needs while using AI-capable digital workers to deliver the desired outcomes,” said Peter Pang, Head of Sales, APAC, at Laiye.


To learn more about Ethics in AI and the road ahead, check out this piece — Are Rules The Tools to Make AI A Force for Good? — where Gary Saw, Laiye APAC General Manager, discusses regulations and ethics in tech and the challenges ahead.
