The adoption of artificial intelligence in many industries has been regarded by some as a threat to low-skilled workers, as it will drastically cut down reliance on the human workforce.
Besides unemployment, there are also concerns about rising economic inequality driven by AI-powered companies. With fewer employees, these companies would gain a disproportionate advantage over conventional companies that still depend on shift-based human labour.
AI bots, for example, have achieved the capability to interact with humans and build relationships through conversations.
This would eventually enable these bots to affect human behaviour and possibly trigger certain actions. Such intelligent machines, however, are not immune to mistakes and confusion, and can be manipulated to fulfil certain ends.
Also, AI cannot be expected to be totally fair and neutral, since it depends on human programmers who have their own interests and preferences.
Recent developments have unveiled AI’s ability to recreate images, imitate voices and even generate fake videos of a person, all of which could be used for fraud. At this level, even voice-recognition software can no longer detect such fraud.
Also, its ability to collect huge amounts of data without consent, and to put humans under surveillance, can be a threat to privacy.
Even in cases where AI could potentially eradicate disease or prevent disasters, its approach and methods could still cause undesirable consequences that humans may not approve of.
AI raises great security concerns, especially in the form of autonomous cyber-attacks. Even though AI is currently under human control, its ingenuity is growing at an exponential rate. AI experts fear that it might surpass human intelligence and lead to a technological singularity, which would then become a threat to humanity.
Therefore, the problem now is how to programme an AI with morals. The “Laws of Robotics” introduced by Isaac Asimov would not be enough, since trickier questions remain about what counts as ethical.
It will be difficult to answer questions such as: “Should a self-driving car hit another person to save the driver?”, “Is it permissible to programme military robots to kill a terrorist?” or “Should a trading bot inform everyone of a predicted disaster, or keep it secret?”
An AI framework that can take into account human desires, goals, preferences and ethical codes is indispensable. Some researchers are currently studying and developing AI frameworks based on “reinforcement learning” to ensure that artificial agents act ethically, and work to improve these frameworks is still in progress.
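The “reinforcement learning” idea mentioned above can be illustrated with a toy sketch. Everything below is an illustrative assumption of one common technique (reward shaping), not the researchers’ actual framework: an agent learns to reach a goal on a small grid, but the reward function heavily penalises a cell marked as harmful, so the learned policy routes around it.

```python
import random

# Toy gridworld: 2 rows x 4 columns. The agent starts at (0, 0) and must
# reach the goal at (0, 3). Cell (0, 1) is flagged as "harmful" -- an
# illustrative stand-in for an ethically forbidden action/state.
ROWS, COLS = 2, 4
START, GOAL = (0, 0), (0, 3)
HARMFUL = {(0, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action, clamp to the grid, and return (next_state, reward, done)."""
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    nxt = (r, c)
    if nxt == GOAL:
        return nxt, 10.0, True     # task reward for reaching the goal
    if nxt in HARMFUL:
        return nxt, -50.0, False   # ethical penalty dominates the task reward
    return nxt, -1.0, False        # small step cost encourages short paths

def train(episodes=3000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in ACTIONS}
    for _ in range(episodes):
        s, done, steps = START, False, 0
        while not done and steps < 50:
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r, done = step(s, a)
            best = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])
            s, steps = s2, steps + 1
    return q

def greedy_path(q, limit=10):
    """Follow the learned policy greedily from START."""
    path, s = [START], START
    while s != GOAL and len(path) < limit:
        a = max(ACTIONS, key=lambda x: q[(s, x)])
        s, _, _ = step(s, a)
        path.append(s)
    return path
```

Running `greedy_path(train())` yields a route that detours through the bottom row rather than crossing the penalised cell, showing how an “ethical code” can be encoded, however crudely, in the reward signal the agent optimises.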
Moreover, to ensure that the rise of AI across industries is beneficial, we need an ethical code for AI that does not compromise humankind’s potential and survival.
Recently, the United Kingdom government published a report, “AI in the UK: ready, willing and able?”, in which five core principles are outlined. The first principle proposes that AI should be developed for the common good and benefit of humanity. The second dictates that AI should operate within parameters of intelligibility and fairness. Third, AI should not be used to diminish the privacy or data rights of individuals, families or communities. Fourth, all people should have the right to be educated to flourish alongside artificial intelligence. The fifth principle opposes the use of AI as killer robots with the autonomy to hurt, destroy or deceive humans.
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, for instance, has launched a crowd-sourced global treatise entitled, “Ethically Aligned Design: A Vision for Prioritising Human Wellbeing with Autonomous and Intelligent Systems”.
The report takes into account various well-established “classical ethics” traditions, including religious- and culture-based ethical systems.
As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.
This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.
In Islam, ethics, or akhlak (virtuous character traits) in Arabic, is sometimes used interchangeably with adab, which refers to the manner, attitude, behaviour and etiquette of putting things in their proper places. Islamic ethics covers legal concepts ranging from syariah (Islamic law) and fiqh (jurisprudence) to qanun (ordinance) and ‘urf (customary practices).
Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.
At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah), which aim at preserving human benefit through the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql) and property (hifz al-mal). This approach could be very helpful in addressing contemporary issues, including those related to the rise of AI and intelligent robots.
As there are so many ethical questions in the age of AI, we need to discuss and address the issues by taking into account various perspectives from different disciplines.
It is hoped that, through this approach, scientists and religious scholars will work together to address these issues in harmony, and put an end to the existing separation in which the two groups operate in different universes.
Published in: New Straits Times, Friday 30 November 2018