HBKU Talks: AI and Ethics

As Artificial Intelligence grows in impact all around us, does ethics have a role in how we design, adopt and also apply AI?

In this episode of ‘HBKU Talks’, Dr. Hassan Sajjad, Senior Scientist at Qatar Computing Research Institute, and Susan L. Karamanian, Dean of the College of Law, share their views.

Q: What role do algorithms play in the selection of information and news that people receive and what are the ethical concerns around this?

A: Machine learning algorithms rely on massive amounts of data to learn from and to predict likely outcomes. Because we live in an imperfect world, the data contains prejudices, which the algorithms learn and reflect in their predictions. For example, Amazon’s recruiting tool was found to be biased against women because its training data was dominated by male candidates. There are numerous anecdotes along these lines, and we have not yet found a solution to this problem. The discrimination is not limited to gender; it affects any class that is misrepresented, under-represented, or over-represented in the data. Our reliance on such systems raises serious concerns about ethical decision-making and fairness of judgment, which are key pillars of our society.
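
To make the mechanism concrete, here is a small, entirely synthetic sketch (hypothetical data and a standard scikit-learn classifier, not Amazon’s actual system): a model trained on historically imbalanced hiring decisions picks up gender as a predictive signal even though it is irrelevant to the job.

```python
# A minimal, hypothetical illustration: a classifier trained on historically
# imbalanced hiring data learns gender as a predictive signal, even though
# gender is irrelevant to the job. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)      # 0 = female, 1 = male (synthetic)
skill = rng.normal(0, 1, n)         # the only legitimate signal
# Historical labels: skill matters, but past decisions also favored men.
hired = (skill + 0.8 * gender + rng.normal(0, 0.5, n)) > 0.8

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
print("learned weights [gender, skill]:", model.coef_[0])
# The non-zero gender weight shows the model has absorbed the historical
# prejudice and will reproduce it in its predictions.
```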

Q: The development of future technologies is in the hands of technical experts. How can we ensure that methods for ethical reflection, responsibility, and reasoning are built into the design process?

A: We need to work across several dimensions, such as data curation, model training, and evaluation. Careful curation, in turn, allows us to select training data wisely. It is a tough problem, and there is no “perfect data”. Creating new datasets, and maintaining more than one standard dataset per task, will increase dataset diversity.

Similarly, we should make informed decisions when including or excluding a dataset for training. Simple checks, such as vocabulary coverage or the removal of content words, help assess dataset diversity and reveal superficial cues that a model might exploit to solve the task.
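
As a minimal sketch of one such check, here is a vocabulary-coverage measure in plain Python; the texts and the exact check are illustrative, not a prescribed procedure.

```python
# A sketch of one simple dataset sanity check: what fraction of evaluation
# tokens has the candidate training set already seen? Texts are made up.
def vocabulary_coverage(train_texts, eval_texts):
    """Fraction of evaluation tokens that appear in the training vocabulary."""
    train_vocab = {tok for text in train_texts for tok in text.lower().split()}
    eval_tokens = [tok for text in eval_texts for tok in text.lower().split()]
    covered = sum(1 for tok in eval_tokens if tok in train_vocab)
    return covered / max(len(eval_tokens), 1)

train_texts = ["the engineer fixed the server", "she deployed the model"]
eval_texts = ["the engineer deployed a new model"]
print(f"coverage: {vocabulary_coverage(train_texts, eval_texts):.0%}")
```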

Lastly, there is a dire need to develop standard evaluation methods that go beyond measuring performance and assess AI models against ethical concerns such as gender bias, demographic discrimination, and the fairness of decisions.
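
A minimal sketch of what such group-wise evaluation might look like, with hypothetical group names and data: instead of one overall score, metrics are reported per demographic group, so gaps between groups become visible.

```python
# A minimal sketch of group-wise evaluation: report accuracy and positive-
# prediction rate per demographic group, not just an overall score.
# Group names, labels, and predictions below are hypothetical.
from collections import defaultdict

def per_group_report(records):
    """records: iterable of (group, true_label, predicted_label)."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "positive": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(y_true == y_pred)
        s["positive"] += int(y_pred == 1)
    for group, s in sorted(stats.items()):
        print(f"{group}: accuracy={s['correct'] / s['n']:.2f}, "
              f"positive rate={s['positive'] / s['n']:.2f}")

# A large gap in positive rate between groups is one simple red flag.
per_group_report([("female", 1, 0), ("female", 1, 1),
                  ("male", 1, 1), ("male", 0, 1)])
```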

Q: How can we make search engines more equitable and accurate against biases?

A: There is no one-size-fits-all solution. I consider explaining why a search engine, or any AI system, has reached a certain decision to be the essential step in this direction. Along a different line, instead of relying on the prediction of a single system, aggregating the results of multiple systems provides users with diverse results and gives them the liberty to select the information they wish to use.
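
One simple aggregation scheme that fits this idea is reciprocal rank fusion; the sketch below merges ranked result lists from several engines so that no single system’s biases dominate. Engine names and documents are hypothetical, and this is one possible scheme, not the only one.

```python
# A sketch of reciprocal rank fusion: merge ranked result lists from several
# engines into one list. Engine names and document IDs are hypothetical.
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked result lists (best first)."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)  # higher ranks contribute more
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["doc1", "doc2", "doc3"]
engine_b = ["doc3", "doc1", "doc4"]
engine_c = ["doc2", "doc3", "doc5"]
print(reciprocal_rank_fusion([engine_a, engine_b, engine_c]))
```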

Q: What efforts are you making to address gender bias in AI and to work toward ethical AI?

A: At QCRI, we have developed the NeuroX toolkit, which aims to bring transparency to AI, an essential ingredient for ensuring fairness, trust, and ethical decision-making. We have developed methods that explain a model’s decisions and highlight any biases the model has learned and uses. NeuroX illuminates the components of an AI model that are responsible for particular ethical issues and enables users to eradicate them.
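
The following is a generic sketch of the underlying idea of neuron-level probing, not NeuroX’s actual API: a linear probe is trained on hidden-layer activations to locate neurons that encode a sensitive property such as gender. The activations and the "gender neuron" here are synthetic stand-ins.

```python
# A generic sketch of neuron-level probing (not NeuroX code): train a sparse
# linear probe on activations to find neurons predictive of a sensitive
# property. All activations and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_examples, n_neurons = 2_000, 64
activations = rng.normal(0, 1, (n_examples, n_neurons))  # stand-in activations
gender = rng.integers(0, 2, n_examples)
activations[:, 7] += 2.0 * gender  # pretend neuron 7 encodes gender

probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(activations, gender)
top = np.argsort(np.abs(probe.coef_[0]))[::-1][:5]
print("neurons most predictive of gender:", top)
# Once such neurons are located, their contribution can be zeroed out
# ("ablated") to test whether the model's decisions still depend on gender.
```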

Q: How is AI used in the courts?

A: Let me answer this question by first asking three questions that reflect recent developments. Should judges use algorithms to determine the length of a jail sentence for a person convicted of a crime? Should the results of a computer program modeled to analyze legal precedent be substituted for the studied view of a judge? Are we prepared to have a pre-programmed computer model, rather than a court, resolve differences between parties to a commercial contract?

Technology is fundamentally changing the role of courts. To date, AI, in which machine learning systems gather data in real time, process it, and adapt based on the data and previous decisions, has had only a limited impact on court decisions.

Yet algorithms that lack this adaptive feature are increasingly influencing judicial decisions. The most pronounced area in which this is occurring is criminal law. For example, four states in the United States (US) use an algorithmic aid to guide decisions about how much bail a person should be ordered to pay to secure their reappearance at a criminal trial. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool has been adopted by various US states for pre-trial, sentencing, and parole decisions. Behind these programs is data reflecting a range of categories, such as age, nature of offenses, prior sentences, and drug-use history, which are given certain weights and then used to predict behavior.
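
To illustrate the general “weighted categories” idea only: COMPAS’s actual model and weights are proprietary and not public, so the features and weights in this toy sketch are entirely invented.

```python
# A toy illustration of a weighted risk score (NOT COMPAS's actual model,
# whose details are proprietary). Every feature name and weight is invented.
WEIGHTS = {
    "age_under_25": 1.5,
    "prior_sentences": 2.0,
    "offense_severity": 1.0,
    "drug_history": 0.5,
}

def risk_score(defendant: dict) -> float:
    """Weighted sum of case attributes; higher means higher predicted risk."""
    return sum(WEIGHTS[k] * float(defendant.get(k, 0)) for k in WEIGHTS)

print(risk_score({"age_under_25": 1, "prior_sentences": 3, "drug_history": 1}))
```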

Q: Explain how AI could help in terms of efficiency and fairness.

A: Courts throughout the world are congested; in some states, reaching a full resolution of a matter can take years, and the process is costly. If machine learning can reduce the time a judge spends on sentencing, that would yield cost savings. Risk assessment algorithms could also mitigate concerns about judicial bias.

Q: But AI could undermine fairness and the rule of law. Can you explain?

A: The unbridled use of this type of technology, however, has substantial ethical implications, as it could undermine fairness and the rule of law. First, an individual facing a criminal sanction should be judged based on his or her conduct and unique qualities. Risk assessment algorithms, if used without the human element, remove individual consideration from sentencing, which rests at the heart of human dignity.

Second, the lack of transparency surrounding these algorithms raises questions about their accuracy and reliability. Their complexity, and the reluctance of the program originators to share details out of concern for protecting intellectual property, raise fundamental due process concerns.

Q: What safeguards or legal guidelines should be in place to prevent these harms?

A: First, the human component must always guide the use of algorithms, so that technology guides but does not bind judges. Second, full transparency as to risk assessment algorithms is essential. Finally, I would advocate that, in addition to full disclosure, there be informed consent of all relevant parties.