Fairness and Bias in AI
Track Editors
Roberta Calegari, University of Bologna (Contact for Enquiries)
Andrea Aler Tubella, Umeå University
Virginia Dignum, Umeå University
Michela Milano, University of Bologna
Overview
We invited scholars to submit their original research on fairness and bias in AI for consideration in this special track. The track's primary focus is to highlight the importance of responsible and human-centered approaches to addressing these issues. This special track includes a curated selection of extended versions of papers from the 1st AEQUITAS Workshop on Fairness and Bias in AI, held in Kraków in October 2023 in conjunction with ECAI 2023.
AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policymaking to guide decisions in important societal spheres, including hiring, university admissions, loan granting, medical diagnosis, and crime prediction. As our society faces a dramatic increase in inequalities and intersectional discrimination, we must prevent AI systems from amplifying this phenomenon and instead employ AI to mitigate it. As we use automated decision support systems to formalize, scale, and accelerate processes, we have the opportunity, as well as the duty, to revisit those processes for the better: to detect, diagnose, and repair existing patterns of injustice rather than perpetuate them. For domain experts and stakeholders to trust these systems, they must be able to trust the decisions the systems produce. Despite the growing amount of work in this area in recent years, we still lack a comprehensive understanding of how pertinent concepts of bias and discrimination should be interpreted in the context of AI, and of which socio-technical options for combating bias and discrimination are both realistically possible and normatively justified. The track focuses on fairness and bias in AI, including, but not limited to, the following topics:
- Bias and fairness by design
- Fairness measures and metrics (see the illustrative sketch after this list)
- Counterfactual reasoning
- Metric learning
- Impossibility results
- Multi-objective strategies for fairness, explainability, privacy, class imbalance, rare events, etc.
- Federated learning
- Resource allocation
- Personalized interventions
- Debiasing strategies on data, algorithms, procedures
- Human-in-the-loop approaches
- Methods to audit, measure, and evaluate bias and fairness
- Auditing methods and tools
- Benchmarks and case studies
- Standards and best practices
- Explainability, traceability, data and model lineage
- Visual analytics and HCI for understanding/auditing bias and fairness
- HCI for bias and fairness
- Software engineering approaches
- Legal perspectives on fairness and bias
- Social and critical perspectives on fairness and bias
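To make the scope of the fairness measures and auditing methods listed above concrete, the sketch below computes two widely used group-fairness metrics, the demographic parity difference and the equal opportunity difference, for the predictions of a binary classifier. It is a minimal illustration in Python using only NumPy; the toy data and variable names are our own assumptions and do not reflect any submission requirement or any particular tool.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups.
    Assumes each group contains at least one positive example."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return abs(tprs[0] - tprs[1])

# Hypothetical labels, classifier predictions, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))          # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))   # ~0.333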
Types of Submissions
This special track includes two types of articles:
- Regular journal articles, aiming to advance the state of the art in fairness and bias in AI.
- Viewpoints: short articles of up to 2000 words, dedicated to technical views and opinions on fairness and bias in AI, in which positions are substantiated by facts or principled arguments.
Regular journal articles in the domain of fairness in AI present innovative research that contributes to the field. The novelty may stem from various characteristics, including: i) the development and presentation of new AI techniques specifically designed for fairness in AI, ii) the application of existing AI techniques to previously unexplored domains within fairness in AI, iii) novel experimental comparisons of different AI fairness techniques through computational experiments or user studies, or iv) the introduction of fresh analyses, theories, or models that enhance our understanding of fairness in AI.
Viewpoint papers are dedicated to technical and critical views and opinions on the field of fairness and bias in AI, presenting a novel viewpoint on a problem or on a novel solution to one. They need not contain primary research data, but should be substantiated by facts or principled arguments that bring new insights or opinions to a debate.
Status
The track is closed to new submissions. Accepted submissions will be added to this page upon publication.