Language Technology and Resources for a fAir, Inclusive, and Safe SociEty

LateRAISSE at LREC'22

Workshop Date: 25th June 2022 (Saturday)

The call for papers is now closed.

For more information, see Workshop Schedule and Accepted Papers.

This one-day workshop will provide a forum to present and discuss research on the creation and use of language resources and tools for identifying and raising awareness of bias and discrimination in social computational systems on the one hand, and of harassment and bullying in online spaces on the other. It is intended to bring together technical and non-technical experts from computing and social science sub-disciplines to examine social inclusion and safety from different perspectives. The aim of the workshop is to solicit research that applies relevant state-of-the-art machine learning and natural language processing technologies towards a fair, inclusive, and safe society. The workshop also aims to promote research on, and the development of, unbiased and inclusive language technologies.

For more information, see the Call for Papers.

Topics of Interest

While this list is not exhaustive, we encourage authors to submit academic work at the intersection of the following topics:

1. Bias and Discrimination in Recruitment and Workplace

Hiring is critical to society because it determines who can access economic opportunities to support themselves and their families (Bogen and Rieke, 2018). Studies have shown that hiring decisions are not always objective, leaving room for bias, discrimination, and unfair decisions influenced by appearance, gender, race, sexuality, and other factors (Bendick Jr and Nunes, 2013; Gaucher et al., 2011). At the same time, studies have established a strong correlation between diversity and inclusiveness in the workplace and increases in innovation, productivity, and profitability, making the elimination of bias not only a moral issue but also an economic one (Zhang, 2020). In today's changing society, there is an increasing need for equality of opportunity and inclusiveness in the workplace.

We expect papers that use NLP and Artificial Intelligence technologies to illustrate or mitigate bias in Human Resources (HR) and the workplace.

2. Bias and Discrimination in Legal Decision Making

Similarly, studies have established the negative influence of bias and stereotypes on the application of police force and on the criminal justice system, especially towards racial minorities (Goff et al., 2016; Yang, 2015). Text analytics on court documents has revealed implicit racial bias in appellate court opinions from US state and federal courts (Rice et al., 2019). Moreover, studies have found that gender attitudes may play a role in the judicial process and that court decisions by female judges are more likely to be overturned (Ahola et al., 2009; Ornaghi et al., 2019). The ultimate motto of the legal profession is 'equality before the law', and society will not be safe if justice can easily be miscarried or trampled on. There is therefore an increasing need to develop techniques and tools that promote equality and fairness in legal decision-making.

We expect papers applying NLP tools in the legal domain to develop fair legal decision-making systems for various social groups.

3. Exclusive and Offensive Language Identification, Especially in Low-Resource Language Settings

On the other hand, the prevalence of harmful content and abusive behaviour on social media has become more concerning with the pandemic, which has forced more people online than at any other time. More than ever, many people suffer emotional trauma and heightened mental health issues due to overexposure to harmful content on the internet. For instance, 44% of pre-adolescents encountered more cyberbullying incidents during the COVID-19 lockdown (Armitage, 2021). While solutions exist to identify harmful content online, their impact and performance have been limited by data and annotation challenges. Moreover, most work has targeted English and other widely spoken European languages with plentiful resources. If we are to make the internet a better and more respectful virtual space, we must design systems that can analyse online content in real time to detect harm. For a global impact, however, it is equally important to develop data and tools for languages that are widely spoken but not well documented, e.g. African and Asian languages.

We expect research papers focused on building NLP tools and resources for low-resource languages, e.g. classification and/or information extraction models, datasets, and multilingual Transformer-based language models fine-tuned to detect a wide range of antisocial online behaviours, such as those already highlighted in the literature (Nadali et al., 2013; Slonje et al., 2013; Bauman, 2015), for non-English societies.
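As a purely illustrative sketch (not a requirement of the call), the snippet below shows one such pipeline: fine-tuning a multilingual Transformer for binary offensive-language classification with the Hugging Face transformers and datasets libraries. The model name xlm-roberta-base, the CSV file names, and the two-class label scheme are assumptions for the example, not prescribed resources.

```python
# Illustrative sketch: fine-tune a multilingual Transformer for
# offensive-language detection. Model, files, and labels are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-base"  # any multilingual encoder could be substituted
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Hypothetical CSVs with "text" and "label" (0 = neutral, 1 = offensive) columns.
data = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="offensive-detector",
                           per_device_train_batch_size=16,
                           num_train_epochs=3,
                           evaluation_strategy="epoch"),
    train_dataset=data["train"],
    eval_dataset=data["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```

The same recipe transfers to other antisocial-behaviour detection tasks by swapping in a different labelled dataset and, where available, a language-specific pretrained model.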

© LateRAISSE