Systematic Review on AI in Gender Bias Detection and Mitigation in Education and Workplaces

Authors

  • Dinesh Deckker, Wrexham University
  • Subhashini Sumanasekara, University of Gloucestershire

Keywords

Artificial intelligence, AI bias, gender discrimination, fairness in AI, education and workplace AI, bias mitigation

Abstract

Gender bias in artificial intelligence (AI) systems, particularly within education and workplace settings, poses serious ethical and operational concerns. These biases often stem from historically skewed datasets and flawed algorithmic logic, which can reinforce existing inequalities and systematically exclude underrepresented groups, especially women. This systematic review analyses peer-reviewed literature from 2010 to 2024, sourced from IEEE Xplore, Google Scholar, PubMed, and SpringerLink. Using targeted keywords such as AI gender bias, algorithmic fairness, and bias mitigation, the review assesses empirical and theoretical studies that examine the causes of gender bias, its manifestations in AI-driven decision-making systems, and proposed strategies for detection and mitigation. Findings reveal that biased training data, algorithm design flaws, and unacknowledged developer assumptions are primary sources of gender discrimination in AI systems. In education, these systems affect grading accuracy and learning outcomes; in workplaces, they influence hiring, evaluations, and promotions. Mitigation approaches fall into three main categories: data-centric (e.g., data augmentation and data balancing), algorithm-centric (e.g., fairness-aware learning and adversarial training), and post-processing techniques (e.g., output calibration). However, each approach faces implementation challenges, including trade-offs between fairness and accuracy, lack of transparency, and the absence of intersectional bias detection. The review concludes that gender fairness in AI requires integrated strategies that combine technical solutions with ethical governance. Ethical AI deployment must be grounded in inclusive data practices, transparent protocols, and interdisciplinary collaboration. Policymakers and organizations must strengthen accountability frameworks, such as the EU AI Act and the U.S. AI Bill of Rights, to ensure that AI technologies support equitable outcomes in education and employment.
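As an illustration of the data-centric category, the sketch below implements the classic reweighing scheme of Kamiran and Calders, a standard data-balancing technique; it is offered as a minimal example of the category and is not drawn from any specific study in this review. Each training instance receives the weight P(group) x P(label) / P(group, label), which makes the protected attribute statistically independent of the outcome label in the weighted data.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: weight(g, y) = P(g) * P(y) / P(g, y).

    Under-represented (group, label) combinations get weights above 1,
    over-represented ones below 1, so a learner trained on the weighted
    data sees group membership as independent of the label.
    """
    n = len(groups)
    count_g = Counter(groups)              # marginal counts per group
    count_y = Counter(labels)              # marginal counts per label
    count_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical toy data: "f" applicants are under-represented among
# positive (1) labels, mirroring a skewed hiring dataset.
groups = ["f", "f", "f", "m", "m", "m", "m", "m"]
labels = [1, 0, 0, 1, 1, 1, 0, 0]
weights = reweighing_weights(groups, labels)
# The single (f, 1) instance is up-weighted to 1.5; (f, 0) instances
# are down-weighted to 0.75, equalising weighted positive rates.
```

After reweighing, the weighted positive rate is the same for both groups (0.5 here), which is the demographic-parity condition this data-centric technique targets; the group names and data above are purely illustrative.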

References

S. O’Connor and H. K. Liu, "Gender bias perpetuation and mitigation in AI technologies: Challenges and opportunities," AI & Society, vol. 38, pp. 917–933, 2023. doi: 10.1007/s00146-023-01675-4.

S. Shrestha and S. Das, "Exploring gender biases in ML and AI academic research through systematic literature review," Frontiers in Artificial Intelligence, vol. 5, 2022. doi: 10.3389/frai.2022.976838.

A. L. Hunkenschroer and C. Luetge, "Ethics of AI-enabled recruiting and selection: A review and research agenda," Journal of Business Ethics, vol. 182, pp. 243–261, 2022. doi: 10.1007/s10551-022-05049-6.

X. Ferrer, T. van Nuenen, J. M. Such, M. Coté, and N. Criado, "Bias and discrimination in AI: A cross-disciplinary perspective," IEEE Technology and Society Magazine, vol. 40, no. 1, pp. 72–80, 2021. doi: 10.1109/MTS.2021.3056293.

J. Dastin, "Amazon scraps secret AI recruiting tool that showed bias against women," Reuters, Oct. 10, 2018. [Online]. Available: https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/

P. Terhörst et al., "A comprehensive study on face recognition biases beyond demographics," IEEE Transactions on Technology and Society, vol. 2, no. 4, pp. 199–212, 2021. doi: 10.1109/TTS.2021.3111823.

H. Liu, J. Dacon, W. Fan, H. Liu, Z. Liu, and J. Tang, "Does gender matter? Towards fairness in dialogue systems," in Proc. Int. Conf. Computational Linguistics (COLING), Barcelona, Spain, Dec. 2020, pp. 4405–4415. doi: 10.18653/v1/2020.coling-main.390.

Z. Slimi and B. Villarejo-Carballido, "Navigating the ethical challenges of artificial intelligence in higher education: An analysis of seven global AI ethics policies," TEM Journal, vol. 12, no. 2, pp. 548–554, 2023. doi: 10.18421/TEM122-02.

L. Cheng, K. R. Varshney, and H. Liu, "Socially responsible AI algorithms: Issues, purposes, and challenges," Journal of Artificial Intelligence Research, vol. 71, pp. 1089–1121, 2021. doi: 10.1613/jair.1.12814.

F. Kamalov, D. S. Calonge, and I. Gurrib, "New era of artificial intelligence in education: Towards a sustainable multifaceted revolution," Sustainability, vol. 15, no. 16, Art. no. 12451, 2023. doi: 10.3390/su151612451.

R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Venice, Italy, Oct. 2017, pp. 618–626. doi: 10.1109/ICCV.2017.74.

M. Mitchell, S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, et al., "Model cards for model reporting," in Proc. Conf. Fairness, Accountability, and Transparency (FAT), Atlanta, GA, USA, Jan. 2019. doi: 10.1145/3287560.3287596.

K. Holstein, J. W. Vaughan, H. Daumé III, M. Dudik, and H. Wallach, "Improving fairness in machine learning systems: What do industry practitioners need?" in Proc. 2019 CHI Conf. Human Factors Comput. Syst., Glasgow, Scotland, May 2019, pp. 1–16. doi: 10.1145/3290605.3300830.

S. Guo, J. Wang, L. Lin, and R. Chen, "The impact of cognitive biases on decision-making processes in high-stress environments," Journal of Cognitive Psychology, vol. 33, no. 5, pp. 567–580, 2021.

M. J. Page, J. E. McKenzie, P. M. Bossuyt, I. Boutron, T. C. Hoffmann, C. D. Mulrow, et al., "The PRISMA 2020 statement: An updated guideline for reporting systematic reviews," BMJ, vol. 372, Art. no. n71, 2021. doi: 10.1136/bmj.n71.

E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M. Vidal, et al., "Bias in data-driven artificial intelligence systems: An introductory survey," Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 10, no. 3, Art. no. e1356, 2020. doi: 10.1002/widm.1356.

M. Raghavan, S. Barocas, J. Kleinberg, and K. Levy, "Mitigating bias in algorithmic hiring: Evaluating claims and practices," in Proc. Conf. Fairness, Accountability, and Transparency (FAT), Barcelona, Spain, Jan. 2020, pp. 469–481. doi: 10.1145/3351095.3372828.

S. J. Yang, H. Ogata, T. Matsui, and N. Chen, "Human-centered artificial intelligence in education: Seeing the invisible through the visible," Computers and Education: Artificial Intelligence, vol. 2, no. 1, pp. 1–14, 2021. doi: 10.1016/j.caeai.2021.100008.

A. Köchling and M. C. Wehner, "Discriminated by an algorithm: A systematic review of discrimination and fairness in algorithmic decision-making in HR recruitment and development," Business Research, vol. 13, pp. 795–848, 2020. doi: 10.1007/s40685-020-00134-w.

A. Thieme, D. Belgrave, and G. Doherty, "Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems," ACM Trans. Comput.-Hum. Interact., vol. 27, no. 5, pp. 1–53, 2020. doi: 10.1145/3398069.

A. Paullada, I. D. Raji, E. M. Bender, E. Denton, and A. Hanna, "Data and its (dis)contents: A survey of dataset development and use in machine learning research," Patterns, vol. 2, no. 11, Art. no. 100336, 2021. doi: 10.1016/j.patter.2021.100336.

A. Asatiani, P. Malo, P. R. Nagbøl, E. Penttinen, T. Rinta-Kahila, and A. Salovaara, "Challenges of explaining the behavior of black-box AI systems," MIS Quarterly Executive, vol. 19, no. 4, 2020. doi: 10.17705/2msqe.00037.

V. Hassija, V. Chamola, A. Mahapatra, A. Singal, D. Goel, K. Huang, et al., "Interpreting black-box models: A review on explainable artificial intelligence," Cognitive Computation, 2023. doi: 10.1007/s12559-023-10179-8.

A. Nguyen, H. N. Ngo, Y. Hong, B. Dang, and B. T. Nguyen, "Ethical principles for artificial intelligence in education," Education and Information Technologies, vol. 27, pp. 13573–13593, 2022. doi: 10.1007/s10639-022-11316-w.

M. Mirbabaie, F. Brünker, N. Frick, and S. Stieglitz, "The rise of artificial intelligence: Understanding the AI identity threat at the workplace," Electronic Markets, vol. 31, pp. 895–913, 2021. doi: 10.1007/s12525-021-00496-x.

P. Budhwar, S. Chowdhury, G. Wood, H. Aguinis, G. J. Bamber, J. R. Beltran, et al., "Human resource management in the age of generative artificial intelligence: Perspectives and research directions on ChatGPT," Human Resource Management Journal, vol. 34, no. 1, 2023. doi: 10.1111/1748-8583.12524.

A. Caliskan, P. A. Pimparkar, T. Charlesworth, R. Wolfe, and M. R. Banaji, "Gender bias in word embeddings: A comprehensive analysis of frequency, syntax, and semantics," in Proc. 2022 AAAI/ACM Conf. AI, Ethics, and Society (AIES '22), Oxford, UK, 2022, pp. 172–182.

M. Roshanaei, "Cybersecurity preparedness of critical infrastructure: A national review," Journal of Critical Infrastructure Policy, vol. 4, no. 1, Article 4, 2023.

S. Popenici, "The critique of AI as a foundation for judicious use in higher education," Journal of Applied Learning & Teaching, vol. 6, no. 2, pp. 378–384, 2023.

N. Meade, E. Poole-Dayan, and S. Reddy, "An empirical survey of the effectiveness of debiasing techniques for pre-trained language models," in Proc. 60th Annu. Meeting Assoc. Comput. Linguistics (ACL), Dublin, Ireland, May 2022, pp. 1878–1898.

B. Booth, L. Hickman, S. K. Subburaj, and S. K. D'Mello, "Bias and fairness in multimodal machine learning: A case study of automated video interviews," in Proc. 2021 ACM Conf. Fairness, Accountability, and Transparency (FAccT '21), Virtual Event, Mar. 2021, pp. 279–289.

I. D. Raji, A. Smart, R. N. White, M. Mitchell, T. Gebru, B. Hutchinson, J. Smith-Loud, D. Theron, and P. Barnes, "Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing," in Proc. 2020 Conf. Fairness, Accountability, and Transparency (FAT), Barcelona, Spain, Jan. 2020, pp. 33–44. doi: 10.1145/3351095.3372873.

European Commission, "Proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act)," European Commission, Brussels, Belgium, 2021. [Online]. Available: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

White House Office of Science and Technology Policy (OSTP), "Blueprint for an AI Bill of Rights: Making automated systems work for the American people," Washington, DC, USA, 2022. [Online]. Available: https://www.whitehouse.gov/ostp/ai-bill-of-rights

L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, and E. Vayena, "AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations," Minds and Machines, vol. 28, no. 4, pp. 689–707, 2018. doi: 10.1007/s11023-018-9482-5.

Published

07/01/2025

How to Cite

Deckker, D., & Sumanasekara, S. (2025). Systematic Review on AI in Gender Bias Detection and Mitigation in Education and Workplaces. International Journal of Research in Computing, 4(II), 1–11. Retrieved from https://ijrcom.org/index.php/ijrc/article/view/154