I am a final-year PhD student in the Artificial Intelligence (AI) Group of the Department of Computer Science at the University of Toronto. I study the interplay of language, morality, and AI. I take a highly interdisciplinary approach to explore two related lines of inquiry:
1) How do morals vary over human history and across cultures? Morality is not stationary; it changes over time and from culture to culture. My research characterizes this variation by developing psychologically inspired computational frameworks. For example, my work introduces the Moral Association Graph (MAG), a cognitive model grounded in human semantic memory that reflects people’s intuitive moral associations (e.g., smoking → disgusting, unhealthy, addiction). My recent work also shows that MAG can be extended to historical time points using graph neural networks and large-scale diachronic corpora dating back hundreds of years. I am further interested in extending this computational framework to model cultural universals and variation, answering questions such as: Why do some cultures moralize practices like smoking or divorce while others do not, and can we predict such cultural moral variation from people’s mental representations of word meanings?
2) How do AI systems “perceive” human morality and its variation? My research builds on ongoing discussions about the development of ethical AI. In particular, my work raises critical questions about cultural moral variation and how biases in AI and other computational methodologies prevent us from understanding human morality at a global scale. My work in this domain has pioneered the use of global ethical surveys for evaluating the cultural knowledge of LLMs, finding that AI systems represent the moral norms of Western cultures and wealthy nations more accurately, while their representations of non-Western moral standards contain harmful stereotypes. My recent work suggests that this lack of accurate cultural representation in LLMs is deeply intertwined with how AI systems capture and interact with human ethical standards. Extending this line of work, I am interested in studying how morality develops in AI systems compared to children’s moral development over time, and in exploring AI moral perception through multimodal input (e.g., speech, vision, and text).
Awards
- Schwartz Reisman Institute for Technology and Society Graduate Affiliate, 2022-2025.
- Cognitive Science Society Disciplinary Diversity & Integration Award, 2024, 2025.
- Schwartz Reisman Institute for Technology and Society Graduate Fellowship, 2021-2022.
- Iran’s National Elites Foundation: recognized as an elite member, 2016-2020.
Publications
Ramezani, A., and Xu, Y. The discordance between embedded ethics and cultural inference in large language models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025). pdf, code
Zhu, W., Ramezani, A., and Xu, Y. Visual moral inference and communication. In Proceedings of the 47th Annual Meeting of the Cognitive Science Society. Disciplinary Diversity and Integration Award. pdf, code
Ramezani, A., Stellar, J.E., Feinberg, M., and Xu, Y. Evolution of the moral lexicon. Open Mind (2024). pdf, code
Ramezani, A., Liu, E., Lee, S., and Xu, Y. Quantifying the emergence of moral foundational lexicon in child language development. PNAS Nexus (2024). pdf, code
Ramezani, A., and Xu, Y. Moral association graph: A cognitive model for automated moral inference. Topics in Cognitive Science. Disciplinary Diversity and Integration Award. pdf, code. A shorter version appeared in Proceedings of the 46th Annual Meeting of the Cognitive Science Society.
Ramezani, A., and Xu, Y. Knowledge of cultural moral norms in large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023). pdf, code
Ramezani, A., Stellar, J.E., Feinberg, M., and Xu, Y. Evolution of moral semantics through metaphorization. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society. pdf, code
Ramezani, A., Liu, E., Ferreira Pinto Jr., R., Lee, S., and Xu, Y. The emergence of moral foundations in child language development. In Proceedings of the 44th Annual Meeting of the Cognitive Science Society. pdf
Ramezani, A., Zhu, Z., Rudzicz, F., and Xu, Y. An unsupervised framework for tracing textual sources of moral change. Findings of the Association for Computational Linguistics: EMNLP 2021. pdf, code
Education
- Ph.D. in Computer Science, University of Toronto, September 2020 – Present
- B.S. in Computer Engineering, Sharif University of Technology, September 2016 – July 2020
Work Experience
- Technical Staff Intern, Cohere, 2025
- Research and Development Intern, Microsoft + Nuance Communications, 2023
