The Digital Translation of Soundscapes: AI Assistant Gender Performativity and the Reconstruction of Sound Meaning

Authors

  • Xinran Feng, School of Communication, East China Normal University, Shanghai 200241, China

DOI:

https://doi.org/10.71222/54vncm04

Keywords:

AI voice assistant, gender performativity, technofeminism, digital soundscape, voice ethics

Abstract

Artificial intelligence voice assistants increasingly shape how gender is experienced and understood through digital sound. The design of feminized AI voices reflects and reinforces cultural expectations that link femininity with service, politeness, and emotional labor. Through an interdisciplinary framework grounded in gender performativity theory, technofeminist analysis, and sound studies, this paper examines how vocal features such as pitch, tone, rhythm, and speech patterns encode social roles within AI systems. These vocal characteristics are not neutral but function as carriers of symbolic meaning, aligning technological outputs with long-standing gender hierarchies. The widespread adoption of female-voiced assistants in domestic and service-oriented applications illustrates how gendered labor is reimagined in digital form. At the same time, limited representation of non-binary and culturally diverse voices highlights a structural gap in current AI voice design. Ethical concerns emerge around the standardization of voice as both a technical product and a social interface. A more inclusive approach to voice technology requires recognition of sound as a material practice that shapes identity, agency, and interaction. Rather than treating voice as a passive output, AI systems should be developed with attention to cultural specificity, user diversity, and the symbolic implications of auditory design. Understanding voice as a site of power and representation offers a critical pathway toward more equitable and reflective technological development.

References

1. A.-J. Berg and M. Lie, “Feminism and constructivism: Do artifacts have gender?,” Sci. Technol. Hum. Values, vol. 20, no. 3, pp. 332–351, 1995, doi: 10.1177/016224399502000304.

2. H. Bergen, “‘I’d blush if I could’: Digital assistants, disembodied cyborgs and the problem of gender,” Word Text J. Lit. Stud. Linguist., vol. 6, no. 1, pp. 95–113, 2016.

3. X. Luo, “Immersive digital modeling and interactive manufacturing systems in the textile industry,” J. Comput. Signal Syst. Res., vol. 2, no. 5, pp. 31–40, 2025, doi: 10.71222/jyctft16.

4. X. Luo, “Reshaping coordination efficiency in the textile supply chain through intelligent scheduling technologies,” Econ. Manag. Innov., vol. 2, no. 4, pp. 1–9, 2025, doi: 10.71222/ww35bp29.

5. S. Mohsenin and K. P. Munz, “Gender-ambiguous voices and social disfluency,” Psychol. Sci., vol. 35, no. 5, pp. 543–557, 2024, doi: 10.1177/09567976241238222.

6. D. Pal et al., “Intelligent attributes of voice assistants and user’s love for AI: A SEM-based study,” IEEE Access, vol. 11, pp. 60889–60903, 2023, doi: 10.1109/ACCESS.2023.3286570.

7. K. Seaborn et al., “Voice in human–agent interaction: A survey,” ACM Comput. Surv., vol. 54, no. 4, pp. 1–43, 2021, doi: 10.1145/3386867.

8. K. Seaborn and P. Pennefather, “Neither ‘hear’ nor ‘their’: Interrogating gender neutrality in robots,” in Proc. 17th ACM/IEEE Int. Conf. Hum.-Robot Interact. (HRI), 2022, doi: 10.1109/HRI53351.2022.9889350.

9. W. Seymour et al., “A systematic review of ethical concerns with voice assistants,” in Proc. AAAI/ACM Conf. AI, Ethics, Soc., 2023, doi: 10.1145/3600211.3604679.

10. L. Vágnerová, Sirens/cyborgs: Sound technologies and the musical body, Columbia University, 2016.

11. A. Schlichter, “Do voices matter? Vocality, materiality, gender performativity,” Body Soc., vol. 17, no. 1, pp. 31–52, 2011.

12. M. G. Sindoni, “The feminization of AI-powered voice assistants: Personification, anthropomorphism and discourse ideologies,” Discourse Context Media, vol. 62, p. 100833, 2024, doi: 10.1016/j.dcm.2024.100833.

13. G. Abercrombie et al., “Alexa, Google, Siri: What are your pronouns? Gender and anthropomorphism in the design and perception of conversational assistants,” arXiv preprint arXiv:2106.02578, 2021.

14. A. Danielescu et al., “Creating inclusive voices for the 21st century: A non-binary text-to-speech for conversational assistants,” in Proc. CHI Conf. Hum. Factors Comput. Syst., 2023, doi: 10.1145/3544548.3581281.

15. S.-Y. Ahn et al., “How do AI and human users interact? Positioning of AI and human users in customer service,” Text Talk, vol. 45, no. 3, pp. 301–318, 2025, doi: 10.1515/text-2023-0116.

16. A. Borkowski, “Vocal aesthetics, AI imaginaries: Reconfiguring smart interfaces,” Afterimage, vol. 50, no. 2, pp. 129–149, 2023, doi: 10.1525/aft.2023.50.2.129.

17. M. C. Lingold, D. Mueller, and W. Trettien, Digital sound studies, Duke University Press, 2018.

18. F. Nasirian, M. Ahmadian, and O.-K. Daniel Lee, “AI-based voice assistant systems: Evaluating from the interaction and trust perspectives,” 2017.

19. S. Subhash et al., “Artificial intelligence-based voice assistant,” in 2020 Fourth World Conf. Smart Trends Syst., Secur. Sustain. (WorldS4), 2020, doi: 10.1109/WorldS450073.2020.9210344.

20. S. Natale, To believe in Siri: A critical analysis of AI voice assistants, University of Bremen, 2020.

21. S. Malodia et al., “Why do people use artificial intelligence (AI)-enabled voice assistants?,” IEEE Trans. Eng. Manage., vol. 71, pp. 491–505, 2021, doi: 10.1109/TEM.2021.3117884.

22. A. Soofastaei, Ed., Virtual Assistant, BoD–Books on Demand, 2021.

23. M. Mekni, Z. Baani, and D. Sulieman, “A smart virtual assistant for students,” in Proc. 3rd Int. Conf. Appl. Intell. Syst., 2020, doi: 10.1145/3378184.3378199.

24. D. R. Ford, “Postdigital soundscapes: Sonics, pedagogies, technologies,” Postdigit. Sci. Educ., vol. 5, no. 2, pp. 265–276, 2023, doi: 10.1007/s42438-022-00354-9.

25. A. Mahmood and C.-M. Huang, “Gender biases in error mitigation by voice assistants,” Proc. ACM Hum.-Comput. Interact., vol. 8, no. CSCW1, pp. 1–27, 2024, doi: 10.1145/3637337.

26. C. Schumacher, “Raising awareness about gender biases and stereotypes in voice assistants,” 2022.

27. L. M. Assink, Making the Invisible Visible: Exploring Gender Bias in AI Voice Assistants, MS thesis, Univ. of Twente, 2021.

28. J. Ahn, J. Kim, and Y. Sung, “The effect of gender stereotypes on artificial intelligence recommendations,” J. Bus. Res., vol. 141, pp. 50–59, 2022, doi: 10.1016/j.jbusres.2021.12.007.

29. P. Tubaro and A. A. Casilli, “Human listeners and virtual assistants: Privacy and labor arbitrage in the production of smart technologies,” in Digital Work Planetary Market, 2022.

30. W. Hutiri, O. Papakyriakopoulos, and A. Xiang, “Not my voice! a taxonomy of ethical and safety harms of speech generators,” in Proc. 2024 ACM Conf. Fairness, Accountability, Transparency, 2024, doi: 10.1145/3630106.3658911.

31. J. Gao, M. Galley, and L. Li, “Neural approaches to conversational AI,” in Proc. 41st Int. ACM SIGIR Conf. Res. Develop. Inf. Retr., 2018, doi: 10.1145/3209978.3210183.

32. L. Liao, G. H. Yang, and C. Shah, “Proactive conversational agents,” in Proc. 16th ACM Int. Conf. Web Search Data Mining (WSDM), 2023, doi: 10.1145/3539597.3572724.

Published

26 July 2025

How to Cite

Feng, X. (2025). The Digital Translation of Soundscapes: AI Assistant Gender Performativity and the Reconstruction of Sound Meaning. Pinnacle Academic Press Proceedings Series, 4, 189-202. https://doi.org/10.71222/54vncm04