For this week’s exercise, I listened to various applications of voice, including popular singers, animated movie characters, automated voice assistants, and movie clips. It was interesting to approach these voices, such as those of Grammy Award-winner Adele and the beloved Disney robot Wall-E, with a more critical and technical ear. Each offered a different way of mediating voice, letting me explore its role in creating one’s identity and in either facilitating or destabilizing a particular culture.
To expand on the issue of ‘destabilizing,’ it is important to first recognize how voice is not only useful in delineating a singular identity but is also significant in evoking a larger collective or culture. For example, when I listened to Q, the first genderless voice, I was made aware of how advances in technology attempt to respond to contemporary culture and society. As cultural awareness of LGBTQ+ issues rises, we see how companies reframe their products to be more inclusive and reach a wider audience. Compared to Amazon’s Alexa and Apple’s Siri, which both default to adult female voices, Q’s voice carried few gender or age markers. In a way, Q’s voice blatantly complicates our expectations and assumptions of voice as a conveyor of gender. Essentially, Q acts as a singular material force while bringing into question a larger culture of queer identities that have been silenced and pressured to fit certain categories of vocal and physical expression.
Additionally, context plays a huge role in understanding this voice, as I don’t believe this push for genderless technological assistants would find momentum if it weren’t for the rising support for the queer community in modern society. Listening to this genderless voice helped me realize that society as a whole places a lot of weight on one’s gender and gender roles. Why is it that current vocal assistants are mainly designed to be recognized as female? It is clear that our conventional expectations of women as supportive, compassionate, and obliging individuals play a part in companies’ decisions to produce a recognizably feminine voice.
I also found it interesting how the voice, despite being automated, carried an uncanny human character. In contrast to the video clip of Wall-E, whose voice is highly automated and robotic, the vocal assistant was designed to feel more human, fitting its practical use around one’s home, business, or personal spaces. Wall-E is noticeably charming in his inflection and timbre, speaking with childlike tones and higher pitches; however, there is no denying the synthesized and digital qualities of his voice. Attending to these automated voices, I realized I had a tendency to familiarize myself with them as I would with a real human. I thought of an experience with my brother, in which he yelled at our home Alexa device and suddenly apologized to it as if it were a friend with real human emotions. This speaks to a larger pattern of human psychology and socialization: I think as humans we are conditioned to want to understand and identify with the voices we hear, whether that means listening to music with strong emotional undertones, like that of Sia or Adele, or finding the human in the nonhuman. Disney, singers, and technology companies all design their media to capitalize on this capacity for human empathy and identification, allowing us to relate to singers’ struggles, digital platforms, and even a cartoon robotic garbage-cleaner with ease and satisfaction.