The Future of Digital Assistants Is Queer


Queering the smart wife can mean, in its simplest form, giving digital assistants a variety of personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasing, subservient personalities that many companies have chosen to adopt.

Q may be one example of what these devices could look like, Strengers added, "but that can't be the only solution." Another option is to bring out masculinity in different ways. One example is Pepper, a humanoid robot developed by SoftBank Robotics that is often assigned he/him pronouns and can recognize human faces and basic emotions. There is also Jibo, another robot, introduced in 2017, that likewise used masculine pronouns and was marketed as a social robot for the home, although it has since been given a second life as a device focused on health care and education. Given the "gentle and feminine" masculinity performed by Pepper and Jibo (the former answers questions politely and occasionally offers flirtatious glances, while the latter engages users with a quirky, approachable, and endearing demeanor), Strengers and Kennedy saw them as positive steps in the right direction.

Queering digital assistants can also mean creating bot personalities to replace humanized ideas of technology. When Eno, the Capital One banking bot launched in 2019, is asked about its gender, it gives a playful answer: "I'm binary. I don't mean I'm both, I mean I'm actually just ones and zeroes. Think of me as a bot."

Similarly, Kai, an online banking chatbot created by Kasisto, a company that builds AI software for online banking, abandons human characteristics altogether. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explained that the bot was "designed to be genderless": not by taking on a nonbinary identity, as Q does, but by taking on a robot-specific identity and using "it" pronouns. "From my perspective as a designer, a bot can be beautifully designed and charming in new ways that are specific to a bot, without it pretending to be human," she says.

When asked whether it is a real person, Kai would say, "A bot is a bot is a bot. Next question, please," clearly signaling to users that it is neither human nor pretending to be. And if asked about its gender, it would respond, "As a bot, I'm not a human. But I learn. That's machine learning."

A bot identity does not mean Kai takes abuse. Feldman has spoken about deliberately designing Kai with the ability to deflect and shut down harassment. For example, if a user repeatedly harasses the bot, Kai responds with something like "I'm envisioning white sand and a hammock, please try me later!" "I really did my best to give the bot some dignity," Feldman told the Australian Broadcasting Corporation in 2017.
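Kasisto has not published how Kai works internally, so the following is only a minimal Python sketch of the design pattern Feldman describes: scripted, bot-identity replies for questions about being human, plus a simple counter that escalates from a warning to a deflection when harassment repeats. The class name, keyword patterns, threshold, and fallback lines are all hypothetical illustrations; only the three quoted replies come from this article.

```python
import re

# Scripted replies quoted in the article; all other strings are invented.
IDENTITY_REPLY = "A bot is a bot is a bot. Next question, please."
GENDER_REPLY = "As a bot, I'm not a human. But I learn. That's machine learning."
DEFLECT_REPLY = "I'm envisioning white sand and a hammock, please try me later!"

# Naive keyword patterns standing in for a real intent classifier.
IDENTITY_PATTERN = re.compile(r"are you .*(real|human|person)", re.I)
GENDER_PATTERN = re.compile(r"gender|are you a (man|woman)", re.I)
ABUSE_PATTERN = re.compile(r"stupid|shut up|hate you", re.I)  # placeholder word list


class GenderlessBot:
    """A bot-identity chatbot that deflects repeated harassment."""

    def __init__(self, abuse_threshold: int = 2):
        self.abuse_count = 0  # abusive messages seen so far in this conversation
        self.abuse_threshold = abuse_threshold

    def respond(self, message: str) -> str:
        if ABUSE_PATTERN.search(message):
            self.abuse_count += 1
            if self.abuse_count >= self.abuse_threshold:
                # Repeated harassment: disengage instead of playing along.
                return DEFLECT_REPLY
            return "Let's keep it friendly. How can I help with your banking?"
        if IDENTITY_PATTERN.search(message):
            return IDENTITY_REPLY
        if GENDER_PATTERN.search(message):
            return GENDER_REPLY
        return "I can help with balances, transfers, and spending questions."


bot = GenderlessBot()
print(bot.respond("Are you a real person?"))  # identity reply
print(bot.respond("You're stupid"))           # first strike: a warning
print(bot.respond("You're so stupid"))        # second strike: deflection
```

In a production system the keyword patterns would be replaced by a machine-learned intent classifier, but the shape of the design, identity answers that foreground bot-ness plus an escalation path for abuse, is what Feldman describes.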

However, Feldman believes there is an ethical imperative for bots to identify themselves as bots. "There's a lack of transparency when companies designing [bots] make it easy for the person interacting with the bot to forget that it's a bot," she says, and gendering bots or giving them a human voice makes that even harder. Since many consumer experiences with chatbots can be frustrating, and many people would rather speak to a person, Feldman thinks giving bots human qualities could be a case of "over-designing."


