What if the most consequential measure of your life's work has nothing to do with your lived experiences, but only with your unwitting generation of a realistic digital clone of yourself, a relic of ancient man for the entertainment of people in the year 4500, long after you've departed this mortal coil? This is the least horrifying question raised by a recently granted Microsoft patent for an individual-specific chatbot.
The patent, first spotted by The Independent, was filed in 2017 but approved just last month; the U.S. Patent and Trademark Office confirmed to Gizmodo by email that it does not yet give Microsoft permission to make, use, or sell the technology, only to prevent others from doing so.
Hypothetical Chatbot You (imagined in detail here) would be trained on "social data," which includes public posts, private messages, voice recordings, and video. It could take 2D or 3D form. It could represent a "past or present entity"; a "friend, a relative, an acquaintance, [ah!] a celebrity, a fictional character, a historical figure" and, ominously, "a random entity." (The last one, we might guess, could be a talking version of the machine-generated photorealistic portrait library ThisPersonDoesNotExist.) The technology could also allow you to record yourself at a "certain stage in life" so that, in the future, you could converse with your younger self.
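The patent itself publishes no code, but the "social data" pipeline it describes is easy to caricature. Here is a minimal, purely illustrative Python sketch of what assembling such a persona corpus might look like; every name in it (SocialDataRecord, PersonaCorpus, and so on) is invented for illustration and appears nowhere in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class SocialDataRecord:
    """Hypothetical record mirroring the patent's 'social data':
    public posts, private messages, voice and video recordings."""
    source: str     # e.g. "public_post", "private_message", "voice", "video"
    timestamp: str  # ISO-8601 date of the original artifact
    text: str       # raw text, or a transcript of audio/video

@dataclass
class PersonaCorpus:
    subject: str  # the specific person being cloned
    records: list = field(default_factory=list)

    def add(self, record: SocialDataRecord) -> None:
        # A real system would deduplicate, transcribe recordings, and
        # strip quoted third-party text; this sketch just appends.
        self.records.append(record)

    def training_text(self) -> str:
        # Flatten everything, in chronological order, into one corpus
        # that a conversational model could be trained on.
        ordered = sorted(self.records, key=lambda r: r.timestamp)
        return "\n".join(r.text for r in ordered)

# Usage: two records stand in for a lifetime of harvested data.
corpus = PersonaCorpus(subject="departed_ancestor")
corpus.add(SocialDataRecord("public_post", "2017-04-01", "omg"))
corpus.add(SocialDataRecord("private_message", "2017-04-02", "OMG HAHAHAHA"))
print(corpus.training_text())
```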
Personally, I take comfort in the fact that my chatbot would be useless thanks to my limited text vocabulary ("omg," "OMG," "OMG HAHAHAHA"), but Microsoft's minds considered that. The chatbot could form opinions you don't have and answer questions you've never been asked. Or, in Microsoft's words, "one or more conversation data stores and/or APIs may be used to reply to user dialogue and/or questions for which the social data does not provide data." Filler commentary could be guessed from crowdsourced data drawn from people with aligned interests and opinions, or from demographic information such as gender, education, marital status, and income level. It might imagine your take on an issue based on "crowd-based perceptions" of events. "Psychographic data" is on the list, as the sketch below illustrates.
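That quoted fallback logic (answer from the subject's own data where possible, otherwise consult external data stores or guess from crowd and demographic signals) is the genuinely unsettling part, and it reduces to a short decision chain. The sketch below is an assumption-laden illustration of that chain, not Microsoft's implementation; the function and parameter names are all hypothetical, and plain dictionaries stand in for the patent's "conversation data stores and/or APIs."

```python
def reply(question: str, social_data: dict, crowd_data: dict,
          demographics: dict) -> str:
    """Illustrative fallback chain for the patent's quoted behavior."""
    # 1. Prefer an answer grounded in the subject's own "social data".
    if question in social_data:
        return social_data[question]

    # 2. Per the patent, external data stores or APIs may be consulted
    #    when the social data provides no answer.
    if question in crowd_data:
        return crowd_data[question]

    # 3. Last resort: fabricate an opinion from demographic and
    #    "crowd-based perception" signals; an answer the real person
    #    never gave and was never asked for.
    education = demographics.get("education", "unknown")
    guess = crowd_data.get("default", "no comment")
    return f"As someone with {education} education, I suppose: {guess}"

# Usage: the subject never discussed pineapple pizza, so the bot guesses.
print(reply(
    "pineapple on pizza?",
    social_data={"favorite phrase?": "OMG HAHAHAHA"},
    crowd_data={"default": "people like me tend to approve"},
    demographics={"education": "college"},
))
```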
In short, we're looking at a Frankenstein's monster of machine learning, one that revives the dead through unchecked harvesting of highly personal data.
"It's creepy," Jennifer Rothman, a law professor at the University of Pennsylvania and author of The Right of Publicity: Privacy Reimagined for a Public World, told Gizmodo by email. If it's any comfort, such a project sounds like legal agony. She predicted that the technology could attract disputes over the right of privacy, the right of publicity, defamation, the false light tort, trademark infringement, copyright infringement, and false endorsement, "to name just a few," she said. (Arnold Schwarzenegger has charted the territory with this head.)
She continued:
It could also violate biometric privacy laws in states, such as Illinois, that have them. Presuming that the collection and use of the data is authorized and that people affirmatively opt in to the creation of a chatbot in their own image, the technology still raises concerns if such chatbots are not clearly demarcated as impersonators. One can also imagine a host of abuses of the technology similar to those we see with deepfakes; likely not what Microsoft would plan, but nevertheless foreseeable. Convincing but unauthorized chatbots could create national security issues if one, for example, purported to speak for the president. And one can imagine that unauthorized celebrity chatbots could proliferate in ways that are sexually or commercially exploitative.
Rothman noted that while we already have lifelike puppets (deepfakes, for instance), this patent is the first she has seen that combines that technology with data harvested through social media. There are a few ways Microsoft could mitigate the concerns, from varying the degree of realism to adding clear disclaimers. Embodying the bot as Clippy the paperclip, she said, could help.
It's unclear what level of consent would be required to compile enough data for even the lumpiest digital waxwork, and Microsoft did not share potential user-agreement guidelines. But likely additional laws governing data collection (the California Consumer Privacy Act, the EU's General Data Protection Regulation) could throw a wrench into chatbot creation. On the other hand, Clearview AI, which notoriously provides facial recognition software to law enforcement and private companies, is currently litigating its right to monetize its repository of billions of avatars scraped from public social media profiles without users' consent.
Lori Andrews, an attorney who has helped inform guidelines for the use of biotechnologies, imagined an army of rogue evil twins. "If I were running for office, the chatbot could say something racist as though it were me and tank my prospects for election," she said. "The chatbot could gain access to various financial accounts or reset my passwords (based on conglomerated information such as a pet's name or a mother's maiden name, which are often accessible from social media). A person could be misled, or even harmed, if their therapist took a two-week vacation but a chatbot mimicking the therapist continued to provide and bill for services without the patient's knowledge of the switch."
Hopefully this future never comes to pass, and Microsoft has acknowledged that the technology is creepy. When asked for comment, a spokesperson directed Gizmodo to a tweet from Tim O'Brien, General Manager of AI Programs at Microsoft: "I'm looking into this: the application date (April 2017) predates the AI ethics reviews we do today (I sit on the panel), and I'm not aware of any build/ship plans (and yes, it's disturbing)."