Photographer Annie Leibovitz says she isn’t worried about the potential threat artificial intelligence poses to photography.
Many artists and even some tech industry insiders have been alarmed by the recent rapid proliferation of AI tools that can generate convincing images from text prompts, potentially infringing on artists’ copyrighted work and eliminating the need for actual human photographers.
Major AI software companies, including Midjourney and Stability AI, have already been sued by a group of visual artists who claim that the companies illegally used their art to train their AI systems. The artists say that users can generate art with the software that is “indistinguishable” from their original works.
Leibovitz, however, told Agence France-Presse that she’s unbothered by the potential risks posed by the technology.
“That doesn’t worry me at all,” the out photographer said in an interview timed to her induction into the French Academy of Fine Arts this week.
In fact, Leibovitz seems eager to embrace AI as a tool in photography. “With each technological progress, there are hesitations and concerns,” she said. “You just have to take the plunge and learn how to use it.”
“Photography itself is not really real,” she added. “I like to use Photoshop. I use all the tools available.”
At the same time, critics say AI can be misused to create convincingly realistic images and videos of celebrities and politicians saying and doing things they never, in fact, said or did. Experts, lawmakers, and public figures have warned of the danger AI-generated “deepfakes” pose in spreading misinformation, as well as the technology’s ability to create fake explicit images and videos of celebrities and even children.
In 2019, congressional lawmakers introduced the “DEEP FAKES Accountability Act,” which would require creators to digitally watermark deepfake images. Another bill, the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, introduced in January, would allow victims to sue people who create deepfake images of them without consent.
In congressional testimony last year, Sam Altman, the gay CEO of OpenAI, said he was concerned about the technology’s ability “to manipulate, to persuade, to provide sort of one-on-one interactive disinformation.” Altman said he supported the creation of a government agency that could set safety standards and conduct audits to prevent AI from breaking copyright laws, instructing people on how to break laws, illegally collecting user data, and pushing false advertising.
He wouldn’t, however, commit to re-tooling OpenAI to avoid using artists’ copyrighted works, their voices, or their likenesses without first receiving artists’ consent.