The public release of AI art tools like Midjourney and DALL-E 2 has ignited contentious debates among artists, designers, and art fans alike. Many object that the technology’s rapid progress was fueled by scraping the internet for publicly posted art and imagery, without credit or compensation for the artists whose work was taken. “I think the current model of AI art generators is unethical, because of how they collected their data—against the knowledge of, basically, everybody involved,” says Jared Krichevsky, a concept artist who designed the memeable AI robot for the M3GAN movie.
Artists continue to express anger that their original work is powering AI generators without informed consent. “Their works are being inputted into a machine against their will,” says Krichevsky. “This machine is specifically designed to replace us.” Companies behind AI generators will soon be in court to defend against claims of copyright infringement.
Legal challenges aside, the widespread use of AI art tools continues to cause confusion. When one digital artist recently posted their work on Reddit, an r/Art moderator accused them of submitting an image generated with AI assistance. Is it still possible to tell, either way, at just a glance?
“For the average person, I feel there isn’t that much time left before they won’t be able to tell the difference,” says Ellie Pritts, an artist who embraces multiple forms of generative AI in their artwork.
People often joke online that you can’t look too closely at the hands in AI art, or you’ll discover bizarre finger configurations. “The eyes can be a little bit funky as well,” says Logan Preshaw, a concept artist who denounces the use of current AI tools. “Maybe they’re just kind of dead and staring out into nowhere, or they have strange structures.”
Preshaw doesn’t expect the small cues an average viewer can use to identify AI art to stick around much longer, either. Multiple artists we interviewed agreed that such telltale signs will become less evident as the technology progresses and developers adjust their tools to address common complaints, like dead eyes and too many fingers.
Dan Eder, a 3D character artist, thinks viewers should consider the overall design of a piece when trying to spot an AI image. “Let’s say it was a ‘fantasy warrior armor’ type of situation. At a glance, the artwork looks beautiful and highly detailed, but a lot of the time there’s no logic behind it,” he says. “When a concept artist creates armor for a character, there are things you have to take into account: functionality, limb placement, how much is that going to stretch.”
But what does it even mean to “spot the difference” when an artist leans into the weirdness of AI? Pritts describes their work, which was exhibited in San Francisco, as “AI collaborative art.” They pair AI-generated visuals with AI-generated audio, morphing old clips from when they were physically able to play the cello. “As the technology expands, I am always looking for new ways to incorporate it into my practice,” Pritts says.
In the near future, Melenciano believes, most viewers will not be able to identify AI art consistently without computer assistance. “As this progressively goes out into the world, I think the most important thing is being able to detect what’s real and what’s not,” she says. “Not so much by the human eye, but by services.”
Synthetic media detection is likely to become a hot topic as AI generators continue to proliferate. Although most of the attention now is on rapid developments in image and text generators, tools producing AI audio and AI video are not far behind. Creative people who work in any medium will soon be forced to reckon with what, exactly, separates the artist from the machine. Krichevsky says, “It’s an existential crisis, for people who are prone to existential crises anyways.”