
Earlier today, the FBI shared two blurry photos on X of a person of interest in the shooting of right-wing activist Charlie Kirk. Numerous users replied with AI-upscaled, “enhanced” versions of the pictures almost immediately, turning the pixelated surveillance shots into sharp, high-resolution images. But AI tools aren’t uncovering hidden details in a fuzzy picture; they’re inferring what might be there, and they have a track record of showing things that don’t exist.
Many AI-generated photo variations were posted under the original images, some seemingly created with X’s own Grok bot, others with tools like ChatGPT. They vary in plausibility, though some are obviously off, like an “AI-based textual rendering” showing a clearly different shirt and Gigachad-level chin. The images are ostensibly supposed to help people find the person of interest, though they’re also eye-grabbing ways to get likes and reposts.
But it’s unlikely any of them are more helpful than the FBI’s photos. In past incidents, AI upscaling has done things like “depixelating” a low-resolution image of President Barack Obama into a white man and adding a nonexistent lump to President Donald Trump’s head. It extrapolates from an existing image to fill in gaps, and while that can be useful under certain circumstances, you definitely shouldn’t treat it as hard evidence in a manhunt.
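To make the distinction concrete, here’s a minimal Python sketch using the Pillow imaging library (the file names are hypothetical, chosen just for illustration). Classical upscaling computes every new pixel as a weighted average of pixels that were actually captured; generative “enhancement” instead predicts details that merely look plausible.

```python
from PIL import Image

# Open a low-resolution crop (hypothetical file name, e.g. a 64x64 still).
low_res = Image.open("surveillance_crop.jpg")

# Classical upscaling: bicubic interpolation derives each new pixel from a
# weighted average of nearby original pixels. The result is bigger and
# smoother, but contains zero new information about the subject.
upscaled = low_res.resize(
    (low_res.width * 8, low_res.height * 8),
    resample=Image.Resampling.BICUBIC,
)
upscaled.save("upscaled_bicubic.jpg")

# A generative "enhancer" works differently: a model trained on millions of
# faces fills the same gaps with details that are statistically plausible
# for its training data, not details recovered from the scene. Two models
# (or two runs of one model) can yield two different faces from identical
# input pixels, which is why the output isn't evidence.
```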
Here is the original post from the FBI, for reference:
And below are some examples of attempted “enhancements.”