AI generated this moody, entertaining animated music video

While on a surfing trip, technologist Aza Raskin sent his romantic partner, the singer-songwriter Zia Cora, a surprise gift: a complete music video for her new song “Submarines.”

“He left for a surf vacation for a week, and then 48 hours later he sent me this whole music video,” she says. “I don’t think I could think of anything more surreal and poignant and beautiful.”

Raskin does have an extensive design background—he’s a former design lead for Mozilla Firefox, and he’s been called the creator of infinite scroll interfaces—but he didn’t use any traditional design or video-editing software to create the video. Instead, he used artificial intelligence tools that allowed him to type in phrases describing what he wanted for various parts of the video, using descriptors like “sad and dark night sky with stars and the moon” and “dead face, charcoal sketch.”

He says he spent about 12 hours learning the tools he used, including CLIP from OpenAI, an AI system that creates pairings of text and images, and then set the video to render on a cloud server. It took about 36 hours to fully render at a cost of about $3 in computing resources, in a process Raskin compares to watching film develop.

“I had to go surfing, so I wouldn’t just stare at my computer the whole day,” he says.

The resulting video largely avoids the unsettling, uncanny valley effects common in earlier AI-generated films and looks as though it could have been created by human animators. The couple has received inquiries about showing it at festivals and conferences, and Raskin and Cora say they’ve received plenty of compliments from people who have seen it.

Raskin’s video isn’t a deepfake—it’s not using machine learning to create a false, realistic image—though it still has a perhaps surprising power to stir emotions. He says that the technology that makes the project possible has really only existed in its current form for the past few months and argues that the project raises questions about the nature of creativity and people’s ability to have computers generate emotionally stirring, perhaps even manipulative, content.

“This is the least weird our world will be in our lifetimes,” he says. “The speed at which this technology is progressing is just blowing past everyone’s expectations.”

He imagines that the tools he used—and future AI-powered ones—will have positive uses in creating, say, background music or video images to play behind lectures and TED-style talks. AI software could make it possible for anyone to generate videos, music, graphics, and even websites for a particular purpose without needing expertise in design or the use of finicky illustration tools. “Anything that can have a visual, will have a visual,” he says.

That may soon include additional videos for Cora’s music, the couple suggests. “There’s a lot more music that we want to share,” says Cora.