Researchers at Carnegie Mellon University's Robotics Institute (RI) have developed a system comprising a robotic arm with a paintbrush taped to it, allowing the device to create art on its own from text or visual prompts supplied by users.
It's part of an effort dubbed FRIDA, a double entendre referencing both the famed artist Frida Kahlo and the Framework and Robotics Initiative for Developing Arts. The project is led by Peter Schaldenbrand, a Ph.D. candidate at RI in the CMU School of Computer Science, along with RI faculty members Jean Oh and Jim McCann.
The team said FRIDA can produce art from several kinds of input: a direct text description, or works of art and original photographs submitted to serve as inspiration. FRIDA's creators are also experimenting with audio as an input and said the system has listened to ABBA's "Dancing Queen" and produced a painting inspired by the hit single.
"FRIDA is a project exploring the intersection of human and robotic creativity," McCann said in a prepared statement. "FRIDA is using the kind of AI models that have been developed to do things like caption images and understand scene content and applying it to this artistic generative problem."
The AI models FRIDA uses aren't all that different from the recently popularized generative artificial intelligence tools from OpenAI, the maker of ChatGPT and DALL-E 2. With its paintbrush, FRIDA creates art based on its input, incorporating machine learning techniques to evaluate its progress and correct any errors along the way.
According to the researchers, FRIDA spends an hour or more after receiving a prompt getting comfortable with its paintbrush. It then draws on large vision-language models trained on vast internet data sets to decide what it's going to paint. These models have included data from China, Japan, Korea, Mexico, Nigeria, Norway, Vietnam and other countries in an effort to prevent an American or Western bias in FRIDA's work.