FAQ about the future of generative art
My arguments at the CVEU Conference, organized and attended by engineers from Netflix, Adobe, Amazon, Meta, Runway, and several prestigious universities, to discuss the future of generative art.
I have written this article for those at the bleeding edge of generative art. The CVEU conference on October 16th brought together people from academia, artists, and A.I. filmmakers for conscious contemplation about the future of generative art. Among many exciting discoveries, Amazon Prime showed us how they use ML to track athletes in real time and to place items into movies coherently. Meanwhile, the Weizmann Institute of Science showed us a more exact way of transferring a style while coherently segmenting the foreground and background. Please find YouTube links to all the keynotes and panels from the CVEU Conference at this link.
During the conference, I gave a keynote and attended two panels about bringing academia and the arts together and about generative art used in filmmaking. Please find below a summary of the points I took to the discussion.
What is the distinction between generative art for non-professionals and generative art for filmmakers?
We are currently in the “visual magic” stage of creative A.I. for artists and amateurs. When Willem Dafoe became Julia Roberts in Pretty Woman, it broke the internet. When Paul Trillo made us stare at a Mediterranean Revival Art Deco building and watch it grow, we felt elated. When Runway demonstrated that we could magically remove an electric pole from a video with the swing of a hand, we felt empowered. The novelty of being the first to do something will continue in art and amateur circles for a while. However, the film industry does not give a damn about being first in anything. It resists change. All it cares about is the impeccable implementation of a director’s vision. Rightfully so.
There are numerous ways to tell a story. Still, for it to be cinematic, modern technology should be controllable and should provide the filmmaker with an angle, a point of view, that they could not gain with previous technologies. A great example would be the Campanile movie by Paul Debevec in 1997, which brought “more reality” into virtual reality and ended up being the leading technology behind the Oscar-winning “Bullet Time” in The Matrix.
Filmmaking tools should encourage directors to imagine new states of perception while providing them with complete control. Meanwhile, for artists, twenty percent of that control would be enough since human creativity can fill in the gaps, and for amateurs, just the sheer joy of saying, “I made this,” would be a success.
What needs to be invented next in generative art to serve filmmaking?
I cannot go into detail about what needs to be invented next. However, since the very essence of our company, Seyhan Lee, is to identify and generate solutions for it, I can outline the basics. The point of A.I. is to take the burden off people’s backs. Let us begin with the difficulties, or less fun components, of the filmmaking process: innumerable edits; directors changing their minds all the time; attitudinal aversion to new technology such as volume stages; and the high cost of VFX and its repetitive chores, to name a few. Our creative A.I. tools need to be the remedy for these hardships.
Another point of view for finding the next big thing in filmmaking would be to follow creativity, since the new needs of creators emerge while they are using the tools. But who are these creators? The needs of advertising agency creatives differ from those of an advertising agency’s production company. A producer’s requirements differ from those of a director, storyboard artist, or VFX artist. If you go granular, a VFX artist who works for a gaming company has different demands than one working on a virtual production stage. A compositing and shading VFX artist’s needs differ from a lighting and texture VFX artist’s. And the insane part is that they do not know what they need from creative A.I. until you show them. Who knew five years ago that we could not live without text-to-image models?
The mind of an inventor differs from the mind of a producer or director. While Paul Debevec invented a novel way of doing photorealistic virtual cinematography, John Gaeta brought it into the cinema with bullet time. James Clerk Maxwell predicted the existence of radio waves, but Guglielmo Marconi sent and received his first radio signal in Italy in 1895. Alexander Mordvintsev discovered that you could make artistic shapes with feature visualization, but I made art and fashion patterns with his discovery. As Misha Tenebaum so eloquently stated throughout the CVEU conference, we must free the inventor from the invention; only then can they serve as a vessel for the unexplored.
How could we do better to keep artists in the loop in developing new models?
We should not merely be keeping artists in the loop. We should be placing the artist at the center of research, invention, and testing. Paul Trillo’s DALL·E 2 posts have gone viral because he unlocked features of the tool that even its inventors did not know about, whereas when I gave the tool to my auntie, she did not know what she could make with it because she had no practice with that kind of imagination. What I am getting at is that artists and creatives have the power to connect the sample with the audience. The technology is the sample, but what creators make from it is the emotional connection and inspiration for the audience.
Another vital point to consider is that big technology companies such as Google and Facebook have been contributing to advancing the generative art field. No GIF creator received compensation when Giphy was sold for $400 million in 2020, yet the website would not have been successful without the creativity of its users. Just like we are learning to be better humans, businesses are also learning to serve humanity rather than only the financial gains of their shareholders. Each time we generate stories or images, we contribute to the brand value of the tech giants that invent these models.
Creators freely adding value to conglomerates while receiving no compensation in return is an imbalance in the collective that needs to be restored.
I have a proposal for the future of generative tool development: research labs such as Facebook, Netflix, OpenAI, and NVIDIA form a group of around twenty outstanding generative talents. The creative talents would come from various industries, including art, animation, filmmaking, and storyboarding. The research labs develop new models while the artist consortium tests them regularly. Once the tools are ready, the labs give these creators the first right to use them while continuing to support them financially. These top creators become inspirations for the rest of the public before the tools are released.
I understand that, with the increased competition in the generative art field about who will release the latest model, it is hard to make time for artists to play with new generative tools pre-launch. However, like in any other industry, if this competitive mindset is the reason for holding artists at bay, it is time for businesses to reevaluate their goals. In the age of A.I., unity is needed more than ever.
What happens if the generated art pieces accidentally copy the works of other people (in the training data) while the users don’t even know about it?
Of course, we need art to generate art. So, of course, all these gigantic models are based on the look and feel of other artists and creators. I do not see a problem with copyright as long as the result does not remind you of another artist. I see several parallels between the collective unconscious and the massive models’ databases: we develop ideas by tapping into our subconscious, where everything we have seen, heard, read, and experienced lies. It becomes a problem, however, when the outcome resembles the work of a particular artist. The silver lining of this process is that it forces A.I. artists and corporations to become better educated in art history. Can I please LOL? Thank you!
It is the first time in human history that we have encountered a tool that generates rather than merely supports humans in production. Hence, such problems are standard at the beginning. I am more interested in discovering what Wes Anderson will do when everyone starts making movies like him. This copyright issue is similar to the sampling issue in the music industry and the base image copyright issue in the graphic arts. Maybe there is something to learn from Andy Warhol’s Prince portrait dispute, or from the case of Roy Lichtenstein, who painted Dave Gibbons’s art without his consent.
Do you worry about accidentally copying someone else’s art? How to avoid accidentally stealing someone else’s intellectual contribution?
No, I do not, because I know worrying would only deliver the outcome I would hate to have. Instead, I spend my time educating myself and our team in art and film history so that we make sure we do not step on anyone’s toes.
What is the billboard advice for artists / ML practitioners / entrepreneurs who find generative AI models mind-blowing?
I suggest overcoming any insecurities and inferiority complexes when working with A.I. I look at the question of A.I. ethics at a deeper level than the usual questions of sexual and racial bias. The most significant bias in A.I. is humanity’s inferiority complex towards data. Many companies, artists, and individuals brag about how much data and computation power it takes to create a piece of art, but we rarely talk about the innovative teams and unique minds that make our tools. Seyhan Lee has a chief wisdom officer, someone who has mastered conscious awareness and the inner mechanics of the subconscious. Her name is Derya, and she is involved in all our projects. She helps us communicate from a place of mastery over A.I. When I say mastery, please do not understand it as a master-slave dynamic but as the mastering of a tool. I advocate for every technology business working with, or considering working with, A.I. to include a voice of wisdom on its team. In this way, we would ensure that A.I. solutions are positioned to lift the burden from people’s shoulders, bringing them joy and easing their lives.
Identifying unconscious decision-making is much more subtle than identifying conscious decision-making. Considering ourselves unworthy of excellence, or perceiving machines or other people as superior to ourselves, are all indications of low self-esteem. If a group of people creates A.I. models from a place of low self-esteem, they will communicate to the rest of the world that A.I. is superior to humans. This positions A.I. at the center of research, where inventions come to life for the sake of innovation alone. Alternatively, recognizing our power would place the human spirit at the center of all A.I. research. The result would be technology serving humanity rather than humanity serving technology.
Thank you, dear reader, for taking the time to read this article. When you engage with my words socially, this conscious worldview spreads to others, so thank you for subscribing to my channel on Medium and my mailing list for your regular dose of science and spirit. With love, Pinar