The Good, The Bad, and The Ugly: Reviewing Top Real-World Generative A.I. Applications
Every VC is in a frenzy to invest in generative A.I. to remain relevant. Entrepreneurs have already started ventures with A.I. or are about to do so. Just when we thought Web3 was about crypto, it turned out to be about generated A.I. content. We have started to get comfortable having A.I. butlers around that write, sing, and draw for us; at the same time, A.I. is used to birth, optimize, and augment our favorite films, brands, and platforms. But how many of these applications are consciously executed? Do they serve humanity or only themselves? What is the ulterior motive, and what unconscious belief systems are being coded into our subconscious?
I have prepared this article with reviews of generative A.I. use cases from financial fraud to Metallica, from ghostbusting to coding, and from K-Pop to The Matrix. This piece is here to provide you with a The Good, The Bad, and The Ugly filter for consciously contemplating the A.I. around you.
Audi uses GANs as inspiration to “reinvent the wheel”
Our company Seyhan Lee’s R&D department has been experimenting with Mustang and Porsche car designs, and it simply made my day to learn that a car giant like Audi is using A.I. as inspiration for its wheel designs. Audi has named its process “FelGAN,” a mash-up of the German word for “rim” (Felge) and “GAN,” the latter being an acronym for Generative Adversarial Networks (the genesis model of generative A.I., created in 2014 by Ian Goodfellow).
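For the technically curious, the adversarial idea behind the “GAN” half of FelGAN can be written as a two-player minimax game, following Goodfellow’s 2014 formulation: a generator G tries to fool a discriminator D, while D tries to tell real samples from generated ones.

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Here D(x) is the discriminator’s estimate that x is a real sample (say, a real rim design) and G(z) is a sample generated from random noise z; training alternates between the two players until the generated samples become hard to distinguish from the real ones.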
I'm happy to read that Audi designers see A.I. as a tool to cooperate and co-create with. In this article, they explain that they feed the algorithm their own designs, other designs, and similar and dissimilar options. I love Audi as a brand and everything it stands for, but I wish they would not title their research “How artificial intelligence learned to design wheels,” since it is humans who teach the A.I. and who design wheels inspired by A.I.’s outputs. It is of paramount importance that we keep humans at the center of our research and language so we can keep marveling at ourselves, humans, and not machines. At the end of the day, aren't we the ones who made the machines?
Stitch Fix stylist uses DALL-E 2 to generate styling personalization
Stitch Fix is an online personal styling service that uses computer algorithms and real-life stylists to suggest clothes, shoes, and accessories. Once the algorithm is taught your style, Stitch Fix sends you a box, which they call a “fix,” filled with different clothes they think you may like. You keep the ones you like and send back those that are not a fit.
The company uses artificial intelligence to suggest clothes to customers and is experimenting with DALL-E 2 to make visuals of clothes based on customers' preferences for color, fabric, and style. I honestly do not mind that the algorithm shows me clothes that “do not exist” to decipher my unique style. At the end of the day, I am the decision-making agent, and the algorithm serves me.
ChatGPT to build a guitar pedal with code
On the other hand, this example is unique in that it leverages OpenAI’s ChatGPT to create a function for a homebrewed mini-tool that works and is engaging. “Burned guitarist” on YouTube has created a Tube Screamer pedal VST plugin that he can use in a digital audio workstation. VST is an abbreviation for “virtual studio technology.” To play guitar in VST, you don’t need to know how to play or even own a guitar! With guitar VST plugin software and a computer, guitar VST instruments can produce many tones and sound natural.
This example proves that ChatGPT is not only replacing the Google search engine but also becoming a digital 3D printer for many. I would applaud any person who combines their various talents, jumping between tools and services, to produce a unique product or service, with or without the involvement of A.I.
NeRFs used in the VFX of The Matrix Resurrections
I do not have idols and look up to very select people in life, but the Wachowskis are two of them. The Matrix trilogy was the jumpstart to my awakening, which is why I wanted to be involved in making movies. When researching this article, I was thrilled to discover that The Matrix Resurrections used A.I.-based VFX, including NeRFs and deep fakes. VOLUCAP explains in this article that “In The Matrix Resurrections, we developed our NeRF capture rig. Our temporally consistent NeRF enabled the creation of complete moving scenes, overcoming the limitations of conventional static NeRF scenes. This allowed us to create more realistic and dynamic visuals for the movie.”
For five years, I have been a loud advocate of the film industry’s embrace of generative A.I. tools, not only because of their time- and effort-saving side but also because of their potential to widen our visualization capability. Avid readers of my blog know that our company Seyhan Lee recently came out with Cuebric, a tool that brings text-to-image generation and segmentation to the heart of virtual production. That is why I am giving a standing ovation to VOLUCAP and the other VFX teams involved in making The Matrix Resurrections.
Eternity, a generated K-Pop band
Do you remember Hatsune Miku? The Japanese digital pop star, based on Saki Fujita’s voice, was invented in 2007. Hatsune’s voice has been sampled in over 100,000 songs globally. She has her own toys, anime, and holographic concerts. Eternity, the generated K-Pop band, is a different version of this virtual musician, one that I find lands on the industry's sinister side.
Pulse9, the firm behind Eternity’s deep-fake technology, created 101 dream faces and divided them into four categories based on their appeal: attractive, seductive, innocent, and clever. They then invited fans to score the alternatives before completing the characters. The avatar faces are projected onto anonymous humans (singers, actors, and dancers) for live discussions, videos, and online fan meetups.
This all sounds innovative, OK, I get it, but then my eyes widened reading the answer of Park Jieun, the woman who founded Eternity, when asked about her motivation behind the generated pop band in this BBC article: “The advantage of having virtual artists is that while K-pop stars often struggle with physical limitations, or even mental distress because they are human beings, virtual artists can be free from these.” This is why I wake up at 4 am to write articles and give public lectures about ethical A.I. Advanced technologies can only serve humanity when the elation and elevation of humans play a central role in their development, not their annihilation. The underlying message behind her motivation is that humans are frail creatures because they have emotions. This worldview is the mindset behind the opening premise of The Matrix trilogy, where humans let machines take over, and machines place humans in pods to harvest their God-given life force, essence, and spark to power the machines as batteries.
Review of Metallica’s Master of Puppets written by GPT
Fraser Lewry, a writer for Classic Rock magazine, wrote this article in tandem with ChatGPT. The author expresses his awe at the accuracy of the generated article and marvels at the novelty of his job being done by a machine. This is perfectly fine, and I would have done the same myself. I am using this example of a professional magazine article written by GPT to raise a broader issue.
Writing articles, reviews, and school papers with GPT is no longer news in January 2023. In the cutting-edge obsolescence reality powered by the exponential developments in generative A.I., it is no surprise that in one month, one million people started incorporating generative A.I. into their writing workflows. I am 100% supportive of generative A.I. taking all that unnecessary sweat off our shoulders. However, I would like to highlight a severe danger peeking out from behind the bushes: the standardization of mediocrity.
Just because a tool generates does not mean it produces quality. Using generative A.I. tools as assistants in our creative process is terrific, but relying on them to create the results is apathy. This dependency carries the danger of ridding ourselves of any “effort,” a required ingredient for creative transcendence and quality production.
Swedbank used GANs to detect financial fraud
Generative A.I. used in finance lands outside of my expertise. Still, this use case made it to this article since it is essential to give a broad view of different professional challenges being overcome by generative A.I.
In this article, NVIDIA explains that: “GANs are a natural choice for financial fraud prediction as they can learn the patterns of lawful transactions from historical data. For every new financial transaction, the model computes an anomaly score; financial transactions with high scores are labeled as suspicious transactions.”
I know from the picture-based creative use of A.I. that GANs are complete beasts. They require much more effort and custom datasets to produce decent results than other models. Apparently, it is the same for GANs used in finance. I was pleased to learn that an architecture of A.I. software + NVIDIA hardware + human guidance helped Swedbank and other large institutions save $150 million annually on fraud detection.
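To make the quoted workflow concrete, here is a minimal sketch of the scoring-and-flagging step NVIDIA describes: a model trained on lawful transactions assigns each new transaction an anomaly score, and high scores are flagged as suspicious. The `discriminator` function below is a hypothetical stand-in stub, not Swedbank’s or NVIDIA’s actual model, and the thresholds and fields are invented for illustration.

```python
# Hedged sketch of GAN-style fraud flagging. In a real system, the
# "discriminator" would be a neural network trained on historical lawful
# transactions; here it is a hand-written stub that treats small daytime
# amounts as familiar and everything else as unfamiliar.

def discriminator(transaction):
    # Stub: returns how "familiar" (lawful-looking) a transaction seems.
    # High value = resembles the lawful training data.
    amount, hour = transaction["amount"], transaction["hour"]
    return 1.0 if amount < 1_000 and 8 <= hour <= 22 else 0.1

def anomaly_score(transaction):
    # Low familiarity -> high anomaly score, as in the quoted description.
    return 1.0 - discriminator(transaction)

def flag_suspicious(transactions, threshold=0.5):
    # Transactions with high anomaly scores are labeled suspicious.
    return [t for t in transactions if anomaly_score(t) > threshold]

transactions = [
    {"id": 1, "amount": 40, "hour": 14},    # ordinary daytime purchase
    {"id": 2, "amount": 9_500, "hour": 3},  # large transfer at 3 a.m.
]
print([t["id"] for t in flag_suspicious(transactions)])  # [2]
```

The human-guidance part of the architecture enters exactly at the threshold: analysts review the flagged transactions rather than letting the model act on its own.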
GoFundMe’s Help Changes Everything campaign
Another standing ovation for the use of generative A.I. in film production goes to director Paul Trillo and the agency AKQA for this beautiful canvas they made for GoFundMe’s latest campaign. As an A.I. director myself, in the films and projects we produce at Seyhan Lee, I am always looking for the conceptual standpoint that justifies the use of A.I., and Paul has done a marvelous job of pairing the artistic, dreamy look and feel of generative A.I. with GoFundMe’s capacity for “making our dreams come true.” Those who toy with text-to-image or picture-based generative A.I. know how much effort it takes to “tame the machine” the way Paul and his team did in this spot. 2022 was the year of awe and discovery for generative A.I., and 2023 will mark the year of its standardization, so expect to see more of these beautiful creative experiments made in collaboration with A.I.
TikTok’s anime filter and the ghost-hunting trend
Yes, you read my words right :) TikTok recently released a new AR filter, “Anime Style,” that uses style-transfer generative A.I. to turn a still from the user’s videos into a scene from anime comics. This all sounds wonderful until the results started making random people appear in scenes, which users interpreted as their homes being populated with ghosts.
I am no authority on whether their homes have ghosts or not, but I can technically explain what is going on in the algorithm that produces random people. I am pretty sure that the original dataset behind TikTok’s anime filter contains many people, since anime scenes without people in them are hard to find. As a result, the filter tries to “re-create” the scenes it was trained on out of the images of TikTok users. Even if there is no one in a room, an open field, or any other environment, an algorithm trained on enough similar settings will automatically try to place a character in the result, even when no one appears in the input image.
I hate to be the person who demystifies the mystic. I belong more to the other side, naively believing anything and making up magical stories about unexplained phenomena. But generative A.I. is my expertise, and I understand how artificial neural networks behave when it comes to picture-based productions. Here is another example that proves my point. See below one of my experiments with A.I.: I made this pattern. The input image was a hexagonal buckminsterfullerene molecule that I downloaded from Getty Images. The algorithm was based on Deep Dream, where I played with some basic parameters, and the tool gave me a pattern made of turtles. Turtles’ shells are hexagonal, just like my input image. This does not mean ghost turtles are hiding in the Getty Images library. It simply means that the A.I. was trying to re-create my input image based on its dataset, which was trained on various pictures, including turtles and their hexagonal shells.
It is the first time in the history of humanity that a tool is “generating” while humans are “supervising.” These reactions of mystic-lore nature are normal, since we are not used to this creative process. On the other hand, those who use A.I., build with A.I., and found enterprises with A.I. need to be extra careful with their motivations and the language they use to describe the production process. While a technology-centric process and language carry the danger of annihilating humans, a human-centric process and language will help us offload unnecessary burdens from our shoulders.
After reading this article, I hope you are equipped with a new pair of The Good, The Bad, and The Ugly glasses when reviewing A.I. companies, articles about A.I., and your own reactions.