The character is Otogibara Era (© Ichikara Inc.).

The year 2020 saw the rise of the virtual YouTubers (abbreviated as "VTuber" from now on). These are anime characters that are performed in real time by specific actors with the help of computer graphics technology. They contribute content and stream live performances to various online platforms, with YouTube being where they first gained popularity and thus becoming their namesake. In Japan, VTubers have infiltrated established entertainment channels. Some have hosted radio shows and performed in TV dramas and anime. NHK, the TV broadcaster funded by the Japanese government, has hosted several music programs where VTubers took center stage. Globally, YouTube reported significant growth in VTuber viewership since October 2020. Moreover, the Nikkei recently reported that VTubers have earned the top three places in Superchat revenue worldwide, with the top earners making around 100 million yen ($\approx$ 1M USD) in less than 2 years.

I personally have been a VTuber fan since 2018. Seeking to combine my fandom with my computer science learnings, I started doing research on character animation with the aim of making it easier to become a VTuber. Noticing that the movements of most VTubers are rather simple, in 2019 I created a system that can animate the faces of anime characters in single images. The system takes as input an image of an anime character looking straight at the viewer and a 6-dimensional pose vector, and it outputs another image of the character with the specified pose. Nevertheless, as noted in the original article, it has many limitations. A major one is that the system only knows how to close the eyes and the mouth. Characters used professionally, on the other hand, can not only deform their eyes and mouths into several different shapes but also move their eyebrows and irises.

In this article, I address the above shortcoming by proposing a more capable subnetwork that changes the character's facial expression (i.e., a better version of the face morpher). While the old face morpher takes only 3 parameters as input, the new one takes 39, and it can move all the movable facial features (eyebrows, eyelids, irises, and mouth) that can be observed in industrial characters. Characters can now express emotions such as happiness, anger, sadness, disgust, and variations between them. Newly afforded mouth shapes allow for better imitation of speech and singing. Because the irises can now move, characters can also strike poses such as uwamedukai and gangimari-gao. Uwamedukai (上目遣い) is Japanese for the pose where a shorter person looks at another, taller one with upturned eyes while tilting the face down. Gangimari-gao (ガンギマリ顔) is a facial expression where a character glares at the viewer with the eyes wide open and the irises reduced in size while smiling. The disconcerting, if not borderline insane, look gives the impression that the character is high on drugs (キマっている). The expression was popularized by the virtual YouTuber Tsunomaki Watame.

With the new network, I can drive character illustrations with motions authored for 3D models. In 3D animation, facial expressions are often implemented by interpolating between an expressionless model (aka the "rest" model) and several blendshapes. Here, a blendshape is a separate model that makes a specific facial expression - for example, having its mouth open - while otherwise being the same as the base model. I also created a real-time motion transfer tool that provides more control over the character's face. I modified the tool to record my motion and was later able to make multiple characters talk and sing with more dynamic lip and face movements.

When designing the new face morpher network and constructing datasets to train it, I paid attention to a defining characteristic of drawn characters: layering. When an artist draws a character to be animated, they separate the movable parts into different 2D layers so that the parts can be moved independently. In Section 7.4, I demonstrate the new capabilities through several fanvids, including ones that show characters driven by hand-crafted motions and ones that transfer human motions to the characters.
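The blendshape interpolation described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not code from the system; the function name, toy mesh, and the `mouth_open` weight are all hypothetical:

```python
import numpy as np

def apply_blendshapes(rest, blendshapes, weights):
    """Interpolate a face mesh from a rest pose and weighted blendshapes.

    rest:        (V, 3) array of vertex positions of the expressionless model.
    blendshapes: dict mapping a shape name to a (V, 3) array that is the same
                 mesh with one expression fully applied (e.g., mouth open).
    weights:     dict mapping shape names to values in [0, 1].
    """
    result = rest.copy()
    for name, weight in weights.items():
        # Each blendshape contributes its offset from the rest pose,
        # scaled by its weight; weight 0 leaves the rest pose unchanged,
        # weight 1 reproduces the blendshape exactly.
        result += weight * (blendshapes[name] - rest)
    return result

# Toy two-vertex "mesh" just to show the arithmetic.
rest = np.zeros((2, 3))
blendshapes = {"mouth_open": np.array([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0]])}
posed = apply_blendshapes(rest, blendshapes, {"mouth_open": 0.5})
```

With a weight of 0.5, the first vertex moves halfway toward its `mouth_open` position, which is the sense in which a pose vector of blendshape weights specifies an expression.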