Researcher cranks up the silky smoothness with AI-assisted in-between animation.
When making animation, you can broadly split character artwork into two categories: key frames and in-between frames. Key frames show the most critically expressive poses and expressions, while in-between frames consist of the incremental artwork shifts required to create the illusion of the artwork “moving” from one key frame to the next.
Really, key animation is where the glory is, since it produces the most dynamic and memorable mental snapshots for fans’ hearts and minds to latch onto. In-betweening, on the other hand, is a far more tedious process, as it essentially means drawing almost the same thing over and over again, with only slight variations between each frame.
But Japanese researcher and programmer Yuichi Yagi is currently developing a system that uses artificial intelligence to automatically create in-between frames, and his results so far are startlingly impressive, as shown in the video below.
With the help of telecommunications company Dwango’s deep learning neural network and anime production company Mages, Yagi took a number of animated sequences from Mages’ Idol Incidents series, fed them into his program, and had the neural network quadruple the number of frames. In the video, the original versions can be seen on the left, while the AI-enhanced versions appear on the right (and any time only a single animated sequence is shown on-screen, it’s the AI-enhanced one).
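To give a sense of what “quadrupling the number of frames” means, here is a toy sketch of the simplest possible in-betweening approach: linear cross-fading between consecutive frames. This is emphatically not Yagi’s method, which relies on a deep neural network; the sketch only illustrates the basic idea of synthesizing new frames between existing ones, with the function name and frame shapes chosen purely for illustration.

```python
import numpy as np

def quadruple_frames(frames):
    """Insert three blended frames between each consecutive pair,
    roughly quadrupling the frame count of the sequence.

    A naive linear cross-fade: a real interpolation system would
    instead predict motion rather than just blend pixel values.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for t in (0.25, 0.5, 0.75):  # three in-betweens per pair
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# Tiny demo with two 2x2 grayscale "frames"
f0 = np.zeros((2, 2), dtype=np.float32)
f1 = np.full((2, 2), 100.0, dtype=np.float32)
result = quadruple_frames([f0, f1])
print(len(result))       # 5 frames generated from the original 2
print(result[2][0, 0])   # the midpoint frame blends to 50.0
```

Simple blending like this produces ghosting on fast motion, which is exactly why a learned model that understands how linework moves between frames is so much more impressive.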
The difference in smoothness is astounding, as Yagi’s updated versions move with a fluidity seen only in the most carefully crafted cuts from theatrical anime features. Granted, the results aren’t flawless, with occasional glitching and slips off-model. One could also argue that since already-completed animation was used as the input, the program isn’t so much adding in-between frames as in-between-in-between frames, and the AI-created movement might not be as smooth if it were bereft of the human-made intermediary artwork.
Still, for a work in progress, Yagi’s program is producing some amazing anime art, and if the technical hiccups can be eliminated, the impact of applications such as these could be tremendous for the anime industry. In-betweening takes a considerable number of labor hours, and automating the process could free up in-between animators to work on other aspects of production. It could also help less mainstream or more original anime projects get greenlit by eliminating the need to find financing and investors willing to front the money for costly hand-drawn in-between art. On the other hand, many of the startlingly low paychecks in the anime industry are the ones earned by in-between animators, since their task is considered the grunt work of creating cartoons, and the option to hand such responsibilities off to artificial intelligence would likely push their wages down even further.