https://youtu.be/ttP7jaGa2C4

Note: Dark mode causes some of the images to display with strange artifacts, so try switching to light mode temporarily if you have issues! (Toggle with ctrl/cmd + shift + L)

Abstract

This report serves as an in-depth survey of the paper "RigNet: Neural Rigging for Articulated Characters"1 (Xu et al., 2020). The paper assumes that the reader has knowledge of multilayer perceptrons, graph convolutional networks, and the EdgeConv operation2 (Wang et al., 2019). This report provides an overview of these prerequisite topics along with a deeper look at the methodology of RigNet. The goal of this report is thus to simplify the concepts presented in the paper and to give aspiring researchers a gentle but thorough introduction to neural methods in computer graphics.

1. Introduction and Background

RigNet is an end-to-end method for producing animation rigs from input character models. There is a growing demand for animation-ready character models, and skeletal animation is the most intuitive and ubiquitous form of character animation. However, creating a skeleton for a given character model is cumbersome: it involves meticulously painting skinning weights for each vertex of the input model and can take anywhere from a few days for simple models to several months for complex ones. There is therefore a need for an automated method that produces animation rigs directly from character models. Prior work on automated rig generation can require pre-defined skeleton templates, pre-processing, or lossy conversions between shape representations; RigNet supersedes these methods with a fully end-to-end approach to rig generation.

2. Methodology

2.1 RigNet Overview

The architecture of RigNet breaks the problem of end-to-end rigging into three subproblems.

  1. RigNet first predicts the joint locations of the input character model.
    1. A graph neural network (GMEdgeNet) predicts a displacement for each mesh vertex toward its nearest candidate joint.
    2. A separate GMEdgeNet module predicts a mesh attention function over the input mesh, assigning higher weight to vertices near joint locations.
    3. Finally, a clustering step aggregates the displaced vertices, weighted by the mesh attention function, into joint locations.
  2. The next stage predicts the skeleton from the joint locations produced in the previous stage.
    1. A neural module called BoneNet predicts, for each pair of joints, the probability that they are connected by a bone.
    2. A separate neural module called RootNet predicts the probability that a given joint is the root joint of the skeleton.
    3. A minimum spanning tree computed over these probabilities yields the final skeleton.
  3. The final stage produces the skinning weights for the skeleton.
    1. A final GMEdgeNet takes the skeleton predicted in the previous stage and predicts per-vertex skinning weights.

https://lh7-us.googleusercontent.com/H7fRAfclGsTzSENcwyakDCPQdQRakreAN1Ye9huY7JVAj4wW6VZXatZv79RaZEw0-u54hT0sclCSEqCH73WpAHcM0o3yMfyvjUO8T0xYU4u0YhB94z1xJAOVFPArDFOyUFxbSSHspu9wUpT4aA0fFdw

Fig. 0. The complete RigNet pipeline1
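To make the data flow of the three stages concrete, here is a minimal structural sketch in Python. The function names, shapes, and the simple stand-ins for each neural module are hypothetical illustrations, not the authors' actual API; each real network is replaced by a trivial computation so the sketch is runnable.

```python
import numpy as np

def predict_joints(vertices):
    """Stage 1 (sketch): displace vertices toward candidate joints,
    weight them by a learned attention function, and cluster the
    result into joint locations. Both networks are faked here."""
    displaced = vertices + 0.1           # stand-in for GMEdgeNet displacements
    attention = np.ones(len(vertices))   # stand-in for the attention GMEdgeNet
    # Stand-in for clustering: a weighted mean collapses all vertices
    # into a single joint.
    joint = np.average(displaced, axis=0, weights=attention)
    return joint[None, :]                # shape (num_joints, 3)

def predict_skeleton(joints):
    """Stage 2 (sketch): score candidate bones (BoneNet) and root
    joints (RootNet), then extract a tree. With a single joint the
    skeleton is trivial."""
    root = 0                             # stand-in for RootNet's most likely root
    bones = []                           # stand-in for the MST over BoneNet scores
    return root, bones

def predict_skinning(vertices, joints):
    """Stage 3 (sketch): per-vertex skinning weights over the joints,
    normalized so each vertex's weights sum to one."""
    weights = np.ones((len(vertices), len(joints)))
    return weights / weights.sum(axis=1, keepdims=True)

# Toy mesh: the four vertices of a tetrahedron.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
joints = predict_joints(verts)
root, bones = predict_skeleton(joints)
skin = predict_skinning(verts, joints)
```

The point of the sketch is the staging: joints feed the skeleton predictor, and the skeleton feeds the skinning predictor, mirroring the pipeline in Fig. 0.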

The input character model is represented as a graph, and RigNet operates on this mesh graph using graph neural networks; the other fundamental building block of the architecture is the multilayer perceptron. Before diving into each subproblem, the next subsection introduces these prerequisite topics to build a foundational understanding of them.

2.2 Prerequisites

2.2.1 Multilayer Perceptron

At its core, a neural network is a function represented by a graph of interconnected neurons in a layered structure that takes an input and produces an output. A multilayer perceptron, or MLP, is a basic and fundamental neural architecture that showcases the flow of information in a neural network. An MLP is a fully connected network, which means that every neuron in one layer is connected to every neuron in the subsequent layer. That’s a lot of jargon, so let’s use a simple visual example to understand the simplicity and beauty of an MLP.

https://lh7-us.googleusercontent.com/Umh4X7y0sEZd71aYrebTzbEGBhpsMjayPy_kLR9-Q5UuKXCjpyAJzt64mtrdC76n3ScjNWbRhLrjOl8pm9QGnt7CmY6oVNcK5abhgGCNt3fbRX792Y6cRDjD2oB5rP7SpmitDk6glAFtBeMTzWIoGE0
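The fully connected structure described above can also be expressed in a few lines of code. The following is a minimal NumPy sketch of an MLP forward pass; the layer sizes, ReLU activation, and weight initialization are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise rectified linear unit, a common hidden-layer activation.
    return np.maximum(0.0, x)

class MLP:
    def __init__(self, sizes):
        # One weight matrix and bias vector per pair of adjacent layers.
        # "Fully connected" means each matrix links every neuron in one
        # layer to every neuron in the next.
        self.weights = [rng.standard_normal((m, n)) * 0.1
                        for m, n in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        for i, (W, b) in enumerate(zip(self.weights, self.biases)):
            x = x @ W + b
            if i < len(self.weights) - 1:   # no activation on the output layer
                x = relu(x)
        return x

mlp = MLP([3, 8, 2])   # 3 inputs -> 8 hidden neurons -> 2 outputs
y = mlp.forward(np.array([1.0, -0.5, 0.25]))
```

Each layer is just a matrix multiplication plus a bias followed by a nonlinearity, which is the "flow of information" the figure above depicts.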