Some avatars, most commonly cartoony or low-poly avatars, have a sprite sheet or separate textures for "2D Visemes" for speech, rather than the more common blendshape visemes. Unlike blendshape visemes, these can't be automatically applied on import, so some setup is required to get them working.
Assumptions
This tutorial will assume the following:
- The avatar or object already has textures for visemes (this is not a tutorial on creating these textures, only on applying them).
- The avatar has already been imported and created.
Tutorial
Viseme Setup
To start, make sure that your avatar contains a VisemeAnalyzer. If viseme blendshapes were detected when the avatar was imported and created, there will already be a VisemeAnalyzer under the Head Proxy slot, and you can proceed to #Driving The Texture.
Otherwise, navigate to the Head Proxy slot and attach a VisemeAnalyzer component and an AvatarVoiceSourceAssigner component. Set the TargetReference of the AvatarVoiceSourceAssigner to the Source field of the VisemeAnalyzer.
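Conceptually, the VisemeAnalyzer listens to the assigned voice source and exposes one weight between 0 and 1 per viseme, and those weights are what the rest of this tutorial drives into textures. The Python sketch below is only an illustrative model of that output; the viseme names and the "analysis" itself are placeholders, not Resonite's actual implementation.

```python
# Illustrative model only: the VisemeAnalyzer listens to the assigned voice
# source and exposes one weight between 0 and 1 per viseme shape.  The names
# below are placeholders, not Resonite's exact viseme list.
VISEME_NAMES = ["Silence", "PP", "FF", "AA", "OH"]   # 16 entries in practice

def analyze_voice(audio_frame: bytes) -> dict[str, float]:
    """Stand-in for the VisemeAnalyzer: one weight per viseme."""
    weights = {name: 0.0 for name in VISEME_NAMES}
    # The real component derives these from speech analysis of the voice
    # source; here we just pretend the speaker is holding an "AA" sound.
    weights["AA"] = 1.0
    return weights

print(analyze_voice(b""))
```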
Driving The Texture
For organizational purposes, it is recommended to keep everything from this point on in a new slot under the mesh that will house the viseme textures. Open a new inspector and navigate to the mesh slot containing the material for visemes. Create a child slot and give it a descriptive name. Under this slot, add a DirectVisemeDriver component and a ValueGradientDriver<int> component. In the ValueGradientDriver, set the Progress to 1.00 and turn Interpolate off. Add as many points as there are viseme types in the DirectVisemeDriver (as of the time of writing, this is 16 points). For each point, drag its Position field into the first available viseme driver on the DirectVisemeDriver.
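To see why this wiring works, here is a rough Python model of the driver chain. The exact selection rule ValueGradientDriver uses when Interpolate is off is an assumption here (the Value of the point whose Position is closest to Progress, preferring the highest-index point on ties, which is the behaviour the rest of this tutorial relies on); in practice the components do all of this for you.

```python
# Rough model of the driver chain.  Assumption: with Interpolate off, the
# ValueGradientDriver outputs the Value of the point whose Position is closest
# to Progress, preferring the highest-index point on ties.
def value_gradient_driver(points: list[tuple[float, int]], progress: float = 1.0) -> int:
    """points is a list of (Position, Value) pairs, one per viseme."""
    best_index = 0
    for i, (position, _value) in enumerate(points):
        # "<=" keeps the later point when two positions are equally close.
        if abs(progress - position) <= abs(progress - points[best_index][0]):
            best_index = i
    return points[best_index][1]

# The DirectVisemeDriver writes each viseme's current weight (0..1) into the
# matching point's Position; the Values are whatever frame/texture indices you
# type in later for each viseme.
viseme_weights = [0.0, 0.1, 0.9, 0.0]        # e.g. the third viseme is dominant
frame_values   = [0,   1,   2,   3]          # values typed into each point
points = list(zip(viseme_weights, frame_values))
print(value_gradient_driver(points))         # -> 2
```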
Before moving on, it is best to look at how the avatar viseme textures are set up. There are two likely scenarios for how viseme textures are provided: as an atlased texture, where a single texture houses every individual viseme state, or as separated textures, with one texture per viseme state. It doesn't necessarily matter which type is being used, but the process for implementing each is subtly different.
Atlased Texture
Attach an AtlasInfo component and a UVAtlasAnimator component to the slot. In the AtlasInfo component, set GridSize to the width and height of the atlas grid (in cells, not pixels) and GridFrames to the number of individual viseme textures on the atlas. For example, if there are 11 viseme state textures arranged in a 4x3 grid, set GridSize to (4, 3) and GridFrames to 11.
Find (or create and apply) the material that will hold your atlased viseme texture. Drag the TextureScale label into the UVAtlasAnimator's ScaleField field, and the TextureOffset label into the UVAtlasAnimator's OffsetField field. Drag the AtlasInfo component set up earlier into the AtlasInfo field of the UVAtlasAnimator. Drag the Frame label on the UVAtlasAnimator into the Target field on the ValueGradientDriver.
Finally, for each Value in the ValueGradientDriver, type the relevant frame on the atlas that the viseme should correspond to. To continue the example above, the top-left cell of the grid is frame 0, and the last frame is frame 10.
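The UVAtlasAnimator performs the UV math for you, but it may help to see what GridSize, GridFrames, and Frame imply. Below is a small Python sketch of that math, assuming frames are numbered left-to-right, top-to-bottom starting from the top-left cell (as in the example above); the exact UV origin convention may differ in practice.

```python
# Sketch of the UV math a grid atlas implies.  Assumption: frames run
# left-to-right, top-to-bottom, with frame 0 in the top-left cell.
def atlas_uv(grid_size: tuple[int, int], frame: int):
    cols, rows = grid_size
    col, row = frame % cols, frame // cols
    scale  = (1.0 / cols, 1.0 / rows)       # each cell covers this much UV space
    offset = (col / cols, row / rows)       # where the cell starts in UV space
    return scale, offset

# 11 viseme frames in a 4x3 grid: frame 0 is top-left, frame 10 is the last.
print(atlas_uv((4, 3), 0))    # ((0.25, 0.333...), (0.0, 0.0))
print(atlas_uv((4, 3), 10))   # ((0.25, 0.333...), (0.5, 0.666...))
```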
Separated Textures
Attach an AssetMultiplexer<ITexture2D> component to the slot. Add and set as many textures as exist for the avatar's visemes.
Find (or create and apply) the material that will display the current viseme texture. Drag the AlbedoTexture label into the Target field of the AssetMultiplexer. Drag the Index label of the AssetMultiplexer into the Target field of the ValueGradientDriver.
Finally, for each Value in the ValueGradientDriver, type the relevant index in the AssetMultiplexer of the texture you want to use for its associated viseme.
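Conceptually, the separated-texture path is just list indexing: the ValueGradientDriver outputs an index and the AssetMultiplexer drives its Target (the material's albedo texture) with whatever asset sits at that index. A minimal Python sketch of that idea, with placeholder texture names and an assumed clamping behaviour for out-of-range indices:

```python
# Conceptual model of the AssetMultiplexer<ITexture2D>: it drives its Target
# with the asset at the current Index.  Texture names are placeholders.
viseme_textures = ["mouth_closed.png", "mouth_aa.png", "mouth_oh.png"]

def asset_multiplexer(assets: list[str], index: int) -> str:
    # Out-of-range indices are clamped here; how the real component handles
    # them is not something this tutorial depends on.
    return assets[max(0, min(index, len(assets) - 1))]

# The index comes from the ValueGradientDriver (see the earlier sketch):
current_albedo = asset_multiplexer(viseme_textures, 1)
print(current_albedo)   # -> "mouth_aa.png"
```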
Conclusion
By now, 2D visemes should be completely set up for your avatar. For any unused visemes that don't map to a texture, you can safely delete their associated points in the ValueGradientDriver (it is highly recommended to do this from the highest index to the lowest to avoid confusion).
For more in-depth details about how certain components or steps work, it is highly recommended to view the individual wiki pages for the related topics.
Integration With Other Mouth Textures
Some avatars may have mouth expressions outside of visemes that should take precedence over the normal viseme textures. Integrating these textures is simple because the ValueGradientDriver prefers the point of highest index when evaluating its output. This means you can add points to the end of the ValueGradientDriver and build a system that sets the Position of any relevant point to 1, making it take precedence over the earlier points.
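Using the same value_gradient_driver() model from the sketch in #Driving The Texture (and the same assumption about tie-breaking), an override point appended to the end wins whenever its Position is pushed to 1, because ties between equally close points go to the highest index:

```python
# Reuses value_gradient_driver() from the earlier sketch.  A point appended at
# the end wins whenever its Position is set to 1, since ties go to the
# highest-index point.  The value 99 is just a placeholder frame/index.
viseme_points  = [(0.0, 0), (1.0, 1)]       # normal visemes; viseme 1 is active
override_point = (1.0, 99)                  # override mouth expression enabled
print(value_gradient_driver(viseme_points + [override_point]))   # -> 99

override_point = (0.0, 99)                  # override disabled
print(value_gradient_driver(viseme_points + [override_point]))   # -> 1
```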