Two or more tracks can be “group humanized” so that their timing resembles musicians playing together in the same room. For example, a bass line and a drum sequence will adapt to each other’s timing delays in a “humanized” way.
I was excited to give it a try in the Sleep Studies set, but it doesn’t work for rendered audio tracks yet. Because it only shifts the timing of the start points of audio, it works well for short samples, but a song-length audio file won’t benefit from it (as of version 1.3) — there’s only one start point to move. I’m keeping my eye on this tool to see if we might be able to use it someday.
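To get an intuition for the idea, here is a toy sketch of what correlated “group” jitter might look like: each track gets its own small random wobble, but all tracks also lean toward a shared drift, so they feel like players listening to each other. All names and parameters here (`spread_ms`, `coupling`, etc.) are my own illustration, not the tool’s actual algorithm — and it shows why only event start points move, which is exactly the limitation for a single song-length audio file.

```python
import random

def group_humanize(tracks, spread_ms=8.0, coupling=0.6, seed=None):
    """Apply correlated timing jitter to per-track event start times (in ms).

    tracks: dict mapping track name -> list of event start times in ms
            (assumed to be the same length for every track).
    spread_ms: maximum jitter magnitude per event.
    coupling: 0..1, how strongly each track follows the shared drift.
    """
    rng = random.Random(seed)
    n_events = len(next(iter(tracks.values())))
    # Shared "room" drift that every player loosely follows.
    shared = [rng.uniform(-spread_ms, spread_ms) for _ in range(n_events)]
    out = {}
    for name, starts in tracks.items():
        jittered = []
        for i, t in enumerate(starts):
            solo = rng.uniform(-spread_ms, spread_ms)  # individual wobble
            offset = coupling * shared[i] + (1 - coupling) * solo
            jittered.append(max(0.0, t + offset))  # never before time zero
        out[name] = jittered
    return out

# Two tracks on the same quarter-note grid (one event every 500 ms).
grid = [0.0, 500.0, 1000.0, 1500.0]
result = group_humanize({"bass": grid, "drums": grid}, seed=42)
```

A rendered song is just one long event, so only its single start point would shift — which is why this style of humanizing does nothing useful for it.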