Generative Ratio Matching

To play a zero-sum game or not? That is not the question.

Instead of fitting your generator \(q\) to your data distribution \(p\) in a zero-sum game via a discriminator, consider matching \(q\) and \(p\) via the maximum mean discrepancy (MMD) criterion in a lower dimensional space. To ensure that the distributions in the original space are matched, data are projected into the lower dimensional space by \(f_\theta\), while keeping the density ratios between \(p\) and \(q\) the same in both spaces by minimizing \(\int q(x) \left( \frac{p(x)}{q(x)} - \frac{p\left(f_\theta(x)\right)}{q\left(f_\theta(x)\right)} \right)^2 dx\) with respect to \(\theta\). Once the constraint \(\frac{p(x)}{q(x)} = \frac{p\left(f_\theta(x)\right)}{q\left(f_\theta(x)\right)}\) holds, MMD can effectively match the distributions in the lower dimensional space.
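As a minimal sketch of the MMD criterion mentioned above (not the paper's implementation), the squared MMD between two sets of samples can be estimated with an RBF kernel; the `bandwidth` parameter and the biased estimator are illustrative choices, and in practice one would apply this to the \(f_\theta\)-projected samples:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimator of squared MMD between samples x ~ p and y ~ q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)].
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()
```

With this estimator, samples drawn from the same distribution give a value near zero, while samples from clearly separated distributions give a larger value, so minimizing it with respect to the generator's parameters pulls \(q\) toward \(p\).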

No zero-sum games any more :)

This yields an effective training method called generative ratio matching (GRAM), which trains deep networks to generate data of better quality than GANs while being as stable to train as MMD networks.

Check out our paper on GRAM if you are interested in the method.