It is proved that optimizing the encoder over any class of universal approximators, such as deterministic neural networks, suffices to come arbitrarily close to the optimum; this framework, which holds for any metric space and prior, is advertised as a sweet spot among current generative autoencoding objectives.