This work analyzes how information propagates among different information sources in a gradient-descent learning paradigm and proposes an extendable version of the JRL framework (eJRL), which can be rigorously extended to new information sources without re-training the model in practice.