Generative Process of RETM. Figure 1 shows the graphical representation of RETM, and Algorithm 1 describes its generative process. Note that each record has its own topics, which we call local topics; in addition, a record and its "neighbor" records drawn from the same query list jointly generate some topics, which we call background topics.
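As a rough illustration of the local/background split described above, the following sketch draws each word either from a record-specific local topic or from a background topic shared with the record's query-list neighbors. All names, dimensions, and the mixing weight `lam` are our own assumptions for illustration, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

V = 1000        # vocabulary size (assumed)
K_LOCAL = 10    # number of local topics for a record (assumed)
K_BG = 5        # number of background topics shared by a query list (assumed)

# Word distributions for local and background topics (symmetric Dirichlet prior)
phi_local = rng.dirichlet(np.full(V, 0.1), size=K_LOCAL)
phi_bg = rng.dirichlet(np.full(V, 0.1), size=K_BG)

def generate_record(n_words, lam=0.7):
    """Generate one record: each word comes from a local topic with
    probability lam, otherwise from a background topic shared with the
    record's neighbors in the same query list (lam is an assumed weight)."""
    theta_local = rng.dirichlet(np.full(K_LOCAL, 0.1))  # record's local topic mixture
    theta_bg = rng.dirichlet(np.full(K_BG, 0.1))        # query list's background mixture
    words = []
    for _ in range(n_words):
        if rng.random() < lam:
            z = rng.choice(K_LOCAL, p=theta_local)
            words.append(rng.choice(V, p=phi_local[z]))
        else:
            z = rng.choice(K_BG, p=theta_bg)
            words.append(rng.choice(V, p=phi_bg[z]))
    return words

record = generate_record(50)
```

In the full model the background mixture would be shared across all records of one query list rather than resampled per record, which is what ties "neighbor" records together.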
Second, to verify the impact of the background-topic distribution, we divide RETM into several variant models according to different test objectives.
Although the three authors study very different approaches under the same research topic, which yields markedly different word distributions, this difference can be measured by analyzing P(w | e, z) under RETM. We can therefore conclude that a topic model incorporating entity distributions is more fine-grained and discriminative when subdividing topic categories.
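To make the P(w | e, z) analysis concrete, the sketch below estimates a smoothed entity-topic-word distribution from assignment counts and compares two entities' word distributions for the same topic. The toy counts, shapes, smoothing prior `beta`, and the symmetric-KL comparison are our own illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Assumed toy counts n[e, z, w]: how often word w is assigned to topic z
# for entity (author) e during inference; beta is a symmetric Dirichlet
# smoothing prior. Sizes are illustrative.
E, K, V = 3, 4, 6
rng = np.random.default_rng(1)
counts = rng.integers(0, 20, size=(E, K, V))
beta = 0.01

# Smoothed estimate of P(w | e, z): each (e, z) row normalizes to 1
p_w_given_ez = (counts + beta) / (counts.sum(axis=2, keepdims=True) + V * beta)

def sym_kl(p, q):
    """Symmetric KL divergence between two word distributions."""
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# How differently do authors 0 and 1 use words under the same topic 0?
d = sym_kl(p_w_given_ez[0, 0], p_w_given_ez[1, 0])
```

A large divergence between two authors' P(w | e, z) rows for the same z would reflect exactly the "different approaches under the same topic" effect discussed above.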
We also analyze the perplexity of RETM compared with the baseline models: LDA, Link-LDA, AM, and ATM.
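For reference, held-out perplexity is conventionally the exponential of the negative per-word log-likelihood, with lower values indicating a better model. A minimal sketch (the function name and toy numbers are ours):

```python
import numpy as np

def perplexity(log_likelihoods, word_counts):
    """Perplexity over held-out documents:
    exp(-sum_d log p(doc_d) / total number of words). Lower is better."""
    return np.exp(-np.sum(log_likelihoods) / np.sum(word_counts))

# Sanity check: a uniform model over V words has perplexity exactly V
V, n = 100, 50
ll = np.array([n * np.log(1.0 / V)])
pp = perplexity(ll, np.array([n]))  # equals V for the uniform model
```

This is the quantity compared across RETM and the baselines; the document-level log-likelihoods themselves come from each trained model.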
Here, we compare RETM with LDA, ATM, and RETM-self.
Second, both RETM-self and RETM outperform LDA, which further indicates that, with the help of entity distributions, our model categorizes the given topics more accurately and conveniently.
ATM performs slightly better than LDA thanks to the linking information gathered from entities, but it still cannot match RETM, which achieves almost twice the accuracy of the other models.