The ever-increasing volume of published academic results poses a challenge for interpreting and validating these publications and turning them into scientific facts. Despite the apparent misalignment between published claims and established facts, accounting for network structure enables predictive models that can assess the validity of published claims. Using models pre-trained on simulated alternative distributions of attention and local clustering in academic publishing (which correspond to modifications of funding policies), we show that the overall knowledge of facts may be dramatically improved. We conclude with a discussion of applications of our methodology to other domains.