Discriminative language modeling (DLM) aims to improve speech recognition performance by reranking the recognition hypotheses output by a baseline system. Most existing DLM methods assume that reranking can be treated as a linear discrimination problem and that all test utterances share a single parameter vector for reranking hypotheses. However, the latter assumption sometimes yields a trained DLM with weak generalizability and unsatisfactory performance. To address this problem, we propose a relevance-based DLM (RDLM) framework that efficiently infers the DLM parameters for each test utterance on the fly, leading to better recognition performance. We extensively investigate the structure and characteristics of the RDLM framework, and we thoroughly analyze and verify its performance through comparison with existing DLM methods.
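The baseline setting described above, reranking an N-best list with one shared weight vector under a linear scoring model, can be sketched as follows. This is an illustrative example, not the paper's implementation; the feature names and values are hypothetical.

```python
# Illustrative sketch of linear N-best reranking with a single shared
# weight vector (the conventional DLM setting). Feature names and
# values below are hypothetical.

def rerank(nbest, weights):
    """Return hypotheses sorted by a linear discriminative score."""
    def score(hyp):
        # Linear discrimination: dot product of the hypothesis's sparse
        # feature vector with the shared weight vector.
        return sum(weights.get(f, 0.0) * v for f, v in hyp["features"].items())
    return sorted(nbest, key=score, reverse=True)

# Hypothetical 2-best list with a combined acoustic/LM score feature
# and sparse n-gram indicator features.
nbest = [
    {"text": "recognize speech",
     "features": {"am_lm_score": -12.1, "bigram:recognize speech": 1}},
    {"text": "wreck a nice beach",
     "features": {"am_lm_score": -11.8, "bigram:wreck a": 1}},
]
weights = {"am_lm_score": 1.0, "bigram:recognize speech": 0.5}
best = rerank(nbest, weights)[0]["text"]  # "recognize speech"
```

In the conventional setup the same `weights` vector is applied to every test utterance; the RDLM framework instead infers utterance-specific parameters at test time.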