Extractive speech summarization, which aims to select an indicative set of sentences from a spoken document so as to concisely represent its most important aspects, has emerged as an attractive area of research and experimentation. A recent school of thought is to employ the language modeling (LM) framework along with the Kullback-Leibler (KL) divergence measure for important sentence selection, which has shown preliminary promise for extractive speech summarization. Our work in this paper extends this general line of research in two significant respects. First, we explore a novel sentence modeling approach built on the notion of relevance, where the relationship between a candidate summary sentence and the spoken document to be summarized is discovered through various granularities of context for relevance modeling. Second, not only lexical but also topical cues inherent in the spoken document are exploited for sentence modeling. Experiments on broadcast news summarization demonstrate the performance merits of our methods in comparison to several existing methods.
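To make the KL-divergence selection criterion concrete, the following is a minimal sketch (not the authors' implementation) of the baseline LM framework the abstract refers to: each sentence and the whole document are modeled as smoothed unigram language models, and sentences whose model is closest to the document model, i.e., with the smallest KL(document || sentence), are selected for the summary. The function names, the add-alpha smoothing, and the whitespace tokenization are illustrative assumptions.

```python
import math
from collections import Counter

def unigram_lm(tokens, vocab, alpha=0.1):
    """Add-alpha smoothed unigram language model over a fixed vocabulary.
    Smoothing keeps every probability nonzero so the KL divergence is finite."""
    counts = Counter(tokens)
    total = len(tokens) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    """KL(p || q) for two distributions sharing the same vocabulary."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

def kl_summarize(sentences, k=1):
    """Rank sentences by KL(document model || sentence model), ascending,
    and return the k sentences whose model best matches the document model."""
    tokenized = [s.lower().split() for s in sentences]  # illustrative tokenizer
    doc_tokens = [w for toks in tokenized for w in toks]
    vocab = set(doc_tokens)
    doc_lm = unigram_lm(doc_tokens, vocab)
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: kl_divergence(doc_lm, unigram_lm(tokenized[i], vocab)),
    )
    return [sentences[i] for i in ranked[:k]]
```

The proposed relevance-modeling and topical extensions would replace the simple sentence-side unigram model here with richer models estimated from various granularities of context, while the KL-based selection rule stays the same.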