The task of extractive speech summarization is to select a set of salient sentences from an original spoken document and concatenate them to form a summary, making it easier for users to browse and understand the content of the document. In this paper we present an empirical study of leveraging various supervised discriminative methods for effectively ranking the important sentences of a spoken document to be summarized. In addition, we propose a novel margin-based discriminative training (MBDT) algorithm that penalizes non-summary sentences in inverse proportion to their summarization evaluation scores, leading to better discrimination from the desired summary sentences. By doing so, the summarization model can be trained with an objective function that is closely coupled with the ultimate evaluation metric of extractive speech summarization. Furthermore, sentences of spoken documents are represented by a wide range of prosodic, lexical, and relevance features, whose utilities are extensively compared and analyzed. Experiments conducted on a Mandarin broadcast news summarization task demonstrate the performance merits of our summarization method when compared to several well-studied state-of-the-art supervised and unsupervised methods.
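To make the margin idea concrete, the following is a minimal sketch of a pairwise hinge loss in which the required margin between a summary sentence and a non-summary sentence grows as the non-summary sentence's evaluation (e.g. ROUGE) score shrinks. The function name, the inverse-proportion form of the margin, and the `base_margin` parameter are illustrative assumptions, not the paper's exact formulation.

```python
def mbdt_pairwise_loss(score_pos, score_neg, rouge_neg,
                       base_margin=1.0, eps=1e-6):
    """Hinge loss for one (summary, non-summary) sentence pair.

    score_pos  : model score of a true summary sentence
    score_neg  : model score of a non-summary sentence
    rouge_neg  : evaluation score of the non-summary sentence;
                 lower values demand a larger separating margin
                 (hypothetical inverse-proportion margin)
    """
    margin = base_margin / (rouge_neg + eps)  # eps avoids division by zero
    return max(0.0, margin - (score_pos - score_neg))
```

Under this sketch, a clearly irrelevant sentence (low `rouge_neg`) incurs a much larger loss than a near-summary sentence at the same score gap, which couples the training objective to the summarization evaluation metric as described above.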