This paper investigates statistical language model adaptation for Mandarin broadcast news transcription. A topical mixture model is proposed to exploit long-span latent topical information for dynamic language model adaptation. The model's underlying characteristics and variants of differing complexity are extensively investigated, and their performance is verified through comparison with conventional MAP-based adaptation approaches, which instead capture short-span n-gram information. Speech recognition experiments were conducted on broadcast news collected in Taiwan. The initial results are very promising, showing reductions in both perplexity and word error rate.