In this paper, we propose to apply median filtering (MF) in the modulation domain of the complex-valued acoustic spectrogram in order to alleviate the effect of noise in speech signals and thereby improve noise robustness. Median filtering is well known for its capability of removing speckle noise from data while preserving the embedded sharp contrasts. When median filtering is applied to the temporal modulation spectrum, obtained by taking the Fourier transform of either the real or the imaginary acoustic spectrogram along the time axis, we find that the mismatch caused by noise can be significantly reduced, and the resulting speech features become more noise-robust and provide better speech recognition accuracy than the original unprocessed features. In particular, the proposed method possesses three explicit merits. First, the median filtering operation substantially alleviates the outliers in the modulation spectrum that are likely caused by noise. Second, because the real and imaginary acoustic spectrograms are processed individually, the proposed method avoids the troublesome speech-noise cross-term problem that usually arises in conventional acoustic spectral enhancement methods when noise reduction is performed. Third, the median filtering process is completely unsupervised and requires no prior information about the clean speech or the noise. All evaluation experiments are conducted on two databases: the connected-digit Aurora-2 database and the medium-vocabulary Aurora-4 database. The recognition results demonstrate that the proposed MF-based method can achieve performance competitive with or better than many state-of-the-art noise robustness methods, including histogram equalization (HEQ), mean and variance normalization (MVN), MVN plus ARMA filtering (MVA), temporal structure normalization (TSN) and the advanced front-end (AFE).
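The processing pipeline described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact recipe: the median-filter kernel size, the choice to filter the modulation-magnitude along the modulation-frequency axis while keeping the phase, and all function names are assumptions of this sketch.

```python
import numpy as np

def median_filter_1d(x, k=5):
    """Odd-length running median with edge padding (simple reference version)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    # Sliding windows of length k over the padded sequence
    windows = np.lib.stride_tricks.sliding_window_view(xp, k)
    return np.median(windows, axis=-1)

def mf_modulation_spectrum(spec_part, k=5):
    """Median-filter one spectrogram (real or imaginary part) in the
    temporal modulation domain. spec_part has shape (freq_bins, frames)."""
    # Temporal modulation spectrum: DFT along the time (frame) axis
    M = np.fft.fft(spec_part, axis=1)
    mag, phase = np.abs(M), np.angle(M)
    # Median-filter the modulation magnitude per acoustic-frequency bin to
    # suppress noise-induced outliers; kernel size k is an assumed value
    mag_mf = np.apply_along_axis(median_filter_1d, 1, mag, k)
    # Return to the acoustic-spectrogram (time) domain
    return np.fft.ifft(mag_mf * np.exp(1j * phase), axis=1).real

def mf_complex_spectrogram(S, k=5):
    """Process real and imaginary acoustic spectrograms individually,
    so no speech-noise cross-term ever has to be estimated."""
    return (mf_modulation_spectrum(S.real, k)
            + 1j * mf_modulation_spectrum(S.imag, k))
```

Note that the procedure needs no estimate of the clean speech or the noise: the median filter is a fixed, unsupervised operation applied independently to each part of the complex spectrogram.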