This study investigates the use of a speech enhancement (SE) network as a front-end that produces low-distortion utterances to benefit downstream automatic speech recognition (ASR) systems. Taking the dual-path Transformer network (DPTNet) as the SE backbone, we leverage short-time discrete cosine transform (STDCT) features as the input representation of the mask-estimation network. Furthermore, we jointly optimize a spectral-distance loss and a perceptual loss when training the components of the proposed SE model, so that input utterances are enhanced without introducing significant distortion. Extensive evaluation experiments are conducted on the VoiceBank-DEMAND and VoiceBank-QUT tasks, which contain stationary and non-stationary noise, respectively. The results show that, relative to several state-of-the-art methods, the proposed SE method achieves competitive perceptual metric scores on SE while yielding significantly lower word error rates (WERs) on ASR. Notably, the proposed SE method performs remarkably well on the VoiceBank-QUT ASR task, confirming its strong generalization to unseen noise scenarios.
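
For illustration, the minimal NumPy/SciPy sketch below shows how STDCT features can be extracted and how a joint spectral-distance/perceptual objective might be composed. The function names (`stdct`, `joint_loss`), the framing parameters (`frame_len`, `hop`), the weighting factor `alpha`, and the log-compressed perceptual proxy are illustrative assumptions, not the exact formulation used in this work.

```python
import numpy as np
from scipy.fft import dct


def stdct(wave, frame_len=512, hop=256):
    """Short-time DCT: slice the waveform into overlapping windowed frames
    and apply an orthonormal DCT-II to each frame. Unlike the STFT, the
    result is real-valued, so no separate phase estimation is needed."""
    n_frames = 1 + (len(wave) - frame_len) // hop  # assumes len(wave) >= frame_len
    window = np.hanning(frame_len)
    frames = np.stack([wave[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return dct(frames, type=2, norm="ortho", axis=-1)  # (n_frames, frame_len)


def joint_loss(enhanced, clean, alpha=0.5):
    """Illustrative joint objective: an STDCT-domain spectral distance plus a
    log-compressed magnitude term standing in for the perceptual loss (the
    abstract does not specify its exact form, so this proxy is an assumption)."""
    s_e, s_c = stdct(enhanced), stdct(clean)
    spectral = np.mean((s_e - s_c) ** 2)  # spectral-distance term (MSE)
    perceptual = np.mean(np.abs(np.log1p(np.abs(s_e)) - np.log1p(np.abs(s_c))))
    return spectral + alpha * perceptual  # alpha balances the two terms
```

In a training setup of this kind, the two terms pull in complementary directions: the spectral distance keeps the enhanced STDCT coefficients close to the clean target, while the perceptual term emphasizes perceptually salient, log-compressed magnitude differences, which is consistent with the stated goal of enhancing utterances without introducing significant distortion.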