L. Lu, Zhang, X., Cho, K. H., and Renals, S., “A Study of the Recurrent Neural Network Encoder-Decoder for Large Vocabulary Speech Recognition”, in Proc. INTERSPEECH, 2015.
P. Bell and Renals, S., “A system for automatic alignment of broadcast media captions using weighted finite-state transducers”, in Proc. ASRU, 2015.
P. Bell, Lai, C., Llewellyn, C., Birch, A., and Sinclair, M., “A system for automatic broadcast news summarisation, geolocation and translation”, in Proc. Interspeech (demo session), Dresden, Germany, 2015.
P. Lanchantin, Karanasou, P., Gales, M. J. F., Liu, X., Wang, L., Qian, Y., Woodland, P. C., and Zhang, C., “The Development of the Cambridge University Alignment Systems for the Multi-Genre Broadcast Challenge”, in Proc. ASRU, Scottsdale, USA, 2015.
M. Wester, Corley, M., and Dall, R., “The Temporal Delay Hypothesis: Natural, Vocoded and Synthetic Speech”, in Proc. DiSS, Edinburgh, 2015.
A. Cervone, Lai, C., Pareti, S., and Bell, P., “Towards automatic detection of reported speech in dialogue using prosodic cues”, in Proc. Interspeech, Dresden, Germany, 2015.
C. Valentini-Botinhao, Wu, Z., and King, S., “Towards minimum perceptual error training for DNN-based speech synthesis”, in Proc. Interspeech, Dresden, Germany, 2015.
M. Doulaty, Saz, O., and Hain, T., “Unsupervised Domain Discovery using Latent Dirichlet Allocation for Acoustic Modelling in Speech Recognition”, in Proc. Interspeech, Dresden, Germany, 2015.
P. Karanasou, Wang, Y., Gales, M., and Woodland, P., “Adaptation of Deep Neural Network Acoustic Models Using Factorised I-vectors”, in Proc. Interspeech, 2014.
I. Casanueva, Christensen, H., Hain, T., and Green, P., “Adaptive speech recognition and dialogue management for users with speech disorders”, in Proc. Interspeech, 2014.
H. Christensen, Casanueva, I., Cunningham, S., Green, P., and Hain, T., “Automatic Selection of Speakers for Improved Acoustic Modelling: Recognition of Disordered Speech with Sparse Data”, in Proc. IEEE Workshop on Spoken Language Technology (SLT), Lake Tahoe, USA, 2014.
O. Saz, Doulaty, M., and Hain, T., “Background-Tracking Acoustic Features for Genre Identification of Broadcast Shows”, in Proceedings of the 2014 Spoken Language Technology (SLT) Workshop, South Lake Tahoe NV, USA, 2014, pp. 118–123.
P. Swietojanski, Ghoshal, A., and Renals, S., “Convolutional Neural Networks for Distant Speech Recognition”, IEEE Signal Processing Letters, vol. 21, pp. 1120–1124, 2014.
P. Bell, Driesen, J., and Renals, S., “Cross-lingual adaptation with multi-task adaptive networks”, in Proc. Interspeech, 2014.
L. Lu, Ghoshal, A., and Renals, S., “Cross-lingual subspace Gaussian mixture model for low-resource speech recognition”, IEEE Transactions on Audio, Speech and Language Processing, 2014.
R. Dall, Wester, M., and Corley, M., “The Effect of Filled Pauses and Speaking Rate on Speech Comprehension in Natural, Vocoded and Synthetic Speech”, in Proceedings of Interspeech, 2014.
X. Chen, Wang, Y., Liu, X., Gales, M., and Woodland, P., “Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch”, in Proc. Interspeech, Singapore, 2014.
X. Liu, Wang, Y., Chen, X., Gales, M., and Woodland, P., “Efficient Lattice Rescoring Using Recurrent Neural Network Language Models”, in Proc. IEEE ICASSP, Florence, Italy, 2014.
M. P. Aylett, Dall, R., Ghoshal, A., Henter, G. Eje, and Merritt, T., “A Flexible Front-End for HTS”, in Proc. Interspeech, Singapore, 2014.
R. Dall, Tomalin, M., Wester, M., Byrne, W., and King, S., “Investigating Automatic & Human Filled Pause Insertion for Speech Synthesis”, in Proceedings of Interspeech, 2014.
T. Merritt, Raitio, T., and King, S., “Investigating source and filter contributions, and their interaction, to statistical parametric speech synthesis”, in Proc. Interspeech, Singapore, 2014, pp. 1509–1513.
P. Swietojanski and Renals, S., “Learning Hidden Unit Contributions for Unsupervised Speaker Adaptation of Neural Network Acoustic Models”, in Proc. IEEE Workshop on Spoken Language Technology, Lake Tahoe, USA, 2014.
G. Eje Henter, Merritt, T., Shannon, M., Mayo, C., and King, S., “Measuring the perceptual effects of modelling assumptions in speech synthesis using stimuli constructed from repeated natural speech”, in Proceedings of Interspeech, Singapore, 2014.
P. Lanchantin, Gales, M. J. F., King, S., and Yamagishi, J., “Multiple-Average-Voice-based Speech Synthesis”, in Proc. ICASSP, 2014.