
Publications


Undated
L. Lu, Kong, L., Dyer, C., Smith, N. A., and Renals, S., “Segmental Recurrent Neural Networks for End-to-end Speech Recognition”, in Proc. INTERSPEECH.
L. Lu and Renals, S., “Small-footprint Deep Neural Networks with Highway Connections for Speech Recognition”, in Proc. INTERSPEECH.
L. Lu, Zhang, X., and Renals, S., “On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition”, in Proc. ICASSP.
2011
L. Lu, Ghoshal, A., and Renals, S., “Regularized Subspace Gaussian Mixture Models for Cross-lingual Speech Recognition”, in Proc. IEEE ASRU, 2011.
2012
H. Christensen, Cunningham, S., Fox, C., Green, P., and Hain, T., “A comparative study of adaptive, automatic recognition of disordered speech”, in Proc. Interspeech, Portland, Oregon, US, 2012.
D. Povey, Hannemann, M., Boulianne, G., Burget, L., Ghoshal, A., Janda, M., Karafiat, M., Kombrink, S., Motlicek, P., Qian, Y., Riedhammer, K., Vesely, K., and Vu, N. T., “Generating exact lattices in the WFST framework”, in Proc. IEEE ICASSP, 2012, pp. 4213-4216.
L. Lu, Ghoshal, A., and Renals, S., “Joint Uncertainty Decoding with Unscented Transform for Noise Robust Subspace Gaussian Mixture Models”, in Proc. SAPA-SCALE Conference, Portland, OR, 2012.
L. Lu, Ghoshal, A., and Renals, S., “Maximum a posteriori adaptation of subspace Gaussian mixture models for cross-lingual speech recognition”, in Proc. IEEE ICASSP, 2012, pp. 4877-4880.
L. Lu, Chin, K. K., Ghoshal, A., and Renals, S., “Noise Compensation for Subspace Gaussian Mixture Models”, in Proc. Interspeech, Portland, OR, 2012.
X. Liu, Gales, M., and Woodland, P., “Paraphrastic Language Models”, in Proc. ISCA Interspeech, Portland, Oregon, 2012.
K. Riedhammer, Bocklet, T., Ghoshal, A., and Povey, D., “Revisiting semi-continuous hidden Markov models”, in Proc. IEEE ICASSP, 2012, pp. 4271-4274.
H. Christensen, Siddharth, S., O'Neill, P., Clarke, Z., Judge, S., Cunningham, S., and Hawley, M., “SPECS - an embedded platform, speech-driven environmental control system evaluated in a virtuous circle framework”, in Proc. Workshop on Innovation and Applications in Speech Technology, 2012.
J. Yamagishi, Veaux, C., King, S., and Renals, S., “Speech synthesis technologies for individuals with vocal disabilities: Voice banking and reconstruction”, Acoustical Science and Technology, vol. 33, pp. 1-5, 2012.
C. Fox, Christensen, H., and Hain, T., “Studio report: Linux audio for multi-speaker natural speech technology.”, in Proc. Linux Audio Conference, 2012.
P. Bell, Gales, M., Lanchantin, P., Liu, X., Long, Y., Renals, S., Swietojanski, P., and Woodland, P., “Transcription of multi-genre media archives using out-of-domain data”, in Proc. IEEE Workshop on Spoken Language Technology, Miami, Florida, USA, 2012.
P. Swietojanski, Ghoshal, A., and Renals, S., “Unsupervised Cross-lingual Knowledge Transfer for DNN-based LVCSR”, in Proceedings of the IEEE Workshop on Spoken Language Technology, 2012.
H. Lu and King, S., “Using Bayesian Networks to find relevant context features for HMM-based speech synthesis”, in Proc. Interspeech, Portland, Oregon, US, 2012.
2013
L. Lu, Ghoshal, A., and Renals, S., “Acoustic Data-driven Pronunciation Lexicon for Large Vocabulary Speech Recognition”, in Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013.
O. Saz and Hain, T., “Asynchronous factorisation of speaker and background with feature transforms in speech recognition”, in Proceedings of the 14th Annual Conference of the International Speech Communication Association (Interspeech), Lyon, France, 2013, pp. 1238–1242.
P. Lanchantin, Bell, P. J., Gales, M. J. F., Hain, T., Liu, X., Long, Y., Quinnell, J., Renals, S., Saz, O., Seigel, M. S., Swietojanski, P., and Woodland, P. C., “Automatic Transcription of Multi-Genre Media Archives”, in Proceedings of the First Workshop on Speech, Language and Audio in Multimedia (SLAM), Marseille, France, 2013, pp. 26–31.
M. Shannon, Zen, H., and Byrne, W., “Autoregressive models for statistical parametric speech synthesis”, IEEE Trans. Audio Speech Language Process., vol. 21, pp. 587–597, 2013.
H. Lu, King, S., and Watts, O., “Combining a Vector Space Representation of Linguistic Context with a Deep Neural Network for Text-To-Speech Synthesis”, in 8th ISCA Workshop on Speech Synthesis, Barcelona, Spain, 2013, pp. 281–285.