This is a link to a paper on machine translation of natural languages by Ulrich Germann, Michael Jahr, Kevin Knight, Daniel Marcu, and Kenji Yamada.
The paper is highly mathematical and algorithmic, but I'm saving it for future reference: even at a surface-scan level it offers some interesting insight into the process, and the future, of machine translation.
Download paper [PDF]
Of course, investigating the complexity of machine translation models also raises the question of how the human mind might calculate similar probabilities: is MT a completely distinct 'animal', or can the processes employed in MT be back-translated, or superimposed onto, the natural language learning processes at work in newer evolutionary forms of human SLL?
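For a rough sense of what "calculating similar probabilities" looks like on the machine side (this framing is my own gloss, based on the standard statistical MT setup of the period rather than anything quoted from the paper): the decoder searches for the target sentence e that maximizes a noisy-channel score for a given source sentence f,

\hat{e} = \arg\max_{e} \; P(e) \cdot P(f \mid e)

where P(e) is a language model over target sentences and P(f | e) is a translation model; the hard, mathematical part is computing that argmax over an astronomically large space of candidate sentences, which is exactly the kind of search problem a human learner never seems to face consciously.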
Wednesday, December 27, 2006