Monday, December 7, 2020

12/7 Comments

 

It’s cool reading about the great strides Google Translate has made, something I’ve definitely experienced for myself – it honestly doesn’t feel like it’s been all that long since it returned nothing but unintelligible gibberish, but nowadays, as demonstrated in the article, it usually seems to do the job well enough. I do wonder, however, whether this might be a drawback in some ways; if the output is just nonsense, it’s easy to tell, but if it seems to mostly make sense, it becomes much harder to notice when machine translation makes mistakes, as it still sometimes does.


It’s also interesting to see that the method that has seen the most success is machine learning, as opposed to (presumably) approaching the problem with a more manual parsing of the syntax and lexicon of the languages in question. It reminds me of how learning a native language, as one does in childhood through immersion, seems to come to us more naturally than learning a second language, where we’d typically study the grammar and vocabulary in a much more structured manner. In the same way, though, it might be harder to fix mistakes that appear consistently in a machine-learning-based system, since we wouldn’t understand how it comes up with its answers in nearly the same level of detail.
