Sequence to Sequence Models
Kyle Polich discusses sequence-to-sequence models. The following are the key points from the podcast:
- Many ML approaches assume a fixed-size input and a fixed-size output; natural language does not fit that mold.
- Tasks such as summarizing a paper or translating between languages have no fixed-length relationship between input and output.
- What a word means depends on its context.
- The algorithm learns an internal state representation of the input.
- The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way (see the sketch after this list).
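
The encoder/decoder idea is easiest to see in code. Below is a minimal sketch in PyTorch: the class names, dimensions, and toy tensors are illustrative assumptions, not anything specified in the episode. The point is only that the encoder compresses a variable-length input into an internal state, which the decoder then expands into an output of a different length.

```python
# Minimal encoder/decoder sketch (illustrative only; sizes are arbitrary).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids; the final hidden state is the
        # fixed-size internal representation ("context") of the whole input.
        _, hidden = self.rnn(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) target token ids; conditioned on the encoder's
        # state, so the output length is independent of the input length.
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden

# Toy usage: a 3-token source sequence decoded into a 5-token target sequence.
src_vocab, tgt_vocab = 1000, 1200   # hypothetical vocabulary sizes
encoder = Encoder(src_vocab, emb_dim=32, hidden_dim=64)
decoder = Decoder(tgt_vocab, emb_dim=32, hidden_dim=64)

src = torch.randint(0, src_vocab, (1, 3))   # variable-length input
tgt = torch.randint(0, tgt_vocab, (1, 5))   # different-length output
context = encoder(src)
logits, _ = decoder(tgt, context)
print(logits.shape)  # torch.Size([1, 5, 1200]): one distribution per output token
```

In a real translation system the decoder would be fed its own previous predictions (or the shifted target during training) and trained with cross-entropy loss, but even this sketch shows how the internal state lets input and output lengths differ.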