On Eliminating Inductive Biases of Deep Language Models
| Authors | |
|---|---|
| Year of publication | 2021 |
| Type | Appeared in Conference without Proceedings |
| MU Faculty or unit | |
| Citation | |
| Description | This poster outlines the problems that modern neural language models have with out-of-domain performance and suggests that these may be a consequence of narrow model specialization. To eliminate this flaw, it proposes two main directions of future work: 1. the introduction of evaluation metrics that can identify out-of-domain generalization abilities, and 2. an objective-based approach that adjusts the training objective to respect the desired generalization properties of the system. |
| Related projects | |