Multimodal logical inference system for visual-textual entailment

Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki

Research output: Conference contribution

3 Citations (Scopus)

Abstract

A large body of recent research on multimodal inference across text and vision aims to obtain visually grounded word and sentence representations. In this paper, we use logic-based representations as unified meaning representations for texts and images and present an unsupervised multimodal logical inference system that can effectively prove entailment relations between them. We show that by combining semantic parsing and theorem proving, the system can handle semantically complex sentences for visual-textual inference.
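For illustration only, the following is a minimal sketch, not the authors' implementation, of the general idea described in the abstract: the content of an image and the meaning of a sentence are both expressed as logical formulas, and entailment between them is checked with a theorem prover. The sketch uses NLTK's logic parser and resolution prover, and the formulas and lexical axioms are hypothetical examples standing in for the output of the semantic parsing step.

```python
# Minimal sketch: logic-based visual-textual entailment via theorem proving.
# Assumes NLTK is installed; formulas below are illustrative, not produced by
# the paper's actual semantic parser.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read_expr = Expression.fromstring

# First-order formula standing in for the content of an image
# (e.g. "a man is riding a horse").
image_premise = read_expr('exists x.(man(x) & exists y.(horse(y) & ride(x, y)))')

# Lexical axioms linking image labels to words in the hypothesis sentence.
axioms = [
    read_expr('all x.(man(x) -> person(x))'),
    read_expr('all x.(horse(x) -> animal(x))'),
]

# Formula for the hypothesis sentence "A person is riding an animal."
text_hypothesis = read_expr('exists x.(person(x) & exists y.(animal(y) & ride(x, y)))')

# Entailment holds iff the prover derives the hypothesis from the premises.
entailed = ResolutionProver().prove(text_hypothesis, [image_premise] + axioms)
print(entailed)  # True: the image formula entails the sentence
```

In this toy setting the prover returns True, i.e. the image-derived formula entails the sentence once the lexical axioms are supplied; a non-entailed hypothesis would simply fail to be proved.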

Original language: English
Host publication title: ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Student Research Workshop
Publisher: Association for Computational Linguistics (ACL)
Pages: 386-392
Number of pages: 7
ISBN (electronic): 9781950737475
Publication status: Published - 2019
Externally published: Yes
Event: 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019 - Student Research Workshop, SRW 2019 - Florence, Italy
Duration: 28 Jul 2019 - 2 Aug 2019

Publication series

Name: ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Student Research Workshop

Conference

Conference: 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019 - Student Research Workshop, SRW 2019
Country/Territory: Italy
City: Florence
Period: 28 Jul 2019 - 2 Aug 2019

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science (all)
  • Linguistics and Language
