Publications

(2023). Prompting for explanations improves Adversarial NLI. Is this true? Yes it is true because it weakens superficial cues. Findings of the Association for Computational Linguistics: EACL 2023.

(2023). Enhancing the Adversarial Robustness of NLI via Causal Prompting. The 29th Annual Meeting of the Association for Natural Language Processing.

(2022). COPA-SSE: Semi-structured Explanations for Commonsense Reasoning. Proceedings of the Thirteenth Language Resources and Evaluation Conference.

(2022). Are Prompt-based Models Clueless?. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

(2022). Do Prompt-based Models Exploit Superficial Cues?. The 28th Annual Meeting of the Association for Natural Language Processing.

(2021). Learning to Learn to be Right for the Right Reasons. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

(2021). None the wiser? Adding “None” Mitigates Superficial Cues in Multiple-Choice Benchmarks. The 27th Annual Meeting of the Association for Natural Language Processing.

(2019). When Choosing Plausible Alternatives, Clever Hans can be Clever. Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing.

(2019). Improving Evidence Detection by Leveraging Warrants. Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER).

(2019). Exploring Supervised Learning of Hierarchical Event Embedding with Poincaré Embeddings. The 25th Annual Meeting of the Association for Natural Language Processing.
