Refereed Papers (works after Tatsuki joined MBZUAI)
2025
- Transformer Key-Value Memories Are Nearly as Interpretable as Sparse Autoencoders
Mengyu Ye, Jun Suzuki, Tatsuro Inaba, Tatsuki Kuribayashi
Proceedings of the Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025), 2025/12
[to appear]
- Can Language Models Learn Typologically Implausible Languages?
Tianyang Xu, Tatsuki Kuribayashi, Yohei Oseki, Ryan Cotterell, Alex Warstadt
Transactions of the Association for Computational Linguistics (TACL)
[arXiv]
- Which Word Orders Facilitate Length Generalization in LMs? An Investigation with GCG-Based Artificial Languages
*Nadine El-Naggar, *Tatsuki Kuribayashi, Ted Briscoe
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025, main long), 2025/11
[arXiv]
- Large Language Models Are Human-Like Internally
Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, Timothy Baldwin
Transactions of the Association for Computational Linguistics (TACL) (to be presented at EMNLP 2025)
[arXiv | code]
- GCG-Based Artificial Languages for Evaluating Inductive Biases of Neural Language Models
Nadine El-Naggar, Tatsuki Kuribayashi, Ted Briscoe
Proceedings of the 29th Conference on Computational Natural Language Learning (CoNLL 2025), 2025/08
[paper]
- Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases
Rena Wei Gao, Xuetong Wu, Tatsuki Kuribayashi, Mingrui Ye, Siya Qi, Carsten Roever, Yuanxing Liu, Zheng Yuan, Jey Han Lau
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, main long), 2025/08
[paper | arXiv]
- Can Input Attributions Explain Inductive Reasoning in In-Context Learning?
Mengyu Ye, Tatsuki Kuribayashi, Goro Kobayashi, Jun Suzuki
Findings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, Findings long), 2025/08
[paper | arXiv]
- Syntactic Learnability of Echo State Neural Language Models at Scale
Ryo Ueda, Tatsuki Kuribayashi, Shunsuke Kando, Kentaro Inui
The 14th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2025, Non-archival), 2025/05
[arXiv]
- Libra-Leaderboard: Towards Responsible AI through a Balanced Leaderboard of Safety and Capability
Haonan Li, Xudong Han, ..., Tatsuki Kuribayashi, ..., Eduard Hovy, Iryna Gurevych, Preslav Nakov, Monojit Choudhury, Timothy Baldwin
Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL 2025, system demonstrations track), 2025/04
[paper | arXiv]
- Does Vision Accelerate Hierarchical Generalization of Neural Language Learners?
Tatsuki Kuribayashi, Timothy Baldwin
Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025, long), 2025/01
[paper | arXiv]
2024
- CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
David Romero, Chenyang Lyu, Haryo Akbarianto Wibowo, ..., Tatsuki Kuribayashi, ..., Thamar Solorio, Alham Fikri Aji
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS 2024 Datasets and Benchmarks Track), 2024/12
[paper | arXiv]
- First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning
Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Keisuke Sakaguchi, Kentaro Inui
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024, main short), 2024/12
[paper | arXiv]
- Emergent Word Order Universals from Cognitively-Motivated Language Models
Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024, main long), 2024/08 (acceptance rate: 940/4407=21.3%)
[paper | arXiv]
- Psychometric Predictive Power of Large Language Models
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
Findings of the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024, Findings long), 2024/06 (acceptance rate: 869/2434=35.7%)
[paper | arXiv]
- Second Language Acquisition of Language Models (言語モデルの第二言語獲得)
Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe
Journal of Natural Language Processing (自然言語処理), Volume 31, Number 2, pp. 433-455, 2024/06
[paper]
- To Drop or Not to Drop? Predicting Argument Ellipsis Judgments: A Case Study in Japanese
Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano, Kentaro Inui
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024, long), 2024/05 (acceptance rate: 1556/3417=52%)
[paper | arXiv]
- Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Proceedings of the 12th International Conference on Learning Representations (ICLR 2024, spotlight, top 5%), 2024/05 (acceptance rate: 2260/7262=31%)
[paper | arXiv]
2023
- Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism
Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Hiroaki Funayama, Goro Kobayashi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023, main short), 2023/12 (acceptance rate: 146/1041=14.0%)
[paper | arXiv]
- Assessing Chain-of-Thought Reasoning against Lexical Negation: A Case Study on Syllogism
Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Hiroaki Funayama, Goro Kobayashi
Proceedings of the Student Research Workshop (SRW) at the 61st Annual Meeting of the Association for Computational Linguistics 2023 (ACL-SRW, Non-archival, best paper award), 2023/07
- Use of an AI-powered Rewriting Support Software in Context with Other Tools: A Study of Non-Native English Speakers
Takumi Ito, Naomi Yamashita, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Ge Gao, Jack Jamieson, Kentaro Inui
The ACM Symposium on User Interface Software and Technology 2023 (UIST 2023), 2023/10
[paper]
- Second Language Acquisition of Neural Language Models
Miyu Oba, Tatsuki Kuribayashi, Hiroki Ouchi, Taro Watanabe
Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023, Findings long), 2023/07 (acceptance rate: top 39.1%)
[paper | arXiv]
- Transformer Language Models Handle Word Frequency in Prediction Head
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Findings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023, Findings short), 2023/07 (acceptance rate: top 39.1%)
[paper | arXiv]
Preprints
- On Representational Dissociation of Language and Arithmetic in Large Language Models
Riku Kisako, Tatsuki Kuribayashi, Ryohei Sasano
[arXiv]
- Think-to-Talk or Talk-to-Think? When LLMs Come Up with an Answer in Multi-Step Reasoning
Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Shusaku Sone, Masaya Taniguchi, Ana Brassard, Keisuke Sakaguchi, Kentaro Inui
[arXiv]