Yun-Nung Chen

LLM Inference Enhanced by External Knowledge: A Survey
May 30, 2025

Augment or Not? A Comparative Study of Pure and Augmented Large Language Model Recommenders
May 29, 2025

Creativity in LLM-based Multi-Agent Systems: A Survey
May 27, 2025

Language Matters: How Do Multilingual Input and Reasoning Paths Affect Large Reasoning Models?
May 23, 2025

Exploring Personality-Aware Interactions in Salesperson Dialogue Agents
Apr 25, 2025

VisTW: Benchmarking Vision-Language Models for Traditional Chinese in Taiwan
Mar 15, 2025

Answer, Refuse, or Guess? Investigating Risk-Aware Decision Making in Language Models
Mar 03, 2025

None of the Above, Less of the Right: Parallel Patterns between Humans and LLMs on Multi-Choice Questions Answering
Mar 03, 2025

Transferring Textual Preferences to Vision-Language Understanding through Model Merging
Feb 19, 2025

Clear Minds Think Alike: What Makes LLM Fine-tuning Robust? A Study of Token Perplexity
Jan 24, 2025