From Speaking Nonsense with a Straight Face to Backing Every Claim with Sources (從一本正經的胡說八道,到引經據典的句句屬實)
Date: 2025/12/11 14:30-16:00
Location: R103, CSIE
Speaker: Dr. Da-Cheng Juan (阮大成)
Host: Prof. 羅紹元
Abstract:
Recent advancements in Large Language Models (LLMs) have significantly expanded their capabilities, but challenges such as hallucination and factual inconsistency persist. In this talk, Da-Cheng will present a comprehensive exploration of LLMs from a factuality perspective. The session covers the architectural evolution from early neural networks to modern transformers, the potential root causes of hallucinations, and why factuality is critical for user trust and real-world applications. It introduces cutting-edge solutions, including multi-agent strategies, innovative decoding techniques, and retrieval-augmented generation (RAG) systems, to enhance factual grounding. The talk aims to provide both a technical deep dive and a visionary outlook on building trustworthy LLMs.
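As background for attendees unfamiliar with RAG, the core idea can be sketched in a few lines: retrieve evidence relevant to the query, then constrain the model to answer from that evidence. The toy corpus, overlap-based scoring, and prompt template below are illustrative assumptions for this announcement, not the systems discussed in the talk.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for factual
# grounding. The corpus, ranking heuristic, and prompt format are
# hypothetical placeholders; production systems use dense embeddings
# and a real LLM call instead.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word-overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, evidence: list[str]) -> str:
    """Ground the (hypothetical) model's answer in retrieved evidence."""
    context = "\n".join(f"- {doc}" for doc in evidence)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "The transformer architecture was introduced in 2017.",
    "Hallucination refers to fluent but factually wrong model output.",
]
evidence = retrieve("hallucination in a model", corpus)
prompt = build_prompt("What is hallucination?", evidence)
```

In a real deployment the retriever would search an indexed document store and `prompt` would be sent to an LLM, so the generated answer can cite its sources rather than rely on parametric memory alone.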
Biography:
Dr. Da-Cheng Juan (阮大成) is a researcher and engineering manager at Google Research, where he advances factuality, robustness, and multimodal alignment in large language models. Da-Cheng has driven critical innovations across the Gemini ecosystem—from core model improvements to user-facing capabilities. He is also a co-author of the Gemini research paper.
Beyond Gemini, Da-Cheng led the development of retrieval-augmented generation (RAG) for Vertex AI on Google Cloud, enabling reliable, enterprise-grade AI solutions.
Da-Cheng maintains strong ties with academia. He holds a Ph.D. from Carnegie Mellon University, has published 80+ papers, and earned multiple patents. He also serves regularly as an Area Chair for top conferences including NeurIPS, ICML, ICLR, ACL, and CVPR.
Bridging academia and industry, Da-Cheng is passionate about mentoring students, collaborating with researchers, and advising industry leaders on the evolving generative AI landscape. He welcomes deep technical conversations and is committed to supporting the next generation of AI talent and innovators.