We are MiuLab.

Machine Intelligence & Understanding Laboratory
National Taiwan University


What We Do

Our research focuses on the intersection of the following areas.

Deep Learning

Apply and improve state-of-the-art deep learning models for diverse tasks.

Conversational AI

Advance dialogue systems toward better conversational AI.

Natural Language Processing

Improve performance on a wide range of language-related tasks.

Featured Works

These are some of our recent, promising research projects.

MUSE: Sense Embeddings

Lee and Chen (EMNLP 2017)

Project Link

A modularized framework for learning unsupervised word sense embeddings; see the full citation under What We Have Done.

What We Have Done

    arXiv Preprints

  1. Semantically-Aligned Equation Generation for Solving and Reasoning Math Word Problems
    Ting-Rui Chiang, Yun-Nung Chen
    arXiv preprint arXiv:1811.00720
    [arXiv]
  2. Modeling Melodic Feature Dependency with Modularized Variational Auto-Encoder
    Yu-An Wang, Yu-Kai Huang, Tzu-Chuan Lin, Shang-Yu Su, Yun-Nung Chen
    arXiv preprint arXiv:1811.00162
    [arXiv]
  3. BCWS: Bilingual Contextual Word Similarity
    Ta-Chung Chi, Ching-Yen Shih, Yun-Nung Chen
    arXiv preprint arXiv:1810.08951
    [arXiv]
  4. xSense: Learning Sense-Separated Sparse Representations and Textual Definitions for Explainable Word Sense Networks
    Ting-Yun Chang, Ta-Chung Chi, Shang-Chi Tsai, Yun-Nung Chen
    arXiv preprint arXiv:1809.03348
    [arXiv]
  5. Learning Context-Sensitive Time-Decay Attention for Role-Based Dialogue Modeling
    Shang-Yu Su, Pei-Chieh Yuan, Yun-Nung Chen
    arXiv preprint arXiv:1809.01557
    [arXiv]

    2018

  6. Abstractive Dialogue Summarization with Sentence-Gated Modeling Optimized by Dialogue Acts
    Chih-Wen Goo and Yun-Nung Chen
    In Proceedings of the 7th IEEE Workshop on Spoken Language Technology (SLT 2018)
    [paper] [code]
  7. Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation
    Shang-Yu Su and Yun-Nung Chen
    In Proceedings of the 7th IEEE Workshop on Spoken Language Technology (SLT 2018)
    [paper] [code]
  8. CLUSE: Cross-Lingual Unsupervised Sense Embeddings (acceptance rate: 24.6%)
    Ta-Chung Chi and Yun-Nung Chen
    In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
    [paper] [code]
  9. Discriminative Deep Dyna-Q: Robust Planning for Dialogue Policy Learning (acceptance rate: 24.6%)
    Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen
    In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
    [paper] [code]
  10. How Time Matters: Learning Time-Decay Attention for Contextual Spoken Language Understanding in Dialogues (acceptance rate: 29%)
    Shang-Yu Su, Pei-Chieh Yuan, and Yun-Nung Chen
    In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2018)
    [paper] [acl] [code]
  11. Natural Language Generation by Hierarchical Decoding with Linguistic Patterns (acceptance rate: 29%)
    Shang-Yu Su, Kai-Ling Lo, Yi-Ting Yeh, and Yun-Nung Chen
    In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2018)
    [paper] [acl] [code]
  12. Slot-Gated Modeling for Joint Slot Filling and Intent Prediction (acceptance rate: 29%)
    Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen
    In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT 2018)
    [paper] [acl] [code]

    2017

  13. Dynamic Time-Aware Attention to Speaker Roles and Contexts for Spoken Language Understanding
    Po-Chun Chen, Ta-Chung Chi, Shang-Yu Su, and Yun-Nung Chen
    In Proceedings of the 2017 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU 2017)
    [paper] [poster] [arXiv] [doi] [code] [bib]
  14. Speaker Role Contextual Modeling for Language Understanding and Dialogue Policy Learning
    Ta-Chung Chi, Po-Chun Chen, Shang-Yu Su, and Yun-Nung Chen
    In Proceedings of the 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)
    [paper] [poster] [acl] [arXiv] [code] [bib]
  15. MUSE: Modularizing Unsupervised Sense Embeddings (acceptance rate: 22%)
    Guang-He Lee and Yun-Nung Chen
    In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)
    [paper] [acl] [code] [poster] [bib]

Who We Are

Contact Us

Get in touch and let's make something great together.

miulab@csie.ntu.edu.tw
+886 (02) 3366-4888 ext. 524

Where To Find Us

National Taiwan University
Taipei, Taiwan
106 TW

Follow Us