Hi, welcome!
This is Lin Canguang Lab (LCG Lab), led by Xiang LI.
Our lab focuses on Signal Processing, Interactive Media, and Affective Computing, exploring how they advance practical engineering. Our research not only improves the operation of machines in the physical world but also drives the creation of intelligent, lifelike behaviors for agents and characters in virtual environments.
Research Projects
Facial Expression with Modern AI
Mr Takahashi et al.
Thanks to Mr Nejime, Mr Maeda, Mr Igarashi, and Mr Takada
- Emotion
- AI
- Internet of Things
- Game Design
- Animation Filmmaking
- Unity
AI built on large language models (LLMs) enables machines to recognize, interpret, and generate human facial expressions, enhancing emotional interaction and digital communication.
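As a rough illustration of one building block in this direction, the sketch below maps an emotion label (such as one an LLM or recognizer might output) to facial action-unit intensities that an avatar rig could consume. The table values, action-unit names, and function are illustrative assumptions, not the lab's actual pipeline.

```python
# Illustrative sketch only: emotion label -> facial action-unit (FACS) weights.
# The table values and AU names are placeholders, not real project data.

EMOTION_TO_AUS = {
    "happy":     {"AU6_cheek_raiser": 0.8, "AU12_lip_corner_puller": 0.9},
    "surprised": {"AU1_inner_brow_raiser": 0.7, "AU2_outer_brow_raiser": 0.7,
                  "AU26_jaw_drop": 0.5},
    "sad":       {"AU1_inner_brow_raiser": 0.6, "AU15_lip_corner_depressor": 0.7},
}

def expression_targets(emotion: str, intensity: float = 1.0) -> dict[str, float]:
    """Scale the base action-unit weights by an overall intensity in [0, 1]."""
    base = EMOTION_TO_AUS.get(emotion.lower(), {})
    return {au: min(1.0, weight * intensity) for au, weight in base.items()}

if __name__ == "__main__":
    # e.g. an upstream LLM labels an utterance as "happy" at moderate intensity
    print(expression_targets("happy", intensity=0.6))
```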

Japanese Sign Language Recognition
Mr Wang et al.
- Accessibility
- Emotion
- Internet of Things
- AI
- Rendering
AI-powered recognition of Japanese Sign Language fosters accessibility and inclusivity by enabling real-time gesture translation and communication support.
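To give a flavor of the recognition step, here is a minimal nearest-centroid baseline over hand-landmark feature vectors, the kind of lightweight starting point a real-time pipeline might begin with. Landmark extraction is assumed to happen upstream, and the class labels, feature sizes, and random data below are placeholders, not actual JSL data.

```python
# Illustrative sketch only: classify a per-frame landmark vector by the
# nearest class centroid. A real system would use sequence models and
# landmarks from a hand-tracking library rather than random placeholders.
import numpy as np

class NearestCentroidSigns:
    def __init__(self):
        self.centroids: dict[str, np.ndarray] = {}

    def fit(self, samples: dict[str, np.ndarray]) -> None:
        # samples: label -> (n_examples, n_features) array of landmark vectors
        self.centroids = {label: x.mean(axis=0) for label, x in samples.items()}

    def predict(self, vec: np.ndarray) -> str:
        # the closest centroid wins
        return min(self.centroids,
                   key=lambda label: float(np.linalg.norm(vec - self.centroids[label])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = {"arigatou": rng.normal(0.0, 0.1, (20, 42)),    # placeholder classes,
             "konnichiwa": rng.normal(1.0, 0.1, (20, 42))}  # 21 landmarks x (x, y)
    clf = NearestCentroidSigns()
    clf.fit(train)
    print(clf.predict(rng.normal(1.0, 0.1, 42)))            # expected: konnichiwa
```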

Emotional Characters in Animated Film
๐ฌ
Miss Hayashi & Miss Hamada
- Animation Filmmaking
- Emotion
- Rendering
- Character Design
Emotional animated characters, built through character design, rendering, and scripted animation, bring stories to life, deepening audience connection and narrative impact.
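One small piece of the "scripted animation" side might look like the sketch below: easing a character's expression between two emotional keyframe poses over a shot. The easing curve, blend-shape channel names, and frame count are illustrative assumptions, not the students' actual tooling.

```python
# Illustrative sketch only: blend between two emotional blend-shape poses.

def ease_in_out(t: float) -> float:
    """Smoothstep easing so the expression change accelerates, then settles."""
    return t * t * (3.0 - 2.0 * t)

def blend_poses(pose_a: dict[str, float], pose_b: dict[str, float], t: float) -> dict[str, float]:
    """Blend two blend-shape poses at eased parameter t in [0, 1]."""
    w = ease_in_out(max(0.0, min(1.0, t)))
    channels = set(pose_a) | set(pose_b)
    return {c: (1.0 - w) * pose_a.get(c, 0.0) + w * pose_b.get(c, 0.0) for c in channels}

if __name__ == "__main__":
    neutral = {"browRaise": 0.0, "smile": 0.1}   # placeholder channel names
    joyful  = {"browRaise": 0.4, "smile": 0.9}
    for frame in range(0, 25, 6):                # a 24-frame transition
        print(frame, blend_poses(neutral, joyful, frame / 24))
```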

Study-With-Me & Emotional Sound
Miss Akutsu
- Emotion
- Music
- SFX
- Game Design
Study-With-Me games integrate emotional soundscapes to boost focus, reduce stress, and make learning more engaging and enjoyable.
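As a rough sketch of how a study-with-me loop might steer its soundscape, the example below picks an ambient layer from coarse focus and stress estimates. The thresholds, track names, and fields are placeholders, not the project's actual audio logic.

```python
# Illustrative sketch only: choose soundscape parameters each session tick.
from dataclasses import dataclass

@dataclass
class SoundscapeState:
    track: str        # ambient layer to play
    volume: float     # 0.0-1.0
    rain_layer: bool  # extra masking noise on/off

def choose_soundscape(focus: float, stress: float) -> SoundscapeState:
    """Map rough (focus, stress) estimates in [0, 1] to a soundscape."""
    if stress > 0.7:
        return SoundscapeState("slow_piano", 0.4, rain_layer=True)    # calm down
    if focus < 0.3:
        return SoundscapeState("lofi_beats", 0.6, rain_layer=False)   # re-engage
    return SoundscapeState("cafe_ambience", 0.3, rain_layer=False)    # keep the flow

if __name__ == "__main__":
    print(choose_soundscape(focus=0.2, stress=0.5))
    print(choose_soundscape(focus=0.8, stress=0.8))
```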